Impacts of Technological Changes in the Cyber Environment on Software/Systems Engineering
2010-04-01
…for decoupled parallel development. Refs: Barry Boehm; Pressman, R. S., Software Engineering: A Practitioner's Approach.
NASA Technical Reports Server (NTRS)
Krosel, S. M.; Milner, E. J.
1982-01-01
The application of predictor-corrector integration algorithms developed for the digital parallel processing environment is investigated. The algorithms are implemented and evaluated through the use of a software simulator which provides an approximate representation of the parallel processing hardware. Test cases which focus on the use of the algorithms are presented, and a specific application using a linear model of a turbofan engine is considered. Results are presented showing the effects of integration step size and the number of processors on simulation accuracy. Real-time performance, interprocessor communication, and algorithm startup are also discussed.
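The scheme evaluated above is a classic predictor-corrector pair. As a minimal illustrative sketch (not the report's simulator; the model, step size, and step count are made-up), a second-order Adams-Bashforth predictor with a trapezoidal corrector looks like this in Python:

```python
def pc2_step(f, t, y, f_prev, h):
    """Advance y(t) one step of size h; f_prev is f at the previous step."""
    fy = f(t, y)
    y_pred = y + h * (1.5 * fy - 0.5 * f_prev)      # AB2 predictor
    y_corr = y + 0.5 * h * (fy + f(t + h, y_pred))  # trapezoidal corrector
    return y_corr, fy

# Toy stand-in for the linearized engine model in the abstract: y' = -2y.
f = lambda t, y: -2.0 * y
t, y, h = 0.0, 1.0, 0.05
f_prev = f(t - h, y)            # crude bootstrap value for the first step
for _ in range(10):
    y, f_prev = pc2_step(f, t, y, f_prev, h)
    t += h
print(t, y)                     # exact solution at t=0.5 is exp(-1) ~ 0.368
```

In the parallel setting the abstract studies, the predictor and corrector evaluations for different state variables are what get distributed across processors; the step size then trades accuracy against real-time deadlines.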
Parallel Algorithms for Groebner-Basis Reduction
1987-09-25
Technical report from the Productivity Engineering in the UNIX Environment program.
NASA Technical Reports Server (NTRS)
Lawrence, Charles; Putt, Charles W.
1997-01-01
The Visual Computing Environment (VCE) is a NASA Lewis Research Center project to develop a framework for intercomponent and multidisciplinary computational simulations. Many current engineering analysis codes simulate various aspects of aircraft engine operation. For example, existing computational fluid dynamics (CFD) codes can model the airflow through individual engine components such as the inlet, compressor, combustor, turbine, or nozzle. Currently, these codes are run in isolation, making intercomponent and complete system simulations very difficult to perform. In addition, management and utilization of these engineering codes for coupled component simulations is a complex, laborious task, requiring substantial experience and effort. To facilitate multicomponent aircraft engine analysis, the CFD Research Corporation (CFDRC) is developing the VCE system. This system, which is part of NASA's Numerical Propulsion Simulation System (NPSS) program, can couple various engineering disciplines, such as CFD, structural analysis, and thermal analysis. The objectives of VCE are to (1) develop a visual computing environment for controlling the execution of individual simulation codes that are running in parallel and are distributed on heterogeneous host machines in a networked environment, (2) develop numerical coupling algorithms for interchanging boundary conditions between codes with arbitrary grid matching and different levels of dimensionality, (3) provide a graphical interface for simulation setup and control, and (4) provide tools for online visualization and plotting. VCE was designed to provide a distributed, object-oriented environment. Mechanisms are provided for creating and manipulating objects, such as grids, boundary conditions, and solution data. This environment includes parallel virtual machine (PVM) for distributed processing. Users can interactively select and couple any set of codes that have been modified to run in a parallel distributed fashion on a cluster of heterogeneous workstations. A scripting facility allows users to dictate the sequence of events that make up the particular simulation.
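The component-coupling pattern VCE automates is easy to sketch. The toy below uses MPI via mpi4py (where VCE used PVM) to run two stand-in component codes that exchange an interface boundary value each cycle; the values, names, and relaxation rule are illustrative assumptions, not VCE's actual algorithms:

```python
# Run with: mpiexec -n 2 python couple.py  (requires an MPI install and mpi4py)
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()          # rank 0: "inlet" code, rank 1: "compressor" code
other = 1 - rank

state = 300.0 if rank == 0 else 400.0       # stand-in interface temperature
for cycle in range(20):
    state += (-0.5 if rank == 0 else 0.5)   # pretend component-solver update
    # Exchange the interface boundary condition with the coupled code.
    neighbor = comm.sendrecv(state, dest=other, source=other)
    state = 0.5 * (state + neighbor)        # naive interface relaxation
if rank == 0:
    print("converged interface value:", state)
```

Real coupled simulations add grid interpolation at the interface (VCE's arbitrary grid matching), but the advance-exchange-relax cycle is the core loop.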
Improving Systems Engineering Effectiveness in Rapid Response Development Environments
2012-06-02
Systems engineering is often ineffective in development environments where large, complex, brownfield systems of systems are evolved through parallel development of new capabilities in response to external, time… Refs: IEEE Press, Hoboken, NJ, 2008; Boehm, B., Applying the Incremental Commitment Model to Brownfield Systems Development, Proceedings, CSER 2009.
Beyond the Renderer: Software Architecture for Parallel Graphics and Visualization
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.
1996-01-01
As numerous implementations have demonstrated, software-based parallel rendering is an effective way to obtain the needed computational power for a variety of challenging applications in computer graphics and scientific visualization. To fully realize their potential, however, parallel renderers need to be integrated into a complete environment for generating, manipulating, and delivering visual data. We examine the structure and components of such an environment, including the programming and user interfaces, rendering engines, and image delivery systems. We consider some of the constraints imposed by real-world applications and discuss the problems and issues involved in bringing parallel rendering out of the lab and into production.
ERIC Educational Resources Information Center
Lundquist, Carol; Frieder, Ophir; Holmes, David O.; Grossman, David
1999-01-01
Describes a scalable, parallel, relational database-driven information retrieval engine. To support portability across a wide range of execution environments, all algorithms adhere to the SQL-92 standard. By incorporating relevance feedback algorithms, accuracy is enhanced over prior database-driven information retrieval efforts. Presents…
Impact of new computing systems on computational mechanics and flight-vehicle structures technology
NASA Technical Reports Server (NTRS)
Noor, A. K.; Storaasli, O. O.; Fulton, R. E.
1984-01-01
Advances in computer technology which may have an impact on computational mechanics and flight vehicle structures technology were reviewed. The characteristics of supersystems, highly parallel systems, and small systems are summarized. The interrelations of numerical algorithms and software with parallel architectures are discussed. A scenario for future hardware/software environment and engineering analysis systems is presented. Research areas with potential for improving the effectiveness of analysis methods in the new environment are identified.
Using CLIPS in the domain of knowledge-based massively parallel programming
NASA Technical Reports Server (NTRS)
Dvorak, Jiri J.
1994-01-01
The Program Development Environment (PDE) is a tool for massively parallel programming of distributed-memory architectures. Adopting a knowledge-based approach, the PDE eliminates the complexity introduced by parallel hardware with distributed memory and offers complete transparency with respect to the exploitation of parallelism. The knowledge-based part of the PDE is realized in CLIPS. Its principal task is to find an efficient parallel realization of the application specified by the user in a comfortable, abstract, domain-oriented formalism. A large collection of fine-grain parallel algorithmic skeletons, represented as COOL objects in a tree hierarchy, contains the algorithmic knowledge. A hybrid knowledge base with rule modules and procedural parts, encoding expertise about the application domain, parallel programming, software engineering, and parallel hardware, enables a high degree of automation in the software development process. In this paper, important aspects of the implementation of the PDE using CLIPS and COOL are shown, including the embedding of CLIPS with the C++-based parts of the PDE. The appropriateness of the chosen approach and of the CLIPS language for knowledge-based software engineering is discussed.
Software environment for implementing engineering applications on MIMD computers
NASA Technical Reports Server (NTRS)
Lopez, L. A.; Valimohamed, K. A.; Schiff, S.
1990-01-01
In this paper the concept for a software environment for developing engineering application systems for multiprocessor hardware (MIMD) is presented. The philosophy employed is to solve the largest problems possible in a reasonable amount of time, rather than solve existing problems faster. In the proposed environment most of the problems concerning parallel computation and handling of large distributed data spaces are hidden from the application program developer, thereby facilitating the development of large-scale software applications. Applications developed under the environment can be executed on a variety of MIMD hardware; it protects the application software from the effects of a rapidly changing MIMD hardware technology.
Synchronization Of Parallel Discrete Event Simulations
NASA Technical Reports Server (NTRS)
Steinman, Jeffrey S.
1992-01-01
Adaptive, parallel, discrete-event-simulation-synchronization algorithm, Breathing Time Buckets, developed in Synchronous Parallel Environment for Emulation and Discrete Event Simulation (SPEEDES) operating system. Algorithm allows parallel simulations to process events optimistically in fluctuating time cycles that naturally adapt while simulation in progress. Combines best of optimistic and conservative synchronization strategies while avoiding major disadvantages of each. Well suited for modeling communication networks, for large-scale war games, for simulated flights of aircraft, for simulations of computer equipment, for mathematical modeling, for interactive engineering simulations, and for depictions of flows of information.
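A minimal single-process sketch of the Breathing Time Buckets idea as the abstract describes it: events are processed optimistically in time order up to the "event horizon", the earliest timestamp of any event generated during the current cycle, so the cycle width adapts naturally. The event representation and handler signature are assumptions for illustration:

```python
import heapq
import random

def btb_cycle(pending, handle):
    """One Breathing Time Buckets cycle over a heap of (time, event) pairs.
    In the parallel version, events processed before the horizon are safe
    to commit; later ones roll into the next cycle."""
    horizon = float("inf")
    generated = []
    while pending and pending[0][0] < horizon:
        t, ev = heapq.heappop(pending)
        for t_new, ev_new in handle(t, ev):      # optimistic execution
            horizon = min(horizon, t_new)        # horizon shrinks adaptively
            generated.append((t_new, ev_new))
    for item in generated:
        heapq.heappush(pending, item)
    return horizon

# Example: each event at time t schedules one follow-up after a random delay.
pending = [(float(i), "init") for i in range(4)]
heapq.heapify(pending)
h = btb_cycle(pending, lambda t, ev: [(t + random.uniform(1, 5), "follow-up")])
print("event horizon for this cycle:", h)
```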
Turbomachinery CFD on parallel computers
NASA Technical Reports Server (NTRS)
Blech, Richard A.; Milner, Edward J.; Quealy, Angela; Townsend, Scott E.
1992-01-01
The role of multistage turbomachinery simulation in the development of propulsion system models is discussed. Particularly, the need for simulations with higher fidelity and faster turnaround time is highlighted. It is shown how such fast simulations can be used in engineering-oriented environments. The use of parallel processing to achieve the required turnaround times is discussed. Current work by several researchers in this area is summarized. Parallel turbomachinery CFD research at the NASA Lewis Research Center is then highlighted. These efforts are focused on implementing the average-passage turbomachinery model on MIMD, distributed memory parallel computers. Performance results are given for inviscid, single blade row and viscous, multistage applications on several parallel computers, including networked workstations.
NASA Technical Reports Server (NTRS)
Pratt, Terrence W.
1987-01-01
PISCES 2 is a programming environment and set of extensions to Fortran 77 for parallel programming. It is intended to provide a basis for writing programs for scientific and engineering applications on parallel computers in a way that is relatively independent of the particular details of the underlying computer architecture. This user's manual provides a complete description of the PISCES 2 system as it is currently implemented on the 20-processor Flexible FLEX/32 at NASA Langley Research Center.
A distributed version of the NASA Engine Performance Program
NASA Technical Reports Server (NTRS)
Cours, Jeffrey T.; Curlett, Brian P.
1993-01-01
Distributed NEPP, a version of the NASA Engine Performance Program, uses the original NEPP code but executes it in a distributed computing environment. Multiple workstations connected by a network increase the program's speed and, more importantly, the complexity of the cases it can handle in a reasonable time. Distributed NEPP uses the public-domain software package Parallel Virtual Machine (PVM), allowing it to execute on clusters of machines containing many different architectures. It includes the capability to link with other computers, allowing them to process NEPP jobs in parallel. This paper discusses the design issues and granularity considerations that entered into programming Distributed NEPP and presents the results of timing runs.
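A coarse-grained sketch of the kind of distribution the paper discusses: independent engine operating points farmed out to worker processes. Here Python's multiprocessing stands in for PVM, and run_nepp_case is a hypothetical placeholder for one NEPP job, not the actual program interface:

```python
from multiprocessing import Pool

def run_nepp_case(params):
    """Placeholder for one engine-cycle analysis at a given operating point."""
    mach, altitude = params
    return mach * altitude          # stand-in result

if __name__ == "__main__":
    cases = [(0.8, 10000.0), (0.9, 11000.0), (1.2, 12000.0)]
    with Pool(processes=3) as pool:
        results = pool.map(run_nepp_case, cases)   # one case per worker
    print(results)
```

The granularity question the abstract raises is visible even here: each case must be expensive enough to amortize the cost of shipping its inputs and outputs across the network.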
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1998-01-01
The paper identifies speed, agility, human interface, generation of sensitivity information, task decomposition, and data transmission (including storage) as important attributes for a computer environment to have in order to support engineering design effectively. It is argued that, when examined in terms of these attributes, the presently available environment can be shown to be inadequate. A radical improvement is needed, and it may be achieved by combining new methods that have recently emerged from multidisciplinary design optimization (MDO) with massively parallel processing computer technology. The caveat is that, for successful use of that technology in engineering computing, new paradigms for computing will have to be developed - specifically, innovative algorithms that are intrinsically parallel so that their performance scales up linearly with the number of processors. It may be speculated that the idea of simulating a complex behavior by interaction of a large number of very simple models may be an inspiration for the above algorithms; the cellular automata are an example. Because of the long lead time needed to develop and mature new paradigms, development should begin now, even though the widespread availability of massively parallel processing is still a few years away.
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1999-01-01
The paper identifies speed, agility, human interface, generation of sensitivity information, task decomposition, and data transmission (including storage) as important attributes for a computer environment to have in order to support engineering design effectively. It is argued that when examined in terms of these attributes the presently available environment can be shown to be inadequate. A radical improvement is needed, and it may be achieved by combining new methods that have recently emerged from multidisciplinary design optimisation (MDO) with massively parallel processing computer technology. The caveat is that, for successful use of that technology in engineering computing, new paradigms for computing will have to be developed - specifically, innovative algorithms that are intrinsically parallel so that their performance scales up linearly with the number of processors. It may be speculated that the idea of simulating a complex behaviour by interaction of a large number of very simple models may be an inspiration for the above algorithms; the cellular automata are an example. Because of the long lead time needed to develop and mature new paradigms, development should begin now, even though the widespread availability of massively parallel processing is still a few years away.
NASA Technical Reports Server (NTRS)
Banks, Daniel W.; Laflin, Brenda E. Gile; Kemmerly, Guy T.; Campbell, Bryan A.
1999-01-01
Environmental concept for engineering software on MIMD computers
NASA Technical Reports Server (NTRS)
Lopez, L. A.; Valimohamed, K.
1989-01-01
The issues related to developing an environment in which engineering systems can be implemented on MIMD machines are discussed. The problem is presented in terms of implementing the finite element method under such an environment. However, neither the concepts nor the prototype implementation environment are limited to this application. The topics discussed include: the ability to schedule and synchronize tasks efficiently; granularity of tasks; load balancing; and the use of a high level language to specify parallel constructs, manage data, and achieve portability. The objective of developing a virtual machine concept which incorporates solutions to the above issues leads to a design that can be mapped onto loosely coupled, tightly coupled, and hybrid systems.
Solving Partial Differential Equations in a data-driven multiprocessor environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaudiot, J.L.; Lin, C.M.; Hosseiniyar, M.
1988-12-31
Partial differential equations can be found in a host of engineering and scientific problems. The emergence of new parallel architectures has spurred research in the definition of parallel PDE solvers. Concurrently, highly programmable systems such as data-flow architectures have been proposed for the exploitation of large-scale parallelism. The implementation of some partial differential equation solvers (such as the Jacobi method) on a tagged-token data-flow graph is demonstrated here. Asynchronous methods (chaotic relaxation) are studied, and new scheduling approaches (the Token No-Labeling scheme) are introduced in order to support the implementation of the asynchronous methods in a data-driven environment. New high-level data-flow language program constructs are introduced in order to handle chaotic operations. Finally, the performance of the program graphs is demonstrated by a deterministic simulation of a message-passing data-flow multiprocessor. An analysis of the overhead in the data-flow graphs is undertaken to demonstrate the limits of parallel operations in data-flow PDE program graphs.
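For reference, the Jacobi method named above is the simplest of the solvers involved; a serial NumPy sketch for the 2-D Poisson problem (grid size and iteration count are arbitrary illustration values) is shown below. The chaotic-relaxation variant studied in the paper drops the requirement that every sweep finish before its values are consumed, which is what makes it attractive on data-driven hardware:

```python
import numpy as np

def jacobi_step(u, f, h):
    """One synchronous Jacobi sweep for -laplacian(u) = f on a square grid."""
    new = u.copy()
    new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                              u[1:-1, :-2] + u[1:-1, 2:] +
                              h * h * f[1:-1, 1:-1])
    return new

n, h = 32, 1.0 / 33
u = np.zeros((n + 2, n + 2))     # boundary rows/columns stay fixed at zero
f = np.ones((n + 2, n + 2))
for _ in range(200):
    u = jacobi_step(u, f, h)
print(u.max())
```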
A Programming Environment for Parallel Vision Algorithms
1990-04-11
…industrial arm on the market, while the unique head was designed by Rochester's Computer Science and Mechanical Engineering Departments. … Ref: Constraining-Unification and the Programming Language Unicorn. In Logic Programming: Functions, Relations, and Equations, DeGroot and Lindstrom (eds.).
Parallels in Computer-Aided Design Framework and Software Development Environment Efforts.
1992-05-01
…design kits, and tool and design management frameworks. Also, books about software engineering environments [Long 91] and electronic design… tool integration [Zarrella 90], and agreement upon a universal design automation framework, such as the CAD Framework Initiative (CFI) [Malasky 91]… identification, control, status accounting, and audit and review. The paper by Dart extracts 15 CM concepts from existing SDEs and tools.
Parallel auto-correlative statistics with VTK.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pebay, Philippe Pierre; Bennett, Janine Camille
2013-08-01
This report summarizes existing statistical engines in VTK and presents both the serial and parallel auto-correlative statistics engines. It is a sequel to [PT08, BPRT09b, PT09, BPT09, PT10], which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, and order statistics engines. The ease of use of the new parallel auto-correlative statistics engine is illustrated by means of C++ code snippets, and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the auto-correlative statistics engine.
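The statistic such an engine computes is the ordinary sample autocorrelation; a serial NumPy sketch (an illustration of the quantity, not VTK's implementation) for lags 0 through max_lag:

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Sample autocorrelation r(k) of a 1-D series for lags 0..max_lag."""
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()
    var = np.dot(xc, xc)
    return np.array([np.dot(xc[:len(x) - k], xc[k:]) / var
                     for k in range(max_lag + 1)])

rng = np.random.default_rng(0)
series = np.cumsum(rng.standard_normal(1000))   # strongly correlated test signal
print(autocorrelation(series, 5))               # r(0) is 1.0 by construction
```

The parallel engine's job is then to form the per-lag sums from distributed slices of the series and merge them, which is where the scalability analysis in the report applies.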
Communicating remote sensing concepts in an interdisciplinary environment
NASA Technical Reports Server (NTRS)
Chung, R.
1981-01-01
Although remote sensing is currently multidisciplinary in its applications, many of its terms come from the engineering sciences, particularly from the field of pattern recognition. Scholars from fields such as the social sciences, botany, and biology may experience initial difficulty with remote sensing terminology, even though parallel concepts exist in their own fields. Some parallel concepts and terminologies from nonengineering fields, which might enhance the understanding of remote sensing concepts in an interdisciplinary situation, are identified. The feedback which this analogue strategy might have on remote sensing itself is explored.
Providing a parallel and distributed capability for JMASS using SPEEDES
NASA Astrophysics Data System (ADS)
Valinski, Maria; Driscoll, Jonathan; McGraw, Robert M.; Meyer, Bob
2002-07-01
The Joint Modeling And Simulation System (JMASS) is a Tri-Service simulation environment that supports engineering and engagement-level simulations. As JMASS is expanded to support other Tri-Service domains, the current set of modeling services must be expanded for High Performance Computing (HPC) applications by adding support for advanced time-management algorithms, parallel and distributed topologies, and high-speed communications. By providing support for these services, JMASS can better address modeling domains requiring parallel, computationally intense calculations, such as clutter, vulnerability, and lethality calculations, and underwater-based scenarios. A risk reduction effort implementing some HPC services for JMASS using the SPEEDES (Synchronous Parallel Environment for Emulation and Discrete Event Simulation) Simulation Framework has recently concluded. As an artifact of the JMASS-SPEEDES integration, not only can HPC functionality be brought to the JMASS program through SPEEDES, but an additional HLA-based capability can be demonstrated that further addresses interoperability issues. The JMASS-SPEEDES integration provided a means of adding HLA capability to preexisting JMASS scenarios through an implementation of the standard JMASS port communication mechanism that allows players to communicate.
A high-speed linear algebra library with automatic parallelism
NASA Technical Reports Server (NTRS)
Boucher, Michael L.
1994-01-01
Parallel or distributed processing is key to getting the highest performance from workstations. However, designing and implementing efficient parallel algorithms is difficult and error-prone. It is even more difficult to write code that is both portable to and efficient on many different computers. Finally, it is harder still to satisfy the above requirements and include the reliability and ease of use required of commercial software intended for use in a production environment. As a result, the application of parallel processing technology to commercial software has been extremely small, even though there are numerous computationally demanding programs that would significantly benefit from application of parallel processing. This paper describes DSSLIB, which is a library of subroutines that perform many of the time-consuming computations in engineering and scientific software. DSSLIB combines the high efficiency and speed of parallel computation with a serial programming model that eliminates many undesirable side-effects of typical parallel code. The result is a simple way to incorporate the power of parallel processing into commercial software without compromising maintainability, reliability, or ease of use. This gives significant advantages over less powerful non-parallel entries in the market.
A software architecture for multidisciplinary applications: Integrating task and data parallelism
NASA Technical Reports Server (NTRS)
Chapman, Barbara; Mehrotra, Piyush; Vanrosendale, John; Zima, Hans
1994-01-01
Data parallel languages such as Vienna Fortran and HPF can be successfully applied to a wide range of numerical applications. However, many advanced scientific and engineering applications are of a multidisciplinary and heterogeneous nature and thus do not fit well into the data parallel paradigm. In this paper we present new Fortran 90 language extensions to fill this gap. Tasks can be spawned as asynchronous activities in a homogeneous or heterogeneous computing environment; they interact by sharing access to Shared Data Abstractions (SDA's). SDA's are an extension of Fortran 90 modules, representing a pool of common data, together with a set of Methods for controlled access to these data and a mechanism for providing persistent storage. Our language supports the integration of data and task parallelism as well as nested task parallelism and thus can be used to express multidisciplinary applications in a natural and efficient way.
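The SDAs described here are Fortran 90 module extensions; purely as an analogy, the same idea (a pool of common data reachable only through methods with controlled, mutually exclusive access, shared by asynchronous tasks) can be sketched in Python as below. The class and method names are invented for illustration:

```python
import threading

class SharedDataAbstraction:
    """Toy analogue of an SDA: a data pool plus access-controlled methods."""
    def __init__(self):
        self._lock = threading.Lock()
        self._pool = {}

    def put(self, key, value):
        with self._lock:                 # only one method active at a time
            self._pool[key] = value

    def get(self, key, default=None):
        with self._lock:
            return self._pool.get(key, default)

# Two asynchronous "discipline" tasks could share one instance:
sda = SharedDataAbstraction()
sda.put("interface_pressure", 101.3)
print(sda.get("interface_pressure"))
```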
Creating a Parallel Version of VisIt for Microsoft Windows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitlock, B J; Biagas, K S; Rawson, P L
2011-12-07
VisIt is a popular, free interactive parallel visualization and analysis tool for scientific data. Users can quickly generate visualizations from their data, animate them through time, manipulate them, and save the resulting images or movies for presentations. VisIt was designed from the ground up to work on many scales of computers, from modest desktops up to massively parallel clusters. VisIt is comprised of a set of cooperating programs. All programs can be run locally or in client/server mode, in which some run locally and some run remotely on compute clusters. The VisIt program most able to harness today's computing power is the VisIt compute engine. The compute engine is responsible for reading simulation data from disk, processing it, and sending results or images back to the VisIt viewer program. In a parallel environment, the compute engine runs several processes, coordinating using the Message Passing Interface (MPI) library. Each MPI process reads some subset of the scientific data and filters the data in various ways to create useful visualizations. By using MPI, VisIt has been able to scale well into the thousands of processors on large computers such as dawn and graph at LLNL. The advent of multicore CPUs has made parallelism the 'new' way to achieve increasing performance. With today's computers having at least 2 cores and in many cases up to 8 and beyond, it is more important than ever to deploy parallel software that can use that computing power not only on clusters but also on the desktop. We have created a parallel version of VisIt for Windows that uses Microsoft's MPI implementation (MSMPI) to process data in parallel on the Windows desktop as well as on a Windows HPC cluster running Microsoft Windows Server 2008. Initial desktop parallel support for Windows was deployed in VisIt 2.4.0. Windows HPC cluster support has been completed and will appear in the VisIt 2.5.0 release. We plan to continue supporting parallel VisIt on Windows so our users will be able to take full advantage of their multicore resources.
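A minimal sketch of the compute-engine pattern described above: each MPI rank owns a subset of the data, applies a filter, and partial results are combined at the root. mpi4py stands in for the MPI library, and the "filter" is a placeholder reduction, not VisIt code:

```python
# Run with: mpiexec -n 4 python sketch.py  (requires an MPI install and mpi4py)
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

data = np.arange(1_000_000, dtype=float)        # stand-in for simulation data
chunk = np.array_split(data, size)[rank]        # this rank's subset of the data
local = float((chunk ** 2).sum())               # stand-in visualization filter
total = comm.reduce(local, op=MPI.SUM, root=0)  # combine partial results
if rank == 0:
    print("combined result:", total)
```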
Lee, Wei-Po; Hsiao, Yu-Ting; Hwang, Wei-Che
2014-01-16
To improve the tedious task of reconstructing gene networks by experimentally testing the possible interactions between genes, it has become a trend to adopt an automated reverse engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks by an evolutionary algorithm, it is necessary to address two important issues: premature convergence and high computational cost. To tackle the former problem and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and to speed up the computation, it is advocated to adopt the mechanism of cloud computing as a promising solution; most popular is the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms to infer large gene networks. This work presents a practical framework to infer large gene networks by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results analyzed. They show that our parallel approach can be successfully used to infer networks with desired behaviors, and the computation time can be largely reduced. Parallel population-based algorithms can effectively determine network parameters, and they perform better than the widely used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel-model population-based optimization method and the parallel computational framework, high-quality solutions can be obtained within a relatively short time. This integrated approach is a promising way of inferring large networks.
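A hedged sketch of the parallel pattern the framework exploits: the expensive per-candidate fitness evaluations of a population-based optimizer are mapped across workers (here with Python's concurrent.futures rather than Hadoop MapReduce), then reduced by selection. The toy objective and mutation rule are illustrative assumptions, not the paper's GA-PSO or its gene-network model:

```python
from concurrent.futures import ProcessPoolExecutor
import random

def fitness(candidate):
    """Toy objective to minimize; stands in for gene-network model error."""
    return sum((x - 0.5) ** 2 for x in candidate)

def evolve(pop, steps=20, workers=4):
    with ProcessPoolExecutor(max_workers=workers) as ex:
        for _ in range(steps):
            scores = list(ex.map(fitness, pop))       # parallel "map" phase
            ranked = [p for _, p in sorted(zip(scores, pop))]
            elite = ranked[: len(pop) // 2]           # "reduce": selection
            pop = elite + [[x + random.gauss(0, 0.1) for x in p] for p in elite]
    return ranked[0]

if __name__ == "__main__":
    population = [[random.random() for _ in range(8)] for _ in range(20)]
    best = evolve(population)
    print(round(fitness(best), 4))
```

Because candidates are evaluated independently, the map phase scales with the number of workers, which is the property the cloud deployment in the paper relies on.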
Computation of Coupled Thermal-Fluid Problems in Distributed Memory Environment
NASA Technical Reports Server (NTRS)
Wei, H.; Shang, H. M.; Chen, Y. S.
2001-01-01
Thermal-fluid coupling problems are very important to aerospace and engineering applications. Instead of analyzing heat transfer and fluid flow separately, this study merged two well-accepted engineering solution methods, SINDA for thermal analysis and FDNS for fluid flow simulation, into a unified multi-disciplinary thermal-fluid prediction method. A fully conservative patched-grid interface algorithm for arbitrary two-dimensional and three-dimensional geometry has been developed. The state-of-the-art parallel computing concept was used to couple SINDA and FDNS for the communication of boundary conditions through PVM (Parallel Virtual Machine) libraries. Therefore, the thermal analysis performed by SINDA and the fluid flow calculated by FDNS are fully coupled to obtain steady-state or transient solutions. The natural convection between two thick-walled eccentric tubes was calculated, and the predicted results match the experimental data perfectly. A 3-D rocket engine model and a real 3-D SSME geometry were used to test the current model, and reasonable temperature fields were obtained.
Opus: A Coordination Language for Multidisciplinary Applications
NASA Technical Reports Server (NTRS)
Chapman, Barbara; Haines, Matthew; Mehrotra, Piyush; Zima, Hans; vanRosendale, John
1997-01-01
Data parallel languages, such as High Performance Fortran, can be successfully applied to a wide range of numerical applications. However, many advanced scientific and engineering applications are multidisciplinary and heterogeneous in nature, and thus do not fit well into the data parallel paradigm. In this paper we present Opus, a language designed to fill this gap. The central concept of Opus is a mechanism called ShareD Abstractions (SDA). An SDA can be used as a computation server, i.e., a locus of computational activity, or as a data repository for sharing data between asynchronous tasks. SDAs can be internally data parallel, providing support for the integration of data and task parallelism as well as nested task parallelism. They can thus be used to express multidisciplinary applications in a natural and efficient way. In this paper we describe the features of the language through a series of examples and give an overview of the runtime support required to implement these concepts in parallel and distributed environments.
A multi-satellite orbit determination problem in a parallel processing environment
NASA Technical Reports Server (NTRS)
Deakyne, M. S.; Anderle, R. J.
1988-01-01
The Engineering Orbit Analysis Unit at GE Valley Forge used an Intel Hypercube parallel processor to investigate the performance of parallel processors, and to gain experience with them, on a multi-satellite orbit determination problem. A general study was selected in which major blocks of computation for the multi-satellite orbit computations were used as units to be assigned to the various processors on the Hypercube. Problems encountered or successes achieved in addressing the orbit determination problem would then be more likely to be transferable to other parallel processors. The prime objective was to study the algorithm to allow processing of observations later in time than those employed in the state update. Expertise in ephemeris determination was exploited in addressing these problems, and the facility was used to bring a realism to the study which would highlight problems that might not otherwise be anticipated. Secondary objectives were to gain experience with a non-trivial problem in a parallel processor environment; to explore the necessary interplay of serial and parallel sections of the algorithm in terms of timing studies; and to explore granularity (coarse versus fine grain), discovering the limit above which there would be a risk of starvation, with the majority of nodes idle, and below which the overhead associated with splitting the problem may require more work and communication time than is useful.
The impact of distributed computing on education
NASA Technical Reports Server (NTRS)
Utku, S.; Lestingi, J.; Salama, M.
1982-01-01
In this paper, developments in digital computer technology since the early Fifties are reviewed briefly, and the parallelism which exists between these developments and developments in analysis and design procedures of structural engineering is identified. The recent trends in digital computer technology are examined in order to establish the fact that distributed processing is now an accepted philosophy for further developments. The impact of this on the analysis and design practices of structural engineering is assessed by first examining these practices from a data processing standpoint to identify the key operations and data bases, and then fitting them to the characteristics of distributed processing. The merits and drawbacks of the present philosophy in educating structural engineers are discussed and projections are made for the industry-academia relations in the distributed processing environment of structural analysis and design. An ongoing experiment of distributed computing in a university environment is described.
Shared direct memory access on the Explorer 2-LX
NASA Technical Reports Server (NTRS)
Musgrave, Jeffrey L.
1990-01-01
Advances in expert system technology and artificial intelligence have provided a framework for applying automated intelligence to the solution of problems which were generally perceived as intractable using more classical approaches. As a result, hybrid architectures and parallel processing capability have become more common in computing environments. The Texas Instruments Explorer II-LX is an example of a machine which combines a symbolic processing environment and a computationally oriented environment in a single chassis for integrated problem solutions. This user's manual is an attempt to make these capabilities more accessible to a wider range of engineers and programmers with problems well suited to solution in such an environment.
Interactive Parallel Data Analysis within Data-Centric Cluster Facilities using the IPython Notebook
NASA Astrophysics Data System (ADS)
Pascoe, S.; Lansdowne, J.; Iwi, A.; Stephens, A.; Kershaw, P.
2012-12-01
The data deluge is making traditional analysis workflows for many researchers obsolete. Support for parallelism within popular tools such as matlab, IDL and NCO is not well developed and rarely used. However, parallelism is necessary for processing modern data volumes on a timescale conducive to curiosity-driven analysis. Furthermore, for peta-scale datasets such as the CMIP5 archive, it is no longer practical to bring an entire dataset to a researcher's workstation for analysis, or even to their institutional cluster. Therefore, there is an increasing need to develop new analysis platforms which both enable processing at the point of data storage and provide parallelism. Such an environment should, where possible, maintain the convenience and familiarity of our current analysis environments to encourage curiosity-driven research. We describe how we are combining the interactive Python shell (IPython) with our JASMIN data-cluster infrastructure. IPython has been specifically designed to bridge the gap between HPC-style parallel workflows and the opportunistic curiosity-driven analysis usually carried out using domain-specific languages and scriptable tools. IPython offers a web-based interactive environment, the IPython notebook, and a cluster engine for parallelism, all underpinned by the well-respected Python/SciPy scientific programming stack. JASMIN is designed to support the data analysis requirements of the UK and European climate and earth system modeling community. JASMIN, with its sister facility CEMS focusing on the earth observation community, has 4.5 PB of fast parallel disk storage alongside over 370 computing cores that provide local computation. Through the IPython interface to JASMIN, users can make efficient use of JASMIN's multi-core virtual machines to perform interactive analysis on all cores simultaneously or can configure IPython clusters across multiple VMs. Larger-scale clusters can be provisioned through JASMIN's batch scheduling system. Outputs can be summarised and visualised using the full power of Python's many scientific tools, including SciPy, Matplotlib, Pandas and CDAT. This rich user experience is delivered through the user's web browser, maintaining the interactive feel of a workstation-based environment with the parallel power of a remote data-centric processing facility.
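The sketch below shows the ipyparallel workflow the abstract describes, assuming a cluster has already been started near the data (for example with `ipcluster start -n 4`); the dataset here is a stand-in list, not CMIP5 data:

```python
import ipyparallel as ipp

rc = ipp.Client()                  # connect to the running IPython cluster
dview = rc[:]                      # direct view over all engines
dview.block = True                 # synchronous calls for simplicity

dview.scatter("chunk", list(range(1_000_000)))   # split the data across engines
dview.execute("partial = sum(chunk)")            # each engine reduces its slice
print(sum(dview["partial"]))                     # pull and combine the results
```

The appeal for curiosity-driven work is that the same cells run interactively in the notebook, so the researcher never leaves the exploratory environment to get parallelism.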
SAPNEW: Parallel finite element code for thin shell structures on the Alliant FX/80
NASA Astrophysics Data System (ADS)
Kamat, Manohar P.; Watson, Brian C.
1992-02-01
The results of a research activity aimed at providing a finite element capability for analyzing turbomachinery bladed-disk assemblies in a vector/parallel processing environment are summarized. Analysis of aircraft turbofan engines is very computationally intensive. The performance limit of modern-day computers with a single processing unit was estimated at 3 billion floating-point operations per second (3 gigaflops). In view of this limit of a sequential unit, performance rates higher than 3 gigaflops can be achieved only through vectorization and/or parallelization, as on the Alliant FX/80. Accordingly, the efforts of this critically needed research were geared towards developing and evaluating parallel finite element methods for static and vibration analysis. A special-purpose code, named with the acronym SAPNEW, performs static and eigen analysis of multi-degree-of-freedom blade models built up from flat thin shell elements.
SAPNEW: Parallel finite element code for thin shell structures on the Alliant FX/80
NASA Technical Reports Server (NTRS)
Kamat, Manohar P.; Watson, Brian C.
1992-01-01
The results of a research activity aimed at providing a finite element capability for analyzing turbomachinery bladed-disk assemblies in a vector/parallel processing environment are summarized. Analysis of aircraft turbofan engines is very computationally intensive. The performance limit of modern-day computers with a single processing unit was estimated at 3 billion floating-point operations per second (3 gigaflops). In view of this limit of a sequential unit, performance rates higher than 3 gigaflops can be achieved only through vectorization and/or parallelization, as on the Alliant FX/80. Accordingly, the efforts of this critically needed research were geared towards developing and evaluating parallel finite element methods for static and vibration analysis. A special-purpose code, named with the acronym SAPNEW, performs static and eigen analysis of multi-degree-of-freedom blade models built up from flat thin shell elements.
Charon Toolkit for Parallel, Implicit Structured-Grid Computations: Functional Design
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob F.; Kutler, Paul (Technical Monitor)
1997-01-01
In a previous report the design concepts of Charon were presented. Charon is a toolkit that aids engineers in developing scientific programs for structured-grid applications to be run on MIMD parallel computers. It constitutes an augmentation of the general-purpose MPI-based message-passing layer, and provides the user with a hierarchy of tools for rapid prototyping and validation of parallel programs, and subsequent piecemeal performance tuning. Here we describe the implementation of the domain decomposition tools used for creating data distributions across sets of processors. We also present the hierarchy of parallelization tools that allows smooth translation of legacy code (or a serial design) into a parallel program. Along with the actual tool descriptions, we will present the considerations that led to the particular design choices. Many of these are motivated by the requirement that Charon must be useful within the traditional computational environments of Fortran 77 and C. Only the Fortran 77 syntax will be presented in this report.
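A sketch of the kind of balanced block decomposition such a toolkit provides along one grid axis; this illustrates the concept only and is not Charon's API:

```python
def decompose(n_points, n_procs):
    """Split an n_points-long grid axis into n_procs contiguous blocks,
    handing the remainder out one extra point at a time (balanced blocks)."""
    base, extra = divmod(n_points, n_procs)
    blocks, start = [], 0
    for p in range(n_procs):
        size = base + (1 if p < extra else 0)
        blocks.append((start, start + size))   # [start, stop) for processor p
        start += size
    return blocks

print(decompose(10, 3))   # [(0, 4), (4, 7), (7, 10)]
```

Layering such distribution helpers over raw message passing is what lets legacy Fortran 77 or C code be translated piecemeal, as the report describes.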
Scalable descriptive and correlative statistics with Titan.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, David C.; Pebay, Philippe Pierre
This report summarizes the existing statistical engines in VTK/Titan and presents the parallel versions thereof which have already been implemented. The ease of use of these parallel engines is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; then, this theoretical property is verified with test runs that demonstrate optimal parallel speed-up with up to 200 processors.
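The parallel scalability claimed here rests on the fact that partial descriptive-statistics results merge in constant time. Below is a sketch of the standard pairwise combination of (count, mean, M2) aggregates, shown independently of Titan's actual implementation:

```python
import statistics

def combine(n_a, mean_a, m2_a, n_b, mean_b, m2_b):
    """Merge two partial (count, mean, sum-of-squared-deviations) results;
    O(1) per merge, which is what makes the engines scale."""
    n = n_a + n_b
    delta = mean_b - mean_a
    mean = mean_a + delta * n_b / n
    m2 = m2_a + m2_b + delta * delta * n_a * n_b / n
    return n, mean, m2

# Two "processors" each summarize half the data, then merge.
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
parts = []
for half in (data[:4], data[4:]):
    m = statistics.fmean(half)
    parts.append((len(half), m, sum((x - m) ** 2 for x in half)))
n, mean, m2 = combine(*parts[0], *parts[1])
print(mean, m2 / n)   # population mean and variance: 5.0 4.0
```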
Massively Parallel Processing for Fast and Accurate Stamping Simulations
NASA Astrophysics Data System (ADS)
Gress, Jeffrey J.; Xu, Siguang; Joshi, Ramesh; Wang, Chuan-tao; Paul, Sabu
2005-08-01
The competitive automotive market drives automotive manufacturers to speed up vehicle development cycles and reduce lead time. Fast tooling development is one of the key areas supporting fast and short vehicle development programs (VDP). In the past ten years, stamping simulation has become the most effective validation tool for predicting and resolving all potential formability and quality problems before the dies are physically made. Stamping simulation and formability analysis have become a critical business segment in the GM math-based die engineering process. As simulation becomes one of the major production tools in the engineering factory, simulation speed and accuracy are two of the most important measures of stamping simulation technology. The speed and time-in-system of forming analysis become even more critical to supporting fast VDP and tooling readiness. Since 1997, the General Motors Die Center has been working jointly with our software vendor to develop and implement a parallel version of simulation software for mass production analysis applications. By 2001, this technology had matured in the form of distributed memory processing (DMP) of draw die simulations in a networked distributed memory computing environment. In 2004, this technology was refined to massively parallel processing (MPP) and extended to line die forming analysis (draw, trim, flange, and associated spring-back) running on a dedicated computing environment. The evolution of this technology and the insight gained through the implementation of DMP/MPP technology, as well as performance benchmarks, are discussed in this publication.
Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications
NASA Technical Reports Server (NTRS)
Sun, Xian-He
1997-01-01
Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin2000, have successfully delivered high-performance computing power for solving some of the so-called "grand-challenge" problems. Despite initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan is to 1) develop highly accurate parallel numerical algorithms, 2) conduct preliminary testing to verify the effectiveness and potential of these algorithms, and 3) incorporate newly developed algorithms into actual simulation packages. The work plan has been well achieved. Two highly accurate, efficient Poisson solvers have been developed and tested based on two different approaches: (1) adopting a mathematical geometry which has a better capacity to describe the fluid, and (2) using a compact scheme to gain high-order accuracy in the numerical discretization. The previously developed Parallel Diagonal Dominant (PDD) algorithm and Reduced Parallel Diagonal Dominant (RPDD) algorithm have been carefully studied on different parallel platforms for different applications, and a NASA simulation code developed by Man M. Rai and his colleagues has been parallelized and implemented based on data dependency analysis. These achievements are addressed in detail in the paper.
NASA Technical Reports Server (NTRS)
Dorney, Suzanne; Dorney, Daniel J.; Huber, Frank; Sheffler, David A.; Turner, James E. (Technical Monitor)
2001-01-01
The advent of advanced computer architectures and parallel computing have led to a revolutionary change in the design process for turbomachinery components. Two- and three-dimensional steady-state computational flow procedures are now routinely used in the early stages of design. Unsteady flow analyses, however, are just beginning to be incorporated into design systems. This paper outlines the transition of a three-dimensional unsteady viscous flow analysis from the research environment into the design environment. The test case used to demonstrate the analysis is the full turbine system (high-pressure turbine, inter-turbine duct and low-pressure turbine) from an advanced turboprop engine.
PROTO-PLASM: parallel language for adaptive and scalable modelling of biosystems.
Bajaj, Chandrajit; DiCarlo, Antonio; Paoluzzi, Alberto
2008-09-13
This paper discusses the design goals and the first developments of PROTO-PLASM, a novel computational environment to produce libraries of executable, combinable and customizable computer models of natural and synthetic biosystems, aiming to provide a supporting framework for predictive understanding of structure and behaviour through multiscale geometric modelling and multiphysics simulations. Admittedly, the PROTO-PLASM platform is still in its infancy. Its computational framework--language, model library, integrated development environment and parallel engine--intends to provide patient-specific computational modelling and simulation of organs and biosystems, exploiting novel functionalities resulting from the symbolic combination of parametrized models of parts at various scales. PROTO-PLASM may define the model equations, but it is currently focused on the symbolic description of model geometry and on the parallel support of simulations. Conversely, CellML and SBML could be viewed as defining the behavioural functions (the model equations) to be used within a PROTO-PLASM program. Here we exemplify the basic functionalities of PROTO-PLASM by constructing a schematic heart model. We also discuss multiscale issues with reference to the geometric and physical modelling of neuromuscular junctions.
NASA Astrophysics Data System (ADS)
Pizette, Patrick; Govender, Nicolin; Wilke, Daniel N.; Abriak, Nor-Edine
2017-06-01
The use of the Discrete Element Method (DEM) for industrial civil engineering applications is currently limited due to the computational demands when large numbers of particles are considered. The graphics processing unit (GPU), with its highly parallelized hardware architecture, shows potential to enable solution of civil engineering problems using discrete granular approaches. We demonstrate in this study the practical utility of a validated GPU-enabled DEM modeling environment for simulating industrial-scale granular problems. As an illustration, the flow discharge of storage silos using 8 and 17 million particles is considered. DEM simulations have been performed to investigate the influence of particle size (equivalent size for the 20/40-mesh gravel) and induced shear stress for two hopper shapes. The preliminary results indicate that the shape of the hopper significantly influences the discharge rates for the same material. Specifically, this work shows that GPU-enabled DEM modeling environments can model industrial-scale problems on a single portable computer within a day for 30 seconds of process time.
Boyle, Peter A.; Christ, Norman H.; Gara, Alan; Mawhinney, Robert D.; Ohmacht, Martin; Sugavanam, Krishnan
2012-12-11
A prefetch system improves a performance of a parallel computing system. The parallel computing system includes a plurality of computing nodes. A computing node includes at least one processor and at least one memory device. The prefetch system includes at least one stream prefetch engine and at least one list prefetch engine. The prefetch system operates those engines simultaneously. After the at least one processor issues a command, the prefetch system passes the command to a stream prefetch engine and a list prefetch engine. The prefetch system operates the stream prefetch engine and the list prefetch engine to prefetch data to be needed in subsequent clock cycles in the processor in response to the passed command.
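A toy functional model of the two engine types named in the abstract, invented here for illustration (not the patent's hardware design): a stream prefetcher guesses sequential addresses while a list prefetcher replays a recorded access list, and both are consulted for every demand access:

```python
class StreamPrefetcher:
    """Guesses sequential access: prefetch the next `depth` addresses."""
    def __init__(self, depth=2):
        self.depth = depth
    def prefetch(self, addr):
        return [addr + i for i in range(1, self.depth + 1)]

class ListPrefetcher:
    """Replays a recorded address list: prefetch what followed last time."""
    def __init__(self, recorded):
        self.recorded = recorded
    def prefetch(self, addr):
        if addr in self.recorded:
            i = self.recorded.index(addr)
            return self.recorded[i + 1 : i + 3]
        return []

# Both engines handle every demand access simultaneously, as the claim states.
stream, lst = StreamPrefetcher(), ListPrefetcher([10, 42, 7, 10, 42])
for access in (10, 11, 42):
    print(access, stream.prefetch(access) + lst.prefetch(access))
```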
NASA Technical Reports Server (NTRS)
Liu, Nan-Suey
2001-01-01
A multi-disciplinary design/analysis tool for combustion systems is critical for optimizing the low-emission, high-performance combustor design process. Based on discussions between the then-NASA Lewis Research Center and the jet engine companies, an industry-government team was formed in early 1995 to develop the National Combustion Code (NCC), which is an integrated system of computer codes for the design and analysis of combustion systems. NCC has advanced features that address the need to meet designers' requirements such as "assured accuracy", "fast turnaround", and "acceptable cost". The NCC development team is comprised of Allison Engine Company (Allison), CFD Research Corporation (CFDRC), GE Aircraft Engines (GEAE), NASA Glenn Research Center (LeRC), and Pratt & Whitney (P&W). The "unstructured mesh" capability and "parallel computing" are fundamental features of NCC from its inception. The NCC system is composed of a set of "elements" which includes a grid generator, main flow solver, turbulence module, turbulence and chemistry interaction module, chemistry module, spray module, radiation heat transfer module, data visualization module, and a post-processor for evaluating engine performance parameters. Each element may have contributions from several team members. Such a multi-source multi-element system needs to be integrated in a way that facilitates inter-module data communication, flexibility in module selection, and ease of integration. The development of the NCC beta version was essentially completed in June 1998. Technical details of the NCC elements are given in the Reference List. Elements such as the baseline flow solver, turbulence module, and the chemistry module have been extensively validated, and their parallel performance on large-scale parallel systems has been evaluated and optimized. However, the scalar PDF module and the spray module, as well as their coupling with the baseline flow solver, were developed in a small-scale distributed computing environment. As a result, the validation of the NCC beta version as a whole was quite limited. Current effort has been focused on the validation of the integrated code and the evaluation/optimization of its overall performance on large-scale parallel systems.
Support for non-locking parallel reception of packets belonging to a single memory reception FIFO
Chen, Dong [Yorktown Heights, NY; Heidelberger, Philip [Yorktown Heights, NY; Salapura, Valentina [Yorktown Heights, NY; Senger, Robert M [Yorktown Heights, NY; Steinmacher-Burow, Burkhard [Boeblingen, DE; Sugawara, Yutaka [Yorktown Heights, NY
2011-01-27
A method and apparatus for distributed parallel messaging in a parallel computing system. A plurality of DMA engine units are configured in a multiprocessor system to operate in parallel, one DMA engine unit for transferring a current packet received at a network reception queue to a memory location in a memory FIFO (rmFIFO) region of a memory. A control unit implements logic to determine whether any prior received packet destined for that rmFIFO is still in a process of being stored in the associated memory by another DMA engine unit of the plurality, and prevent the one DMA engine unit from indicating completion of storing the current received packet in the reception memory FIFO (rmFIFO) until all prior received packets destined for that rmFIFO are completely stored by the other DMA engine units. Thus, there is provided non-locking support so that multiple packets destined for a single rmFIFO are transferred and stored in parallel to predetermined locations in a memory.
Compute as Fast as the Engineers Can Think! ULTRAFAST COMPUTING TEAM FINAL REPORT
NASA Technical Reports Server (NTRS)
Biedron, R. T.; Mehrotra, P.; Nelson, M. L.; Preston, M. L.; Rehder, J. J.; Rogers, J. L.; Rudy, D. H.; Sobieski, J.; Storaasli, O. O.
1999-01-01
This report documents findings and recommendations by the Ultrafast Computing Team (UCT). In the period 10-12/98, UCT reviewed design case scenarios for a supersonic transport and a reusable launch vehicle to derive computing requirements necessary for support of a design process with efficiency so radically improved that human thought rather than the computer paces the process. Assessment of the present computing capability against the above requirements indicated a need for further improvement in computing speed by several orders of magnitude to reduce time to solution from tens of hours to seconds in major applications. Evaluation of the trends in computer technology revealed a potential to attain the postulated improvement by further increases of single processor performance combined with massively parallel processing in a heterogeneous environment. However, utilization of massively parallel processing to its full capability will require redevelopment of the engineering analysis and optimization methods, including invention of new paradigms. To that end UCT recommends initiation of a new activity at LaRC called Computational Engineering for development of new methods and tools geared to the new computer architectures in disciplines, their coordination, and validation and benefit demonstration through applications.
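As a rough consistency check of the stated goal (assuming ten hours as representative of "tens of hours" and one second as the target), the required improvement is

\[ \frac{10\ \mathrm{h}}{1\ \mathrm{s}} = \frac{36\,000\ \mathrm{s}}{1\ \mathrm{s}} \approx 3.6 \times 10^{4}, \]

that is, between four and five orders of magnitude, in line with the report's call for improvement by "several orders of magnitude".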
On parallel hybrid-electric propulsion system for unmanned aerial vehicles
NASA Astrophysics Data System (ADS)
Hung, J. Y.; Gonzalez, L. F.
2012-05-01
This paper presents a review of existing and current developments and an analysis of Hybrid-Electric Propulsion Systems (HEPS) for small fixed-wing Unmanned Aerial Vehicles (UAVs). Efficient energy utilisation on a UAV is essential to its functioning, often to achieve the operational goals of range, endurance, and other specific mission requirements. Because of the limited space and mass budget on a UAV, there is often a delicate balance between the onboard energy available (i.e. fuel) and the operational goals. One technology with potential in this area is HEPS. In this paper, information on the state-of-the-art technology in this field of research is provided. A parallel HEPS for a small fixed-wing UAV incorporating an Ideal Operating Line (IOL) control strategy is described and simulated. Simulation models of the components in a HEPS were designed in the MATLAB Simulink environment. An IOL analysis of a UAV piston engine was used to determine the most efficient points of operation for this engine. The results show that a UAV equipped with this HEPS configuration is capable of achieving a fuel saving of 6.5% compared to the engine-only configuration.
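The IOL strategy described above amounts to running the engine at its most efficient torque for the current shaft speed and letting the electric machine make up the difference. The following is a minimal sketch under assumed numbers; the IOL table, function name, and state-of-charge limits are invented for illustration and are not taken from the paper's Simulink models.

```python
# Hypothetical IOL torque split for a parallel hybrid-electric UAV powertrain.
IOL_TORQUE = {2000: 9.0, 3000: 11.0, 4000: 12.5, 5000: 12.0}  # rpm -> N*m (invented)

def split_torque(demand_nm, rpm, soc, soc_min=0.2, soc_max=0.9):
    """Return (engine_torque, motor_torque); negative motor torque charges
    the battery. The engine is held on its Ideal Operating Line whenever
    the battery state of charge (soc) allows it."""
    engine = IOL_TORQUE.get(rpm, demand_nm)  # most efficient engine torque
    motor = demand_nm - engine               # motor absorbs/supplies the rest
    if motor < 0 and soc >= soc_max:         # battery full: stop charging
        engine, motor = demand_nm, 0.0
    if motor > 0 and soc <= soc_min:         # battery depleted: engine only
        engine, motor = demand_nm, 0.0
    return engine, motor
```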
Ceramic Technology For Advanced Heat Engines Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1990-12-01
Significant accomplishments in fabricating ceramic components for the Department of Energy (DOE), National Aeronautics and Space Administration (NASA), and Department of Defense (DoD) advanced heat engine programs have provided evidence that the operation of ceramic parts in high-temperature engine environments is feasible. However, these programs have also demonstrated that additional research is needed in materials and processing development, design methodology, and data base and life prediction before industry will have a sufficient technology base from which to produce reliable cost-effective ceramic engine components commercially. The objective of the project is to develop the industrial technology base required for reliable ceramics for application in advanced automotive heat engines. The project approach includes determining the mechanisms controlling reliability, improving processes for fabricating existing ceramics, developing new materials with increased reliability, and testing these materials in simulated engine environments to confirm reliability. Although this is a generic materials project, the focus is on the structural ceramics for advanced gas turbine and diesel engines, ceramic bearings and attachments, and ceramic coatings for thermal barrier and wear applications in these engines. This advanced materials technology is being developed in parallel and close coordination with the ongoing DOE and industry proof-of-concept engine development programs. To facilitate the rapid transfer of this technology to U.S. industry, the major portion of the work is being done in the ceramic industry, with technological support from government laboratories, other industrial laboratories, and universities. Abstracts prepared for appropriate papers.
Distributed parallel messaging for multiprocessor systems
Chen, Dong; Heidelberger, Philip; Salapura, Valentina; Senger, Robert M; Steinmacher-Burow, Burkhard; Sugawara, Yutaka
2013-06-04
A method and apparatus for distributed parallel messaging in a parallel computing system. The apparatus includes, at each node of a multiprocessor network, multiple injection messaging engine units and reception messaging engine units, each implementing a DMA engine and each supporting both multiple packet injection into, and multiple packet reception from, a network in parallel. The reception side of the messaging unit (MU) includes a switch interface enabling writing of data of a packet received from the network to the memory system. The transmission side of the messaging unit includes a switch interface for reading from the memory system when injecting packets into the network.
Applications of Thin Film Thermocouples for Surface Temperature Measurement
NASA Technical Reports Server (NTRS)
Martin, Lisa C.; Holanda, Raymond
1994-01-01
Thin film thermocouples provide a minimally intrusive means of measuring surface temperature in hostile, high temperature environments. Unlike wire thermocouples, thin films do not necessitate any machining of the surface, therefore leaving intact its structural integrity. Thin films are many orders of magnitude thinner than wire, resulting in less disruption to the gas flow and thermal patterns that exist in the operating environment. Thin film thermocouples have been developed for surface temperature measurement on a variety of engine materials. The sensors are fabricated in the NASA Lewis Research Center's Thin Film Sensor Lab, which is a class 1000 clean room. The thermocouples are platinum-13 percent rhodium versus platinum and are fabricated by the sputtering process. Thin film-to-leadwire connections are made using the parallel-gap welding process. Thermocouples have been developed for use on superalloys, ceramics and ceramic composites, and intermetallics. Some applications of thin film thermocouples are: temperature measurement of space shuttle main engine turbine blade materials, temperature measurement in gas turbine engine testing of advanced materials, and temperature and heat flux measurements in a diesel engine. Fabrication of thin film thermocouples is described. Sensor durability, drift rate, and maximum temperature capabilities are addressed.
Super and parallel computers and their impact on civil engineering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamat, M.P.
1986-01-01
This book presents the papers given at a conference on the use of supercomputers in civil engineering. Topics considered at the conference included solving nonlinear equations on a hypercube, a custom architectured parallel processing system, distributed data processing, algorithms, computer architecture, parallel processing, vector processing, computerized simulation, and cost benefit analysis.
The Mercury System: Embedding Computation into Disk Drives
2004-08-20
enabling technologies to build extremely fast data search engines. We do this by moving the search closer to the data, and performing it in hardware...engine searches in parallel across a disk or disk surface 2. System Parallelism: Searching is off-loaded to search engines and main processor can
NASA Technical Reports Server (NTRS)
West, Jeff
2015-01-01
The Space Launch System (SLS) vehicle consists of a Core Stage with four RS-25 engines and two Solid Rocket Boosters (SRBs). This vehicle is launched from the launch pad using a Mobile Launcher (ML), which supports the SLS vehicle until its liftoff from the ML under its own power. The combination of the four RS-25 engines and two SRBs generates a significant Ignition Over-Pressure (IOP) and acoustic sound environment. One of the mitigations of these environments is the Ignition Over-Pressure/Sound Suppression (IOP/SS) subsystem installed on the ML. This system consists of six water nozzles located parallel to and 24 inches downstream of each SRB nozzle exit plane, as well as 16 water nozzles located parallel to and 53 inches downstream of the RS-25 nozzle exit plane. During launch of the SLS vehicle, water is ejected through each water nozzle to reduce the intensity of the transient pressure environment imposed upon the SLS vehicle. While required for the mitigation of the transient pressure environment on the SLS vehicle, the IOP/SS subsystem interacts, possibly adversely, with other systems located on the launch pad. One system with which the IOP/SS water is anticipated to interact is the Hydrogen Burn-Off Igniter (HBOI) system, whose purpose is to ignite the unburned hydrogen/air mixture that develops in and around the nozzles of the RS-25 engines during engine start. Due to the close proximity of the water system to the HBOI system, the presence of the IOP/SS may degrade the effectiveness of the HBOI system. The IOP/SS water may also interact adversely with the RS-25 engine nozzles and the SRB nozzles: significant wetting of the RS-25 nozzles could result in a substantial weight of ice forming, and water accumulating inside the SRB nozzles upstream of the exit plane would pose significant additional blockage of the effluent exiting the nozzle at motor start, with detrimental effects. The purpose of the CFD simulations was to (i) characterize the location of the IOP/SS water after it is ejected from the IOP/SS nozzles, (ii) characterize the interaction of the IOP/SS system with the HBOI system, and (iii) characterize the interaction of the IOP/SS water with the RS-25 nozzles and the SRB nozzles.
NASA Technical Reports Server (NTRS)
OKeefe, Matthew (Editor); Kerr, Christopher L. (Editor)
1998-01-01
This report contains the abstracts and technical papers from the Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications, held June 15-18, 1998, in Scottsdale, Arizona. The purpose of the workshop is to bring together software developers in meteorology and oceanography to discuss software engineering and code design issues for parallel architectures, including Massively Parallel Processors (MPP's), Parallel Vector Processors (PVP's), Symmetric Multi-Processors (SMP's), Distributed Shared Memory (DSM) multi-processors, and clusters. Issues to be discussed include: (1) code architectures for current parallel models, including basic data structures, storage allocation, variable naming conventions, coding rules and styles, i/o and pre/post-processing of data; (2) designing modular code; (3) load balancing and domain decomposition; (4) techniques that exploit parallelism efficiently yet hide the machine-related details from the programmer; (5) tools for making the programmer more productive; and (6) the proliferation of programming models (F--, OpenMP, MPI, and HPF).
Fracture control of H-O engine components [titanium tin alloy fuel pump impellers]
NASA Technical Reports Server (NTRS)
Ryder, J. T.
1977-01-01
An investigation was made to obtain the material characterization and fatigue crack propagation data necessary to establish the salient characteristics of a Ti-6Al-2.5Sn(ELI) alloy fuel pump impeller to be used in a cryogenic service environment. Testing variables considered were: coupon orientation, frequency, load range ratio, and temperature. Data analysis correlated crack propagation data from conventional laboratory coupons with data from a parallel-sided rotating disk used to model rotor stresses. Four major design recommendations are discussed for cases in which the bore regions of fuel pump impellers operated in cryogenic environments are to be relatively highly stressed.
NASA Technical Reports Server (NTRS)
Ortega, J. M.
1985-01-01
Synopses are given for NASA supported work in computer science at the University of Virginia. Some areas of research include: error seeding as a testing method; knowledge representation for engineering design; analysis of faults in a multi-version software experiment; implementation of a parallel programming environment; two computer graphics systems for visualization of pressure distribution and convective density particles; task decomposition for multiple robot arms; vectorized incomplete conjugate gradient; and iterative methods for solving linear equations on the Flex/32.
Design and Implementation of a Distributed Version of the NASA Engine Performance Program
NASA Technical Reports Server (NTRS)
Cours, Jeffrey T.
1994-01-01
Distributed NEPP is a new version of the NASA Engine Performance Program that runs in parallel on a collection of Unix workstations connected through a network. The program is fault-tolerant, efficient, and shows significant speed-up in a multi-user, heterogeneous environment. This report describes the issues involved in designing distributed NEPP, the algorithms the program uses, and the performance distributed NEPP achieves. It develops an analytical model to predict and measure the performance of the simple distribution, multiple distribution, and fault-tolerant distribution algorithms that distributed NEPP incorporates. Finally, the appendices explain how to use distributed NEPP and document the organization of the program's source code.
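The report's analytical model itself is not reproduced here, but the general shape of such a model is easy to sketch. The fragment below is an illustration under assumed cost terms (serial per-case dispatch and collection overhead at the master, perfectly parallel case work); the constants and names are invented, not taken from the report.

```python
# Hypothetical master/worker cost model for distributing engine-cycle cases.
def predicted_time(n_cases, t_case, n_workers, t_dispatch=0.05, t_collect=0.05):
    """Estimated wall time: dispatch/collection is serialized at the master,
    while the case computations run in parallel on the workers."""
    serial = n_cases * (t_dispatch + t_collect)
    parallel = (n_cases * t_case) / n_workers
    return serial + parallel

def speedup(n_cases, t_case, n_workers):
    # Speedup relative to running every case on a single workstation.
    return (n_cases * t_case) / predicted_time(n_cases, t_case, n_workers)
```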
SAPNEW: Parallel finite element code for thin shell structures on the Alliant FX-80
NASA Astrophysics Data System (ADS)
Kamat, Manohar P.; Watson, Brian C.
1992-11-01
The finite element method has proven to be an invaluable tool for analysis and design of complex, high-performance systems, such as bladed-disk assemblies in aircraft turbofan engines. However, as the problem size increases, the computation time required by conventional computers can be prohibitively high. Parallel processing computers provide the means to overcome these computation time limits. This report summarizes the results of a research activity aimed at providing a finite element capability for analyzing turbomachinery bladed-disk assemblies in a vector/parallel processing environment. A special-purpose code, named with the acronym SAPNEW, has been developed to perform static and eigen analysis of multi-degree-of-freedom blade models built up from flat thin shell elements. SAPNEW provides a stand-alone capability for static and eigen analysis on the Alliant FX/80, a parallel processing computer. A preprocessor, named with the acronym NTOS, has been developed to accept NASTRAN input decks and convert them to the SAPNEW format to make SAPNEW more readily usable by researchers at NASA Lewis Research Center.
Roofline model toolkit: A practical tool for architectural and program analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lo, Yu Jung; Williams, Samuel; Van Straalen, Brian
We present preliminary results of the Roofline Toolkit for multicore, manycore, and accelerated architectures. This paper focuses on the processor architecture characterization engine, a collection of portable instrumented microbenchmarks implemented with the Message Passing Interface (MPI) and OpenMP, used to express thread-level parallelism. These benchmarks are specialized to quantify the behavior of different architectural features. Compared to previous work on performance characterization, these microbenchmarks focus on capturing the performance of each level of the memory hierarchy, along with thread-level parallelism, instruction-level parallelism, and explicit SIMD parallelism, measured in the context of the compilers and run-time environments. We also measure sustained PCIe throughput with four GPU memory management mechanisms. By combining results from the architecture characterization with the Roofline model based solely on architectural specifications, this work offers insights for performance prediction of current and future architectures and their software systems. To that end, we instrument three applications and plot their resultant performance on the corresponding Roofline model when run on a Blue Gene/Q architecture.
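The Roofline bound that such microbenchmarks feed is simple to state; the snippet below is a generic restatement with made-up peak numbers, not code from the toolkit.

```python
# Attainable performance = min(peak compute, arithmetic intensity x peak bandwidth).
def roofline_gflops(arith_intensity, peak_gflops=500.0, peak_gbs=100.0):
    """arith_intensity in FLOPs per byte; the peaks are assumed machine numbers."""
    return min(peak_gflops, arith_intensity * peak_gbs)

# A kernel at 0.25 FLOP/byte would be bandwidth-bound:
# min(500, 0.25 * 100) = 25 GFLOP/s.
```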
Cloud parallel processing of tandem mass spectrometry based proteomics data.
Mohammed, Yassene; Mostovenko, Ekaterina; Henneman, Alex A; Marissen, Rob J; Deelder, André M; Palmblad, Magnus
2012-10-05
Data analysis in mass spectrometry based proteomics struggles to keep pace with the advances in instrumentation and the increasing rate of data acquisition. Analyzing this data involves multiple steps requiring diverse software, using different algorithms and data formats. The speed and performance of mass spectral search engines are continuously improving, although not necessarily fast enough to meet the challenges posed by the volume of data acquired. Improving and parallelizing the search algorithms is one possibility; data decomposition presents another, simpler strategy for introducing parallelism. We describe a general method for parallelizing identification of tandem mass spectra using data decomposition that keeps the search engine intact and wraps the parallelization around it. We introduce two algorithms for decomposing mzXML files and recomposing resulting pepXML files. This makes the approach applicable to different search engines, including those relying on sequence databases and those searching spectral libraries. We use cloud computing to deliver the computational power and scientific workflow engines to interface and automate the different processing steps. We show how to leverage these technologies to achieve faster data analysis in proteomics and present three scientific workflows for parallel database as well as spectral library search using our data decomposition programs with X!Tandem and SpectraST.
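Because the decomposition approach leaves the search engine untouched, the wrapper logic reduces to splitting the input, running independent searches, and merging the results. A minimal sketch follows; the `search_engine` command and file naming are placeholders, not the paper's actual decomposition programs or workflows.

```python
# Hypothetical wrapper: data decomposition around an unmodified search engine.
from concurrent.futures import ProcessPoolExecutor
import subprocess

def search_chunk(chunk_path):
    # Run the intact search engine on one decomposed mzXML chunk.
    subprocess.run(["search_engine", chunk_path], check=True)  # placeholder CLI
    return chunk_path + ".pep.xml"

def parallel_search(chunk_paths, max_workers=8):
    # Chunks are independent, so they can be searched in parallel processes
    # (or, as in the paper, on separate cloud nodes); recomposition of the
    # per-chunk pepXML results would follow.
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(search_chunk, chunk_paths))
```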
Dual compile strategy for parallel heterogeneous execution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Tyler Barratt; Perry, James Thomas
2012-06-01
The purpose of the Dual Compile Strategy is to increase our trust in the Compute Engine during its execution of instructions. This is accomplished by introducing a heterogeneous Monitor Engine that checks the execution of the Compute Engine. This leads to the production of a second and custom set of instructions designed for monitoring the execution of the Compute Engine at runtime. This use of multiple engines differs from redundancy in that one engine is working on the application while the other engine is monitoring and checking in parallel, instead of both applications (and engines) performing the same work at the same time.
Large-Scale Distributed Computational Fluid Dynamics on the Information Power Grid Using Globus
NASA Technical Reports Server (NTRS)
Barnard, Stephen; Biswas, Rupak; Saini, Subhash; VanderWijngaart, Robertus; Yarrow, Maurice; Zechtzer, Lou; Foster, Ian; Larsson, Olle
1999-01-01
This paper describes an experiment in which a large-scale scientific application developed for tightly-coupled parallel machines is adapted to the distributed execution environment of the Information Power Grid (IPG). A brief overview of the IPG and a description of the computational fluid dynamics (CFD) algorithm are given. The Globus metacomputing toolkit is used as the enabling device for the geographically-distributed computation. Modifications related to latency hiding and load balancing were required for an efficient implementation of the CFD application in the IPG environment. Performance results on a pair of SGI Origin 2000 machines indicate that real scientific applications can be effectively implemented on the IPG; however, a significant amount of continued effort is required to make such an environment useful and accessible to scientists and engineers.
NASA Technical Reports Server (NTRS)
Dipippo, S.; Prendin, W.; Gasparoni, F.
1994-01-01
In spite of the apparent great differences between deep ocean and space environment, significant similarities can be recognized when considering the possible solutions and technologies enabling the development of remote automatic stations supporting the execution of scientific activities. In this sense it is believed that mutual benefits shall be derived from the exchange of experiences and results between people and organizations involved in research and engineering activities for hostile environments, such as space, deep sea, and polar areas. A significant example of possible technology transfer and common systematic approach is given, which describes in some detail how the solutions and the enabling technologies identified for an Abyssal Benthic Laboratory can be applied for the case of a lunar or planetary station.
A Java-Enabled Interactive Graphical Gas Turbine Propulsion System Simulator
NASA Technical Reports Server (NTRS)
Reed, John A.; Afjeh, Abdollah A.
1997-01-01
This paper describes a gas turbine simulation system which utilizes the newly developed Java language environment software system. The system provides an interactive graphical environment which allows the quick and efficient construction and analysis of arbitrary gas turbine propulsion systems. The simulation system couples a graphical user interface, developed using the Java Abstract Window Toolkit, and a transient, space- averaged, aero-thermodynamic gas turbine analysis method, both entirely coded in the Java language. The combined package provides analytical, graphical and data management tools which allow the user to construct and control engine simulations by manipulating graphical objects on the computer display screen. Distributed simulations, including parallel processing and distributed database access across the Internet and World-Wide Web (WWW), are made possible through services provided by the Java environment.
NASA Technical Reports Server (NTRS)
West, Jeff; Yang, H. Q.
2014-01-01
There are many instances involving liquid/gas interfaces and their dynamics in the design of liquid engine powered rockets such as the Space Launch System (SLS). Some examples of these applications are: propellant tank draining and slosh; subcritical-condition injector analysis for gas generators, preburners, and thrust chambers; water deluge mitigation for launch-induced environments; and even solid rocket motor liquid slag dynamics. Commercially available CFD programs simulating gas/liquid interfaces using the Volume of Fluid approach are currently limited in their parallel scalability. In 2010, for instance, an internal NASA/MSFC review of three commercial tools revealed that parallel scalability was seriously compromised at 8 CPUs and no additional speedup was possible after 32 CPUs. Other non-interface CFD applications at the time were demonstrating useful parallel scalability up to 4,096 processors or more. Based on this review, NASA/MSFC initiated an effort to implement a Volume of Fluid capability within the unstructured-mesh, pressure-based CFD program Loci-STREAM. After verification was achieved by comparing results to the commercial CFD program CFD-ACE+, and validation by direct comparison with data, Loci-STREAM-VoF is now the production CFD tool for propellant slosh force and slosh damping rate simulations at NASA/MSFC. On these applications, good parallel scalability has been demonstrated for problem sizes of tens of millions of cells and thousands of CPU cores. Ongoing efforts are focused on the application of Loci-STREAM-VoF to predict the transient flow patterns of water on the SLS Mobile Launch Platform in order to support the phasing of water for launch environment mitigation so that detrimental effects on the vehicle are avoided.
NASA Astrophysics Data System (ADS)
Makarov, M.; Shchanikov, S.; Trantina, N.
2017-01-01
We have conducted research into the properties of nanoscale objects that are most important for their future application, modelling these objects both as free-standing physical elements outside the structure of an engineering system designed for their integration and as part of a system that operates under the influence of the external environment. For the empirical research undertaken within the scope of this work, we have chosen a nanoscale electronic element intended for use in designing information processing systems with parallel architecture: the memristor. The target function of the research was to provide the maximum fault-tolerance index of a memristor-based system under all possible impacts of internal destabilizing factors and the external environment. The research results have enabled us to identify and classify the factors that determine the fault-tolerance index of the hardware implementation of a computing system based on this nanoscale element.
High performance computing and communications: Advancing the frontiers of information technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1997-12-31
This report, which supplements the President's Fiscal Year 1997 Budget, describes the interagency High Performance Computing and Communications (HPCC) Program. The HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of accomplishments to its credit. Over its five-year history, the HPCC Program has focused on developing high performance computing and communications technologies that can be applied to computation-intensive applications. Major highlights for FY 1996: (1) High performance computing systems enable practical solutions to complex problems with accuracies not possible five years ago; (2) HPCC-funded research in very large scale networking techniques has been instrumental in the evolution of the Internet, which continues exponential growth in size, speed, and availability of information; (3) The combination of hardware capability measured in gigaflop/s, networking technology measured in gigabit/s, and new computational science techniques for modeling phenomena has demonstrated that very large scale accurate scientific calculations can be executed across heterogeneous parallel processing systems located thousands of miles apart; (4) Federal investments in HPCC software R and D support researchers who pioneered the development of parallel languages and compilers, high performance mathematical, engineering, and scientific libraries, and software tools--technologies that allow scientists to use powerful parallel systems to focus on Federal agency mission applications; and (5) HPCC support for virtual environments has enabled the development of immersive technologies, where researchers can explore and manipulate multi-dimensional scientific and engineering problems. Educational programs fostered by the HPCC Program have brought into classrooms new science and engineering curricula designed to teach computational science. This document contains a small sample of the significant HPCC Program accomplishments in FY 1996.
An Asynchronous Many-Task Implementation of In-Situ Statistical Analysis using Legion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pebay, Philippe Pierre; Bennett, Janine Camille
2015-11-01
In this report, we propose a framework for the design and implementation of in-situ analyses using an asynchronous many-task (AMT) model, using the Legion programming model together with the MiniAero mini-application as a surrogate for full-scale parallel scientific computing applications. The bulk of this work consists of converting the Learn/Derive/Assess model, which we had initially developed for parallel statistical analysis using MPI [PTBM11], from an SPMD to an AMT model. To this end, we propose an original use of the concept of Legion logical regions as a replacement for the parallel communication schemes used for the only operation of the statistics engines that requires explicit communication. We then evaluate this proposed scheme in a shared memory environment, using the Legion port of MiniAero as a proxy for a full-scale scientific application, as a means to provide input data sets of variable size for the in-situ statistical analyses in an AMT context. We demonstrate in particular that the approach has merit, and warrants further investigation, in collaboration with ongoing efforts to improve the overall parallel performance of the Legion system.
A versatile software package for inter-subject correlation based analyses of fMRI.
Kauppi, Jukka-Pekka; Pajula, Juha; Tohka, Jussi
2014-01-01
In the inter-subject correlation (ISC) based analysis of functional magnetic resonance imaging (fMRI) data, the extent of shared processing across subjects during the experiment is determined by calculating correlation coefficients between the fMRI time series of the subjects in the corresponding brain locations. This implies that ISC can be used to analyze fMRI data without explicitly modeling the stimulus, and thus ISC is a potential method to analyze fMRI data acquired under complex naturalistic stimuli. Despite the suitability of the ISC-based approach to analyzing complex fMRI data, no generic software tools have been made available for this purpose, limiting widespread use of ISC-based analysis techniques in the neuroimaging community. In this paper, we present a graphical user interface (GUI) based software package, ISC Toolbox, implemented in Matlab for computing various ISC-based analyses. Many advanced computations, such as comparison of ISCs between different stimuli, time window ISC, and inter-subject phase synchronization, are supported by the toolbox. The analyses are coupled with re-sampling based statistical inference. The ISC-based analyses are data and computation intensive, and the ISC Toolbox is equipped with mechanisms to execute the parallel computations in a cluster environment automatically, with automatic detection of the cluster environment in use. Currently, SGE-based (Oracle Grid Engine, Son of a Grid Engine, or Open Grid Scheduler) and Slurm environments are supported. In this paper, we present a detailed account of the methods behind the ISC Toolbox and its implementation, and demonstrate possible uses of the toolbox by summarizing selected example applications. We also report computation time experiments using a single desktop computer and two grid environments, demonstrating that parallelization effectively reduces the computing time. The ISC Toolbox is available at https://code.google.com/p/isc-toolbox/
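For orientation, the core ISC computation is just a mean of pairwise Pearson correlations per brain location. The NumPy sketch below illustrates the idea only; it is not the toolbox's Matlab implementation.

```python
# Illustrative ISC map: mean pairwise correlation of subjects' time series.
import numpy as np
from itertools import combinations

def isc_map(data):
    """data: array of shape (n_subjects, n_timepoints, n_voxels).
    Returns the mean pairwise inter-subject correlation per voxel."""
    pairs = list(combinations(range(data.shape[0]), 2))
    isc = np.zeros(data.shape[2])
    for i, j in pairs:
        a = data[i] - data[i].mean(axis=0)  # center each voxel's series
        b = data[j] - data[j].mean(axis=0)
        num = (a * b).sum(axis=0)
        den = np.sqrt((a ** 2).sum(axis=0) * (b ** 2).sum(axis=0))
        isc += num / den
    return isc / len(pairs)
```

Each voxel is independent of the others, which is what makes the automatic cluster parallelization described above straightforward.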
The effect of selection environment on the probability of parallel evolution.
Bailey, Susan F; Rodrigue, Nicolas; Kassen, Rees
2015-06-01
Across the great diversity of life, there are many compelling examples of parallel and convergent evolution: similar evolutionary changes arising in independently evolving populations. Parallel evolution is often taken to be strong evidence of adaptation occurring in populations that are highly constrained in their genetic variation. Theoretical models suggest a few potential factors driving the probability of parallel evolution, but experimental tests are needed. In this study, we quantify the degree of parallel evolution in 15 replicate populations of Pseudomonas fluorescens evolved in five different environments that varied in resource type and arrangement. We identified repeat changes across multiple levels of biological organization, from phenotype, to gene, to nucleotide, and tested the impact of (1) selection environment, (2) the degree of adaptation, and (3) the degree of heterogeneity in the environment on the degree of parallel evolution at the gene level. We saw, as expected, that parallel evolution occurred more often between populations evolved in the same environment; however, the extent of parallel evolution varied widely. The degree of adaptation did not significantly explain variation in the extent of parallelism in our system, but the number of available beneficial mutations correlated negatively with parallel evolution. In addition, the degree of parallel evolution was significantly higher in populations evolved in a spatially structured, multiresource environment, suggesting that environmental heterogeneity may be an important factor constraining adaptation. Overall, our results stress the importance of environment in driving parallel evolutionary changes and point to a number of avenues for future work for understanding when evolution is predictable.
Low latency, high bandwidth data communications between compute nodes in a parallel computer
Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.
2010-11-02
Methods, parallel computers, and computer program products are disclosed for low latency, high bandwidth data communications between compute nodes in a parallel computer. Embodiments include receiving, by an origin direct memory access (`DMA`) engine of an origin compute node, data for transfer to a target compute node; sending, by the origin DMA engine of the origin compute node to a target DMA engine on the target compute node, a request to send (`RTS`) message; transferring, by the origin DMA engine, a predetermined portion of the data to the target compute node using a memory FIFO operation; determining, by the origin DMA engine, whether an acknowledgement of the RTS message has been received from the target DMA engine; if an acknowledgement of the RTS message has not been received, transferring, by the origin DMA engine, another predetermined portion of the data to the target compute node using a memory FIFO operation; and if the acknowledgement of the RTS message has been received by the origin DMA engine, transferring, by the origin DMA engine, any remaining portion of the data to the target compute node using a direct put operation.
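Stripped of the patent language, the claimed control flow is a simple loop. The sketch below restates it with placeholder callables for the hardware operations; it is a reading of the abstract, not the patent's implementation.

```python
CHUNK = 64 * 1024  # assumed size of each "predetermined portion"

def send_with_rts(data, send_rts, fifo_put, direct_put, ack_received):
    send_rts()  # request to send (RTS) to the target DMA engine
    offset = 0
    # Until the RTS is acknowledged, keep streaming fixed-size portions
    # through memory FIFO operations so the transfer makes progress.
    while offset < len(data) and not ack_received():
        fifo_put(data[offset:offset + CHUNK])
        offset += CHUNK
    # Once acknowledged, send whatever remains as a single direct put.
    if offset < len(data):
        direct_put(data[offset:])
```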
NASA Technical Reports Server (NTRS)
Sanz, J.; Pischel, K.; Hubler, D.
1992-01-01
An application for parallel computation on a combined cluster of powerful workstations and supercomputers was developed. Parallel Virtual Machine (PVM) is used as the message-passing language in a macro-tasking parallelization of the Aerodynamic Inverse Design and Analysis for a Full Engine computer code. The heterogeneous nature of the cluster is handled seamlessly by the controlling host machine. Communication is established via Ethernet with the TCP/IP protocol over an open network. A reasonable overhead is imposed for internode communication, rendering an efficient utilization of the engaged processors. Perhaps one of the most interesting features of the system is its versatility, which permits use of whichever available computational resources are experiencing less use at a given point in time.
Parallel processing for scientific computations
NASA Technical Reports Server (NTRS)
Alkhatib, Hasan S.
1991-01-01
The main contribution of the effort in the last two years is the introduction of the MOPPS system, developed after an extensive literature search and described next. MOPPS employs a new solution to the problem of managing programs which solve scientific and engineering applications in a distributed processing environment. Autonomous computers cooperate efficiently in solving large scientific problems with this solution. MOPPS has the advantage of not assuming the presence of any particular network topology or configuration, computer architecture, or operating system. It imposes little overhead on network and processor resources while efficiently managing programs concurrently. The core of MOPPS is an intelligent program manager that builds a knowledge base of the execution performance of the parallel programs it is managing under various conditions. The manager applies this knowledge to improve the performance of future runs. The program manager learns from experience.
Myria: Scalable Analytics as a Service
NASA Astrophysics Data System (ADS)
Howe, B.; Halperin, D.; Whitaker, A.
2014-12-01
At the UW eScience Institute, we're working to empower non-experts, especially in the sciences, to write and use data-parallel algorithms. To this end, we are building Myria, a web-based platform for scalable analytics and data-parallel programming. Myria's internal model of computation is the relational algebra extended with iteration, such that every program is inherently data-parallel, just as every query in a database is inherently data-parallel. But unlike databases, iteration is a first-class concept, allowing us to express machine learning tasks, graph traversal tasks, and more. Programs can be expressed in a number of languages and can be executed on a number of execution environments, but we emphasize a particular language called MyriaL, which supports both imperative and declarative styles, and a particular execution engine called MyriaX, which uses an in-memory column-oriented representation and asynchronous iteration. We deliver Myria over the web as a service, providing an editor, performance analysis tools, and catalog browsing features in a single environment. We find that this web-based "delivery vector" is critical in reaching non-experts: they are insulated from the irrelevant technical work associated with installation, configuration, and resource management. The MyriaX backend, one of several execution runtimes we support, is a main-memory, column-oriented, RDBMS-on-the-worker system that supports cyclic data flows as a first-class citizen and has been shown to outperform competitive systems on 100-machine cluster sizes. I will describe the Myria system, give a demo, and present some new results in large-scale oceanographic microbiology.
On the conversion of infrared radiation from fission reactor-based photon engine into parallel beam
NASA Astrophysics Data System (ADS)
Gulevich, Andrey V.; Levchenko, Vladislav E.; Loginov, Nicolay I.; Kukharchuk, Oleg F.; Evtodiev, Denis A.; Zrodnikov, Anatoly V.
2002-01-01
The efficiency of converting infrared radiation from a fission reactor-based photon engine into a parallel photon beam is discussed. Two different ways of doing this are considered. One is to use a parabolic mirror to convert the infrared radiation into a parallel photon beam. The other is based on the use of a special lattice consisting of numerous light conductors. The experimental facility and some results are described.
NASA Technical Reports Server (NTRS)
Ortega, J. M.
1984-01-01
Several short summaries of the work performed during this reporting period are presented. Topics discussed in this document include: (1) detecting seeded errors via simple techniques; (2) knowledge representation for engineering design; (3) analysis of faults in a multiversion software experiment; (4) implementation of a parallel programming environment; (5) symbolic execution of concurrent programs; (6) two computer graphics systems for visualization of pressure distribution and convective density particles; (7) design of a source code management system; (8) vectorizing incomplete conjugate gradient on the Cyber 203/205; (9) extensions of domain testing theory; and (10) a performance analyzer for the PISCES system.
An engineering approach to automatic programming
NASA Technical Reports Server (NTRS)
Rubin, Stuart H.
1990-01-01
An exploratory study of the automatic generation and optimization of symbolic programs was undertaken using DECOM, a prototypical requirement specification model implemented in pure LISP. It was concluded, on the basis of this study, that symbolic processing languages such as LISP can support a style of programming based upon formal transformation and dependent upon the expression of constraints in an object-oriented environment. Such languages can represent all aspects of the software generation process (including heuristic algorithms for effecting parallel search) as dynamic processes, since data and program are represented in a uniform format.
Scalable and reusable emulator for evaluating the performance of SS7 networks
NASA Astrophysics Data System (ADS)
Lazar, Aurel A.; Tseng, Kent H.; Lim, Koon Seng; Choe, Winston
1994-04-01
A scalable and reusable emulator was designed and implemented for studying the behavior of SS7 networks. The emulator design was largely based on public domain software. It was developed on top of an environment supported by PVM, the Parallel Virtual Machine, and managed by OSIMIS, the OSI Management Information Service platform. The emulator runs on top of a commercially available ATM LAN interconnecting engineering workstations. As a case study for evaluating the emulator, the behavior of the Singapore National SS7 Network under fault and unbalanced loading conditions was investigated.
Parallel Hybrid Gas-Electric Geared Turbofan Engine Conceptual Design and Benefits Analysis
NASA Technical Reports Server (NTRS)
Lents, Charles; Hardin, Larry; Rheaume, Jonathan; Kohlman, Lee
2016-01-01
The conceptual design of a parallel gas-electric hybrid propulsion system for a conventional single aisle twin engine tube and wing vehicle has been developed. The study baseline vehicle and engine technology are discussed, followed by results of the hybrid propulsion system sizing and performance analysis. The weights analysis for the electric energy storage & conversion system and thermal management system is described. Finally, the potential system benefits are assessed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saad, Tony; Sutherland, James C.
2016-05-04
To address the coding and software challenges of modern hybrid architectures, we propose an approach to multiphysics code development for high-performance computing. This approach is based on using a Domain Specific Language (DSL) in tandem with a directed acyclic graph (DAG) representation of the problem to be solved that allows runtime algorithm generation. When coupled with a large-scale parallel framework, the result is a portable development framework capable of executing on hybrid platforms and handling the challenges of multiphysics applications. In addition, we share our experience developing a code in such an environment, an effort that spans an interdisciplinary team of engineers and computer scientists.
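The DAG idea at the heart of this approach is language-agnostic: tasks declare what they consume and produce, and an executable order is generated at runtime. The toy below illustrates that with Python's standard library; it is a generic sketch, not the authors' DSL or framework.

```python
# Toy runtime algorithm generation: topologically order declared tasks.
from graphlib import TopologicalSorter

tasks = {
    "density":  {"needs": [],                       "run": lambda: print("density")},
    "velocity": {"needs": ["density"],              "run": lambda: print("velocity")},
    "pressure": {"needs": ["density"],              "run": lambda: print("pressure")},
    "energy":   {"needs": ["velocity", "pressure"], "run": lambda: print("energy")},
}

graph = {name: spec["needs"] for name, spec in tasks.items()}
for name in TopologicalSorter(graph).static_order():
    tasks[name]["run"]()  # independent nodes could also execute concurrently
```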
GeoEnvironmental Education Through Multidisciplinary Research
NASA Astrophysics Data System (ADS)
Padilla, I. Y.; Hwang, S.
2007-12-01
The growing need to understand environmental and geological processes, their impacts, and solutions in a dynamic world requires a diverse, multidiscipline, and multicultural approach in science and engineering. In the last few years, faculty at the University of Puerto Rico, Mayagüez have engaged in education and training activities aimed at developing a critical mass of students that can address a wide range of geoenvironmental problems through multidisciplinary research. Students of diverse age, gender, culture, and academic disciplines addressing different research questions work together in a common space. Hierarchy assignments use senior students as primary mentors, but foster work at parallel levels that require sharing and developing knowledge and research resources. The activities have resulted in a significant increase in the number of diverse students in science and engineering areas related to the environment.
Flow of a Gas Turbine Engine Low-Pressure Subsystem Simulated
NASA Technical Reports Server (NTRS)
Veres, Joseph P.
1997-01-01
The NASA Lewis Research Center is managing a task to numerically simulate overnight, on a parallel computing testbed, the aerodynamic flow in the complete low-pressure subsystem (LPS) of a gas turbine engine. The model solves the three-dimensional Navier- Stokes flow equations through all the components within the LPS, as well as the external flow around the engine nacelle. The LPS modeling task is being performed by Allison Engine Company under the Small Engine Technology contract. The large computer simulation was evaluated on networked computer systems using 8, 16, and 32 processors, with the parallel computing efficiency reaching 75 percent when 16 processors were used.
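Using the standard definition of parallel efficiency \(E = S/p\), the quoted 75 percent on 16 processors corresponds to a speedup of

\[ S = E \times p = 0.75 \times 16 = 12 \]

over a single processor.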
Biomimetic approach to cardiac tissue engineering.
Radisic, M; Park, H; Gerecht, S; Cannizzaro, C; Langer, R; Vunjak-Novakovic, G
2007-08-29
Here, we review an approach to tissue engineering of functional myocardium that is biomimetic in nature, as it involves the use of culture systems designed to recapitulate some aspects of the actual in vivo environment. To mimic the capillary network, subpopulations of neonatal rat heart cells were cultured on a highly porous elastomer scaffold with a parallel array of channels perfused with culture medium. To mimic oxygen supply by haemoglobin, the culture medium was supplemented with a perfluorocarbon (PFC) emulsion. Constructs cultivated in the presence of PFC contained higher amounts of DNA and cardiac markers and had significantly better contractile properties than control constructs cultured without PFC. To induce synchronous contractions of cultured constructs, electrical signals mimicking those in native heart were applied. Over only 8 days of cultivation, electrical stimulation induced cell alignment and coupling, markedly increased the amplitude of synchronous construct contractions and resulted in a remarkable level of ultrastructural organization. The biomimetic approach is discussed in the overall context of cardiac tissue engineering, and the possibility to engineer functional human cardiac grafts based on human stem cells.
Advanced propulsion system concept for hybrid vehicles
NASA Technical Reports Server (NTRS)
Bhate, S.; Chen, H.; Dochat, G.
1980-01-01
A series hybrid system, utilizing a free piston Stirling engine with a linear alternator, and a parallel hybrid system, incorporating a kinematic Stirling engine, are analyzed for various specified reference missions/vehicles ranging from a small two passenger commuter vehicle to a van. Parametric studies for each configuration, detail tradeoff studies to determine engine, battery and system definition, short term energy storage evaluation, and detail life cycle cost studies were performed. Results indicate that the selection of a parallel Stirling engine/electric, hybrid propulsion system can significantly reduce petroleum consumption by 70 percent over present conventional vehicles.
NASA Astrophysics Data System (ADS)
Li, Gen; Tang, Chun-An; Liang, Zheng-Zhao
2017-01-01
Multi-scale high-resolution modeling of the rock failure process is a powerful means in modern rock mechanics studies to reveal complex failure mechanisms and to evaluate engineering risks. However, multi-scale continuous modeling of rock, from deformation and damage to failure, places high requirements on the design, implementation scheme, and computational capacity of the numerical software system. This study is aimed at developing a parallel finite element procedure, a parallel rock failure process analysis (RFPA) simulator capable of modeling the whole trans-scale failure process of rock. Based on the statistical meso-damage mechanical method, the RFPA simulator is able to construct heterogeneous rock models with multiple mechanical properties and to represent the trans-scale propagation of cracks, in which the stress and strain fields are solved for the damage evolution analysis of each representative volume element by the parallel finite element method (FEM) solver. This paper describes the theoretical basis of the approach and provides the details of the parallel implementation on an interactive Windows/Linux platform. A numerical model is built to test the parallel performance of the FEM solver. Numerical simulations are then carried out on a laboratory-scale uniaxial compression test, and on field-scale net fracture spacing and engineering-scale rock slope examples, respectively. The simulation results indicate that relatively high speedup and computational efficiency can be achieved by the parallel FEM solver with a reasonable boot process. In the laboratory-scale simulation, well-known physical phenomena, such as the macroscopic fracture pattern and stress-strain responses, can be reproduced. In the field-scale simulation, the formation process of net fracture spacing from initiation, propagation to saturation can be revealed completely. In the engineering-scale simulation, the whole progressive failure process of the rock slope can be well modeled. It is shown that the parallel FE simulator developed in this study is an efficient tool for modeling the whole trans-scale failure process of rock from meso- to engineering-scale.
BPELPower—A BPEL execution engine for geospatial web services
NASA Astrophysics Data System (ADS)
Yu, Genong (Eugene); Zhao, Peisheng; Di, Liping; Chen, Aijun; Deng, Meixia; Bai, Yuqi
2012-10-01
The Business Process Execution Language (BPEL) has become a popular choice for orchestrating and executing workflows in the Web environment. As one special kind of scientific workflow, geospatial Web processing workflows are data-intensive, deal with complex structures in data and geographic features, and execute automatically with limited human intervention. To enable the proper execution and coordination of geospatial workflows, a specially enhanced BPEL execution engine is required. BPELPower was designed, developed, and implemented as a generic BPEL execution engine with enhancements for executing geospatial workflows. The enhancements are especially in its capabilities in handling Geography Markup Language (GML) and standard geospatial Web services, such as the Web Processing Service (WPS) and the Web Feature Service (WFS). BPELPower has been used in several demonstrations over the decade. Two scenarios were discussed in detail to demonstrate the capabilities of BPELPower. That study showed a standard-compliant, Web-based approach for properly supporting geospatial processing, with the only enhancement at the implementation level. Pattern-based evaluation and performance improvement of the engine are discussed: BPELPower directly supports 22 workflow control patterns and 17 workflow data patterns. In the future, the engine will be enhanced with high performance parallel processing and broad Web paradigms.
ELISA, a demonstrator environment for information systems architecture design
NASA Technical Reports Server (NTRS)
Panem, Chantal
1994-01-01
This paper describes an approach to reusing software engineering technology in the area of ground space system design. System engineers have many needs similar to those of software developers: sharing a common database, capitalization of knowledge, definition of a common design process, and communication between different technical domains. Moreover, system designers need to simulate their systems dynamically as early as possible. Software development environments, methods, and tools have now become operational and widely used. Their architecture is based on a unique object base and a set of common management services, and they host a family of tools for each life cycle activity. In late '92, CNES decided to develop a demonstrative software environment supporting some system activities. The design of ground space data processing systems was chosen as the application domain. ELISA (Integrated Software Environment for Architectures Specification) was specified as a 'demonstrator', i.e. a sufficient basis for demonstrations, evaluation, and future operational enhancements. A process with three phases was implemented: system requirements definition, design of system architecture models, and selection of physical architectures. Each phase is composed of several activities that can be performed in parallel, with the provision of Commercial Off-The-Shelf (COTS) tools. ELISA was delivered to CNES in January 94 and is currently used for demonstrations and evaluations on real projects (e.g. the SPOT4 Satellite Control Center). New evolutions are on the way.
Evolutionary computing for the design search and optimization of space vehicle power subsystems
NASA Technical Reports Server (NTRS)
Kordon, Mark; Klimeck, Gerhard; Hanks, David; Hua, Hook
2004-01-01
Evolutionary computing has proven to be a straightforward and robust approach for optimizing a wide range of difficult analysis and design problems. This paper discusses the application of these techniques to an existing space vehicle power subsystem resource and performance analysis simulation in a parallel processing environment. Our preliminary results demonstrate that this approach has the potential to improve the space system trade study process by allowing engineers to statistically weight subsystem goals of mass, cost, and performance and then automatically size power elements based on anticipated performance of the subsystem rather than on worst-case estimates.
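A minimal sketch of the weighted-objective evolutionary search described above follows. The weights, the design parameterization, and the absence of normalization are all illustrative assumptions, not details of the simulation in the paper.

```python
# Toy evolutionary sizing loop with statistically weighted subsystem goals.
import random

def fitness(design, w_mass=0.4, w_cost=0.3, w_perf=0.3):
    # design: dict with "mass" (kg), "cost" ($), "perf" (margin, 0..1).
    # Real use would normalize the terms; omitted here for brevity.
    return -w_mass * design["mass"] - w_cost * design["cost"] + w_perf * design["perf"]

def evolve(population, generations=50, mutation=0.1):
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: len(population) // 2]  # keep the fitter half
        children = [
            {k: v * (1 + random.uniform(-mutation, mutation)) for k, v in p.items()}
            for p in parents
        ]
        population = parents + children
    return max(population, key=fitness)
```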
High-Performance Compute Infrastructure in Astronomy: 2020 Is Only Months Away
NASA Astrophysics Data System (ADS)
Berriman, B.; Deelman, E.; Juve, G.; Rynge, M.; Vöckler, J. S.
2012-09-01
By 2020, astronomy will be awash with as much as 60 PB of public data. Full scientific exploitation of such massive volumes of data will require high-performance computing on server farms co-located with the data. Development of this computing model will be a community-wide enterprise that has profound cultural and technical implications. Astronomers must be prepared to develop environment-agnostic applications that support parallel processing. The community must investigate the applicability and cost-benefit of emerging technologies such as cloud computing to astronomy, and must engage the Computer Science community to develop science-driven cyberinfrastructure such as workflow schedulers and optimizers. We report here the results of collaborations between a science center, IPAC, and a Computer Science research institute, ISI. These collaborations may be considered pathfinders in developing a high-performance compute infrastructure in astronomy. They investigated two exemplar large-scale science-driver workflow applications: (1) calculation of an infrared atlas of the Galactic Plane at 18 different wavelengths by placing data from multiple surveys on a common plate scale and co-registering all the pixels; and (2) calculation of an atlas of periodicities present in the public Kepler data sets, which currently contain 380,000 light curves. These products have been generated with two workflow applications, written in C for performance and designed to support parallel processing on multiple environments and platforms, but with different compute resource needs: the Montage image mosaic engine is I/O-bound, and the NASA Star and Exoplanet Database periodogram code is CPU-bound. Our presentation will report cost and performance metrics and lessons learned for continuing development. Applicability of cloud computing: commercial cloud providers generally charge for all operations, including processing, transfer of input and output data, and storage of data, so the costs of running applications vary widely according to how they use resources. The cloud is well suited to processing CPU-bound (and memory-bound) workflows such as the periodogram code, given the relatively low cost of processing in comparison with I/O operations. I/O-bound applications such as Montage perform best on high-performance clusters with fast networks and parallel file systems. Science-driven cyberinfrastructure: Montage has been widely used as a driver application to develop workflow management services, such as task scheduling in distributed environments, designing fault tolerance techniques for job schedulers, and developing workflow orchestration techniques. Running parallel applications across distributed cloud environments: data processing will eventually take place in parallel, distributed across cyberinfrastructure environments having different architectures. We have used the Pegasus Workflow Management System (WMS) to successfully run applications across three very different environments: TeraGrid, OSG (Open Science Grid), and FutureGrid. Provisioning resources across different grids and clouds (also referred to as Sky Computing) involves establishing a distributed environment in which issues of, e.g., remote job submission, data management, and security need to be addressed. This environment also requires building virtual machine images that can run in different environments. Usually, each cloud provides basic images that can be customized with additional software and services.
In most of our work, we provisioned compute resources using a custom application called Wrangler. Pegasus WMS abstracts the architectures of the compute environments away from the end user, and can be considered a first-generation tool suitable for scientists to run their applications on disparate environments.
PARALLEL PERTURBATION MODEL FOR CYCLE TO CYCLE VARIABILITY PPM4CCV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ameen, Muhsin Mohammed; Som, Sibendu
This code consists of a Fortran 90 implementation of the parallel perturbation model to compute cyclic variability in spark-ignition (SI) engines. Cycle-to-cycle variability (CCV) is known to be detrimental to SI engine operation, resulting in partial burn and knock and in an overall reduction in engine reliability. Numerical prediction of CCV in SI engines is extremely challenging for two key reasons: (i) high-fidelity methods such as large eddy simulation (LES) are required to accurately capture the in-cylinder turbulent flow field, and (ii) CCV is experienced over long timescales, and hence the simulations need to be performed for hundreds of consecutive cycles. In the new technique, the strategy is to perform multiple parallel simulations, each of which encompasses 2-3 cycles, by effectively perturbing simulation parameters such as the initial and boundary conditions. The PPM4CCV code is a pre-processing code and can be coupled with any engine CFD code. PPM4CCV was coupled with the Converge CFD code, and a 10-fold speedup over conventional multi-cycle LES was demonstrated in predicting the CCV for a motored engine. More recently, the model has also been applied to fired engines, including port fuel injected (PFI) and direct injection spark ignition engines, and the preliminary results are very encouraging.
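A minimal sketch of the perturbation strategy, with a stand-in surrogate in place of a real CFD cycle: each of N short runs perturbs the nominal initial velocity field with its own random seed, the runs are mutually independent (hence embarrassingly parallel), and the spread of the resulting peak pressures approximates the CCV statistics. Everything here, including the 5% perturbation level, is an assumption for illustration.

    /* Sketch: parallel perturbation of initial conditions for CCV.
     * The "simulation" is a toy surrogate, not an engine CFD code. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    #define N_INSTANCES 64
    #define N_CELLS     1000

    /* Toy surrogate for the peak cylinder pressure of one short run,
     * given an initial field perturbed by up to +/-5 percent. */
    static double run_short_simulation(unsigned seed)
    {
        srand(seed);
        double mean_u = 0.0;
        for (int i = 0; i < N_CELLS; i++) {
            double u   = 10.0;   /* nominal initial velocity, m/s */
            double eps = 0.05 * u * (2.0 * rand() / (double)RAND_MAX - 1.0);
            mean_u += (u + eps) / N_CELLS;
        }
        return 50.0 + 2.0 * (mean_u - 10.0);   /* fake peak pressure, bar */
    }

    int main(void)
    {
        double p[N_INSTANCES], mean = 0.0, var = 0.0;
        for (int k = 0; k < N_INSTANCES; k++) {   /* each instance would run
                                                     as a separate parallel job */
            p[k] = run_short_simulation(1000u + (unsigned)k);
            mean += p[k] / N_INSTANCES;
        }
        for (int k = 0; k < N_INSTANCES; k++)
            var += (p[k] - mean) * (p[k] - mean) / (N_INSTANCES - 1);
        printf("COV of peak pressure: %.4f%%\n", 100.0 * sqrt(var) / mean);
        return 0;
    }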
Engine-start Control Strategy of P2 Parallel Hybrid Electric Vehicle
NASA Astrophysics Data System (ADS)
Xiangyang, Xu; Siqi, Zhao; Peng, Dong
2017-12-01
A smooth and fast engine-start process is important to parallel hybrid electric vehicles with an electric motor mounted in front of the transmission. However, engine-start control presents several challenges. First, the electric motor must simultaneously provide a stable driving torque to ensure drivability and a compensating torque to crank the engine before ignition. Second, engine-start time is a trade-off control objective, because both fast start and smooth start must be considered. To solve these problems, this paper first analyzes the resistance of the engine-start process and establishes a physical model in MATLAB/Simulink. A model-based coordinated control strategy among the engine, motor, and clutch is then developed. Two basic control strategies, for the fast-start and smooth-start processes, are studied. Simulation results show that the control objectives are met by the given strategies, which can accommodate different driver requirements.
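The first challenge above reduces to a torque balance at the motor shaft. The sketch below shows that balance under an assumed Coulomb clutch-friction model; the names, numbers, and the friction model itself are illustrative assumptions, not the paper's Simulink model.

    /* Sketch: motor torque command during engine start in a P2 hybrid =
     * driver-demanded drive torque + compensation for the clutch torque
     * diverted to crank the engine. All values are illustrative. */
    #include <stdio.h>

    static double clutch_torque(double slip_speed, double clamp_force)
    {
        const double mu = 0.3, r_eff = 0.1, n_surf = 2.0; /* assumed friction model */
        double t = mu * r_eff * n_surf * clamp_force;
        return slip_speed > 0.0 ? t : 0.0;  /* transmits only while slipping */
    }

    int main(void)
    {
        double driver_demand = 120.0;   /* Nm, keeps drivability           */
        double clamp         = 2000.0;  /* N, commanded clutch clamp force */
        double slip          = 150.0;   /* rad/s, motor minus engine speed */

        double t_comp  = clutch_torque(slip, clamp);
        double t_motor = driver_demand + t_comp;   /* coordinated command */
        printf("motor command: %.1f Nm (%.1f Nm cranking compensation)\n",
               t_motor, t_comp);
        return 0;
    }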
A Divergence Statistics Extension to VTK for Performance Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pebay, Philippe Pierre; Bennett, Janine Camille
This report follows the series of previous documents ([PT08, BPRT09b, PT09, BPT09, PT10, PB13]), where we presented the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, order, and auto-correlative statistics engines developed within the Visualization Toolkit (VTK) as a scalable, parallel, and versatile statistics package. We now report on a new engine for the calculation of divergence statistics, a concept which we explain hereafter and whose main goal is to quantify the discrepancy, in a statistical manner akin to measuring a distance, between an observed empirical distribution and a theoretical, "ideal" one. The ease of use of the new divergence statistics engine is illustrated by means of C++ code snippets. Although this new engine does not yet have a parallel implementation, it has already been applied to HPC performance analysis, of which we provide an example.
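For readers unfamiliar with the concept, the sketch below computes one common divergence statistic, the Kullback-Leibler divergence between an observed histogram and a theoretical distribution. It illustrates the idea only; it is plain C rather than the VTK engine's C++ API, and the bin values are arbitrary.

    /* Sketch: Kullback-Leibler divergence between an observed empirical
     * distribution and a theoretical "ideal" one, over shared bins. */
    #include <stdio.h>
    #include <math.h>

    static double kl_divergence(const double *p, const double *q, int n)
    {
        double d = 0.0;
        for (int i = 0; i < n; i++)
            if (p[i] > 0.0 && q[i] > 0.0)
                d += p[i] * log(p[i] / q[i]);
        return d;   /* nats; zero when the distributions agree */
    }

    int main(void)
    {
        double observed[4] = {0.30, 0.30, 0.25, 0.15};   /* empirical bins */
        double ideal[4]    = {0.25, 0.25, 0.25, 0.25};   /* theoretical    */
        printf("D_KL = %.4f nats\n", kl_divergence(observed, ideal, 4));
        return 0;
    }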
ProperCAD: A portable object-oriented parallel environment for VLSI CAD
NASA Technical Reports Server (NTRS)
Ramkumar, Balkrishna; Banerjee, Prithviraj
1993-01-01
Most parallel algorithms for VLSI CAD proposed to date have one important drawback: they work efficiently only on the machines for which they were designed. As a result, such algorithms are tied to the architecture for which they were developed and do not port easily to other parallel architectures. A new project under way to address this problem is described: ProperCAD, a portable object-oriented parallel environment for CAD algorithms. The objectives of this research are (1) to develop new parallel algorithms that run in a portable object-oriented environment (the CAD algorithms are built on CARM, a general-purpose platform for portable parallel programming, together with a C++ environment that is truly object-oriented and specialized for CAD applications); and (2) to design the parallel algorithms around a good sequential algorithm with a well-defined parallel-sequential interface, permitting the parallel algorithm to benefit from future developments in sequential algorithms. One CAD application implemented as part of the ProperCAD project, flat VLSI circuit extraction, is described. The algorithm, its implementation, and its performance on a range of parallel machines are discussed in detail. It currently runs on an Encore Multimax, a Sequent Symmetry, Intel iPSC/2 and i860 hypercubes, an nCUBE 2 hypercube, and a network of Sun SPARC workstations. Performance data are also provided for other applications that were developed: test pattern generation for sequential circuits, parallel logic synthesis, and standard cell placement.
Effect of strong elastic contrasts on the propagation of seismic wave in hard-rock environments
NASA Astrophysics Data System (ADS)
Saleh, R.; Zheng, L.; Liu, Q.; Milkereit, B.
2013-12-01
Understanding the propagation of seismic waves in the presence of strong elastic contrasts, such as topography, tunnels, and ore bodies, is still a challenge. Safety in mining is a major concern, and seismic monitoring is the main tool for addressing it. For engineering purposes, the amplitudes (peak particle velocity/acceleration) and travel times of seismic events (mostly blasts or microseismic events) are critical parameters that have to be determined at various locations in a mine. These parameters are useful for preparing risk maps and for better understanding spatial and temporal stress distributions in a mine. The simple constant-velocity models used for monitoring studies in mining cannot explain the observed complexity of scattered seismic waves. In hard-rock environments, modeling the elastic seismic wavefield requires detailed 3D petrophysical, infrastructure, and topographic data to simulate the propagation of seismic waves with frequencies up to a few kilohertz. With the development of efficient numerical techniques and parallel computing facilities, a solution to such a problem is achievable. In this study, the effects of strong elastic contrasts such as ore bodies, rough topography, and tunnels are illustrated using 3D modeling. The main tools are a finite difference code (SOFI3D) [1] that has been benchmarked for engineering studies, and a spectral element code (SPECFEM) [2] that was developed for global seismology problems. The modeling results show locally enhanced peak particle velocities due to the presence of strong elastic contrasts and topography in the models. [1] Bohlen, T., Parallel 3-D viscoelastic finite difference seismic modeling, Computers & Geosciences 28 (2002) 887-899. [2] Komatitsch, D., and J. Tromp, Introduction to the spectral-element method for 3-D seismic wave propagation, Geophys. J. Int., 139, 806-822, 1999.
ERIC Educational Resources Information Center
Gil, Arturo; Peidró, Adrián; Reinoso, Óscar; Marín, José María
2017-01-01
This paper presents a tool, LABEL, oriented to the teaching of parallel robotics. The application, organized as a set of tools developed using Easy Java Simulations, enables the study of the kinematics of parallel robotics. A set of classical parallel structures was implemented such that LABEL can solve the inverse and direct kinematic problem of…
Two-dimensional numerical simulation of a Stirling engine heat exchanger
NASA Technical Reports Server (NTRS)
Ibrahim, Mounir; Tew, Roy C.; Dudenhoefer, James E.
1989-01-01
The first phase of an effort to develop multidimensional models of Stirling engine components is described. The ultimate goal is to model an entire engine working space. Parallel plate and tubular heat exchanger models are described, with emphasis on the central part of the channel (i.e., ignoring hydrodynamic and thermal end effects). The model assumes laminar, incompressible flow with constant thermophysical properties. In addition, a constant axial temperature gradient is imposed. The governing equations describing the model have been solved using the Crank-Nicolson finite-difference scheme. Model predictions are compared with analytical solutions for oscillating/reversing flow and heat transfer in order to check numerical accuracy. Excellent agreement is obtained for flow both in circular tubes and between parallel plates. The computational heat transfer results are in good agreement with the analytical heat transfer results for parallel plates.
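A hedged reconstruction of the scheme, assuming the central-channel momentum equation takes the form $\partial u/\partial t = f(t) + \nu\,\partial^2 u/\partial y^2$ with $f$ the imposed oscillating forcing (the report's exact equation set is not given in the abstract). The Crank-Nicolson update for an interior point $j$ averages the diffusion operator between time levels $n$ and $n+1$:

    \frac{u_j^{n+1} - u_j^{n}}{\Delta t}
        = f^{\,n+1/2}
        + \frac{\nu}{2\,\Delta y^{2}}
          \left[ \left(u_{j+1}^{n+1} - 2u_j^{n+1} + u_{j-1}^{n+1}\right)
               + \left(u_{j+1}^{n} - 2u_j^{n} + u_{j-1}^{n}\right) \right]

Averaging the diffusion term this way is second-order accurate in both $\Delta t$ and $\Delta y$, unconditionally stable, and requires only a tridiagonal solve per step, which suits the oscillating/reversing flows being modeled.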
NASA Technical Reports Server (NTRS)
Talbot, Bryan; Zhou, Shu-Jia; Higgins, Glenn; Zukor, Dorothy (Technical Monitor)
2002-01-01
One of the most significant challenges in large-scale climate modeling, as well as in high-performance computing in other scientific fields, is that of effectively integrating many software models from multiple contributors. A software framework facilitates the integration task, both in the development and runtime stages of the simulation. Effective software frameworks reduce the programming burden for the investigators, freeing them to focus more on the science and less on the parallel communication implementation, while maintaining high performance across numerous supercomputer and workstation architectures. This document surveys numerous software frameworks for potential use in Earth science modeling. Several frameworks are evaluated in depth, including Parallel Object-Oriented Methods and Applications (POOMA), Cactus (from the relativistic physics community), Overture, the Goddard Earth Modeling System (GEMS), the National Center for Atmospheric Research Flux Coupler, and the UCLA/UCB Distributed Data Broker (DDB). Frameworks evaluated in less detail include ROOT, Parallel Application Workspace (PAWS), and the Advanced Large-Scale Integrated Computational Environment (ALICE). A host of other frameworks and related tools are referenced in this context. The frameworks are evaluated individually and also compared with each other.
PREMER: a Tool to Infer Biological Networks.
Villaverde, Alejandro F; Becker, Kolja; Banga, Julio R
2017-10-04
Inferring the structure of unknown cellular networks is a main challenge in computational biology. Data-driven approaches based on information theory can determine the existence of interactions among network nodes automatically. However, the elucidation of certain features - such as distinguishing between direct and indirect interactions or determining the direction of a causal link - requires estimating information-theoretic quantities in a multidimensional space. This can be a computationally demanding task, which acts as a bottleneck for the application of elaborate algorithms to large-scale network inference problems. The computational cost of such calculations can be alleviated by the use of compiled programs and parallelization. To this end we have developed PREMER (Parallel Reverse Engineering with Mutual information & Entropy Reduction), a software toolbox that can run in parallel and sequential environments. It uses information theoretic criteria to recover network topology and determine the strength and causality of interactions, and allows incorporating prior knowledge, imputing missing data, and correcting outliers. PREMER is a free, open source software tool that does not require any commercial software. Its core algorithms are programmed in FORTRAN 90 and implement OpenMP directives. It has user interfaces in Python and MATLAB/Octave, and runs on Windows, Linux and OSX (https://sites.google.com/site/premertoolbox/).
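As a sketch of the information-theoretic core, the fragment below computes a binned mutual-information estimate for one pair of discretized expression profiles; PREMER's FORTRAN 90 engine layers entropy reduction, missing-data imputation, outlier correction, and OpenMP parallelism on top of estimates of this kind. The data and the two-bin discretization are illustrative.

    /* Sketch: plug-in mutual-information estimate from binned samples. */
    #include <stdio.h>
    #include <math.h>

    #define N 8   /* samples */
    #define B 2   /* bins    */

    static double mutual_information(const int *x, const int *y, int n)
    {
        double pxy[B][B] = {{0}}, px[B] = {0}, py[B] = {0}, mi = 0.0;
        for (int k = 0; k < n; k++) {
            pxy[x[k]][y[k]] += 1.0 / n;
            px[x[k]] += 1.0 / n;
            py[y[k]] += 1.0 / n;
        }
        for (int i = 0; i < B; i++)
            for (int j = 0; j < B; j++)
                if (pxy[i][j] > 0.0)
                    mi += pxy[i][j] * log(pxy[i][j] / (px[i] * py[j]));
        return mi;   /* nats; near zero for independent profiles */
    }

    int main(void)
    {
        int gene_a[N] = {0, 0, 1, 1, 0, 1, 0, 1};   /* discretized profile */
        int gene_b[N] = {0, 0, 1, 1, 0, 1, 1, 1};   /* mostly tracks a     */
        printf("MI(a,b) = %.4f nats\n", mutual_information(gene_a, gene_b, N));
        return 0;
    }

Estimating such quantities for every pair (and, for direction finding, every triplet) of genes is what makes the problem computationally demanding and worth parallelizing.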
Echegaray, Sebastian; Bakr, Shaimaa; Rubin, Daniel L; Napel, Sandy
2017-10-06
The aim of this study was to develop an open-source, modular, locally run or server-based system for 3D radiomics feature computation that can be used on any computer system and included in existing workflows for understanding associations and building predictive models between image features and clinical data, such as survival. The QIFE exploits various levels of parallelization for use on multiprocessor systems. It consists of a managing framework and four stages: input, pre-processing, feature computation, and output. Each stage contains one or more swappable components, allowing run-time customization. We benchmarked the engine using various levels of parallelization on a cohort of CT scans presenting 108 lung tumors. Two versions of the QIFE have been released: (1) the open-source MATLAB code posted to GitHub, and (2) a compiled version loaded in a Docker container, posted to Docker Hub, which can be easily deployed on any computer. The QIFE processed 108 objects (tumors) in 2:12 (h:mm) using one core and in 1:04 (h:mm) using four cores with object-level parallelization. We developed the Quantitative Image Feature Engine (QIFE), an open-source feature-extraction framework that focuses on modularity, standards, parallelism, provenance, and integration. Researchers can easily integrate it with their existing segmentation and imaging workflows by creating input and output components that implement their existing interfaces. Computational efficiency can be improved by parallelizing execution at the cost of memory usage. Different parallelization levels provide different trade-offs, and the optimal setting will depend on the size and composition of the dataset to be processed.
PISCES: An environment for parallel scientific computation
NASA Technical Reports Server (NTRS)
Pratt, T. W.
1985-01-01
The Parallel Implementation of Scientific Computing Environment (PISCES) is a project to provide high-level programming environments for parallel MIMD computers. Pisces 1, the first of these environments, is a FORTRAN 77 based environment which runs under the UNIX operating system. Pisces 1 users program in Pisces FORTRAN, an extension of FORTRAN 77 for parallel processing. The major emphasis in the Pisces 1 design is on providing a carefully specified virtual machine that defines the run-time environment within which Pisces FORTRAN programs are executed. Each implementation then provides the same virtual machine, regardless of differences in the underlying architecture, so the design is intended to be portable to a variety of architectures. Currently, Pisces 1 is implemented on a network of Apollo workstations and on a DEC VAX uniprocessor via simulation of the task-level parallelism. An implementation for the Flexible Computing Corp. FLEX/32 is under construction. An introduction to the Pisces 1 virtual computer and the FORTRAN 77 extensions is presented. An example of an algorithm for the iterative solution of a system of equations is given. The most notable features of the design are the provision for several granularities of parallelism in programs and the provision of a window mechanism for distributed access to large arrays of data.
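The paper's worked example is written in Pisces FORTRAN; the plain-C sketch below conveys the same kind of computation, Jacobi relaxation of a one-dimensional Poisson problem, with a comment marking where the grid would be split into per-task blocks (the distributed-array window mechanism itself is not reproduced here).

    /* Sketch: Jacobi iteration for -u'' = f on (0,1), u(0)=u(1)=0. */
    #include <stdio.h>
    #include <math.h>

    #define N   64
    #define TOL 1e-8

    int main(void)
    {
        double u[N + 2] = {0}, unew[N + 2] = {0}, f[N + 2];
        double h = 1.0 / (N + 1);
        for (int i = 0; i <= N + 1; i++) f[i] = 1.0;   /* unit source */

        double diff = 1.0;
        int sweeps = 0;
        while (diff > TOL) {
            diff = 0.0;
            /* In Pisces, each task would sweep one contiguous block of i
             * in parallel, reading neighbor values through windows. */
            for (int i = 1; i <= N; i++) {
                unew[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i]);
                diff = fmax(diff, fabs(unew[i] - u[i]));
            }
            for (int i = 1; i <= N; i++) u[i] = unew[i];
            sweeps++;
        }
        printf("converged in %d sweeps, u(mid) = %.6f\n", sweeps, u[(N + 1) / 2]);
        return 0;
    }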
Materials as stem cell regulators
Murphy, William L.; McDevitt, Todd C.; Engler, Adam J.
2014-01-01
The stem cell/material interface is a complex, dynamic microenvironment in which the cell and the material cooperatively dictate one another's fate: the cell by remodelling its surroundings, and the material through its inherent properties (such as adhesivity, stiffness, nanostructure or degradability). Stem cells in contact with materials are able to sense their properties, integrate cues via signal propagation and ultimately translate parallel signalling information into cell fate decisions. However, discovering the mechanisms by which stem cells respond to inherent material characteristics is challenging because of the highly complex, multicomponent signalling milieu present in the stem cell environment. In this Review, we discuss recent evidence that shows that inherent material properties may be engineered to dictate stem cell fate decisions, and provide an overview of a subset of the operative signal transduction mechanisms that have begun to emerge. Further developments in stem cell engineering and mechanotransduction are poised to have substantial implications for stem cell biology and regenerative medicine. PMID:24845994
Decoupling the effect of shear stress and stretch on tissue growth & remodeling in a vascular graft.
van Haaften, Eline E; Wissing, Tamar B; Rutten, Marcel; Bulsink, Jurgen A; Gashi, Kujtim; van Kelle, Mathieu A J; Smits, Anthal; Bouten, Carlijn; Kurniawan, Nicholas A
2018-06-07
The success of cardiovascular tissue engineering strategies largely depends on the mechanical environment in which cells develop a neo-tissue via growth and remodeling processes. This mechanical environment is defined by the local scaffold architecture to which cells adhere, i.e., the micro-environment, and by external mechanical cues to which cells respond, i.e., hemodynamic loading. The hemodynamic environment of early-developing blood vessels consists of both shear stress (due to blood flow) and circumferential stretch (due to blood pressure). Experimental platforms that recapitulate this mechanical environment in a controlled and tunable manner are thus critical for investigating cardiovascular tissue engineering. In traditional perfusion bioreactors, however, shear stress and stretch are coupled, hampering a clear delineation of their effects on cell and tissue response. Here, we uniquely designed a bioreactor that independently combines these two types of mechanical cues in eight parallel vascular grafts. The system is computationally and experimentally validated, through finite element analysis and culture of tissue constructs respectively, to distinguish various levels of shear stress (up to 5 Pa) and cyclic stretch (up to 1.10). To illustrate the usefulness of the system, we investigated the relative contribution of cyclic stretch (1.05 at 0.5 Hz) and shear stress (1 Pa) to tissue development. Both types of hemodynamic loading contributed to cell alignment, but the contribution of shear stress overruled stretch-induced cell proliferation and matrix (i.e., collagen and glycosaminoglycan) production. At a macroscopic level, cyclic stretching led to the most linear stress-stretch response, which was not related to the presence of shear stress. In conclusion, we have developed a bioreactor that is particularly suited to further unravel the interplay between hemodynamics and in situ tissue engineering processes. Using the new system, the present work highlights the importance of hemodynamic loading to the study of developing vascular tissues.
Grider, Gary A.; Poole, Stephen W.
2015-09-01
Collective buffering and data pattern solutions are provided for storage, retrieval, and/or analysis of data in a collective parallel processing environment. For example, a method can be provided for data storage in a collective parallel processing environment. The method comprises receiving data to be written for a plurality of collective processes within a collective parallel processing environment, extracting a data pattern for the data to be written for the plurality of collective processes, generating a representation describing the data pattern, and saving the data and the representation.
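A minimal sketch of the pattern-extraction step, under the simplifying assumption that the collective writes form a single strided sequence; the type and function names are invented for illustration, not taken from the patent.

    /* Sketch: collapse per-process write offsets into a compact
     * (base, stride, count) representation when a pattern exists. */
    #include <stdio.h>

    typedef struct { long base, stride, count; } StridedPattern;

    /* Returns 1 and fills *p when the offsets form one strided pattern. */
    static int extract_pattern(const long *off, int n, StridedPattern *p)
    {
        if (n < 2) return 0;
        long stride = off[1] - off[0];
        for (int i = 2; i < n; i++)
            if (off[i] - off[i - 1] != stride) return 0;
        *p = (StridedPattern){ off[0], stride, n };
        return 1;
    }

    int main(void)
    {
        long offsets[4] = {0, 4096, 8192, 12288};   /* one write per process */
        StridedPattern p;
        if (extract_pattern(offsets, 4, &p))
            printf("pattern: base=%ld stride=%ld count=%ld\n",
                   p.base, p.stride, p.count);
        return 0;
    }

Saving such a representation alongside the data is what lets the I/O layer reorganize or replay collective operations without carrying the full offset list.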
An interactive parallel programming environment applied in atmospheric science
NASA Technical Reports Server (NTRS)
vonLaszewski, G.
1996-01-01
This article introduces an interactive parallel programming environment (IPPE) that simplifies the generation and execution of parallel programs. One of the tasks of the environment is to generate message-passing parallel programs for homogeneous and heterogeneous computing platforms. The parallel programs are represented by using visual objects. This is accomplished with the help of a graphical programming editor that is implemented in Java and enables portability to a wide variety of computer platforms. In contrast to other graphical programming systems, reusable parts of the programs can be stored in a program library to support rapid prototyping. In addition, runtime performance data on different computing platforms is collected in a database. A selection process determines dynamically the software and the hardware platform to be used to solve the problem in minimal wall-clock time. The environment is currently being tested on a Grand Challenge problem, the NASA four-dimensional data assimilation system.
Efficient Parallel Engineering Computing on Linux Workstations
NASA Technical Reports Server (NTRS)
Lou, John Z.
2010-01-01
A C software module has been developed that dynamically creates lightweight processes (LWPs) to achieve parallel computing performance in a variety of engineering simulation and analysis applications supporting NASA and DoD project tasks. The required interface between the module and the application it supports is simple, minimal, and almost completely transparent to the user application, and it can achieve nearly ideal computing speed-up on multi-CPU engineering workstations on all operating system platforms. The module can be integrated into an existing application (C, C++, Fortran, and others) either as part of a compiled module or as a dynamically linked library (DLL).
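The module's actual interface is not given above; the sketch below uses POSIX threads to show the general shape of such a facility, splitting an index range across lightweight workers behind a minimal, invented interface.

    /* Sketch: range-splitting across lightweight workers (POSIX threads). */
    #include <pthread.h>
    #include <stdio.h>

    #define NWORKERS 4
    #define N        1000000

    typedef struct { int lo, hi; double sum; } Slice;

    static void *body(void *arg)
    {
        Slice *s = (Slice *)arg;
        for (int i = s->lo; i < s->hi; i++)
            s->sum += (double)i * (double)i;   /* stand-in for real work */
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NWORKERS];
        Slice slice[NWORKERS];
        for (int w = 0; w < NWORKERS; w++) {
            slice[w] = (Slice){ w * (N / NWORKERS), (w + 1) * (N / NWORKERS), 0.0 };
            pthread_create(&tid[w], NULL, body, &slice[w]);
        }
        double total = 0.0;
        for (int w = 0; w < NWORKERS; w++) {
            pthread_join(tid[w], NULL);
            total += slice[w].sum;
        }
        printf("sum of squares: %.0f\n", total);
        return 0;
    }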
ERIC Educational Resources Information Center
Gold, Zachary Samuel
2017-01-01
Engineering play is a new perspective on preschool education that views constructive play as an engineering design process that parallels the way engineers think and work when they develop engineered solutions to human problems (Bairaktarova, Evangelou, Bagiati, & Brophy, 2011). Early research from this perspective supports its use in framing…
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
This is a fact sheet on the U.S. Department of Energy's (DOE) Advanced Reciprocating Engine Systems program (ARES), which is designed to promote separate, but parallel engine development between the major stationary, gaseous fueled engine manufacturers in the United States.
Iterative algorithms for large sparse linear systems on parallel computers
NASA Technical Reports Server (NTRS)
Adams, L. M.
1982-01-01
Algorithms are developed for assembling in parallel the sparse systems of linear equations that result from finite difference or finite element discretizations of elliptic partial differential equations, such as those that arise in structural engineering. Parallel linear stationary iterative algorithms and parallel preconditioned conjugate gradient algorithms are developed for solving these systems. In addition, a model for comparing parallel algorithms on array architectures is developed, and results of this model for the algorithms are given.
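As a concrete instance of the second family of methods, the sketch below runs (unpreconditioned, for brevity) conjugate gradient on the 1-D Laplacian; the inner products and the matrix-vector product are exactly the kernels that a parallel implementation distributes across processors.

    /* Sketch: conjugate gradient for A x = b with A = tridiag(-1, 2, -1). */
    #include <stdio.h>
    #include <math.h>

    #define N 100

    static void apply_laplacian(const double *x, double *y)   /* y = A x */
    {
        for (int i = 0; i < N; i++)
            y[i] = 2.0 * x[i] - (i > 0     ? x[i - 1] : 0.0)
                              - (i < N - 1 ? x[i + 1] : 0.0);
    }

    static double dot(const double *a, const double *b)
    {
        double s = 0.0;
        for (int i = 0; i < N; i++) s += a[i] * b[i];
        return s;
    }

    int main(void)
    {
        double x[N] = {0}, r[N], p[N], ap[N];
        for (int i = 0; i < N; i++) r[i] = p[i] = 1.0;   /* b = 1, x0 = 0 */
        double rr = dot(r, r);
        for (int it = 0; it < 2 * N && sqrt(rr) > 1e-10; it++) {
            apply_laplacian(p, ap);
            double alpha = rr / dot(p, ap);
            for (int i = 0; i < N; i++) {
                x[i] += alpha * p[i];
                r[i] -= alpha * ap[i];
            }
            double rr_new = dot(r, r);
            for (int i = 0; i < N; i++)
                p[i] = r[i] + (rr_new / rr) * p[i];
            rr = rr_new;
        }
        /* exact discrete solution: x_i = (i+1)(N-i)/2, so x[0] = N/2 */
        printf("x[0] = %.4f (expected %.1f)\n", x[0], 0.5 * N);
        return 0;
    }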
Performance of parallel computation using CUDA for solving the one-dimensional elasticity equations
NASA Astrophysics Data System (ADS)
Darmawan, J. B. B.; Mungkasi, S.
2017-01-01
In this paper, we investigate the performance of parallel computation in solving the one-dimensional elasticity equations, which are widely used in engineering science. Solving these equations quickly and efficiently is desirable, so we propose the use of parallel computation implemented with NVIDIA's CUDA. Our results show that parallel computation using CUDA has a great advantage and is powerful when the computation is of large scale.
Liquid rocket booster integration study. Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
1988-01-01
The impacts of introducing liquid rocket booster engines (LRB) into the Space Transportation System (STS)/Kennedy Space Center (KSC) launch environment are identified and evaluated. Proposed ground systems configurations are presented along with a launch site requirements summary. Prelaunch processing scenarios are described and the required facility modifications and new facility requirements are analyzed. Flight vehicle design recommendations to enhance launch processing are discussed. Processing approaches to integrate LRB with existing STS launch operations are evaluated. The key features and significance of launch site transition to a new STS configuration in parallel with ongoing launch activities are enumerated. This volume is the executive summary of the five volume series.
Liquid rocket booster integration study. Volume 5, part 1: Appendices
NASA Technical Reports Server (NTRS)
1988-01-01
The impacts of introducing liquid rocket booster engines (LRB) into the Space Transportation System (STS)/Kennedy Space Center (KSC) launch environment are identified and evaluated. Proposed ground systems configurations are presented along with a launch site requirements summary. Prelaunch processing scenarios are described and the required facility modifications and new facility requirements are analyzed. Flight vehicle design recommendations to enhance launch processing are discussed. Processing approaches to integrate LRB with existing STS launch operations are evaluated. The key features and significance of launch site transition to a new STS configuration in parallel with ongoing launch activities are enumerated. This volume is the appendices of the five volume series.
Liquid Rocket Booster Integration Study. Volume 2: Study synopsis
NASA Technical Reports Server (NTRS)
1988-01-01
The impacts of introducing liquid rocket booster engines (LRB) into the Space Transportation System (STS)/Kennedy Space Center (KSC) launch environment are identified and evaluated. Proposed ground systems configurations are presented along with a launch site requirements summary. Prelaunch processing scenarios are described and the required facility modifications and new facility requirements are analyzed. Flight vehicle design recommendations to enhance launch processing are discussed. Processing approaches to integrate LRB with existing STS launch operations are evaluated. The key features and significance of launch site transition to a new STS configuration in parallel with ongoing launch activities are enumerated. This volume is the study summary of the five volume series.
Two-dimensional numerical simulation of a Stirling engine heat exchanger
NASA Technical Reports Server (NTRS)
Ibrahim, Mounir B.; Tew, Roy C.; Dudenhoefer, James E.
1989-01-01
The first phase of an effort to develop multidimensional models of Stirling engine components is described; the ultimate goal is to model an entire engine working space. More specifically, parallel plate and tubular heat exchanger models are described, with emphasis on the central part of the channel (i.e., ignoring hydrodynamic and thermal end effects). The model assumes laminar, incompressible flow with constant thermophysical properties. In addition, a constant axial temperature gradient is imposed. The governing equations describing the model were solved using the Crank-Nicolson finite-difference scheme. Model predictions were compared with analytical solutions for oscillating/reversing flow and heat transfer in order to check numerical accuracy. Excellent agreement was obtained between the model predictions and the analytical solutions available both for flow in circular tubes and for flow between parallel plates. The heat transfer computational results are also in good agreement with the analytical heat transfer results for parallel plates.
NASA Technical Reports Server (NTRS)
Marshall, William M.; Borowski, Stanley K.; Bulman, Mel; Joyner, Russell; Martin, Charles R.
2015-01-01
Brief history of NTP: Project Rover began in the 1950s at Los Alamos Scientific Laboratory (now Los Alamos National Laboratory) and ran until the 1970s. It tested a series of nuclear reactor engines of varying size at the Nevada Test Site (now the Nevada National Security Site), ranging in scale from 111 kN (25 klbf) to 1.1 MN (250 klbf) and including the Nuclear Furnace-1 tests, and it demonstrated the viability and capability of a nuclear rocket engine test program. The nuclear rocket was one of Kennedy's four goals in his famous moon speech to Congress. Nuclear Engines for Rocket Vehicle Applications (NERVA), a joint Atomic Energy Commission and NASA venture started in 1964, was a parallel effort to Project Rover focused on technology demonstration; it tested the XE engine, a 245-kN (55-klbf) engine, to demonstrate startup/shutdown sequencing. Because the hot-hydrogen stream is passed directly through the fuel elements, there is potential for radioactive material to be eroded into the gaseous fuel flow, as identified in previous programs. NERVA and Project Rover (1950s-70s) were able to test in the open atmosphere, similar to conventional rocket engine test stands today; the Nuclear Furnace-1 tests employed a full scrubber system. Increased government and environmental regulations prohibit such open-atmosphere testing today: since the 1960s there has been an increasing cessation of open-air testing of nuclear material, and political and national security concerns further compound the regulatory environment.
FBIS report. Science and technology: Europe/International, March 29, 1996
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1996-03-29
Partial Contents: Advanced Materials (EU Project to Improve Production in Metal Matrix Compounds Noted, Germany: Extremely Hard Carbon Coating Development, Italy: Director of CNR Metallic Materials Institute Interviewed); Aerospace (ESA Considers Delays, Reductions as Result of Budget Cuts, Italy: Space Agency's Director on Restructuring, Future Plans); Automotive, Transportation (EU: Clean Diesel Engine Technology Research Reviewed); Biotechnology (Germany's Problems, Successes in Biotechnology Discussed); Computers (EU Europort Parallel Computing Project Concluded, Italy: PQE 2000 Project on Massively Parallel Systems Viewed); Defense R&D (France: Future Tasks of 'Brevel' Military Intelligence Drone Noted); Energy, Environment (German Scientist Tests Elimination of Phosphates); Advanced Manufacturing (France: Advanced Rapid Prototyping System Presented); Lasers, Sensors, Optics (France: Strategy of Cilas Laser Company Detailed); Microelectronics (France: Simulation Company to Develop Microelectronic Manufacturing Application); Nuclear R&D (France: Megajoule Laser Plan, Cooperation with Livermore Lab Noted); S&T Policy (EU Efforts to Aid Small Companies' Research Viewed); Telecommunications (France Telecom's Way to Internet).
On the Use of CAD and Cartesian Methods for Aerodynamic Optimization
NASA Technical Reports Server (NTRS)
Nemec, M.; Aftosmis, M. J.; Pulliam, T. H.
2004-01-01
The objective of this paper is to present the development of an optimization capability for Cart3D, a Cartesian inviscid-flow analysis package. We present the construction of a new optimization framework and focus on the following issues: (1) a component-based geometry parameterization approach using parametric CAD models and CAPRI, including a novel geometry server that addresses parallel efficiency while only sparingly consuming CAD resources; and (2) the use of genetic and gradient-based algorithms for three-dimensional aerodynamic design problems, including a study of the influence of noise on the optimization methods. Our goal is to create a responsive and automated framework that efficiently identifies design modifications that result in substantial performance improvements. In addition, we examine the architectural issues associated with the deployment of a CAD-based approach in a heterogeneous parallel computing environment that contains both CAD workstations and dedicated compute engines. We demonstrate the effectiveness of the framework for a design problem that features topology changes and complex geometry.
A Component-Based Extension Framework for Large-Scale Parallel Simulations in NEURON
King, James G.; Hines, Michael; Hill, Sean; Goodman, Philip H.; Markram, Henry; Schürmann, Felix
2008-01-01
As neuronal simulations approach larger scales with increasing levels of detail, the neurosimulator software represents only one part of a chain of tools ranging from setup, simulation, and interaction with virtual environments to analysis and visualization. Previously published approaches to abstracting simulator engines have not received widespread acceptance, which in part may be due to the fact that they also tried to solve the model specification problem. Here, we present an approach that uses a neurosimulator, in this case NEURON, to describe and instantiate the network model in the simulator's native model language, but then replaces the main integration loop with its own. Existing parallel network models are easily adapted to run in the presented framework. The presented approach is thus an extension to NEURON, but uses a component-based architecture to allow for replaceable spike exchange components and pluggable components for monitoring, analysis, or control that can run in this framework alongside the simulation. PMID:19430597
Bone mechanobiology, gravity and tissue engineering: effects and insights.
Ruggiu, Alessandra; Cancedda, Ranieri
2015-12-01
Bone homeostasis strongly depends on finely tuned mechanosensitive regulation signals that translate environmental forces into biochemical responses. Similar to the ageing process, during spaceflights an altered mechanotransduction occurs as a result of bone unloading, eventually leading to loss of functional tissue. Although spaceflights represent the best environment to investigate near-zero gravity effects, there are major limitations to setting up experimental analyses. A more feasible approach to analysing the effects of reduced mechanostimulation on bone is represented by 'simulated microgravity' experiments based on: (1) in vitro studies, involving cell culture studies and the use of bioreactors with tissue engineering approaches; (2) in vivo studies, based on animal models; and (3) direct analysis of human beings, as in the case of bed rest tests. At present, advanced tissue engineering methods allow investigators to recreate the bone microenvironment in vitro for mechanobiology studies. This group and others have generated tissue 'organoids' to mimic in vitro the in vivo bone environment and to study the alterations cells undergo when subjected to unloading. Understanding the molecular mechanisms underlying the bone tissue response to mechanostimuli will help in developing new strategies to prevent loss of tissue caused by altered mechanotransduction, as well as in identifying new approaches for the treatment of diseases via drug testing. This review focuses on the effects of reduced gravity on bone mechanobiology by providing an up-to-date, state-of-the-art account of the available data, drawing a parallel with suitable tissue engineering systems. Copyright © 2014 John Wiley & Sons, Ltd.
Channeled Scaffolds for Engineering Myocardium with Mechanical Stimulation
Zhang, Ting; Wan, Leo Q.; Xiong, Zhuo; Marsano, Anna; Maidhof, Robert; Park, Miri; Yan, Yongnian; Vunjak-Novakovic, Gordana
2011-01-01
The characteristics of the matrix (composition, structure, mechanical properties) and external culture environment (pulsatile perfusion, physical stimulation) are critically important for engineering functional myocardial tissue. We report the development of chitosan-collagen scaffolds with micro-pores and an array of parallel channels (~200 μm in diameter) that were specifically designed for cardiac tissue engineering with mechanical stimulation. The scaffolds were designed to have the structural and mechanical properties similar to those of the native human heart matrix. Scaffolds were seeded with neonatal rat heart cells and subjected to dynamic tensile stretch using a custom-designed bioreactor. The channels enhanced oxygen transport and facilitated the establishment of cell connections within the construct. The myocardial patches (14 mm in diameter, 1–2 mm thick) consisted of metabolically active cells and started to contract synchronously after 3 days of culture. Mechanical stimulation with high tensile stresses promoted cell alignment, elongation, and the expression of connexin-43 (Cx-43). This study confirms the importance of scaffold design and mechanical stimulation for the formation of contractile cardiac constructs. PMID:22081518
Channelled scaffolds for engineering myocardium with mechanical stimulation.
Zhang, Ting; Wan, Leo Q; Xiong, Zhuo; Marsano, Anna; Maidhof, Robert; Park, Miri; Yan, Yongnian; Vunjak-Novakovic, Gordana
2012-10-01
The characteristics of the matrix (composition, structure, mechanical properties) and external culture environment (pulsatile perfusion, physical stimulation) of the heart are important characteristics in the engineering of functional myocardial tissue. This study reports on the development of chitosan-collagen scaffolds with micropores and an array of parallel channels (~ 200 µm in diameter) that were specifically designed for cardiac tissue engineering using mechanical stimulation. The scaffolds were designed to have similar structural and mechanical properties of those of native heart matrix. Scaffolds were seeded with neonatal rat heart cells and subjected to dynamic tensile stretch using a custom designed bioreactor. The channels enhanced oxygen transport and facilitated the establishment of cell connections within the construct. The myocardial patches (14 mm in diameter, 1-2 mm thick) consisted of metabolically active cells that began to contract synchronously after 3 days of culture. Mechanical stimulation with high tensile stress promoted cell alignment, elongation, and expression of connexin-43 (Cx-43). This study confirms the importance of scaffold design and mechanical stimulation for the formation of contractile cardiac constructs. Copyright © 2011 John Wiley & Sons, Ltd.
SPEEDES - A multiple-synchronization environment for parallel discrete-event simulation
NASA Technical Reports Server (NTRS)
Steinman, Jeff S.
1992-01-01
Synchronous Parallel Environment for Emulation and Discrete-Event Simulation (SPEEDES) is a unified parallel simulation environment. It supports multiple-synchronization protocols without requiring users to recompile their code. When a SPEEDES simulation runs on one node, all the extra parallel overhead is removed automatically at run time. When the same executable runs in parallel, the user preselects the synchronization algorithm from a list of options. SPEEDES currently runs on UNIX networks and on the California Institute of Technology/Jet Propulsion Laboratory Mark III Hypercube. SPEEDES also supports interactive simulations. Featured in the SPEEDES environment is a new parallel synchronization approach called Breathing Time Buckets. This algorithm uses some of the conservative techniques found in Time Bucket synchronization, along with the optimism that characterizes the Time Warp approach. A mathematical model derived from first principles predicts the performance of Breathing Time Buckets. Along with the Breathing Time Buckets algorithm, this paper discusses the rules for processing events in SPEEDES, describes the implementation of various other synchronization protocols supported by SPEEDES, describes some new ones for the future, discusses interactive simulations, and then gives some performance results.
Design of a massively parallel computer using bit serial processing elements
NASA Technical Reports Server (NTRS)
Aburdene, Maurice F.; Khouri, Kamal S.; Piatt, Jason E.; Zheng, Jianqing
1995-01-01
A 1-bit serial processor designed for a parallel computer architecture is described. This processor is used to develop a massively parallel computational engine, with a single instruction-multiple data (SIMD) architecture. The computer is simulated and tested to verify its operation and to measure its performance for further development.
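A software sketch of the bit-serial idea: every processing element implements a one-bit full adder with a carry latch, so a W-bit add takes W identical lockstep steps across all PEs (one broadcast instruction per step is what makes the design SIMD). The PE count and operands below are arbitrary.

    /* Sketch: SIMD bit-serial addition across processing elements. */
    #include <stdio.h>

    #define PES 4   /* processing elements   */
    #define W   8   /* operand width in bits */

    int main(void)
    {
        unsigned a[PES]     = {23, 250, 77, 5};
        unsigned b[PES]     = {42, 5, 100, 200};
        unsigned sum[PES]   = {0};
        unsigned carry[PES] = {0};

        for (int bit = 0; bit < W; bit++) {       /* one SIMD step per bit */
            for (int pe = 0; pe < PES; pe++) {    /* all PEs in lockstep   */
                unsigned ai = (a[pe] >> bit) & 1u;
                unsigned bi = (b[pe] >> bit) & 1u;
                unsigned s  = ai ^ bi ^ carry[pe];                 /* sum bit  */
                carry[pe]   = (ai & bi) | (carry[pe] & (ai ^ bi)); /* carry    */
                sum[pe]    |= s << bit;
            }
        }
        for (int pe = 0; pe < PES; pe++)
            printf("PE%d: %u + %u = %u\n", pe, a[pe], b[pe], sum[pe]);
        return 0;
    }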
Thermal barrier coating life prediction model development
NASA Technical Reports Server (NTRS)
Sheffler, K. D.; Demasi, J. T.
1985-01-01
A methodology was established to predict thermal barrier coating life in an environment simulative of that experienced by gas turbine airfoils. Specifically, work is being conducted to determine failure modes of thermal barrier coatings in the aircraft engine environment. Analytical studies coupled with appropriate physical and mechanical property determinations are being employed to derive coating life prediction model(s) on the important failure mode(s). An initial review of experimental and flight service components indicates that the predominant mode of TBC failure involves thermomechanical spallation of the ceramic coating layer. This ceramic spallation involves the formation of a dominant crack in the ceramic coating parallel to and closely adjacent to the metal-ceramic interface. Initial results from a laboratory test program designed to study the influence of various driving forces such as temperature, thermal cycle frequency, environment, and coating thickness, on ceramic coating spalling life suggest that bond coat oxidation damage at the metal-ceramic interface contributes significantly to thermomechanical cracking in the ceramic layer. Low cycle rate furnace testing in air and in argon clearly shows a dramatic increase of spalling life in the non-oxidizing environments.
Gregoretti, Francesco; Belcastro, Vincenzo; di Bernardo, Diego; Oliva, Gennaro
2010-04-21
The reverse engineering of gene regulatory networks using gene expression profile data has become crucial for gaining novel biological knowledge. Large amounts of data that need to be analyzed are currently being produced due to advances in microarray technologies. Using current reverse engineering algorithms to analyze large data sets can be very computationally intensive. These emerging computational requirements can be met using parallel computing techniques. It has been shown that the Network Identification by multiple Regression (NIR) algorithm performs better than other ready-to-use reverse engineering software. However, it cannot be used with large networks with thousands of nodes--as is the case in biological networks--due to its high time and space complexity. In this work we overcome this limitation by designing and developing a parallel version of the NIR algorithm. The new implementation of the algorithm achieves very good accuracy even for large gene networks, improving our understanding of the gene regulatory networks that are crucial for a wide range of biomedical applications.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-13
..., measured parallel to the centerline. [2] Subchapters E (Load Lines), F (Marine Engineering), J (Electrical Engineering), N (Dangerous Cargoes), S (Subdivision and Stability), and W (Lifesaving Appliances and...
Numerical Prediction of CCV in a PFI Engine using a Parallel LES Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ameen, Muhsin M; Mirzaeian, Mohsen; Millo, Federico
Cycle-to-cycle variability (CCV) is detrimental to IC engine operation and can lead to partial burn, misfire, and knock. Predicting CCV numerically is extremely challenging for two key reasons. Firstly, high-fidelity methods such as large eddy simulation (LES) are required to accurately resolve the in-cylinder turbulent flowfield both spatially and temporally. Secondly, CCV is experienced over long timescales and hence the simulations need to be performed for hundreds of consecutive cycles. Ameen et al. (Int. J. Eng. Res., 2017) developed a parallel perturbation model (PPM) approach to dissociate this long time-scale problem into several shorter timescale problems. The strategy is to perform multiple single-cycle simulations in parallel by effectively perturbing the initial velocity field based on the intensity of the in-cylinder turbulence. This strategy was demonstrated for a motored engine, and it was shown that the mean and variance of the in-cylinder flowfield were captured reasonably well by this approach. In the present study, the PPM approach is extended to simulate the CCV in a fired port-fuel-injected (PFI) SI engine. Two operating conditions are considered: a medium-CCV case corresponding to 2500 rpm and 16 bar BMEP, and a low-CCV case corresponding to 4000 rpm and 12 bar BMEP. The predictions from this approach are shown to be similar to those from consecutive LES cycles. Both the consecutive and PPM LES cycles are observed to under-predict the variability in the early stage of combustion. The parallel approach slightly under-predicts the cyclic variability at all stages of combustion compared to the consecutive LES cycles. However, it is shown that the parallel approach is able to predict the coefficient of variation (COV) of the in-cylinder pressure and burn-rate-related parameters with sufficient accuracy, and is also able to predict the qualitative trends in CCV with changing operating conditions. The convergence of the statistics predicted by the PPM approach with respect to the number of consecutive cycles required for each parallel simulation is also investigated. It is shown that this new approach is able to give accurate predictions of the CCV in fired engines in less than one-tenth of the time required for the conventional approach of simulating consecutive engine cycles.
Parallel 3D Multi-Stage Simulation of a Turbofan Engine
NASA Technical Reports Server (NTRS)
Turner, Mark G.; Topp, David A.
1998-01-01
A 3D multistage simulation of each component of a modern GE Turbofan engine has been made. An axisymmetric view of this engine is presented in the document. This includes a fan, booster rig, high pressure compressor rig, high pressure turbine rig, and a low pressure turbine rig. In the near future, all components will be run in a single calculation for a solution of 49 blade rows. The simulation exploits parallel computation by using two levels of parallelism: each blade row is run in parallel, and each blade row grid is decomposed into several domains that are also run in parallel. Twenty processors are used for the 4-blade-row analysis. The average passage approach developed by John Adamczyk at NASA Lewis Research Center has been further developed and parallelized. This is APNASA Version A. It is a Navier-Stokes solver using a 4-stage explicit Runge-Kutta time marching scheme with variable time steps and residual smoothing for convergence acceleration. It has an implicit K-E turbulence model which uses an ADI solver to factor the matrix. Between 50 and 100 explicit time steps are solved before a blade row body force is calculated and exchanged with the other blade rows; this outer iteration has been coined a "flip." Efforts have been made to make the solver linearly scalable with the number of blade rows. Enough flips are run (between 50 and 200) so that the solution in the entire machine is no longer changing. The K-E equations are generally solved every other explicit time step. One of the key requirements in the development of the parallel code was to make the parallel solution exactly (bit for bit) match the serial solution; this has helped isolate many small parallel bugs and guarantee that the parallelization was done correctly. The domain decomposition is done only in the axial direction, since the number of points axially is much larger than in the other two directions. The code uses MPI for message passing. Parallel speed-up is reported for the solver portion (excluding I/O and the body force calculation) on a grid with 227 points axially.
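The two-level decomposition lends itself to communicator splitting in MPI. The sketch below is a hedged illustration rather than APNASA's actual code: each blade row gets its own sub-communicator for intra-row domain work, and a world-level reduction stands in for the body-force exchange performed at each "flip". The row count and rank mapping are assumptions.

    /* Sketch: two-level parallelism via MPI communicator splitting. */
    #include <mpi.h>
    #include <stdio.h>

    #define N_ROWS 4

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int row = rank % N_ROWS;   /* blade row this rank works on */
        MPI_Comm row_comm;
        MPI_Comm_split(MPI_COMM_WORLD, row, rank, &row_comm);

        /* Level 1: axial-domain sweeps within one blade row would use
         * row_comm. Level 2: after ~50-100 explicit steps, rows exchange
         * averaged body forces; a world-level mean stands in for that. */
        double local_force = 1.0 + row, avg_force;
        MPI_Allreduce(&local_force, &avg_force, 1, MPI_DOUBLE, MPI_SUM,
                      MPI_COMM_WORLD);
        avg_force /= size;

        if (rank == 0)
            printf("flip complete, mean body-force proxy = %f\n", avg_force);
        MPI_Comm_free(&row_comm);
        MPI_Finalize();
        return 0;
    }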
Structural Health Monitoring on Turbine Engines Using Microwave Blade Tip Clearance Sensors
NASA Technical Reports Server (NTRS)
Woike, Mark; Abdul-Aziz, Ali; Clem, Michelle
2014-01-01
The ability to monitor the structural health of the rotating components, especially in the hot sections of turbine engines, is of major interest to the aero community in improving engine safety and reliability. The use of instrumentation for these applications remains very challenging. It requires sensors and techniques that are highly accurate, are able to operate in a high temperature environment, and can detect minute changes and hidden flaws before catastrophic events occur. The National Aeronautics and Space Administration (NASA) has taken a lead role in the investigation of new sensor technologies and techniques for the in situ structural health monitoring of gas turbine engines. As part of this effort, microwave sensor technology has been investigated as a means of making high temperature non-contact blade tip clearance, blade tip timing, and blade vibration measurements for use in gas turbine engines. This paper presents a summary of key results and findings obtained from the evaluation of two different types of microwave sensors that have been investigated for possible use in structural health monitoring applications. The first is a microwave blade tip clearance sensor that has been evaluated on a large scale Axial Vane Fan, a subscale Turbofan, and more recently on sub-scale turbine engine like disks. The second is a novel microwave based blade vibration sensor that was also used in parallel with the microwave blade tip clearance sensors on the experiments with the sub-scale turbine engine disks.
Structural health monitoring on turbine engines using microwave blade tip clearance sensors
NASA Astrophysics Data System (ADS)
Woike, Mark; Abdul-Aziz, Ali; Clem, Michelle
2014-04-01
The ability to monitor the structural health of the rotating components, especially in the hot sections of turbine engines, is of major interest to the aero community in improving engine safety and reliability. The use of instrumentation for these applications remains very challenging. It requires sensors and techniques that are highly accurate, are able to operate in a high temperature environment, and can detect minute changes and hidden flaws before catastrophic events occur. The National Aeronautics and Space Administration (NASA) has taken a lead role in the investigation of new sensor technologies and techniques for the in situ structural health monitoring of gas turbine engines. As part of this effort, microwave sensor technology has been investigated as a means of making high temperature non-contact blade tip clearance, blade tip timing, and blade vibration measurements for use in gas turbine engines. This paper presents a summary of key results and findings obtained from the evaluation of two different types of microwave sensors that have been investigated for possible use in structural health monitoring applications. The first is a microwave blade tip clearance sensor that has been evaluated on a large scale Axial Vane Fan, a subscale Turbofan, and more recently on sub-scale turbine engine like disks. The second is a novel microwave based blade vibration sensor that was also used in parallel with the microwave blade tip clearance sensors on the same experiments with the sub-scale turbine engine disks.
Fe-oxidizing microbes are hydrothermal vent ecosystem engineers at the Loihi Seamount (Invited)
NASA Astrophysics Data System (ADS)
Chan, C. S.; McAllister, S.; Leavitt, A.; Emerson, D.; Moyer, C. L.; Glazer, B. T.
2013-12-01
Microaerophilic Fe-oxidizing microorganisms (FeOM) colonize gradients of Fe(II) and oxygen, taking advantage of the available chemical energy. Vast communities of FeOM proliferate at deep sea hydrothermal vents, forming mineralized mats that range from centimeters to meters thick. Because these mats structure the environment for both FeOM and the entire microbial community, the Fe-oxidizers are acting as ecosystem engineers. What organisms are responsible for initiating these mats, and how does the physical structure and community composition develop as the mats mature? By connecting structure, function, and ecology, we can better interpret modern mat structures, as well as ancient fossilized mats. We have been studying Fe microbial mats at Loihi Seamount in Hawaii, a long-term study site that has become a model for Fe oxidation in marine hydrothermal systems. Recent improvements in ROV imaging systems allow us to see a great range of mat textures and colors, which may represent diverse habitats and/or different stages of mat development. With improved imaging and sampling techniques, we have been able to obtain discrete, intact samples of these delicate microbial mats. Previous bulk sampling methods showed that mats consist of a mixture of Fe-mineralized morphologies. Our analyses of intact mats show that mats are initiated by one type of structure-former (either a stalk-former like Mariprofundus ferrooxydans or a Zetaproteobacterial sheath-former). These microbes may be the vanguard organisms that stabilize chemical gradients in this dynamic environment, allowing colonization by other organisms (evidenced by branching tubes, fibrillar nests, and other morphologies). We will show evidence of the composition and development of these mats, and discuss parallels between these marine Fe mats and their freshwater counterparts, supporting the idea that FeOM engineer environments favorable for growth.
Concurrent Software Engineering Project
ERIC Educational Resources Information Center
Stankovic, Nenad; Tillo, Tammam
2009-01-01
Concurrent engineering or overlapping activities is a business strategy for schedule compression on large development projects. Design parameters and tasks from every aspect of a product's development process and their interdependencies are overlapped and worked on in parallel. Concurrent engineering suffers from negative effects such as excessive…
Multidisciplinary High-Fidelity Analysis and Optimization of Aerospace Vehicles. Part 1; Formulation
NASA Technical Reports Server (NTRS)
Walsh, J. L.; Townsend, J. C.; Salas, A. O.; Samareh, J. A.; Mukhopadhyay, V.; Barthelemy, J.-F.
2000-01-01
An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity, finite element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes the engineering aspects of formulating the optimization by integrating these analysis codes and associated interface codes into the system. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture (CORBA) compliant software product. A companion paper presents currently available results.
Chaining direct memory access data transfer operations for compute nodes in a parallel computer
Archer, Charles J.; Blocksome, Michael A.
2010-09-28
Methods, systems, and products are disclosed for chaining DMA data transfer operations for compute nodes in a parallel computer that include: receiving, by an origin DMA engine on an origin node, in an origin injection FIFO buffer for the origin DMA engine, an RGET data descriptor specifying a DMA transfer operation data descriptor on the origin node and a second RGET data descriptor on the origin node, the second RGET data descriptor specifying a target RGET data descriptor on a target node, the target RGET data descriptor specifying an additional DMA transfer operation data descriptor on the origin node; creating, by the origin DMA engine, an RGET packet in dependence upon the RGET data descriptor, the RGET packet containing the DMA transfer operation data descriptor and the second RGET data descriptor; and transferring, by the origin DMA engine to a target DMA engine on the target node, the RGET packet.
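In plain terms, each remote-get (RGET) descriptor names one DMA transfer and also carries the descriptor that continues the chain, so successive transfers fire without processor involvement. The following Python sketch is purely illustrative (the type names and fields are hypothetical, not the patent's structures) and simulates a DMA engine walking such a chain.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical descriptor types; names are illustrative only.
@dataclass
class TransferDescriptor:
    src: str
    dst: str
    nbytes: int

@dataclass
class RGETDescriptor:
    # Each remote get carries a transfer plus the next link of the chain.
    transfer: TransferDescriptor
    next_rget: Optional["RGETDescriptor"] = None

def process_chain(head: RGETDescriptor) -> None:
    """Walk the chain as a DMA engine would: each remote get both performs
    a transfer and injects the descriptor for the next one, so the chain
    proceeds without CPU involvement."""
    desc = head
    while desc is not None:
        t = desc.transfer
        print(f"DMA transfer {t.nbytes} bytes: {t.src} -> {t.dst}")
        desc = desc.next_rget

# Two chained transfers, mirroring the origin/target example above.
chain = RGETDescriptor(
    TransferDescriptor("origin:buf0", "target:buf0", 4096),
    RGETDescriptor(TransferDescriptor("origin:buf1", "target:buf1", 4096)),
)
process_chain(chain)
```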
High-Performance Parallel Analysis of Coupled Problems for Aircraft Propulsion
NASA Technical Reports Server (NTRS)
Felippa, C. A.; Farhat, C.; Park, K. C.; Gumaste, U.; Chen, P.-S.; Lesoinne, M.; Stern, P.
1996-01-01
This research program dealt with the application of high-performance computing methods to the numerical simulation of complete jet engines. The program was initiated in January 1993 by applying two-dimensional parallel aeroelastic codes to the interior gas flow problem of a bypass jet engine. The fluid mesh generation, domain decomposition and solution capabilities were successfully tested. Attention was then focused on methodology for the partitioned analysis of the interaction of the gas flow with a flexible structure and with the fluid mesh motion driven by these structural displacements. The latter is treated by an ALE technique that models the fluid mesh motion as that of a fictitious mechanical network laid along the edges of near-field fluid elements. New partitioned analysis procedures to treat this coupled three-component problem were developed during 1994 and 1995. These procedures involved delayed corrections and subcycling, and have been successfully tested on several massively parallel computers, including the iPSC-860, Paragon XP/S and the IBM SP2. For the global steady-state axisymmetric analysis of a complete engine we have decided to use the NASA-sponsored ENG10 program, which uses a regular FV-multiblock-grid discretization in conjunction with circumferential averaging to include effects of blade forces, loss, combustor heat addition, blockage, bleeds and convective mixing. A load-balancing preprocessor for parallel versions of ENG10 was developed. During 1995 and 1996 we developed the capability for the first full 3D aeroelastic simulation of a multirow engine stage. This capability was tested on the IBM SP2 parallel supercomputer at NASA Ames. Benchmark results were presented at the 1996 Computational Aeroscience meeting.
Planetary-Scale Geospatial Data Analysis Techniques in Google's Earth Engine Platform (Invited)
NASA Astrophysics Data System (ADS)
Hancher, M.
2013-12-01
Geoscientists have more and more access to new tools for large-scale computing. With any tool, some tasks are easy and other tasks hard. It is natural to look to new computing platforms to increase the scale and efficiency of existing techniques, but there is a more exciting opportunity to discover and develop a new vocabulary of fundamental analysis idioms that are made easy and effective by these new tools. Google's Earth Engine platform is a cloud computing environment for earth data analysis that combines a public data catalog with a large-scale computational facility optimized for parallel processing of geospatial data. The data catalog includes a nearly complete archive of scenes from Landsat 4, 5, 7, and 8 that have been processed by the USGS, as well as a wide variety of other remotely-sensed and ancillary data products. Earth Engine supports a just-in-time computation model that enables real-time preview during algorithm development and debugging as well as during experimental data analysis and open-ended data exploration. Data processing operations are performed in parallel across many computers in Google's datacenters. The platform automatically handles many traditionally-onerous data management tasks, such as data format conversion, reprojection, resampling, and associating image metadata with pixel data. Early applications of Earth Engine have included the development of Google's global cloud-free fifteen-meter base map and global multi-decadal time-lapse animations, as well as numerous large and small experimental analyses by scientists from a range of academic, government, and non-governmental institutions, working in a wide variety of application areas including forestry, agriculture, urban mapping, and species habitat modeling. Patterns in the successes and failures of these early efforts have begun to emerge, sketching the outlines of a new set of simple and effective approaches to geospatial data analysis.
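As a concrete illustration of the just-in-time model, here is a minimal sketch using the Earth Engine Python client; the collection ID and filter values are assumptions chosen for illustration, and the computation runs server-side only when a result is requested.

```python
import ee  # Google Earth Engine Python client

ee.Initialize()  # assumes the user has already authenticated

# Build a cloud-reduced Landsat 8 composite. The computation is deferred
# and executed in parallel in Google's datacenters only when results
# (e.g., map tiles or reductions) are requested.
collection = (
    ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")  # asset ID is an assumption
    .filterDate("2013-01-01", "2013-12-31")
    .filterBounds(ee.Geometry.Point(-122.09, 37.42))
)
composite = collection.median()

# Only this call triggers server-side computation.
print(composite.getInfo()["bands"][0]["id"])
```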
NASA Astrophysics Data System (ADS)
Tellman, B.; Schwarz, B.
2014-12-01
This talk describes the development of a web application to predict and communicate vulnerability to floods given publicly available data, disaster science, and geospatial cloud capabilities. The proof of concept in the Google Earth Engine API, with initial testing on case studies in New York and Uttarakhand, India, demonstrates the potential of highly parallelized cloud computing to model socio-ecological disaster vulnerability at high spatial and temporal resolution and in near real time. Cloud computing facilitates statistical modeling with variables derived from large public social and ecological data sets, including census data, nighttime lights (NTL), and WorldPop to derive social parameters, together with elevation, satellite imagery, rainfall, and observed flood data from the Dartmouth Flood Observatory to derive biophysical parameters. While more traditional, physically based hydrological models that rely on flow algorithms and numerical methods are currently unavailable in parallelized computing platforms like Google Earth Engine, there is high potential to explore "data-driven" modeling that trades physics for statistics in a parallelized environment. A data-driven approach to flood modeling with geographically weighted logistic regression has been initially tested on Hurricane Irene in southeastern New York. Comparison of model results with observed flood data reveals 97% accuracy in predicting flooded pixels. Testing on multiple storms is required to further validate this initially promising approach. A statistical social-ecological flood model that could produce rapid vulnerability assessments, predicting who might require immediate evacuation and where, could serve as an early warning. This type of early warning system would be especially relevant in data-poor places lacking the computing power, high-resolution data such as lidar and stream gauges, or hydrologic expertise to run physically based models in real time. As the data-driven model presented relies on globally available data, the only real-time data input required would be typical data from a weather service, e.g., precipitation or coarse-resolution flood prediction. However, model uncertainty will vary locally depending upon the resolution and frequency of observed flood and socio-economic damage impact data.
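A minimal sketch of the data-driven idea follows, using plain (not geographically weighted) logistic regression from scikit-learn on synthetic stand-in predictors; all variable choices here are hypothetical, not the study's actual inputs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-ins for per-pixel predictors: e.g., elevation, rainfall,
# distance to stream, population density (all hypothetical here).
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))

# Synthetic "observed flooded" labels; real labels would come from an
# observed flood extent such as Dartmouth Flood Observatory data.
y = (X[:, 0] < -0.5).astype(int) | (X[:, 1] > 1.0).astype(int)

model = LogisticRegression().fit(X, y)
flood_prob = model.predict_proba(X)[:, 1]  # per-pixel flood probability
print("training accuracy:", model.score(X, y))
```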
P-HARP: A parallel dynamic spectral partitioner
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sohn, A.; Biswas, R.; Simon, H.D.
1997-05-01
Partitioning unstructured graphs is central to the parallel solution of problems in computational science and engineering. The authors earlier introduced the sequential version of an inertial spectral partitioner called HARP, which maintains the quality of recursive spectral bisection (RSB) while forming the partitions an order of magnitude faster than RSB. The serial HARP is known to be the fastest spectral partitioner to date, three to four times faster than similar partitioners on a variety of meshes. This paper presents a parallel version of HARP, called P-HARP. Two types of parallelism have been exploited: loop-level parallelism and recursive parallelism. P-HARP has been implemented in MPI on the SGI/Cray T3E and the IBM SP2. Experimental results demonstrate that P-HARP can partition a mesh of over 100,000 vertices into 256 partitions in 0.25 seconds on a 64-processor T3E. Experimental results further show that P-HARP can give nearly a 20-fold speedup on 64 processors. These results indicate that graph partitioning is no longer a major bottleneck that hinders the advancement of computational science and engineering for dynamically-changing real-world applications.
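The core operation that HARP and P-HARP accelerate is spectral bisection: split the graph by the Fiedler vector of its Laplacian (the eigenvector of the second-smallest eigenvalue). A minimal serial sketch in Python/SciPy follows; it illustrates the mathematics only and contains none of HARP's inertial or parallel machinery.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import eigsh

def spectral_bisect(adjacency: sp.spmatrix) -> np.ndarray:
    """Split a graph in two by the median of the Fiedler vector, as in
    recursive spectral bisection (RSB)."""
    L = laplacian(adjacency.tocsr()).astype(float)
    # Two smallest eigenpairs of the Laplacian; the second is the Fiedler pair.
    vals, vecs = eigsh(L, k=2, which="SM")
    fiedler = vecs[:, np.argsort(vals)[1]]
    return fiedler > np.median(fiedler)  # boolean partition labels

# Example: a small ring graph.
n = 8
rows = np.arange(n)
adj = sp.coo_matrix((np.ones(n), (rows, (rows + 1) % n)), shape=(n, n))
adj = adj + adj.T
print(spectral_bisect(adj))
```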
Multi-source Geospatial Data Analysis with Google Earth Engine
NASA Astrophysics Data System (ADS)
Erickson, T.
2014-12-01
The Google Earth Engine platform is a cloud computing environment for data analysis that combines a public data catalog with a large-scale computational facility optimized for parallel processing of geospatial data. The data catalog is a multi-petabyte archive of georeferenced datasets that include images from Earth observing satellite and airborne sensors (examples: USGS Landsat, NASA MODIS, USDA NAIP), weather and climate datasets, and digital elevation models. Earth Engine supports both a just-in-time computation model that enables real-time preview and debugging during algorithm development for open-ended data exploration, and a batch computation mode for applying algorithms over large spatial and temporal extents. The platform automatically handles many traditionally-onerous data management tasks, such as data format conversion, reprojection, and resampling, which facilitates writing algorithms that combine data from multiple sensors and/or models. Although the primary use of Earth Engine, to date, has been the analysis of large Earth observing satellite datasets, the computational platform is generally applicable to a wide variety of use cases that require large-scale geospatial data analyses. This presentation will focus on how Earth Engine facilitates the analysis of geospatial data streams that originate from multiple separate sources (and often communities) and how it enables collaboration during algorithm development and data exploration. The talk will highlight current projects/analyses that are enabled by this functionality.
https://earthengine.google.org
Learning from the Parallel Pathways of Makers to Broaden Pathways to Engineering
ERIC Educational Resources Information Center
Foster, Christina; Wigner, Aubrey; Lande, Micah; Jordan, Shawn S.
2018-01-01
Background: Makers are a growing community of STEM-minded people who bridge technical and non-technical backgrounds to imagine, build and fabricate engineering systems. Some have engineering training, some do not. This paper presents a study to explore the educational pathways of adult Makers and how they intersect with engineering. This research…
The natural history of the sleep and respiratory engineering track at EMBC 1988 to 2010.
Leder, Ron S; Schlotthauer, Gaston; Penzel, Thomas; Jane, Raimon
2010-01-01
Sleep science and respiratory engineering as medical subspecialties and research areas grew up side-by-side with biomedical engineering. The formation of EMBS in the 1950s and the discovery of REM sleep in the 1950s led to parallel development and interaction of sleep and biomedical engineering in diagnostics and therapeutics.
GSHR-Tree: a spatial index tree based on dynamic spatial slot and hash table in grid environments
NASA Astrophysics Data System (ADS)
Chen, Zhanlong; Wu, Xin-cai; Wu, Liang
2008-12-01
Computation Grids enable the coordinated sharing of large-scale distributed heterogeneous computing resources that can be used to solve computationally intensive problems in science, engineering, and commerce. Grid spatial applications are made possible by high-speed networks and a new generation of Grid middleware that resides between networks and traditional GIS applications. Integrating multi-source, heterogeneous spatial information, managing distributed spatial resources, and sharing and cooperating on spatial data and Grid services are the key problems to resolve in developing a Grid GIS. The spatial index mechanism is a key technology of Grid GIS and spatial databases, and its performance affects the overall performance of a GIS in Grid environments. To improve the efficiency of parallel processing of massive spatial data in a distributed parallel computing grid environment, this paper presents GSHR-Tree, a grid slot hash parallel spatial index established within a parallel spatial indexing mechanism. Based on a hash table and dynamic spatial slots, it improves the structure of the classical parallel R-tree index, making full use of the good qualities of both the R-tree and the hash data structure. The result is a new parallel spatial index that can meet the needs of parallel grid computing over massive spatial data in distributed networks. The algorithm splits space into multiple slots by repeated subdivision and maps these slots to sites in the distributed parallel system. Each site organizes the spatial objects in its slot into an R-tree. On the basis of this tree structure, the index data are distributed among multiple nodes in the grid network using a large-node R-tree method. Load imbalance arising during processing can be quickly corrected by a dynamic adjustment algorithm. The structure accounts for the distribution, replication, and transfer of spatial index data in the grid environment, and its design ensures load balance in parallel computation, making it well suited to parallel processing of spatial information in distributed network environments. Instead of the recursive comparison of spatial objects used in the original R-tree, the algorithm builds the spatial index with binary code operations, which computers execute more efficiently, extended with dynamic hash codes for bit comparison. In GSHR-Tree, a new server is assigned to the network whenever a full node must be split. We describe a flexible allocation protocol that copes with temporary shortages of storage resources: it uses a distributed balanced binary spatial tree that scales with insertions to potentially any number of storage servers through splits of the overloaded ones. An application manipulates the GSHR-Tree structure from a node in the grid environment, addressing the tree through an image that splits can make outdated; the resulting addressing errors are resolved by forwarding among the servers. This paper also proposes a spatial index data distribution algorithm that limits the number of servers, improving storage utilization at the cost of additional messages. The GSHR-Tree scheme is intended to fit the needs of new applications that use ever larger sets of spatial data.
Our proposal constitutes a flexible storage allocation method for a distributed spatial index. The insertion policy can be tuned dynamically to cope with periods of storage shortage; in such cases storage balancing is favored for better space utilization, at the price of extra message exchanges between servers. The structure makes a compromise between updating the duplicated index and transforming the spatial index data. Meeting the needs of grid computing, GSHR-Tree has a flexible structure that can satisfy new needs in the future. It provides R-tree capabilities for large spatial datasets stored over interconnected servers. The analysis, including the experiments, confirmed the efficiency of our design choices, and the scheme should fit the needs of new applications of spatial data using ever larger datasets. Using the system response time of a parallel spatial range query as the performance evaluation metric, simulated experiments confirm the sound design and high performance of the proposed indexing structure.
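To make the slot mechanism concrete, the sketch below quantizes coordinates, derives a slot identifier with a bit-interleaving (Morton) code, and hashes slots to servers. The Morton encoding is an assumed stand-in for the paper's unspecified binary code operation, and all names and sizes are illustrative.

```python
def interleave_bits(x: int, y: int, bits: int = 8) -> int:
    """Z-order (Morton) code: interleave the bits of quantized x and y.
    This is one concrete choice of binary encoding for splitting space
    into slots; the paper's exact encoding may differ."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

def slot_for(x: float, y: float, extent: float = 1000.0,
             slot_bits: int = 4) -> int:
    # Quantize coordinates to 8 bits each, compute the 16-bit Morton
    # code, and keep the high bits as the spatial slot identifier.
    qx = int(x / extent * (1 << 8))
    qy = int(y / extent * (1 << 8))
    return interleave_bits(qx, qy) >> (16 - 2 * slot_bits)

def server_for(slot: int, n_servers: int) -> int:
    # A hash table maps each slot to a site; each site then builds an
    # R-tree over the objects falling in its slot.
    return hash(slot) % n_servers

s = slot_for(123.0, 456.0)
print(s, server_for(s, 16))
```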
High-performance parallel analysis of coupled problems for aircraft propulsion
NASA Technical Reports Server (NTRS)
Felippa, C. A.; Farhat, C.; Lanteri, S.; Gumaste, U.; Ronaghi, M.
1994-01-01
Applications are described of high-performance parallel computation for the analysis of complete jet engines, treated as a multidisciplinary coupled problem. The coupled problem involves the interaction of structures with gas dynamics, heat conduction, and heat transfer in aircraft engines. The methodology issues addressed include: consistent discrete formulation of coupled problems with emphasis on coupling phenomena; effects of partitioning strategies, augmentation, and temporal solution procedures; sensitivity of response to problem parameters; and methods for interfacing multiscale discretizations in different single fields. The computer implementation issues addressed include: parallel treatment of coupled systems; domain decomposition and mesh partitioning strategies; data representation in object-oriented form and mapping to hardware-driven representation; and tradeoff studies between partitioning schemes and fully coupled treatment.
Stage-by-Stage and Parallel Flow Path Compressor Modeling for a Variable Cycle Engine
NASA Technical Reports Server (NTRS)
Kopasakis, George; Connolly, Joseph W.; Cheng, Larry
2015-01-01
This paper covers the development of stage-by-stage and parallel flow path compressor modeling approaches for a Variable Cycle Engine. The stage-by-stage compressor modeling approach is an extension of a technique for lumped volume dynamics and performance characteristic modeling. It was developed to improve the accuracy of axial compressor dynamics over lumped volume dynamics modeling. The stage-by-stage compressor model presented here is formulated into a parallel flow path model that includes both axial and rotational dynamics. This is done to enable the study of compressor and propulsion system dynamic performance under flow distortion conditions. The approaches utilized here are generic and should be applicable for the modeling of any axial flow compressor design.
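A minimal sketch of the lumped-volume idea that stage-by-stage modeling extends: the pressure in a stage volume integrates the mass-flow imbalance through the ideal gas law. All constants and the linear stage characteristic below are placeholder assumptions, not the paper's model.

```python
# Lumped-volume dynamics for one compressor stage volume (illustrative).
R, T, V = 287.0, 400.0, 0.05   # gas constant (J/kg/K), temperature (K), volume (m^3)

def mdot_out(p: float) -> float:
    # Hypothetical linear stage characteristic: outflow rises with pressure.
    return 1e-4 * (p - 101325.0) + 10.0

def step(p: float, mdot_in: float, dt: float) -> float:
    # d(rho)/dt = (mdot_in - mdot_out)/V and p = rho*R*T for an ideal gas,
    # so dp/dt = R*T*(mdot_in - mdot_out)/V.
    return p + dt * R * T * (mdot_in - mdot_out(p)) / V

p = 101325.0
for _ in range(1000):
    p = step(p, mdot_in=10.5, dt=1e-4)  # a small inflow excess raises pressure
print(f"stage pressure after transient: {p:.0f} Pa")
```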
Fast Whole-Engine Stirling Analysis
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.; Demko, Rikako
2006-01-01
This presentation discusses the whole-engine simulation approach: physical consistency, REV regenerator modeling, grid layering for smoothness and quality, conjugate heat transfer method adjustment, a high-speed low-cost parallel cluster, and debugging.
Liquid rocket booster integration study. Volume 3: Study products. Part 2: Sections 8-19
NASA Technical Reports Server (NTRS)
1988-01-01
The impacts of introducing liquid rocket booster engines (LRB) into the Space Transportation System (STS)/Kennedy Space Center (KSC) launch environment are identified and evaluated. Proposed ground systems configurations are presented along with a launch site requirements summary. Prelaunch processing scenarios are described and the required facility modifications and new facility requirements are analyzed. Flight vehicle design recommendations to enhance launch processing are discussed. Processing approaches to integrate LRB with existing STS launch operations are evaluated. The key features and significance of launch site transition to a new STS configuration in parallel with ongoing launch activities are enumerated. This volume is part two of the study products section of the five volume series.
Liquid rocket booster integration study. Volume 3, part 1: Study products
NASA Technical Reports Server (NTRS)
1988-01-01
The impacts of introducing liquid rocket booster engines (LRB) into the Space Transportation System (STS)/Kennedy Space Center (KSC) launch environment are identified and evaluated. Proposed ground systems configurations are presented along with a launch site requirements summary. Prelaunch processing scenarios are described and the required facility modifications and new facility requirements are analyzed. Flight vehicle design recommendations to enhance launch processing are discussed. Processing approaches to integrate LRB with existing STS launch operations are evaluated. The key features and significance of launch site transition to a new STS configuration in parallel with ongoing launch activities are enumerated. This volume is part one of the study products section of the five volume series.
Clinical image processing engine
NASA Astrophysics Data System (ADS)
Han, Wei; Yao, Jianhua; Chen, Jeremy; Summers, Ronald
2009-02-01
Our group provides clinical image processing services to various institutes at NIH. We develop or adapt image processing programs for a variety of applications. However, each program requires a human operator to select a specific set of images and execute the program, as well as store the results appropriately for later use. To improve efficiency, we designed a parallelized clinical image processing engine (CIPE) to streamline and parallelize our service. The engine takes DICOM images from a PACS server, sorts and distributes the images to different applications, multithreads the execution of applications, and collects results from the applications. The engine consists of four modules: a listener, a router, a job manager, and a data manager. A template filter in XML format is defined to specify the image specification for each application. A MySQL database is created to store and manage the incoming DICOM images and application results. The engine achieves two important goals: it reduces the amount of time and manpower required to process medical images, and it reduces the turnaround time for responses. We tested our engine on three different applications with 12 datasets and demonstrated that the engine improved efficiency dramatically.
Architecture-Adaptive Computing Environment: A Tool for Teaching Parallel Programming
NASA Technical Reports Server (NTRS)
Dorband, John E.; Aburdene, Maurice F.
2002-01-01
Recently, networked and cluster computation have become very popular. This paper is an introduction to a new C based parallel language for architecture-adaptive programming, aCe C. The primary purpose of aCe (Architecture-adaptive Computing Environment) is to encourage programmers to implement applications on parallel architectures by providing them the assurance that future architectures will be able to run their applications with a minimum of modification. A secondary purpose is to encourage computer architects to develop new types of architectures by providing an easily implemented software development environment and a library of test applications. This new language should be an ideal tool to teach parallel programming. In this paper, we will focus on some fundamental features of aCe C.
High-performance parallel analysis of coupled problems for aircraft propulsion
NASA Technical Reports Server (NTRS)
Felippa, C. A.; Farhat, C.; Chen, P.-S.; Gumaste, U.; Lesoinne, M.; Stern, P.
1995-01-01
This research program deals with the application of high-performance computing methods to the numerical simulation of complete jet engines. The program was initiated in 1993 by applying two-dimensional parallel aeroelastic codes to the interior gas flow problem of a by-pass jet engine. The fluid mesh generation, domain decomposition and solution capabilities were successfully tested. Attention was then focused on methodology for the partitioned analysis of the interaction of the gas flow with a flexible structure and with the fluid mesh motion driven by these structural displacements. The latter is treated by an ALE technique that models the fluid mesh motion as that of a fictitious mechanical network laid along the edges of near-field fluid elements. New partitioned analysis procedures to treat this coupled 3-component problem were developed in 1994. These procedures involved delayed corrections and subcycling, and have been successfully tested on several massively parallel computers. For the global steady-state axisymmetric analysis of a complete engine we have decided to use the NASA-sponsored ENG10 program, which uses a regular FV-multiblock-grid discretization in conjunction with circumferential averaging to include effects of blade forces, loss, combustor heat addition, blockage, bleeds and convective mixing. A load-balancing preprocessor for parallel versions of ENG10 has been developed. It is planned to use the steady-state global solution provided by ENG10 as input to a localized three-dimensional FSI analysis for engine regions where aeroelastic effects may be important.
Research in Parallel Algorithms and Software for Computational Aerosciences
NASA Technical Reports Server (NTRS)
Domel, Neal D.
1996-01-01
Phase I is complete for the development of a Computational Fluid Dynamics parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.
Virtual Collaborative Environments for System of Systems Engineering and Applications for ISAT
NASA Technical Reports Server (NTRS)
Dryer, David A.
2002-01-01
This paper describes a system-of-systems, or metasystems, approach and models developed to help prepare engineering organizations for distributed engineering environments. Changes in engineering enterprises include competition in increasingly global environments, new partnering opportunities created by advances in information and communication technologies, and virtual collaboration issues associated with dispersed teams. To help address challenges and needs in this environment, a framework is proposed that can be customized and adapted for NASA to assist in improved engineering activities conducted in distributed, enhanced engineering environments. The approach is designed to prepare engineers for such distributed collaborative environments by learning and applying e-engineering methods and tools to a real-world engineering development scenario. It consists of two phases: an e-engineering basics phase, which addresses the skills required for e-engineering, and an e-engineering application phase, which applies these skills to system development projects in a distributed collaborative environment.
Granular Simulation of NEO Anchoring
NASA Technical Reports Server (NTRS)
Mazhar, Hammad
2011-01-01
NASA is interested in designing a spacecraft capable of visiting a Near Earth Object (NEO), performing experiments, and then returning safely. Certain periods of this mission will require the spacecraft to remain stationary relative to the NEO. Such situations require an anchoring mechanism that is compact, easy to deploy, and, upon mission completion, easily removed. The design philosophy used in the project relies on the simulation capability of a multibody dynamics physics engine. On Earth it is difficult to create low-gravity conditions, and testing in low-gravity environments, whether artificial or in space, is costly and therefore not feasible. Through simulation, gravity can be controlled with great accuracy, making simulation ideally suited to analyze the problem at hand. Using Chrono::Engine [1], a simulation package capable of utilizing massively parallel GPU hardware, several validation experiments will be performed. Once there is sufficient confidence, modeling of the NEO regolith interaction will begin, after which the anchor tests will be performed and analyzed. The outcome of this task is a study with an analysis of several different anchor designs, along with a recommendation on which anchor is best suited to the task of anchoring. The anchors are tested against a range of parameters relating to soil, environment, and anchor penetration angles and velocities on a NEO.
NASA Astrophysics Data System (ADS)
Knightes, C. D.; Bouchard, D.; Zepp, R. G.; Henderson, W. M.; Han, Y.; Hsieh, H. S.; Avant, B. K.; Acrey, B.; Spear, J.
2017-12-01
The unique properties of engineered nanomaterials have led to their increased production and potential release into the environment. Currently available environmental fate models developed for traditional contaminants are limited in their ability to simulate nanomaterials' environmental behavior. This is due to an incomplete understanding and representation of the processes governing nanomaterial distribution in the environment and to scarce empirical data quantifying the interaction of nanomaterials with environmental surfaces. The well-known Water Quality Analysis Simulation Program (WASP) was updated to incorporate nanomaterial-specific processes, specifically hetero-aggregation with particulate matter. In parallel with this effort, laboratory studies were used to quantify the parameter values necessary for the governing processes in surface waters. This presentation will discuss the recent developments in the new architecture for WASP8 and the newly constructed Advanced Toxicant Module. The module includes advanced algorithms for an increased number of state variables: chemicals, solids, dissolved organic matter, pathogens, temperature, and salinity. The presentation will focus specifically on the incorporation of nanomaterials, with applications to the fate and transport of hypothetical releases of Multi-Walled Carbon Nanotubes (MWCNT) and Graphene Oxide (GO) into the headwaters of a southeastern US coastal plains river. While this presentation focuses on nanomaterials, the Advanced Toxicant Module can also simulate metals and organic contaminants.
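A toy sketch of the hetero-aggregation process described above: free nanomaterial attaches to suspended solids at a rate proportional to both concentrations. The rate parameters and units here are illustrative assumptions, not measured values from the laboratory studies.

```python
# First-order hetero-aggregation sketch (illustrative parameters only).
alpha, k_coll = 0.1, 1e-3      # attachment efficiency (-), collision rate (L/mg/day)
solids = 20.0                  # suspended solids concentration, mg/L
c_free, c_attached = 1.0, 0.0  # nanomaterial concentrations, ug/L
dt, days = 0.01, 30.0

for _ in range(int(days / dt)):
    rate = alpha * k_coll * solids * c_free   # hetero-aggregation flux
    c_free -= rate * dt
    c_attached += rate * dt                   # mass is conserved

print(f"free: {c_free:.3f}, hetero-aggregated: {c_attached:.3f} ug/L")
```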
Structural, Physical, and Compositional Analysis of Lunar Simulants and Regolith
NASA Technical Reports Server (NTRS)
Greenberg, Paul; Street, Kenneth W.; Gaier, James
2008-01-01
Relative to the prior manned Apollo and unmanned robotic missions, planned Lunar initiatives are comparatively complex and longer in duration. Individual crew rotations are envisioned to span several months, and various surface systems must function in the Lunar environment for periods of years. As a consequence, an increased understanding of the surface environment is required to engineer and test the associated materials, components, and systems necessary to sustain human habitation and surface operations. The effort described here concerns the analysis of existing simulant materials, with application to Lunar return samples. The interplay between these analyses fulfills the objective of ascertaining the critical properties of regolith itself, and the parallel objective of developing suitable simulant materials for a variety of engineering applications. Presented here are measurements of the basic physical attributes, i.e., particle size distributions and general shape factors. Also discussed are structural and chemical properties, as determined through a variety of techniques, such as optical microscopy, SEM and TEM microscopy, Mössbauer spectroscopy, X-ray diffraction, Raman microspectroscopy, inductively coupled argon plasma emission spectroscopy, and energy-dispersive X-ray fluorescence mapping. A comparative description of currently available simulant materials is discussed, with implications for more detailed analyses, as well as the requirements for continued refinement of methods for simulant production.
Interactive visual optimization and analysis for RFID benchmarking.
Wu, Yingcai; Chung, Ka-Kei; Qu, Huamin; Yuan, Xiaoru; Cheung, S C
2009-01-01
Radio frequency identification (RFID) is a powerful automatic remote identification technique that has wide applications. To facilitate RFID deployment, an RFID benchmarking instrument called aGate has been invented to identify the strengths and weaknesses of different RFID technologies in various environments. However, the data acquired by aGate are usually complex time-varying multidimensional 3D volumetric data, which are extremely challenging for engineers to analyze. In this paper, we introduce a set of visualization techniques, namely, parallel coordinate plots, orientation plots, a visual history mechanism, and a 3D spatial viewer, to help RFID engineers analyze benchmark data visually and intuitively. Using these techniques, we further introduce two workflow procedures (a visual optimization procedure for finding the optimum reader antenna configuration and a visual analysis procedure for comparing the performance and identifying the flaws of RFID devices) for RFID benchmarking, with focus on the performance analysis of the aGate system. The usefulness and usability of the system are demonstrated in the user evaluation.
NASA Technical Reports Server (NTRS)
Dubiel, D. J.; Lohmann, R. P.; Tanrikut, S.; Morris, P. M.
1986-01-01
Under the NASA-sponsored Energy Efficient Engine program, Pratt and Whitney has successfully completed a comprehensive test program using a 90-degree sector combustor rig that featured an advanced two-stage combustor with a succession of advanced segmented liners. Building on the successful characteristics of the first-generation counter-parallel Finwall cooled segmented liner, design features of an improved-performance metallic segmented liner were substantiated through representative high pressure and temperature testing in a combustor atmosphere. This second-generation liner was substantially lighter and lower in cost than the predecessor configuration. The final test in this series provided an evaluation of ceramic composite liner segments in a representative combustor environment. It was demonstrated that the unique properties of ceramic composites (low density, high fracture toughness, and thermal fatigue resistance) can be advantageously exploited in high temperature components. Overall, this Combustor Section Rig Test program has provided a firm basis for the design of advanced combustor liners.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunt, D.N.
1997-02-01
The Information Engineering thrust area develops information technology to support the programmatic needs of Lawrence Livermore National Laboratory's Engineering Directorate. Progress in five programmatic areas is described in separate reports contained herein. These are entitled Three-Dimensional Object Creation, Manipulation, and Transport; Zephyr: A Secure Internet-Based Process to Streamline Engineering Procurements; Subcarrier Multiplexing: Optical Network Demonstrations; Parallel Optical Interconnect Technology Demonstration; and Intelligent Automation Architecture.
Flow of GE90 Turbofan Engine Simulated
NASA Technical Reports Server (NTRS)
Veres, Joseph P.
1999-01-01
The objective of this task was to create and validate a three-dimensional model of the GE90 turbofan engine (General Electric) using the APNASA (average passage) flow code. This was a joint effort between GE Aircraft Engines and the NASA Lewis Research Center. The goal was to perform an aerodynamic analysis of the engine primary flow path, in under 24 hours of CPU time, on a parallel distributed workstation system. Enhancements were made to the APNASA Navier-Stokes code to make it faster and more robust and to allow for the analysis of more arbitrary geometry. The resulting simulation exploited the use of parallel computations by using two levels of parallelism, with extremely high efficiency. The primary flow path of the GE90 turbofan consists of a nacelle and inlet, 49 blade rows of turbomachinery, and an exhaust nozzle. Secondary flows entering and exiting the primary flow path-such as bleed, purge, and cooling flows-were modeled macroscopically as source terms to accurately simulate the engine. The information on these source terms came from detailed descriptions of the cooling flow and from thermodynamic cycle system simulations. These provided boundary condition data to the three-dimensional analysis. A simplified combustor was used to feed boundary conditions to the turbomachinery. Flow simulations of the fan, high-pressure compressor, and high- and low-pressure turbines were completed with the APNASA code.
High-Performance Parallel Analysis of Coupled Problems for Aircraft Propulsion
NASA Technical Reports Server (NTRS)
Felippa, C. A.; Farhat, C.; Park, K. C.; Gumaste, U.; Chen, P.-S.; Lesoinne, M.; Stern, P.
1997-01-01
Applications are described of high-performance computing methods to the numerical simulation of complete jet engines. The methodology focuses on the partitioned analysis of the interaction of the gas flow with a flexible structure and with the fluid mesh motion driven by structural displacements. The latter is treated by an ALE technique that models the fluid mesh motion as that of a fictitious mechanical network laid along the edges of near-field elements. New partitioned analysis procedures to treat this coupled three-component problem were developed. These procedures involved delayed corrections and subcycling, and have been successfully tested on several massively parallel computers, including the iPSC-860, Paragon XP/S and the IBM SP2. The NASA-sponsored ENG10 program was used for the global steady state analysis of the whole engine. This program uses a regular FV-multiblock-grid discretization in conjunction with circumferential averaging to include effects of blade forces, loss, combustor heat addition, blockage, bleeds and convective mixing. A load-balancing preprocessor for parallel versions of ENG10 was developed as well as the capability for the first full 3D aeroelastic simulation of a multirow engine stage. This capability was tested on the IBM SP2 parallel supercomputer at NASA Ames.
Knight, V Bleu; Serrano, Elba E
2017-01-01
Biomaterial scaffolds have the potential to enhance neuronal development and regeneration. Understanding the genetic responses of astrocytes and neurons to biomaterials could facilitate the development of synthetic environments that enable the specification of neural tissue organization with engineered scaffolds. In this study, we used high-throughput transcriptomic and imaging methods to determine the impact of a hydrogel, PuraMatrix™, on human glial cells in vitro. Parallel studies were undertaken with cells grown in a monolayer environment on tissue culture polystyrene. When the Normal Human Astrocyte (NHA) cell line is grown in a hydrogel matrix environment, the glial cells adopt a structural organization that resembles that of neuronal-glial cocultures, where neurons form clusters that are distinct from the surrounding glia. Statistical analysis of next-generation RNA sequencing data uncovered a set of genes that are differentially expressed in the monolayer and matrix hydrogel environments. Functional analysis demonstrated that hydrogel-upregulated genes can be grouped into three broad categories: neuronal differentiation and/or neural plasticity, response to neural insult, and sensory perception. Our results demonstrate that hydrogel biomaterials have the potential to transform human glial cell identity, and may have applications in the repair of damaged brain tissue.
Neural Parallel Engine: A toolbox for massively parallel neural signal processing.
Tam, Wing-Kin; Yang, Zhi
2018-05-01
Large-scale neural recordings provide detailed information on neuronal activities and can help elicit the underlying neural mechanisms of the brain. However, the computational burden is also formidable when we try to process the huge data stream generated by such recordings. In this study, we report the development of Neural Parallel Engine (NPE), a toolbox for massively parallel neural signal processing on graphical processing units (GPUs). It offers a selection of the most commonly used routines in neural signal processing such as spike detection and spike sorting, including advanced algorithms such as exponential-component-power-component (EC-PC) spike detection and binary pursuit spike sorting. We also propose a new method for detecting peaks in parallel through a parallel compact operation. Our toolbox is able to offer a 5× to 110× speedup compared with its CPU counterparts depending on the algorithms. A user-friendly MATLAB interface is provided to allow easy integration of the toolbox into existing workflows. Previous efforts on GPU neural signal processing only focus on a few rudimentary algorithms, are not well-optimized and often do not provide a user-friendly programming interface to fit into existing workflows. There is a strong need for a comprehensive toolbox for massively parallel neural signal processing. A new toolbox for massively parallel neural signal processing has been created. It can offer significant speedup in processing signals from large-scale recordings up to thousands of channels.
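The flavor of data-parallel peak detection is easy to sketch: compare every sample with its neighbors at once, then compact the flagged indices. The NumPy version below is a CPU stand-in for the idea; it is not the toolbox's EC-PC detector or its GPU compaction kernel.

```python
import numpy as np

def detect_peaks(signal: np.ndarray, threshold: float) -> np.ndarray:
    """Data-parallel peak detection: every sample is compared with its
    neighbors simultaneously, the style of operation that maps well to a
    GPU; here expressed with vectorized NumPy on the CPU."""
    mid = signal[1:-1]
    is_peak = (mid > signal[:-2]) & (mid >= signal[2:]) & (mid > threshold)
    # np.flatnonzero plays the role of the parallel "compact" step that
    # gathers the indices of flagged samples.
    return np.flatnonzero(is_peak) + 1

rng = np.random.default_rng(1)
x = rng.normal(size=100_000)
x[[100, 5000, 60_000]] += 8.0  # inject three obvious spikes
print(detect_peaks(x, threshold=5.0))
```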
A review on battery thermal management in electric vehicle application
NASA Astrophysics Data System (ADS)
Xia, Guodong; Cao, Lei; Bi, Guanglong
2017-11-01
The global issues of energy crisis and air pollution have offered a great opportunity to develop electric vehicles. However, so far, the cycle life of power batteries, environmental adaptability, driving range, and charging time fall far short of the level of traditional vehicles with internal combustion engines. Effective battery thermal management (BTM) is absolutely essential to relieve this situation. This paper reviews the existing literature at two levels: the cell level and the battery module level. For a single battery, specific attention is paid to three important processes: heat generation, heat transport, and heat dissipation. For large-format cells, multi-scale multi-dimensional coupled models have been developed. These facilitate the investigation of factors, such as local irreversible heat generation, thermal resistance, and current distribution, that account for the intrinsic temperature gradients existing in a cell. For battery modules based on air and liquid cooling, series, series-parallel, and parallel cooling configurations are discussed. Liquid cooling strategies, especially direct liquid cooling strategies, are reviewed, and they may advance battery thermal management systems to a new generation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Song
CFD (Computational Fluid Dynamics) is a widely used technique in the engineering design field. It uses mathematical methods to simulate and predict flow characteristics in a certain physical space. Since the numerical results of CFD computation are very hard to understand, VR (virtual reality) and data visualization techniques are introduced into CFD post-processing to improve the understandability and functionality of CFD computation. In many cases CFD datasets are very large (multi-gigabyte), and more and more interaction between the user and the datasets is required. For traditional VR applications, limited computing power is a major factor preventing the effective visualization of large datasets. This thesis presents a new system designed to speed up traditional VR applications by using parallel computing and distributed computing, along with the idea of using hand-held devices to enhance the interaction between a user and a VR CFD application. Techniques from different research areas, including scientific visualization, parallel computing, distributed computing, and graphical user interface design, are used in the development of the final system. As a result, the new system can be flexibly built on a heterogeneous computing environment and dramatically shortens the computation time.
Embedded Implementation of VHR Satellite Image Segmentation
Li, Chao; Balla-Arabé, Souleymane; Ginhac, Dominique; Yang, Fan
2016-01-01
Processing and analysis of Very High Resolution (VHR) satellite images provide a mass of crucial information, which can be used for urban planning, security issues, or environmental monitoring. However, they are computationally expensive and, thus, time consuming, while some of the applications, such as natural disaster monitoring and prevention, require high-efficiency performance. Fortunately, parallel computing techniques and embedded systems have made great progress in recent years, and a series of massively parallel image processing devices, such as digital signal processors or Field Programmable Gate Arrays (FPGAs), have been made available to engineers at a very convenient price and demonstrate significant advantages in terms of running cost, embeddability, power consumption, flexibility, etc. In this work, we designed a texture region segmentation method for very high resolution satellite images by using the level set algorithm and multi-kernel theory in a high-abstraction C environment and realized its register-transfer level implementation with the help of a newly proposed high-level synthesis-based design flow. The evaluation experiments demonstrate that the proposed design can produce high-quality image segmentation with a significant running-cost advantage. PMID:27240370
Pacing a data transfer operation between compute nodes on a parallel computer
Blocksome, Michael A [Rochester, MN
2011-09-13
Methods, systems, and products are disclosed for pacing a data transfer between compute nodes on a parallel computer that include: transferring, by an origin compute node, a chunk of an application message to a target compute node; sending, by the origin compute node, a pacing request to a target direct memory access (DMA) engine on the target compute node using a remote get DMA operation; determining, by the origin compute node, whether a pacing response to the pacing request has been received from the target DMA engine; and transferring, by the origin compute node, a next chunk of the application message if the pacing response to the pacing request has been received from the target DMA engine.
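A toy model of the pacing handshake follows, with queues standing in for the network and threads for the two nodes; all names are illustrative, not the patent's interfaces.

```python
import queue
import threading

network = queue.Queue()    # origin -> target messages
responses = queue.Queue()  # target -> origin pacing responses

def target_dma():
    # Toy target DMA engine: consume chunks, echo pacing responses.
    while True:
        kind, payload = network.get()
        if kind == "chunk":
            pass  # deliver into the application buffer
        elif kind == "pacing_request":
            responses.put("pacing_response")  # the remote get echoes back
        elif kind == "done":
            return

def origin(chunks):
    # Send one chunk, then block on the pacing response before the next.
    for chunk in chunks:
        network.put(("chunk", chunk))
        network.put(("pacing_request", None))
        responses.get()  # wait until the target has drained the chunk
    network.put(("done", None))

t = threading.Thread(target=target_dma)
t.start()
origin([b"a" * 512] * 4)
t.join()
print("all chunks paced through")
```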
NASA Astrophysics Data System (ADS)
Eschenbach, E. A.; Conklin, M. H.
2007-12-01
The need to train students in hydrologic science and environmental engineering is well established. Likewise, the public requires a raised awareness of the seriousness of water quality and availability problems. The WATERS Network (WATer and Environmental Research Systems Network) has the potential to significantly change the way students, researchers, citizens, policy makers, and industry members learn about environmental problems and solutions regarding water quality, quantity, and distribution. This potential can be met if the efforts of water scientists, computer scientists, and educators are integrated appropriately. Successful pilot projects have found that cyberinfrastructure for education and outreach needs to be developed in parallel with research-related cyberinfrastructure. We propose further integration of research, education, and outreach activities. Through the use of technology that connects students, faculty, researchers, policy makers, and others, the WATERS Network can provide learning opportunities and teaching efficiencies that can revolutionize environmental science and engineering education. However, there is a plethora of existing environmental science and engineering educational programs. In this environment, WATERS can make a greater impact through careful selection of activities that build upon its unique strengths, that have high potential for engaging the members, and that meet identified needs: (i) modernizing curricula and pedagogy, (ii) integrating science and education, (iii) sustainable professional development, and (iv) training the next generation of interdisciplinary water and social scientists and environmental engineers. National and observatory-based education facilities would establish the physical infrastructure necessary to coordinate education and outreach activities. Each observatory would partner with local educators and citizens to develop activities congruent with the scientific mission of the observatory. An unprecedented opportunity exists for educational research on both formal and informal environmental science and engineering education in order to understand how the Network can be efficiently used to create effective technology-based learning environments for all participants.
An Expert System for the Development of Efficient Parallel Code
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Chun, Robert; Jin, Hao-Qiang; Labarta, Jesus; Gimenez, Judit
2004-01-01
We have built the prototype of an expert system to assist the user in the development of efficient parallel code. The system was integrated into the parallel programming environment that is currently being developed at NASA Ames. The expert system interfaces to tools for automatic parallelization and performance analysis. It uses static program structure information and performance data in order to automatically determine causes of poor performance and to make suggestions for improvements. In this paper we give an overview of our programming environment, describe the prototype implementation of our expert system, and demonstrate its usefulness with several case studies.
DEPOSITION DISTRIBUTION AMONG THE PARALLEL PATHWAYS IN THE HUMAN LUNG CONDUCTING AIRWAY STRUCTURE.
DEPOSITION DISTRIBUTION AMONG THE PARALLEL PATHWAYS IN THE HUMAN LUNG CONDUCTING AIRWAY STRUCTURE. Chong S. Kim*, USEPA National Health and Environmental Effects Research Lab. RTP, NC 27711; Z. Zhang and C. Kleinstreuer, Department of Mechanical and Aerospace Engineering, North C...
Mathematics for Physicists and Engineers.
ERIC Educational Resources Information Center
Organisation for Economic Cooperation and Development, Paris (France).
The text is a report of the OEEC Seminar on "The Mathematical Knowledge Required by the Physicist and Engineer" held in Paris, 1961. There are twelve major papers presented: (1) An American Parallel (describes the work of the Panel on Physical Sciences and Engineering of the Committee on the Undergraduate Program in Mathematics of the Mathematical…
Partitioning and packing mathematical simulation models for calculation on parallel computers
NASA Technical Reports Server (NTRS)
Arpasi, D. J.; Milner, E. J.
1986-01-01
The development of multiprocessor simulations from a serial set of ordinary differential equations describing a physical system is described. Degrees of parallelism (i.e., coupling between the equations) and their impact on parallel processing are discussed. The problem of identifying computational parallelism within sets of closely coupled equations that require the exchange of current values of variables is described. A technique is presented for identifying this parallelism and for partitioning the equations for parallel solution on a multiprocessor. An algorithm which packs the equations into a minimum number of processors is also described. The results of the packing algorithm when applied to a turbojet engine model are presented in terms of processor utilization.
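A packing step of this kind can be sketched as bin packing: give each equation an estimated per-step compute time and pack equations so no processor exceeds the frame time. The first-fit-decreasing heuristic below is a generic stand-in, not necessarily the paper's algorithm; the equation names and times are hypothetical.

```python
def pack(eq_times: dict[str, float], frame_time: float) -> list[list[str]]:
    """First-fit-decreasing packing of equations onto processors: place
    each equation, largest first, on the first processor whose total
    per-step time stays within the frame time; open a new processor
    only when none fits."""
    processors: list[tuple[float, list[str]]] = []
    for name, t in sorted(eq_times.items(), key=lambda kv: -kv[1]):
        for i, (load, eqs) in enumerate(processors):
            if load + t <= frame_time:
                processors[i] = (load + t, eqs + [name])
                break
        else:
            processors.append((t, [name]))
    return [eqs for _, eqs in processors]

# Hypothetical per-equation compute times (ms) for a small engine model.
times = {"spool_speed": 0.4, "burner_temp": 0.7, "duct_pressure": 0.3,
         "nozzle_flow": 0.5, "sensor_lag": 0.2}
print(pack(times, frame_time=1.0))  # minimum-processor assignment
```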
Method of vibration isolating an aircraft engine
NASA Technical Reports Server (NTRS)
Bender, Stanley I. (Inventor); Butler, Lawrence (Inventor); Dawes, Peter W. (Inventor)
1991-01-01
A method for coupling an engine to a support frame for mounting to a fuselage of an aircraft using a three-point vibration-isolating mounting system in which the load reactive forces at each mounting point are statically and dynamically determined. A first vibration-isolating mount pivotably couples a first end of an elongated support beam to a stator portion of an engine, with the pivoting action of the vibration mount oriented such that it is pivotable about a line parallel to the center line of the engine. An aft end of the supporting frame is coupled to the engine through an additional pair of vibration-isolating mounts oriented such that they are pivotable about a circumference of the engine. The aft mounts are symmetrically spaced 45 degrees to each side of the supporting frame. The relative orientation between the front mount and the pair of rear mounts is such that only the rear mounts provide load reactive forces parallel to the engine center line, supporting the engine against thrust forces. The forward mount is oriented so as to provide only radial forces to the engine and some lifting forces to maintain the engine in position adjacent to the fuselage. Since each mount is connected to provide specific forces to support the engine, the forces required of each mount are statically and dynamically determinable.
Toward an automated parallel computing environment for geosciences
NASA Astrophysics Data System (ADS)
Zhang, Huai; Liu, Mian; Shi, Yaolin; Yuen, David A.; Yan, Zhenzhen; Liang, Guoping
2007-08-01
Software for geodynamic modeling has not kept up with the fast growing computing hardware and network resources. In the past decade supercomputing power has become available to most researchers in the form of affordable Beowulf clusters and other parallel computer platforms. However, to take full advantage of such computing power requires developing parallel algorithms and associated software, a task that is often too daunting for geoscience modelers whose main expertise is in geosciences. We introduce here an automated parallel computing environment built on open-source algorithms and libraries. Users interact with this computing environment by specifying the partial differential equations, solvers, and model-specific properties using an English-like modeling language in the input files. The system then automatically generates the finite element codes that can be run on distributed or shared memory parallel machines. This system is dynamic and flexible, allowing users to address different problems in geosciences. It is capable of providing web-based services, enabling users to generate source codes online. This unique feature will facilitate high-performance computing to be integrated with distributed data grids in the emerging cyber-infrastructures for geosciences. In this paper we discuss the principles of this automated modeling environment and provide examples to demonstrate its versatility.
Development of Supersonic Vehicle for Demonstration of a Precooled Turbojet Engine
NASA Astrophysics Data System (ADS)
Sawai, Shujiro; Fujita, Kazuhisa; Kobayashi, Hiroaki; Sakai, Shin'ichiro; Bando, Nobutaka; Kadooka, Shouhei; Tsuboi, Nobuyuki; Miyaji, Koji; Uchiyama, Taku; Hashimoto, Tatsuaki
JAXA is developing Mach 5 hypersonic turbojet engine technology that can be applied to a future hypersonic transport. The Jet Engine Technology Research Center of JAXA is now conducting an experimental study using a 1/10-scale model engine. In parallel with the engine development activities, a new supersonic flight-testing vehicle for the hypersonic turbojet engine has been under development since 2004. In this paper, the system configuration of the flight-testing vehicle is outlined and its development status is reported.
Simulation Exploration through Immersive Parallel Planes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brunhart-Lupo, Nicholas J; Bush, Brian W; Gruchalla, Kenny M
We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selections, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.
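To make the parallel-planes mapping concrete, here is a minimal sketch of the core geometric idea, assuming normalized data and consecutive pairing of dimensions onto planes; the immersive rendering, joystick interaction, and simulation launching are omitted, and all names are illustrative rather than taken from the system.

```python
# Minimal sketch of the parallel-planes mapping described above: pairs of
# multivariate dimensions become rectangles (planes), and each observation
# becomes a polyline with one 2-D vertex per plane.

def to_polyline(observation, mins, maxs):
    """Map one observation to normalized vertices, one per plane.

    Dimensions are consumed in consecutive pairs: (d0, d1) on plane 0,
    (d2, d3) on plane 1, and so on. Assumes maxs[i] > mins[i].
    """
    verts = []
    for plane, i in enumerate(range(0, len(observation) - 1, 2)):
        u = (observation[i] - mins[i]) / (maxs[i] - mins[i])
        v = (observation[i + 1] - mins[i + 1]) / (maxs[i + 1] - mins[i + 1])
        verts.append((plane, u, v))  # the plane index fixes depth in the VE
    return verts

def brushed(verts, plane, u_range, v_range):
    """True if the polyline's vertex on `plane` falls in the brushed region."""
    _, u, v = verts[plane]
    return u_range[0] <= u <= u_range[1] and v_range[0] <= v <= v_range[1]
```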
Schaafsma, Murk; van der Deijl, Wilfred; Smits, Jacqueline M; Rahmel, Axel O; de Vries Robbé, Pieter F; Hoitsma, Andries J
2011-05-01
Organ allocation systems have become complex and difficult to comprehend. We introduced decision tables to specify the rules of allocation systems for different organs. A rule engine with decision tables as input was tested for the Kidney Allocation System (ETKAS). We compared this rule engine with the currently used ETKAS by running 11,000 historical match runs and by running the rule engine in parallel with the ETKAS on our allocation system. Decision tables were easy to implement and successful in verifying correctness, completeness, and consistency. The outcomes of the 11,000 historical matches in the rule engine and the ETKAS were exactly the same. Running the rule engine simultaneously in parallel and in real time with the ETKAS also produced no differences. Specifying organ allocation rules in decision tables is already a great step forward in enhancing the clarity of the systems. Yet, using these tables as rule-engine input for matches optimizes the flexibility, simplicity, and clarity of the whole process, from specification to the performed matches, and in addition this new method allows well-controlled simulations.
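To illustrate the decision-table-as-rule-engine idea, the sketch below scores candidates by evaluating every table row and summing points. The columns, predicates, and point values are invented for illustration and are not the actual ETKAS allocation rules.

```python
# Illustrative sketch of driving an allocation match from a decision table.
# Each row pairs a condition with the points it awards; the "engine" simply
# evaluates all rows per candidate. Rules here are invented placeholders.

RULES = [
    # (predicate over a candidate record, points awarded)
    (lambda c: c["blood_group_match"], 400),
    (lambda c: c["hla_mismatches"] == 0, 300),
    (lambda c: c["waiting_years"] >= 5, 100),
]

def score(candidate):
    """Evaluate every table row against one candidate and sum the points."""
    return sum(points for cond, points in RULES if cond(candidate))

def match_run(candidates):
    """Rank candidates by decision-table score, highest first."""
    return sorted(candidates, key=score, reverse=True)

if __name__ == "__main__":
    pool = [
        {"id": "A", "blood_group_match": True, "hla_mismatches": 0, "waiting_years": 2},
        {"id": "B", "blood_group_match": True, "hla_mismatches": 2, "waiting_years": 7},
    ]
    print([c["id"] for c in match_run(pool)])
```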
Code Optimization and Parallelization on the Origins: Looking from Users' Perspective
NASA Technical Reports Server (NTRS)
Chang, Yan-Tyng Sherry; Thigpen, William W. (Technical Monitor)
2002-01-01
Parallel machines are becoming the main compute engines for high performance computing. Despite their increasing popularity, it is still a challenge for most users to learn the basic techniques to optimize/parallelize their codes on such platforms. In this paper, we present some experiences on learning these techniques for the Origin systems at the NASA Advanced Supercomputing Division. Emphasis of this paper will be on a few essential issues (with examples) that general users should master when they work with the Origins as well as other parallel systems.
Teachers' Mathematics as Mathematics-at-Work
ERIC Educational Resources Information Center
Bednarz, Nadine; Proulx, Jérôme
2017-01-01
Through recognising mathematics teachers as professionals who use mathematics in their workplace, this article traces a parallel between the mathematics enacted by teachers in their practice and the mathematics used in workplaces found in studies of professionals (e.g. nurses, engineers, bankers). This parallel is developed through the five…
NASA Technical Reports Server (NTRS)
Watson, Willie R.; Nark, Douglas M.; Nguyen, Duc T.; Tungkahotara, Siroj
2006-01-01
A finite element solution to the convected Helmholtz equation in a nonuniform flow is used to model the noise field within 3-D acoustically treated aero-engine nacelles. Options to select linear or cubic Hermite polynomial basis functions and isoparametric elements are included. However, the key feature of the method is a domain decomposition procedure that is based upon the inter-mixing of an iterative and a direct solve strategy for solving the discrete finite element equations. This procedure is optimized to take full advantage of sparsity and exploit the increased memory and parallel processing capability of modern computer architectures. Example computations are presented for the Langley Flow Impedance Test facility and a rectangular mapping of a full scale, generic aero-engine nacelle. The accuracy and parallel performance of this new solver are tested on both model problems using a supercomputer that contains hundreds of central processing units. Results show that the method gives extremely accurate attenuation predictions, achieves super-linear speedup over hundreds of CPUs, and solves upward of 25 million complex equations in a quarter of an hour.
Rocket Engine Nozzle Side Load Transient Analysis Methodology: A Practical Approach
NASA Technical Reports Server (NTRS)
Shi, John J.
2005-01-01
During the development stage, in order to design and size the rocket engine components and to reduce risk, the local dynamic environments as well as the dynamic interface loads must be defined. There are two kinds of dynamic environments: shock transients, and steady-state random and sinusoidal vibration environments. Usually, the steady-state random and sinusoidal vibration environments are scalable, but the shock environments are not. In other words, based on similarities, only random vibration environments can be defined for a new engine. The methodology covered in this paper provides a way to predict the shock environments and the dynamic loads for new engine systems and new engine components in the early stages of new engine development or engine nozzle modifications.
NASA Astrophysics Data System (ADS)
Slaughter, A. E.; Permann, C.; Peterson, J. W.; Gaston, D.; Andrs, D.; Miller, J.
2014-12-01
The Idaho National Laboratory (INL)-developed Multiphysics Object Oriented Simulation Environment (MOOSE; www.mooseframework.org), is an open-source, parallel computational framework for enabling the solution of complex, fully implicit multiphysics systems. MOOSE provides a set of computational tools that scientists and engineers can use to create sophisticated multiphysics simulations. Applications built using MOOSE have computed solutions for chemical reaction and transport equations, computational fluid dynamics, solid mechanics, heat conduction, mesoscale materials modeling, geomechanics, and others. To facilitate the coupling of diverse and highly-coupled physical systems, MOOSE employs the Jacobian-free Newton-Krylov (JFNK) method when solving the coupled nonlinear systems of equations arising in multiphysics applications. The MOOSE framework is written in C++, and leverages other high-quality, open-source scientific software packages such as LibMesh, Hypre, and PETSc. MOOSE uses a "hybrid parallel" model which combines both shared memory (thread-based) and distributed memory (MPI-based) parallelism to ensure efficient resource utilization on a wide range of computational hardware. MOOSE-based applications are inherently modular, which allows for simulation expansion (via coupling of additional physics modules) and the creation of multi-scale simulations. Any application developed with MOOSE supports running (in parallel) any other MOOSE-based application. Each application can be developed independently, yet easily communicate with other applications (e.g., conductivity in a slope-scale model could be a constant input, or a complete phase-field micro-structure simulation) without additional code being written. This method of development has proven effective at INL and expedites the development of sophisticated, sustainable, and collaborative simulation tools.
NASA Astrophysics Data System (ADS)
Sun, Dongye; Lin, Xinyou; Qin, Datong; Deng, Tao
2012-11-01
Energy management (EM) is a core technology for hybrid electric buses (HEBs), used to optimize fuel economy, and it is unique to each powertrain configuration. Existing control-strategy algorithms seldom consider battery power management together with internal combustion engine power management. In this paper, a power-balancing instantaneous optimization (PBIO) energy management control strategy is proposed for a novel series-parallel hybrid electric bus. Based on the characteristics of the novel series-parallel architecture, the switching boundary condition between series and parallel modes, as well as the control rules of the power-balancing strategy, are developed. An equivalent fuel model of the battery is implemented and combined with the engine fuel model to form the objective function, which minimizes fuel consumption at each sampling time and coordinates the real-time power distribution between the engine and the battery. To validate the proposed strategy, a forward model is built in Matlab/Simulink for simulation, and a dSPACE AutoBox is applied as the controller for hardware-in-the-loop bench testing. Both the simulation and hardware-in-the-loop results demonstrate that the proposed strategy not only sustains the battery SOC within its operational range and keeps the engine operating near its peak-efficiency region, but also improves the fuel economy of the series-parallel hybrid electric bus (SPHEB) by up to 30.73% compared with the prototype bus; relative to a rule-based strategy, the PBIO strategy reduces fuel consumption by up to 12.38%. The proposed research shows that the PBIO algorithm is applicable in real time, improves the efficiency of the SPHEB system, and suits complicated configurations well.
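A minimal sketch of an instantaneous-optimization power split of the kind described above: at each sample time, candidate battery powers are enumerated and the split minimizing engine fuel plus battery-equivalent fuel is chosen. The fuel-rate map, equivalence factor, and SOC penalty are placeholder assumptions, not the authors' models.

```python
# Hedged sketch of a power-balancing instantaneous optimization step.
# All maps and coefficients are invented for illustration.

def engine_fuel_rate(p_engine):
    """Toy engine fuel-rate map (g/s) as a function of power (kW)."""
    return 0.05 + 0.08 * p_engine + 0.0004 * p_engine ** 2

def equivalent_fuel_rate(p_batt, soc, s=2.5):
    """Battery discharge power converted to an equivalent fuel rate (g/s).

    A simple SOC-dependent penalty discourages depleting the battery.
    """
    penalty = 1.0 + 0.5 * (0.6 - soc)   # heavier cost when SOC is low
    return s * 0.08 * p_batt * penalty if p_batt > 0 else 0.0

def split_power(p_demand, soc, p_batt_max=30.0, step=1.0):
    """Choose the battery share minimizing total equivalent fuel this instant."""
    best = None
    p = -p_batt_max                      # negative = charging the battery
    while p <= min(p_batt_max, p_demand):
        cost = engine_fuel_rate(p_demand - p) + equivalent_fuel_rate(p, soc)
        if best is None or cost < best[0]:
            best = (cost, p)
        p += step
    return best[1]   # battery power; the engine supplies the remainder

if __name__ == "__main__":
    print("battery share at 50 kW demand, SOC 0.55:", split_power(50.0, 0.55))
```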
He, Xianzhi; Zhang, Lei; Liu, Pengchong; Liu, Li; Deng, Hui; Huang, Jinhai
2015-03-01
Staphylococcal enterotoxins (SEs) produced by Staphylococcus aureus pose increasing concerns for human health and food safety. Genetically engineered small molecular antibodies are useful tools in immuno-detection and in the treatment of clinical illness caused by SEs. In this study, we constructed V(L)-V(H) tail-parallel genetically engineered antibodies against SEs by using the repertoire of rearranged germ-line immunoglobulin variable region genes. Total RNA was extracted from six hybridoma cell lines that stably express anti-SE antibodies. The variable region genes of the light chain (V(L)) and heavy chain (V(H)) were cloned by reverse transcription PCR, and their classical murine antibody structure and functional V(D)J gene rearrangement were analyzed. To construct the eukaryotic V(H)-V(L) tail-parallel co-expression vectors based on the "5'-V(H)-ivs-IRES-V(L)-3'" mode, the ivs-IRES fragment and V(L) genes were spliced by two-step overlap extension PCR, and then the recombined gene fragment and V(H) genes were inserted sequentially into the pcDNA3.1(+) expression vector. The resulting eukaryotic expression clones, termed p2C2HILO and p5C12HILO, were each transfected into the baby hamster kidney 21 cell line. Two clonal cell lines stably expressing V(L)-V(H) tail-parallel antibodies against SEs were obtained, and the intracytoplasmically expressed antibodies were evaluated by enzyme-linked immunosorbent assay, immunofluorescence assay, and flow cytometry. SEs can stimulate the expression of some chemokines and chemokine receptors in porcine IPEC-J2 cells; the mRNA transcription levels of four chemokines and chemokine receptors could be blocked by the recombinant anti-SE antibody prepared in this study. Our results show that it is possible to obtain functional V(L)-V(H) tail-parallel genetically engineered antibodies in the same vector using a eukaryotic expression system.
MER : from landing to six wheels on Mars ... twice
NASA Technical Reports Server (NTRS)
Krajewski, Joel; Burke, Kevin; Lewicki, Chris; Limonadi, Daniel; Trebi-Ollennu, Ashitey; Voorhees, Chris
2005-01-01
Application of the Pathfinder landing system design to enclose the much larger Mars Exploration Rover required a variety of rover deployments to achieve the surface driving configuration. The project schedule demanded that software design, engineering model testing, and flight hardware build be accomplished in parallel. This challenge was met through (a) bounding unknown environments against which to design and test, (b) early mechanical prototype testing, (c) constraining the scope of on-board autonomy to survival-critical deployments, (d) executing a balance of nominal and off-nominal test cases, (e) developing off-nominal event mitigation techniques before landing, and (f) flexible replanning in response to surprises during operations. Several specific events encountered during initial MER surface operations are discussed.
46 CFR 111.12-7 - Voltage regulation and parallel operation.
Code of Federal Regulations, 2013 CFR
2013-10-01
Title 46 (Shipping), Volume 4, revised as of 2013-10-01: Coast Guard, Department of Homeland Security (Continued); Electrical Engineering; Electric Systems-General Requirements; Generator Construction and Circuits; § 111.12-7 Voltage regulation and parallel operation.
46 CFR 111.12-7 - Voltage regulation and parallel operation.
Code of Federal Regulations, 2014 CFR
2014-10-01
Title 46 (Shipping), Volume 4, revised as of 2014-10-01: Coast Guard, Department of Homeland Security (Continued); Electrical Engineering; Electric Systems-General Requirements; Generator Construction and Circuits; § 111.12-7 Voltage regulation and parallel operation.
46 CFR 111.12-7 - Voltage regulation and parallel operation.
Code of Federal Regulations, 2012 CFR
2012-10-01
Title 46 (Shipping), Volume 4, revised as of 2012-10-01: Coast Guard, Department of Homeland Security (Continued); Electrical Engineering; Electric Systems-General Requirements; Generator Construction and Circuits; § 111.12-7 Voltage regulation and parallel operation.
46 CFR 111.12-7 - Voltage regulation and parallel operation.
Code of Federal Regulations, 2011 CFR
2011-10-01
Title 46 (Shipping), Volume 4, revised as of 2011-10-01: Coast Guard, Department of Homeland Security (Continued); Electrical Engineering; Electric Systems-General Requirements; Generator Construction and Circuits; § 111.12-7 Voltage regulation and parallel operation.
Scalable Visual Analytics of Massive Textual Datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnan, Manoj Kumar; Bohn, Shawn J.; Cowley, Wendy E.
2007-04-01
This paper describes the first scalable implementation of a text processing engine used in visual analytics tools. These tools aid information analysts in interacting with and understanding large textual information content through visual interfaces. By developing a parallel implementation of the text processing engine, we enabled visual analytics tools to exploit cluster architectures and handle massive datasets. The paper describes the key elements of our parallelization approach and demonstrates virtually linear scaling when processing multi-gigabyte data sets such as PubMed. This approach enables interactive analysis of large datasets beyond the capabilities of existing state-of-the-art visual analytics tools.
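The parallelization pattern described, distributing documents across workers and merging partial results, can be sketched as follows. This is a generic data-parallel illustration, not the actual engine's code; the tokenizer and corpus are placeholders.

```python
# Minimal sketch of parallelizing a text-processing engine over a corpus:
# split documents across worker processes, then merge partial term counts.

from collections import Counter
from multiprocessing import Pool

def term_counts(doc):
    """Per-document processing step: tokenize and count terms."""
    return Counter(doc.lower().split())

def process_corpus(docs, workers=4):
    """Map the per-document step across workers and reduce the results."""
    with Pool(workers) as pool:
        partials = pool.map(term_counts, docs)
    total = Counter()
    for c in partials:
        total.update(c)
    return total

if __name__ == "__main__":
    docs = ["Parallel engines scale", "engines process text in parallel"]
    print(process_corpus(docs, workers=2).most_common(3))
```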
A real time microcomputer implementation of sensor failure detection for turbofan engines
NASA Technical Reports Server (NTRS)
Delaat, John C.; Merrill, Walter C.
1989-01-01
An algorithm was developed which detects, isolates, and accommodates sensor failures using analytical redundancy. The performance of this algorithm was demonstrated on a full-scale F100 turbofan engine. The algorithm was implemented in real time on a microprocessor-based controls computer which includes parallel processing and high-order-language programming. Parallel processing was used to achieve the required computational power for the real-time implementation. High-order-language programming was used to reduce the programming and maintenance costs of the algorithm implementation software. The sensor failure algorithm was combined with an existing multivariable control algorithm to give a complete control implementation with sensor analytical redundancy. The real-time microprocessor implementation of the algorithm, which resulted in the successful completion of the engine demonstration, is described.
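A hedged sketch of residual-based failure detection, isolation, and accommodation with analytical redundancy: each sensor is compared against a model-based estimate, and the estimate is substituted when the residual exceeds a threshold. The model, thresholds, and names are placeholders, not the F100 implementation.

```python
# Illustrative detection/isolation/accommodation step using analytical
# redundancy. Estimates would come from an engine model; here they are
# simply passed in. All names and thresholds are assumptions.

def detect_and_accommodate(measured, estimated, thresholds):
    """Compare each sensor against its model-based estimate.

    measured   -- dict of sensor name -> measured value
    estimated  -- dict of sensor name -> analytically estimated value
    thresholds -- dict of sensor name -> allowed |residual|
    Returns accommodated values: estimates substitute for failed sensors.
    """
    accommodated = {}
    for name, y in measured.items():
        residual = y - estimated[name]
        if abs(residual) > thresholds[name]:
            # Failure isolated: fall back to the analytical estimate.
            accommodated[name] = estimated[name]
        else:
            accommodated[name] = y
    return accommodated

if __name__ == "__main__":
    out = detect_and_accommodate(
        measured={"N1": 5000.0, "T45": 1210.0},
        estimated={"N1": 5010.0, "T45": 1100.0},
        thresholds={"N1": 50.0, "T45": 40.0},
    )
    print(out)   # T45 residual exceeds threshold, so its estimate is used
```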
High temperature turbine engine structure
Boyd, Gary L.
1991-01-01
A high temperature turbine engine includes a rotor portion having axially stacked adjacent ceramic rotor parts. A ceramic/ceramic joint structure transmits torque between the rotor parts while maintaining coaxial alignment and axially spaced mutually parallel relation thereof despite thermal and centrifugal cycling.
A privacy-preserving parallel and homomorphic encryption scheme
NASA Astrophysics Data System (ADS)
Min, Zhaoe; Yang, Geng; Shi, Jingqi
2017-04-01
In order to protect data privacy whilst allowing efficient access to data in multi-node cloud environments, a parallel homomorphic encryption (PHE) scheme is proposed based on the additive homomorphism of the Paillier encryption algorithm. In this paper we propose a PHE algorithm in which plaintext is divided into several blocks and the blocks are encrypted in a parallel mode. Experimental results demonstrate that the encryption algorithm can reach a speed-up ratio of about 7.1 in a MapReduce environment with 16 cores and 4 nodes.
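The block-parallel structure of the scheme can be sketched as follows, using a deliberately tiny, insecure Paillier instance so the example is self-contained; a real deployment would use a vetted cryptography library with 2048-bit keys. The key sizes, block values, and worker count are illustrative assumptions.

```python
# Toy sketch of block-parallel Paillier encryption illustrating the PHE
# idea described above. Demo key values are insecure. Requires Python 3.8+
# for pow(x, -1, n).

from math import gcd
from multiprocessing import Pool

P, Q = 101, 103                  # tiny demo primes (insecure)
N = P * Q
N2 = N * N
PHI = (P - 1) * (Q - 1)          # a multiple of lcm(p-1, q-1); sufficient here
MU = pow(PHI, -1, N)             # valid because the generator is g = n + 1

def encrypt(m, r=7):
    """Paillier encryption with g = n + 1: c = (1 + m*n) * r**n mod n**2."""
    assert 0 <= m < N and gcd(r, N) == 1
    return (1 + m * N) * pow(r, N, N2) % N2

def decrypt(c):
    """m = L(c**phi mod n**2) * mu mod n, where L(x) = (x - 1) // n."""
    return (pow(c, PHI, N2) - 1) // N * MU % N

def parallel_encrypt(blocks, workers=2):
    """Encrypt plaintext blocks concurrently, as in the PHE scheme."""
    with Pool(workers) as pool:
        return pool.map(encrypt, blocks)

if __name__ == "__main__":
    blocks = [42, 1337, 999]                 # each block must be < N
    cts = parallel_encrypt(blocks)
    prod = 1
    for c in cts:                            # additive homomorphism:
        prod = prod * c % N2                 # ciphertext product = plaintext sum
    assert decrypt(prod) == sum(blocks)
    print("sum recovered:", decrypt(prod))
```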
Parallel programming with Easy Java Simulations
NASA Astrophysics Data System (ADS)
Esquembre, F.; Christian, W.; Belloni, M.
2018-01-01
Nearly all of today's processors are multicore, and ideally programming and algorithm development utilizing the entire processor should be introduced early in the computational physics curriculum. Parallel programming is often not introduced because it requires a new programming environment and uses constructs that are unfamiliar to many teachers. We describe how we decrease the barrier to parallel programming by using a Java-based programming environment to treat problems in the usual undergraduate curriculum. We use the Easy Java Simulations programming and authoring tool to create the program's graphical user interface together with objects based on those developed by Kaminsky [Building Parallel Programs (Course Technology, Boston, 2010)] to handle common parallel programming tasks. Shared-memory parallel implementations of physics problems, such as time evolution of the Schrödinger equation, are available as source code and as ready-to-run programs from the AAPT-ComPADRE digital library.
NASA Astrophysics Data System (ADS)
Moore, R. T.; Hansen, M. C.
2011-12-01
Google Earth Engine is a new technology platform that enables monitoring and measurement of changes in the earth's environment, at planetary scale, on a large catalog of earth observation data. The platform offers intrinsically-parallel computational access to thousands of computers in Google's data centers. Initial efforts have focused primarily on global forest monitoring and measurement, in support of REDD+ activities in the developing world. The intent is to put this platform into the hands of scientists and developing world nations, in order to advance the broader operational deployment of existing scientific methods, and strengthen the ability for public institutions and civil society to better understand, manage and report on the state of their natural resources. Earth Engine currently hosts online nearly the complete historical Landsat archive of L5 and L7 data collected over more than twenty-five years. Newly-collected Landsat imagery is downloaded from USGS EROS Center into Earth Engine on a daily basis. Earth Engine also includes a set of historical and current MODIS data products. The platform supports generation, on-demand, of spatial and temporal mosaics, "best-pixel" composites (for example to remove clouds and gaps in satellite imagery), as well as a variety of spectral indices. Supervised learning methods are available over the Landsat data catalog. The platform also includes a new application programming framework, or "API", that allows scientists access to these computational and data resources, to scale their current algorithms or develop new ones. Under the covers of the Google Earth Engine API is an intrinsically-parallel image-processing system. Several forest monitoring applications powered by this API are currently in development and expected to be operational in 2011. Combining science with massive data and technology resources in a cloud-computing framework can offer advantages of computational speed, ease-of-use and collaboration, as well as transparency in data and methods. Methods developed for global processing of MODIS data to map land cover are being adopted for use with Landsat data. Specifically, the MODIS Vegetation Continuous Field product methodology has been applied for mapping forest extent and change at national scales using Landsat time-series data sets. Scaling this method to continental and global scales is enabled by Google Earth Engine computing capabilities. By combining the supervised learning VCF approach with the Landsat archive and cloud computing, unprecedented monitoring of land cover dynamics is enabled.
Development of a Robust and Efficient Parallel Solver for Unsteady Turbomachinery Flows
NASA Technical Reports Server (NTRS)
West, Jeff; Wright, Jeffrey; Thakur, Siddharth; Luke, Ed; Grinstead, Nathan
2012-01-01
The traditional design and analysis practice for advanced propulsion systems relies heavily on expensive full-scale prototype development and testing. Over the past decade, use of high-fidelity analysis and design tools such as CFD early in the product development cycle has been identified as one way to alleviate testing costs and to develop these devices better, faster and cheaper. In the design of advanced propulsion systems, CFD plays a major role in defining the required performance over the entire flight regime, as well as in testing the sensitivity of the design to the different modes of operation. Increased emphasis is being placed on developing and applying CFD models to simulate the flow field environments and performance of advanced propulsion systems. This necessitates the development of next generation computational tools which can be used effectively and reliably in a design environment. The turbomachinery simulation capability presented here is being developed in a computational tool called Loci-STREAM [1]. It integrates proven numerical methods for generalized grids and state-of-the-art physical models in a novel rule-based programming framework called Loci [2] which allows: (a) seamless integration of multidisciplinary physics in a unified manner, and (b) automatic handling of massively parallel computing. The objective is to be able to routinely simulate problems involving complex geometries requiring large unstructured grids and complex multidisciplinary physics. An immediate application of interest is simulation of unsteady flows in rocket turbopumps, particularly in cryogenic liquid rocket engines. The key components of the overall methodology presented in this paper are the following: (a) high fidelity unsteady simulation capability based on Detached Eddy Simulation (DES) in conjunction with second-order temporal discretization, (b) compliance with Geometric Conservation Law (GCL) in order to maintain conservative property on moving meshes for second-order time-stepping scheme, (c) a novel cloud-of-points interpolation method (based on a fast parallel kd-tree search algorithm) for interfaces between turbomachinery components in relative motion which is demonstrated to be highly scalable, and (d) demonstrated accuracy and parallel scalability on large grids (approx 250 million cells) in full turbomachinery geometries.
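The cloud-of-points interface interpolation can be approximated with a k-d tree nearest-neighbor search and inverse-distance weighting, as in the simplified stand-in below; this is not the Loci-STREAM implementation, and the weighting scheme is an assumption.

```python
# Hedged sketch of interface interpolation between turbomachinery components
# in relative motion: donor points are indexed in a k-d tree, and each
# receiver point takes an inverse-distance-weighted value from its nearest
# donors. A simplified stand-in, not the Loci-STREAM code.

import numpy as np
from scipy.spatial import cKDTree

def interface_interpolate(donor_xyz, donor_vals, recv_xyz, k=4, eps=1e-12):
    """Interpolate donor_vals onto receiver points via k nearest neighbors."""
    tree = cKDTree(donor_xyz)                  # fast spatial search structure
    dists, idx = tree.query(recv_xyz, k=k)
    weights = 1.0 / (dists + eps)              # inverse-distance weighting
    weights /= weights.sum(axis=1, keepdims=True)
    return (donor_vals[idx] * weights).sum(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    donors = rng.random((1000, 3))
    vals = donors[:, 0]                        # a field that varies with x
    receivers = rng.random((5, 3))
    print(interface_interpolate(donors, vals, receivers))
```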
Idle speed and fuel vapor recovery control system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Orzel, D.V.
1993-06-01
A method for controlling idling speed of an engine via a bypass throttle connected in parallel to a primary engine throttle, and for controlling purge flow through a vapor recovery system into the air/fuel intake of the engine, is described, comprising the steps of: positioning the bypass throttle to decrease any difference between a desired engine idle speed and the actual engine idle speed; and decreasing the purge flow when said bypass throttle position is less than a preselected fraction of a maximum bypass throttle position.
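A minimal sketch of the two claimed control steps, with gains and thresholds invented for illustration:

```python
# Illustrative sketch of the control steps recited above: a bypass throttle
# trims idle-speed error, and purge flow is cut back when the bypass is
# nearly closed. Gains and thresholds are not from the patent.

def idle_control_step(desired_rpm, actual_rpm, bypass_pos,
                      bypass_max=1.0, kp=0.0005, purge_fraction=0.25):
    """One control update; returns (new bypass position, purge command)."""
    error = desired_rpm - actual_rpm
    # Open the bypass when idle is slow, close it when idle is fast.
    bypass_pos = min(max(bypass_pos + kp * error, 0.0), bypass_max)
    # Decrease purge flow when the bypass is below a preselected fraction
    # of its maximum position, as in the claimed method.
    purge_on = bypass_pos >= purge_fraction * bypass_max
    return bypass_pos, purge_on

if __name__ == "__main__":
    print(idle_control_step(desired_rpm=750, actual_rpm=900, bypass_pos=0.3))
```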
Teaching ethics to engineers: ethical decision making parallels the engineering design process.
Bero, Bridget; Kuhlman, Alana
2011-09-01
In order to fulfill ABET requirements, Northern Arizona University's civil and environmental engineering programs incorporate professional ethics in several of their engineering courses. This paper discusses an ethics module in a third-year engineering design course that focuses on the design process and technical writing. Engineering students early in their careers generally possess good black/white critical thinking skills on technical issues. Engineering design is the first time students are exposed to "grey" technical problems with multiple possible solutions. To identify and solve these problems, the engineering design process is used. Ethical problems are also "grey" problems that present similar challenges, so students need a practical tool for solving them. The step-wise engineering design process was used as a model to demonstrate a similar process for ethical situations. The ethical decision-making process of Martin and Schinzinger was adapted to parallel the design process and presented to students as a step-wise technique for identifying the pertinent ethical issues, relevant moral theories, and possible outcomes, and for reaching a final decision. Students had the greatest difficulty identifying the broader, global issues presented in an ethical situation, but by the end of the module they were better able not only to identify the broader issues, but also to more comprehensively assess specific issues, generate solutions, and formulate a desired response to the issue.
TRANSMISSION NETWORK PLANNING METHOD FOR COMPARATIVE STUDIES (JOURNAL VERSION)
An automated transmission network planning method for comparative studies is presented. This method employs logical steps that may closely parallel those taken in practice by the planning engineers. Use is made of a sensitivity matrix to simulate the engineers' experience in sele...
C-C1-04: Building a Health Services Information Technology Research Environment
Gehrum, David W; Jones, JB; Romania, Gregory J; Young, David L; Lerch, Virginia R; Bruce, Christa A; Donkochik, Diane; Stewart, Walter F
2010-01-01
Background: The electronic health record (EHR) has opened a new era for health services research (HSR) where information technology (IT) is used to re-engineer care processes. While the EHR provides one means of advancing novel solutions, a promising strategy is to develop tools (e.g., online questionnaires, visual display tools, decision support) distinct from, but which interact with, the EHR. Development of such software tools outside the EHR offers an advantage in flexibility, sophistication, and ultimately in portability to other settings. However, institutional IT departments have an imperative to protect patient data and to standardize IT processes to ensure system-level security and support traditional business needs. Such imperatives usually present formidable process barriers to testing novel software solutions. We describe how, in collaboration with our IT department, we are creating an environment and a process that allows for routine and rapid testing of novel software solutions. Methods: We convened a working group consisting of IT and research personnel with expertise in information security, database design/management, web design, EHR programming, and health services research. The working group was tasked with developing a research IT environment to accomplish two objectives: maintain network/data security and regulatory compliance; and allow researchers working with external vendors to rapidly prototype and, in a clinical setting, test web-based tools. Results: Two parallel solutions, one focused on hardware, the second on oversight and management, were developed. First, we concluded that three separate, staged development environments were required to allow external vendor access for testing software and for transitioning software to be used in a clinic. In parallel, the extant oversight process for approving/managing access for internal/external personnel had to be altered to reflect the scope and scale of discrete research projects, as opposed to an enterprise-level approach to IT management. Conclusions: Innovation in health services software development requires a flexible, scalable IT environment adapted to the unique objectives of an HSR software development model. In our experience, implementing the hardware solution is less challenging than the cultural change required to implement such a model, along with the modifications to administrative and oversight processes needed to sustain an environment for rapid product development and testing.
Model-Based Systems Engineering in the Execution of Search and Rescue Operations
2015-09-01
OSC can fulfill the duties of an ACO, but it may make sense to split the duties if there are no communication links between the OSC and participating... parallel mode. This mode is the most powerful option because it creates sequence diagrams that generate parallel "swim lanes" for each asset... greater flexibility is desired, sequence mode generates diagrams based purely on sequential action and activity diagrams without the parallel "swim lanes"
Combining engineered cell-sensors with multi-agent systems to realize smart environment
NASA Astrophysics Data System (ADS)
Chen, Mei
2013-03-01
Connecting everything in a sensory and intelligent way is a central pursuit of the smart environment. This paper introduces engineered cell-sensors into multi-agent systems to realize the smart environment. The seamless interface of cells with the natural environment and their strong information-processing ability, together with the achievements of synthetic biology, make the construction of engineered cell-sensors possible. However, engineered cell-sensors are simple in function and unreliable as computational entities, so how to combine them with digital devices is a key problem in realizing the smart environment. We give the abstract structure and interaction modes of engineered cell-sensors in order to introduce them into multi-agent systems. We believe that the introduction of engineered cell-sensors will push forward the development of the smart environment.
78 FR 46932 - Notice of Intent to Grant Exclusive Patent License; Safe Environment Engineering
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-02
...; Safe Environment Engineering AGENCY: Department of the Navy, DoD. ACTION: Notice. SUMMARY: The Department of the Navy hereby gives notice of its intent to grant to Safe Environment Engineering a revocable, nonassignable, exclusive license to practice Safe Environment Engineering's proprietary sensor systems for the...
NASA Technical Reports Server (NTRS)
Demakes, P. T.; Hirsch, G. N.; Stewart, W. A.; Glatt, C. R.
1976-01-01
The use of a recoverable liquid rocket booster (LRB) system to replace the existing solid rocket booster (SRB) system for the shuttle was studied. Historical weight estimating relationships were developed for the LRB using Saturn technology and modified as required. Mission performance was computed using February 1975 shuttle configuration ground rules to allow reasonable comparison of the existing shuttle with the study designs. The launch trajectory was constrained to pass through both the RTLS/AOA and main engine cutoff points of the shuttle reference mission 1. Performance analysis is based on a point design trajectory model which optimizes initial tilt rate and exoatmospheric pitch profile. A gravity turn was employed during the boost phase in place of the shuttle angle-of-attack profile. Engine throttling and/or shutdown was used to constrain dynamic pressure and/or longitudinal acceleration where necessary. Four basic configurations were investigated: a parallel burn vehicle with an F-1 engine powered LRB; a parallel burn vehicle with a high pressure engine powered LRB; a series burn vehicle with an F-1 engine powered LRB; and a series burn vehicle with a high pressure engine powered LRB. The relative sizes of the LRB and the ET are optimized to minimize GLOW in most cases.
Parallel Performance of a Combustion Chemistry Simulation
Skinner, Gregg; Eigenmann, Rudolf
1995-01-01
We used a description of a combustion simulation's mathematical and computational methods to develop a version for parallel execution. The result was a reasonable performance improvement on small numbers of processors. We applied several important programming techniques, which we describe, in optimizing the application. This work has implications for programming languages, compiler design, and software engineering.
ERIC Educational Resources Information Center
Chen, Hsinchun; Martinez, Joanne; Kirchhoff, Amy; Ng, Tobun D.; Schatz, Bruce R.
1998-01-01
Grounded on object filtering, automatic indexing, and co-occurrence analysis, an experiment was performed using a parallel supercomputer to analyze over 400,000 abstracts in an INSPEC computer engineering collection. A user evaluation revealed that system-generated thesauri were better than the human-generated INSPEC subject thesaurus in concept…
Sequential color video to parallel color video converter
NASA Technical Reports Server (NTRS)
1975-01-01
The engineering design, development, breadboard fabrication, test, and delivery of a breadboard field sequential color video to parallel color video converter is described. The converter was designed for use onboard a manned space vehicle to eliminate a flickering TV display picture and to reduce the weight and bulk of previous ground conversion systems.
System Engineering Issues for Avionics Survival in the Space Environment
NASA Technical Reports Server (NTRS)
Pavelitz, Steven
1999-01-01
This paper examines how the system engineering process influences the design of a spacecraft's avionics by considering the space environment. Avionics are susceptible to the thermal, radiation, plasma, and meteoroid/orbital debris environments. The environment definitions for various spacecraft mission orbits (LEO/low inclination, LEO/polar, MEO, HEO, GTO, GEO, and high apogee elliptical) are discussed. NASA models and commercial software used for environment analysis are reviewed. The applicability of technical references, such as NASA TM-4527, "Natural Orbital Environment Guidelines for Use in Aerospace Vehicle Development," is discussed. System engineering references, such as the MSFC System Engineering Handbook, are reviewed to determine how the environments are accounted for in the system engineering process. Tools and databases that assist the system engineer and avionics designer in addressing space environment effects on avionics are described and their usefulness assessed.
Reducing neural network training time with parallel processing
NASA Technical Reports Server (NTRS)
Rogers, James L., Jr.; Lamarsh, William J., II
1995-01-01
Obtaining optimal solutions for engineering design problems is often expensive because the process typically requires numerous iterations involving analysis and optimization programs. Previous research has shown that a near-optimum solution can be obtained in less time by simulating a slow, expensive analysis with a fast, inexpensive neural network. A new approach has been developed to further reduce this time. This approach decomposes a large neural network into many smaller neural networks that can be trained in parallel. Guidelines are developed to avoid some of the pitfalls of training smaller neural networks in parallel. These guidelines allow the engineer to determine the number of nodes on the hidden layer of the smaller neural networks, to choose the initial training weights, and to select a network configuration that will capture the interactions among the smaller neural networks. This paper presents results describing how these guidelines are developed.
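The decomposition idea, several small networks trained concurrently on subsets of the inputs, can be sketched as below. The tiny least-squares "networks" and the input grouping are placeholder assumptions standing in for the neural networks and decomposition guidelines described above.

```python
# Hedged sketch of decomposing one large approximation task into several
# small models trained in parallel. Least-squares fits stand in for the
# small neural networks; the input grouping is an invented example.

import numpy as np
from multiprocessing import Pool

def train_small_net(args):
    """Fit one small model on its slice of the input variables."""
    X, y = args
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def train_decomposed(X, y, groups, workers=2):
    """Train one small model per input-variable group, concurrently."""
    tasks = [(X[:, g], y) for g in groups]
    with Pool(workers) as pool:
        return pool.map(train_small_net, tasks)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.random((200, 4))
    y = X @ np.array([1.0, -2.0, 0.5, 3.0])
    weights = train_decomposed(X, y, groups=[[0, 1], [2, 3]])
    print([w.round(2) for w in weights])
```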
Engineering and Computing Portal to Solve Environmental Problems
NASA Astrophysics Data System (ADS)
Gudov, A. M.; Zavozkin, S. Y.; Sotnikov, I. Y.
2018-01-01
This paper describes the architecture and services of the Engineering and Computing Portal, a complex solution that provides access to high-performance computing resources and enables users to carry out computational experiments, learn parallel technologies, and solve computing tasks, including problems of technogenic safety.
NASA Astrophysics Data System (ADS)
Work, Paul R.
1991-12-01
This thesis investigates the parallelization of existing serial programs in computational electromagnetics for use in a parallel environment. Existing algorithms for calculating the radar cross section of an object are covered, and a ray-tracing code is chosen for implementation on a parallel machine. Current parallel architectures are introduced and a suitable parallel machine is selected for the implementation of the chosen ray-tracing algorithm. The standard techniques for the parallelization of serial codes are discussed, including load balancing and decomposition considerations, and appropriate methods for the parallelization effort are selected. A load balancing algorithm is modified to increase the efficiency of the application, and a high level design of the structure of the serial program is presented. A detailed design of the modifications for the parallel implementation is also included, with both the high level and the detailed design specified in a high level design language called UNITY. The correctness of the design is proven using UNITY and standard logic operations. The theoretical and empirical results show that it is possible to achieve an efficient parallel application for a serial computational electromagnetic program where the characteristics of the algorithm and the target architecture critically influence the development of such an implementation.
Visualization and Tracking of Parallel CFD Simulations
NASA Technical Reports Server (NTRS)
Vaziri, Arsi; Kremenetsky, Mark
1995-01-01
We describe a system for interactive visualization and tracking of a 3-D unsteady computational fluid dynamics (CFD) simulation on a parallel computer. CM/AVS, a distributed, parallel implementation of a visualization environment (AVS), runs on the CM-5 parallel supercomputer. A CFD solver is run as a CM/AVS module on the CM-5. Data communication between the solver, other parallel visualization modules, and a graphics workstation, which is running AVS, is handled by CM/AVS. Partitioning of the visualization task between the CM-5 and the workstation can be done interactively in the visual programming environment provided by AVS. Flow solver parameters can also be altered by programmable interactive widgets. This system partially removes the requirement of storing large solution files at frequent time steps, a characteristic of the traditional 'simulate -> store -> visualize' post-processing approach.
Multitasking TORT under UNICOS: Parallel performance models and measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnett, A.; Azmy, Y.Y.
1999-09-27
The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The parallel performance models were compared to applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead.
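A parallel-overhead performance model of the general form described, compute time divided among processors plus an overhead term growing with processor count, can be written as a one-line function; the coefficients below are invented placeholders, not values from the TORT study.

```python
# Illustrative parallel-overhead model: total wall time is a perfectly
# divided compute term plus an overhead term that grows with processor
# count. Coefficients are placeholders for the fitted model parameters.

def wall_time(n_procs, t_serial, t_sync=0.05, t_per_proc=0.01):
    """T(N) = T_serial/N + overhead(N), with overhead = t_sync + t_per_proc*N."""
    return t_serial / n_procs + t_sync + t_per_proc * n_procs

def speedup(n_procs, t_serial):
    """Speedup relative to the serial run under the overhead model."""
    return t_serial / wall_time(n_procs, t_serial)

if __name__ == "__main__":
    for n in (1, 2, 4, 8, 16):
        print(n, round(speedup(n, t_serial=10.0), 2))
```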
Multitasking TORT Under UNICOS: Parallel Performance Models and Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azmy, Y.Y.; Barnett, D.A.
1999-09-27
The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The parallel performance models were compared to applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead.
A real time neural net estimator of fatigue life
NASA Technical Reports Server (NTRS)
Troudet, T.; Merrill, W.
1990-01-01
A neural net architecture is proposed to estimate, in real-time, the fatigue life of mechanical components, as part of the Intelligent Control System for Reusable Rocket Engines. Arbitrary component loading values were used as input to train a two hidden-layer feedforward neural net to estimate component fatigue damage. The ability of the net to learn, based on a local strain approach, the mapping between load sequence and fatigue damage has been demonstrated for a uniaxial specimen. Because of its demonstrated performance, the neural computation may be extended to complex cases where the loads are biaxial or triaxial, and the geometry of the component is complex (e.g., turbopump blades). The generality of the approach is such that load/damage mappings can be directly extracted from experimental data without requiring any knowledge of the stress/strain profile of the component. In addition, the parallel network architecture allows real-time life calculations even for high frequency vibrations. Owing to its distributed nature, the neural implementation will be robust and reliable, enabling its use in hostile environments such as rocket engines. This neural net estimator of fatigue life is seen as the enabling technology to achieve component life prognosis, and therefore would be an important part of life extending control for reusable rocket engines.
Modeling and analysis of the TF30-P-3 compressor system with inlet pressure distortion
NASA Technical Reports Server (NTRS)
Mazzawy, R. S.; Banks, G. A.
1976-01-01
Circumferential inlet distortion testing of a TF30-P-3 afterburning turbofan engine was conducted at NASA-Lewis Research Center. Pratt and Whitney Aircraft analyzed the data using its multiple segment parallel compressor model and classical compressor theory. Distortion attenuation analysis resulted in a detailed flow field calculation with good agreement between multiple segment model predictions and the test data. Sensitivity of the engine stall line to circumferential inlet distortion was calculated on the basis of parallel compressor theory to be more severe than indicated by the data. However, the calculated stall site location was in agreement with high response instrumentation measurements.
NASA Technical Reports Server (NTRS)
Hoberecht, Mark A.
2010-01-01
As part of the Exploration Technology Development Program (ETDP) under the auspices of the Exploration Systems Mission Directorate (ESMD), NASA is developing both primary fuel cell power systems and regenerative fuel cell (RFC) energy storage systems within the fuel cell portion of the Energy Storage Project. This effort is being led by the NASA Glenn Research Center (GRC) in partnership with the NASA Johnson Space Center (JSC), Jet Propulsion Laboratory (JPL), NASA Kennedy Space Center (KSC), and industrial partners. The development goals are to improve fuel cell and electrolysis stack electrical performance, reduce system mass, volume, and parasitic power requirements, and increase system life and reliability. A major focus of this effort has been the parallel development of both flow-through and non-flow-through proton exchange membrane (PEM) primary fuel cell power systems. The plan has been, at the appropriate time, to select a single primary fuel cell technology for eventual flight hardware development. Ideally, that appropriate time would occur after both technologies have achieved a technology readiness level (TRL) of six, which represents an engineering-model-fidelity PEM fuel cell system being successfully tested in a relevant environment. Budget constraints in fiscal year 2009 and beyond have prevented NASA from continuing to pursue the parallel development of both primary fuel cell options. Because very limited data exists for either system, a top-level, qualitative assessment based on engineering judgement was performed expeditiously to provide guidance for a selection. At that time, the non-flow-through technology was selected for continued development because of potentially major advantages in terms of weight, volume, parasitic power, reliability, and life. This author believes that the advantages are significant enough, and the potential benefits great enough, to offset the higher state of technology readiness of flow-through technology. This paper summarizes the technical considerations which helped form the engineering judgement that led to the final decision.
Catón, Laura; Yurkov, Andrey; Giesbers, Marcel; Dijksterhuis, Jan; Ingham, Colin J.
2017-01-01
Fungi are strongly affected by their physical environment. Microfabrication offers the possibility of creating new culture environments and ecosystems with defined characteristics. Here, we report the isolation of a novel member of the fungal genus Acremonium using a microengineered cultivation chip. This isolate was unusual in that it organizes into macroscopic structures when initially cultivated within microwells with a porous aluminum oxide (PAO) base. These “templated mycelial bundles” (TMB) were formed from masses of parallel hyphae with side branching suppressed. TMB were highly hydrated, facilitating the passive movement of solutes along the bundle. By using a range of culture chips, it was deduced that the critical factors in triggering the TMB were growth in microwells from 50 to 300 μm in diameter with a PAO base. Cultivation experiments, using spores and pigments as tracking agents, indicate that bulk growth of the TMB occurs at the base. TMB morphology is highly coherent and is maintained after growing out of the microwells. TMB can explore their environment by developing unbundled lateral hyphae; TMB only followed if nutrients were available. Because of the ease of fabricating numerous microstructures, we suggest this is a productive approach for exploring morphology and growth in multicellular microorganisms and microbial communities.
Convective Heating of the LIFE Engine Target During Injection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holdener, D S; Tillack, M S; Wang, X R
2011-10-24
Target survival in the hostile, high temperature xenon environment of the proposed Laser Inertial Fusion Energy (LIFE) engine is critical. This work focuses on the flow properties and convective heat load imposed upon the surface of the indirect-drive target while traveling through the xenon gas. While this rarefied flow is traditionally characterized as being within the continuum regime, it is approaching transition, where conventional CFD codes reach the bounds of their operation. Thus ANSYS, specifically the Navier-Stokes module CFX, will be used in parallel with the direct simulation Monte Carlo code DS2V and with analytically and empirically derived expressions for heat transfer to the hohlraum for validation. The viscous and thermal boundary layers computed by ANSYS and DS2V were shown to be nearly identical, with the surface heat flux varying less than 8% on average. From the results herein, external baffles have been shown to reduce this heat transfer to the sensitive laser entrance hole (LEH) windows and optimize target survival independent of other reactor parameters.
A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems
Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo
2017-01-01
Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.
A real time neural net estimator of fatigue life
NASA Technical Reports Server (NTRS)
Troudet, T.; Merrill, W.
1990-01-01
A neural network architecture is proposed to estimate, in real time, the fatigue life of mechanical components, as part of the Intelligent Control System for Reusable Rocket Engines. Arbitrary component loading values were used as input to train a two-hidden-layer feedforward neural net to estimate component fatigue damage. The ability of the net to learn, based on a local strain approach, the mapping between load sequence and fatigue damage has been demonstrated for a uniaxial specimen. Because of its demonstrated performance, the neural computation may be extended to complex cases where the loads are biaxial or triaxial, and the geometry of the component is complex (e.g., turbopump blades). The generality of the approach is such that load/damage mappings can be directly extracted from experimental data without requiring any knowledge of the stress/strain profile of the component. In addition, the parallel network architecture allows real-time life calculations even for high-frequency vibrations. Owing to its distributed nature, the neural implementation will be robust and reliable, enabling its use in hostile environments such as rocket engines.
Design and validation of a clinical-scale bioreactor for long-term isolated lung culture.
Charest, Jonathan M; Okamoto, Tatsuya; Kitano, Kentaro; Yasuda, Atsushi; Gilpin, Sarah E; Mathisen, Douglas J; Ott, Harald C
2015-06-01
The primary treatment for end-stage lung disease is lung transplantation. However, donor organ shortage remains a major barrier for many patients. In recent years, techniques for maintaining lungs ex vivo for evaluation and short-term (<12 h) resuscitation have come into more widespread use in an attempt to expand the donor pool. In parallel, progress in whole organ engineering has provided the potential perspective of patient derived grafts grown on demand. As both of these strategies advance to more complex interventions for lung repair and regeneration, the need for a long-term organ culture system becomes apparent. Herein we describe a novel clinical scale bioreactor capable of maintaining functional porcine and human lungs for at least 72 h in isolated lung culture (ILC). The fully automated, computer controlled, sterile, closed circuit system enables physiologic pulsatile perfusion and negative pressure ventilation, while gas exchange function, and metabolism can be evaluated. Creation of this stable, biomimetic long-term culture environment will enable advanced interventions in both donor lungs and engineered grafts of human scale.
Multilevel decomposition of complete vehicle configuration in a parallel computing environment
NASA Technical Reports Server (NTRS)
Bhatt, Vinay; Ragsdell, K. M.
1989-01-01
This research summarizes various approaches to multilevel decomposition to solve large structural problems. A linear decomposition scheme based on the Sobieski algorithm is selected as a vehicle for automated synthesis of a complete vehicle configuration in a parallel processing environment. The research is in a developmental state. Preliminary numerical results are presented for several example problems.
Systems Engineering and Integration as a Foundation for Mission Engineering
2015-09-01
parallels the INCOSE definition to develop a system, it is important to note the focus on a complete life cycle balanced solution and the satisfaction of...unacceptable to relevant stakeholders. Principle 7 Managers should acknowledge the potential conflicts between (a) their own role as corporate...engineer in understanding the relationships between various needs and to identify similarities, differences, or redundancies. Another method for
Parallel Multi-cycle LES of an Optical Pent-roof DISI Engine Under Motored Operating Conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Dam, Noah; Sjöberg, Magnus; Zeng, Wei
The use of Large-eddy Simulations (LES) has increased due to their ability to resolve the turbulent fluctuations of engine flows and capture the resulting cycle-to-cycle variability. One drawback of LES, however, is the requirement to run multiple engine cycles to obtain the necessary cycle statistics for full validation. The standard method to obtain the cycles by running a single simulation through many engine cycles sequentially can take a long time to complete. Recently, a new strategy has been proposed by our research group to reduce the amount of time necessary to simulate the many engine cycles by running individual engine cycle simulations in parallel. With modern large computing systems this has the potential to reduce the amount of time necessary for a full set of simulated engine cycles to finish by up to an order of magnitude. In this paper, the Parallel Perturbation Methodology (PPM) is used to simulate up to 35 engine cycles of an optically accessible, pent-roof Direct-injection Spark-ignition (DISI) engine at two different motored engine operating conditions, one throttled and one un-throttled. Comparisons are made against corresponding sequential-cycle simulations to verify the similarity of results using either methodology. Mean results from the PPM approach are very similar to sequential-cycle results, with less than 0.5% difference in pressure and a magnitude structure index (MSI) of 0.95. Differences in cycle-to-cycle variability (CCV) predictions are larger, but close to the statistical uncertainty in the measurement for the number of cycles simulated. PPM LES results were also compared against experimental data. Mean quantities such as pressure or mean velocities were typically matched to within 5-10%. Pressure CCVs were under-predicted, mostly due to the lack of any perturbations in the pressure boundary conditions between cycles. Velocity CCVs for the simulations had the same average magnitude as experiments, but the experimental data showed greater spatial variation in the root-mean-square (RMS). Conversely, circular standard deviation results showed greater repeatability of the flow directionality and swirl vortex positioning than the simulations.
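A toy sketch of the parallel-cycle idea, with a cheap stand-in recurrence in place of an LES engine-cycle solver (the function, perturbation sizes, and statistics below are illustrative only): each cycle runs as an independent job seeded by a small perturbation, instead of chaining cycle N's end state into cycle N+1.

```python
import multiprocessing as mp
import random

def engine_cycle(args):
    """Toy stand-in for one engine-cycle simulation: a noisy recurrence whose
    'peak pressure' output depends weakly on a small initial perturbation."""
    cycle_id, perturbation = args
    rng = random.Random(cycle_id)
    p = 1.0 + perturbation
    for _ in range(1000):
        p = 0.9 * p + 0.1 + rng.gauss(0.0, 0.01)
    return p

if __name__ == "__main__":
    # PPM-style: launch all 35 cycles at once, each with its own perturbation,
    # rather than feeding cycle N's end state into cycle N+1.
    jobs = [(i, 1e-4 * i) for i in range(35)]
    with mp.Pool() as pool:
        peaks = pool.map(engine_cycle, jobs)
    mean = sum(peaks) / len(peaks)
    rel_std = (sum((x - mean) ** 2 for x in peaks) / len(peaks)) ** 0.5 / mean
    print(f"mean peak pressure {mean:.3f}, cycle-to-cycle variability {rel_std:.4f}")
```

The wall-clock saving comes entirely from the fact that the 35 jobs no longer form a sequential dependency chain; the open question the paper addresses is whether perturbation-seeded cycles reproduce the same statistics as chained ones.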
Scalable Unix commands for parallel processors : a high-performance implementation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ong, E.; Lusk, E.; Gropp, W.
2001-06-22
We describe a family of MPI applications we call the Parallel Unix Commands. These commands are natural parallel versions of common Unix user commands such as ls, ps, and find, together with a few similar commands particular to the parallel environment. We describe the design and implementation of these programs and present some performance results on a 256-node Linux cluster. The Parallel Unix Commands are open source and freely available.
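The commands themselves are MPI programs; a minimal sketch of the idea behind a parallel ls, assuming the mpi4py package (this is not the authors' implementation), might look like:

```python
# Run with e.g.: mpiexec -n 4 python pls.py
from mpi4py import MPI
import os
import socket

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# each rank lists its (node-local) working directory
listing = (socket.gethostname(), sorted(os.listdir(".")))

# rank 0 gathers and merges the per-node views
all_listings = comm.gather(listing, root=0)
if rank == 0:
    for host, names in all_listings:
        print(f"{host}: {len(names)} entries, first few: {names[:3]}")
```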
Modelling parallel programs and multiprocessor architectures with AXE
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Fineman, Charles E.
1991-01-01
AXE, An Experimental Environment for Parallel Systems, was designed to model and simulate parallel systems at the process level. It provides an integrated environment for specifying computation models, multiprocessor architectures, data collection, and performance visualization. AXE is being used at NASA-Ames for developing resource management strategies, parallel problem formulation, multiprocessor architectures, and operating system issues related to the High Performance Computing and Communications Program. AXE's simple, structured user interface enables the user to model parallel programs and machines precisely and efficiently, and its quick turn-around time keeps the user interested and productive. AXE models multicomputers; the user may easily modify various architectural parameters including the number of sites, connection topologies, and overhead for operating system activities. Parallel computations in AXE are represented as collections of autonomous computing objects known as players, whose use and behavior are described. Performance data of the multiprocessor model can be observed on a color screen, including CPU and message-routing bottlenecks and the dynamic status of the software.
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.
1990-01-01
Practical engineering applications can often be formulated as constrained optimization problems. There are several solution algorithms for solving a constrained optimization problem. One approach is to convert a constrained problem into a series of unconstrained problems; unconstrained solution algorithms can then be used as part of the constrained solution algorithms. Structural optimization is an iterative process: one starts with an initial design, and a finite element structural analysis is performed to calculate the response of the system (such as displacements, stresses, and eigenvalues). Based upon the sensitivity information on the objective and constraint functions, an optimizer such as ADS or IDESIGN can be used to find the new, improved design. For the structural analysis phase, the equation solver for the system of simultaneous linear equations plays a key role, since it is needed for static, eigenvalue, or dynamic analysis. For practical, large-scale structural analysis-synthesis applications, computational time can be excessively large. Thus, it is necessary to have a new structural analysis-synthesis code which employs new solution algorithms to exploit both the parallel and vector capabilities offered by modern, high-performance computers such as the Convex, Cray-2, and Cray-YMP. The objective of this research project is, therefore, to incorporate the latest developments in the parallel-vector equation solver, PVSOLVE, into a widely used finite-element production code such as SAP-4. Furthermore, several nonlinear unconstrained optimization subroutines have also been developed and tested in a parallel computing environment. The unconstrained optimization subroutines are not only useful in their own right, but they can also be incorporated into a more popular constrained optimization code, such as ADS.
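As a minimal illustration of the constrained-to-unconstrained conversion mentioned above (an exterior penalty method; the objective, constraint, and penalty schedule are invented for the example), each unconstrained subproblem is handed to a standard solver:

```python
import numpy as np
from scipy.optimize import minimize

# Objective and a single inequality constraint g(x) <= 0.
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
g = lambda x: x[0] + x[1] - 2.0          # feasible region: x0 + x1 <= 2

x = np.array([0.0, 0.0])
for r in [1.0, 10.0, 100.0, 1000.0]:     # increasing penalty weight
    penalized = lambda x, r=r: f(x) + r * max(0.0, g(x)) ** 2
    x = minimize(penalized, x, method="BFGS").x   # unconstrained solve
    print(f"r={r:7.1f}  x={x.round(4)}  g(x)={g(x):+.4f}")
```

As r grows, the unconstrained minimizers approach the constrained optimum at (0.5, 1.5), which is the sense in which a constrained problem becomes "a series of unconstrained problems."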
An 8-Fold Parallel Reactor System for Combinatorial Catalysis Research
Stoll, Norbert; Allwardt, Arne; Dingerdissen, Uwe
2006-01-01
Increasing economic globalization and mounting time and cost pressure on the development of new raw materials for the chemical industry as well as materials and environmental engineering constantly raise the demands on technologies to be used. Parallelization, miniaturization, and automation are the main concepts involved in increasing the rate of chemical and biological experimentation. PMID:17671621
Innovative Language-Based & Object-Oriented Structured AMR Using Fortran 90 and OpenMP
NASA Technical Reports Server (NTRS)
Norton, C.; Balsara, D.
1999-01-01
Parallel adaptive mesh refinement (AMR) is an important numerical technique that leads to the efficient solution of many physical and engineering problems. In this paper, we describe how AMR programming can be performed in an object-oriented way using the modern aspects of Fortran 90 combined with the parallelization features of OpenMP.
Radisic, Milica; Park, Hyoungshin; Martens, Timothy P.; Salazar-Lazaro, Johanna E.; Geng, Wenliang; Wang, Yadong; Langer, Robert; Freed, Lisa E.; Vunjak-Novakovic, Gordana
2009-01-01
Native myocardium consists of several cell types, of which approximately one-third are myocytes; most of the nonmyocytes are fibroblasts. By analogy with monolayer culture, in which fibroblasts were removed to prevent overgrowth, early attempts to engineer myocardium utilized cell populations enriched for cardiac myocytes (CMs; ~80–90% of total cells). We hypothesized that the pre-treatment of synthetic elastomeric scaffolds with cardiac fibroblasts (CFs) would enhance the functional assembly of the engineered cardiac constructs by creating an environment supportive of cardiomyocyte attachment and function. Cells isolated from neonatal rat ventricles were prepared to form three distinct populations: rapidly plating cells identified as CFs, slowly plating cells identified as CMs, and an unseparated initial population of cells (US). The cell fractions (3 × 10⁶ cells total) were seeded into poly(glycerol sebacate) scaffolds (highly porous discs, 5 mm in diameter × 2 mm thick) using Matrigel™, either separately (CM or CF), concurrently (US), or sequentially (CF pre-treatment followed by CM culture, CF + CM), and cultured in spinner flasks. The CF + CM group had the highest amplitude of contraction and the lowest excitation threshold, superior DNA content, and a higher glucose consumption rate. The CF + CM group exhibited compact 100- to 200-μm thick layers of elongated myocytes aligned in parallel over layers of collagen-producing fibroblasts, while the US and CM groups exhibited scattered and poorly elongated myocytes. The sequential co-culture of CF and CM on a synthetic elastomer scaffold thus created an environment supportive of cardiomyocyte attachment, differentiation, and contractile function, presumably due to scaffold conditioning by the cultured fibroblasts. When implanted over the infarcted myocardium in a nude rat model, cell-free poly(glycerol sebacate) remained at the ventricular wall after 2 weeks in vivo, and was vascularized. PMID:18041719
Cryogenic Fluid Management Technology and Nuclear Thermal Propulsion
NASA Technical Reports Server (NTRS)
Taylor, Brian D.; Caffrey, Jarvis; Hedayat, Ali; Stephens, Jonathan; Polsgrove, Robert
2016-01-01
Cryogenic fluid management (CFM) is critical to the success of future nuclear thermal propulsion powered vehicles. While this is an issue for any propulsion system utilizing cryogenic propellants, this is made more challenging by the radiation flux produced by the reactor in a nuclear thermal rocket (NTR). Managing the cryogenic fuel to prevent propellant loss to boil off and leakage is needed to limit the required quantity of propellant to a reasonable level. Analysis shows deposition of energy into liquid hydrogen fuel tanks in the vicinity of the nuclear thermal engine. This is on top of ambient environment sources of heat. Investments in cryogenic/thermal management systems (some of which are ongoing at various organizations) are needed in parallel to nuclear thermal engine development in order to one day see the successful operation of an entire stage. High durability, low thermal conductivity insulation is one developmental need. Light weight cryocoolers capable of removing heat from large fluid volumes at temperatures as low as approx. 20 K are needed to remove heat leak from the propellant of an NTR. Valve leakage is an additional CFM issue of great importance. Leakage rates of state of the art, launch vehicle size valves (which is approximately the size valves needed for a Mars transfer vehicle) are quite high and would result in large quantities of lost propellant over a long duration mission. Additionally, the liquid acquisition system inside the propellant tank must deliver properly conditioned propellant to the feed line for successful engine operation and avoid intake of warm or gaseous propellant. Analysis of the thermal environment and the CFM technology development are discussed in the accompanying presentation.
An Overview of High Performance Computing and Challenges for the Future
Google Tech Talks
2017-12-09
In this talk we examine how high performance computing has changed over the last 10 years and look toward the future in terms of trends. These changes have had and will continue to have a major impact on our software. A new generation of software libraries and algorithms is needed for the effective and reliable use of (wide area) dynamic, distributed and parallel environments. Some of the software and algorithm challenges have already been encountered, such as management of communication and memory hierarchies through a combination of compile-time and run-time techniques, but the increased scale of computation, depth of memory hierarchies, range of latencies, and increased run-time environment variability will make these problems much harder. We will focus on the redesign of software to fit multicore architectures. Speaker: Jack Dongarra, University of Tennessee, Oak Ridge National Laboratory, University of Manchester. Jack Dongarra received a Bachelor of Science in Mathematics from Chicago State University in 1972 and a Master of Science in Computer Science from the Illinois Institute of Technology in 1973. He received his Ph.D. in Applied Mathematics from the University of New Mexico in 1980. He worked at the Argonne National Laboratory until 1989, becoming a senior scientist. He now holds an appointment as University Distinguished Professor of Computer Science in the Electrical Engineering and Computer Science Department at the University of Tennessee, has the position of a Distinguished Research Staff member in the Computer Science and Mathematics Division at Oak Ridge National Laboratory (ORNL), is a Turing Fellow in the Computer Science and Mathematics Schools at the University of Manchester, and is an Adjunct Professor in the Computer Science Department at Rice University. He specializes in numerical algorithms in linear algebra, parallel computing, the use of advanced computer architectures, programming methodology, and tools for parallel computers. His research includes the development, testing and documentation of high quality mathematical software. He has contributed to the design and implementation of the following open source software packages and systems: EISPACK, LINPACK, the BLAS, LAPACK, ScaLAPACK, Netlib, PVM, MPI, NetSolve, Top500, ATLAS, and PAPI. He has published approximately 200 articles, papers, reports and technical memoranda and he is coauthor of several books. He was awarded the IEEE Sid Fernbach Award in 2004 for his contributions in the application of high performance computers using innovative approaches. He is a Fellow of the AAAS, ACM, and the IEEE and a member of the National Academy of Engineering.
A unified framework for building high performance DVEs
NASA Astrophysics Data System (ADS)
Lei, Kaibin; Ma, Zhixia; Xiong, Hua
2011-10-01
A unified framework for integrating PC cluster based parallel rendering with distributed virtual environments (DVEs) is presented in this paper. While various scene graphs have been proposed in DVEs, it is difficult to enable collaboration of different scene graphs. This paper proposes a technique for non-distributed scene graphs with the capability of object and event distribution. With the increase of graphics data, DVEs require more powerful rendering ability. But general scene graphs are inefficient in parallel rendering. The paper also proposes a technique to connect a DVE and a PC cluster based parallel rendering environment. A distributed multi-player video game is developed to show the interaction of different scene graphs and the parallel rendering performance on a large tiled display wall.
NASA Technical Reports Server (NTRS)
Atwell, Matthew J.; Melcher, John C.; Hurlbert, Eric A.; Morehead, Robert L.
2017-01-01
A liquid oxygen, liquid methane (LOX/LCH4) reaction control system (RCS) was tested at NASA Glenn Research Center's Plum Brook Station in the Spacecraft Propulsion Research Facility (B-2) under simulated altitude and thermal vacuum conditions. The RCS is a subsystem of the Integrated Cryogenic Propulsion Test Article (ICPTA) and was initially developed under Project Morpheus. Composed of two 28 lbf-thrust and two 7 lbf-thrust engines, the RCS is fed in parallel with the ICPTA main engine from four propellant tanks. Forty tests consisting of 1,010 individual thruster pulses were performed across six different test days. Major test objectives were focused on system dynamics, and included characterization of fluid transients, manifold priming, manifold thermal conditioning, thermodynamic vent system (TVS) performance, and main engine/RCS interaction. Peak surge pressures from valve opening and closing events were examined; it was determined that these events were impacted significantly by vapor cavity formation and collapse, and in most cases the valve opening transient was more severe than the valve closing one. Under thermal vacuum conditions it was shown that TVS operation is unnecessary to maintain liquid conditions at the thruster inlets. However, under higher heat leak environments the RCS can still be operated in a self-conditioning mode without overboard TVS venting, contingent upon the engines managing a range of potentially severe thermal transients. Lastly, during testing under cold thermal conditions the engines experienced significant ignition problems; only after warming the thruster bodies with a gaseous nitrogen purge to an intermediate temperature was successful ignition demonstrated.
Architecture Adaptive Computing Environment
NASA Technical Reports Server (NTRS)
Dorband, John E.
2006-01-01
Architecture Adaptive Computing Environment (aCe) is a software system that includes a language, compiler, and run-time library for parallel computing. aCe was developed to enable programmers to write programs, more easily than was previously possible, for a variety of parallel computing architectures. Heretofore, it has been perceived to be difficult to write parallel programs for parallel computers and more difficult to port the programs to different parallel computing architectures. In contrast, aCe is supportable on all high-performance computing architectures. Currently, it is supported on LINUX clusters. aCe uses parallel programming constructs that facilitate writing of parallel programs. Such constructs were used in single-instruction/multiple-data (SIMD) programming languages of the 1980s, including Parallel Pascal, Parallel Forth, C*, *LISP, and MasPar MPL. In aCe, these constructs are extended and implemented for both SIMD and multiple- instruction/multiple-data (MIMD) architectures. Two new constructs incorporated in aCe are those of (1) scalar and virtual variables and (2) pre-computed paths. The scalar-and-virtual-variables construct increases flexibility in optimizing memory utilization in various architectures. The pre-computed-paths construct enables the compiler to pre-compute part of a communication operation once, rather than computing it every time the communication operation is performed.
A review of Curtiss-Wright rotary engine developments with respect to general aviation potential
NASA Technical Reports Server (NTRS)
Jones, C.
1979-01-01
Aviation-related rotary (Wankel-type) engine tests, possible growth directions, and relevant developments at Curtiss-Wright are reviewed. Automotive rotary engines, including stratified-charge designs, are described, and flight test results of rotary aircraft engines are presented. The current 300 HP engine prototype shows basic durability and competitive performance potential. Recent parallel developments have separately confirmed the geometric advantages of the rotary engine for direct-injected, unthrottled stratified charge. Specific fuel consumption equal to or better than pre- or swirl-chamber diesels, low emissions, and multi-fuel capability have been shown by rig tests of a similar rotary engine.
Cell Migration in 1D and 2D Nanofiber Microenvironments.
Estabridis, Horacio M; Jana, Aniket; Nain, Amrinder; Odde, David J
2018-03-01
Understanding how cells migrate in fibrous environments is important in wound healing, immune function, and cancer progression. A key question is how fiber orientation and network geometry influence cell movement. Here we describe a quantitative, modeling-based approach toward identifying the mechanisms by which cells migrate in fibrous geometries having well controlled orientation. Specifically, U251 glioblastoma cells were seeded onto non-electrospinning Spinneret-based Tunable Engineered Parameters (STEP) fiber substrates that consist of networks of suspended 400 nm diameter nanofibers. Cells were classified based on the local fiber geometry, and cell migration dynamics were observed by light microscopy. Cells were found in three distinct geometries: adhering to a single fiber, adhering to two parallel fibers, and adhering to a network of orthogonal fibers. Cells adhering to a single fiber or two parallel fibers can only move in one dimension along the fiber axis, whereas cells on a network of orthogonal fibers can move in two dimensions. We found that cells move faster and more persistently in 1D geometries than in 2D, with cell migration being faster on parallel fibers than on single fibers. To explain these behaviors mechanistically, we simulated cell migration in the three different geometries using a motor-clutch based model for cell traction forces. Using nearly identical parameter sets for each of the three cases, we found that the simulated cells naturally replicated the reduced migration in 2D relative to 1D geometries. In addition, the modestly faster 1D migration on parallel fibers relative to single fibers was captured using a correspondingly modest increase in the number of clutches, reflecting the increased surface area of adhesion on parallel fibers. Overall, the integrated modeling and experimental analysis shows that cell migration in response to varying fibrous geometries can be explained by a simple mechanical readout of geometry via a motor-clutch mechanism.
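A heavily simplified, fixed-timestep sketch of a motor-clutch style traction simulation in the spirit described above (all rate constants, spring constants, and clutch counts are illustrative, not the paper's fitted values):

```python
import numpy as np

rng = np.random.default_rng(1)

def motor_clutch(n_clutches, steps=20000, dt=1e-3):
    """Fixed-timestep Monte Carlo of a minimal motor-clutch module.
    All parameter values are illustrative, not fitted."""
    v_max, f_stall = 120.0, 100.0         # motor speed (nm/s) and stall force (pN)
    k_on, k_off0, f_bond = 1.0, 0.1, 2.0  # clutch on/off rates (1/s), Bell force (pN)
    k_clutch = 5.0                        # clutch spring constant (pN/nm)
    engaged = np.zeros(n_clutches, dtype=bool)
    x_c = np.zeros(n_clutches)            # clutch stretch (nm)
    traction = []
    for _ in range(steps):
        f_c = k_clutch * x_c * engaged    # force carried by each engaged clutch
        f_total = f_c.sum()
        traction.append(f_total)
        # retrograde actin flow slows under load (linear stall model)
        v = v_max * max(0.0, 1.0 - f_total / f_stall)
        x_c[engaged] += v * dt            # engaged clutches stretch with the flow
        # stochastic binding/unbinding; off-rate grows with force (Bell's law)
        k_off = k_off0 * np.exp(np.abs(f_c) / f_bond)
        u = rng.random(n_clutches)
        bind = ~engaged & (u < k_on * dt)
        unbind = engaged & (u < k_off * dt)
        engaged |= bind
        x_c[unbind] = 0.0
        engaged &= ~unbind
    return float(np.mean(traction))

# more clutches ~ more adhesion area, e.g. two parallel fibers vs. one fiber
for n in (10, 20, 40):
    print(n, "clutches -> mean traction (pN):", round(motor_clutch(n), 1))
```

Varying only the clutch count, as the paper does to represent single versus parallel fibers, changes the mean traction the module can sustain without altering any other parameter.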
Research Study Towards a MEFFV Electric Armament System
2004-01-01
CHPSPerf inputs, listed as parameter = setting: Engine Power (kW) = 500 per engine; Generator Power (kW) = 500/generator; Traction Motors Power (kW) = 500/side; # Battery Pack...Cells in Parallel = 2; # Motors in Drive Train = 2; Max Power of Traction Motors = 200; Minimum Engine Power (kW) = 50; Optimum Engine Power (kW) = 750; Stop... motors. Other options were examined for the energy storage system. Of particular interest in this regard is the use of the CPA flywheel as the load
Penas, David R; Henriques, David; González, Patricia; Doallo, Ramón; Saez-Rodriguez, Julio; Banga, Julio R
2017-01-01
We consider a general class of global optimization problems dealing with nonlinear dynamic models. Although this class is relevant to many areas of science and engineering, here we are interested in applying this framework to the reverse engineering problem in computational systems biology, which yields very large mixed-integer dynamic optimization (MIDO) problems. In particular, we consider the framework of logic-based ordinary differential equations (ODEs). We present saCeSS2, a parallel method for the solution of this class of problems. This method is based on a parallel cooperative scatter search metaheuristic, with new mechanisms of self-adaptation and specific extensions to handle large mixed-integer problems. We have paid special attention to the avoidance of convergence stagnation using adaptive cooperation strategies tailored to this class of problems. We illustrate its performance with a set of three very challenging case studies from the domain of dynamic modelling of cell signaling. The simplest case study considers a synthetic signaling pathway and has 84 continuous and 34 binary decision variables. A second case study considers the dynamic modeling of signaling in liver cancer using high-throughput data, and has 135 continuous and 109 binary decision variables. The third case study is an extremely difficult problem related to breast cancer, involving 690 continuous and 138 binary decision variables. We report computational results obtained on different infrastructures, including a local cluster, a large supercomputer, and a public cloud platform. Interestingly, the results show how the cooperation of individual parallel searches modifies the systemic properties of the sequential algorithm, achieving superlinear speedups compared to an individual search (e.g. speedups of 15 with 10 cores) and significantly improving performance (by more than 60%) with respect to a non-cooperative parallel scheme. The scalability of the method is also good (tests were performed using up to 300 cores). These results demonstrate that saCeSS2 can be used to successfully reverse engineer large dynamic models of complex biological pathways. Further, these results open up new possibilities for other MIDO-based large-scale applications in the life sciences such as metabolic engineering, synthetic biology, and drug scheduling.
HeNCE: A Heterogeneous Network Computing Environment
Beguelin, Adam; Dongarra, Jack J.; Geist, George Al; ...
1994-01-01
Network computing seeks to utilize the aggregate resources of many networked computers to solve a single problem. In so doing it is often possible to obtain supercomputer performance from an inexpensive local area network. The drawback is that network computing is complicated and error prone when done by hand, especially if the computers have different operating systems and data formats and are thus heterogeneous. The heterogeneous network computing environment (HeNCE) is an integrated graphical environment for creating and running parallel programs over a heterogeneous collection of computers. It is built on a lower level package called parallel virtual machine (PVM). The HeNCE philosophy of parallel programming is to have the programmer graphically specify the parallelism of a computation and to automate, as much as possible, the tasks of writing, compiling, executing, debugging, and tracing the network computation. Key to HeNCE is a graphical language based on directed graphs that describe the parallelism and data dependencies of an application. Nodes in the graphs represent conventional Fortran or C subroutines and the arcs represent data and control flow. This article describes the present state of HeNCE, its capabilities, limitations, and areas of future research.
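A minimal sketch of the HeNCE-style execution model, expressed with a thread pool: nodes are ordinary functions, arcs are data dependencies, and independent nodes run concurrently (the four-node graph below is hypothetical, and this is Python rather than HeNCE's Fortran/C nodes):

```python
import concurrent.futures as cf

# A hypothetical four-node graph: arcs carry data, nodes are plain functions.
deps  = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
funcs = {"a": lambda: 2,
         "b": lambda a: a + 1,
         "c": lambda a: a * 3,
         "d": lambda b, c: b + c}

def run_dag(deps, funcs):
    done, futures = {}, {}
    with cf.ThreadPoolExecutor() as pool:
        pending = set(deps)
        while pending:
            # launch every node whose inputs are ready ("b" and "c" overlap)
            for n in [n for n in pending
                      if n not in futures and all(d in done for d in deps[n])]:
                futures[n] = pool.submit(funcs[n], *[done[d] for d in deps[n]])
            # collect one launched node, then re-check readiness
            n, fut = next((n, f) for n, f in futures.items() if n in pending)
            done[n] = fut.result()
            pending.discard(n)
    return done

print(run_dag(deps, funcs))   # -> values: a=2, b=3, c=9, d=12
```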
Parallel Unsteady Turbopump Simulations for Liquid Rocket Engines
NASA Technical Reports Server (NTRS)
Kiris, Cetin C.; Kwak, Dochan; Chan, William
2000-01-01
This paper reports the progress being made towards complete turbo-pump simulation capability for liquid rocket engines. The Space Shuttle Main Engine (SSME) turbo-pump impeller is used as a test case for the performance evaluation of the MPI and hybrid MPI/OpenMP versions of the INS3D code. A computational model of a turbo-pump has then been developed for the shuttle upgrade program. Relative motion of the grid system for rotor-stator interaction was obtained by employing overset grid techniques. Time-accuracy of the scheme has been evaluated by using simple test cases. Unsteady computations for the SSME turbo-pump, which contains 136 zones with 35 million grid points, are currently underway on Origin 2000 systems at NASA Ames Research Center. Results from time-accurate simulations with moving boundary capability, and the performance of the parallel versions of the code, will be presented in the final paper.
Injector Design Tool Improvements: User's manual for FDNS V.4.5
NASA Technical Reports Server (NTRS)
Chen, Yen-Sen; Shang, Huan-Min; Wei, Hong; Liu, Jiwen
1998-01-01
The major emphasis of the current effort is the development and validation of an efficient parallel-machine computational model, based on the FDNS code, to analyze the fluid dynamics of a wide variety of liquid jet configurations for general liquid rocket engine injection system applications. This model includes physical models for droplet atomization, breakup/coalescence, evaporation, turbulence mixing, and gas-phase combustion. Benchmark validation cases for liquid rocket engine chamber combustion conditions will be performed for model validation purposes. Test cases may include shear-coaxial, swirl-coaxial, and impinging injection systems with combinations of LOX/H2 or LOX/RP-1 propellant injector elements used in rocket engine designs. As a final goal of this project, a well-tested parallel CFD performance methodology, together with a user's operation description, will be reported in a final technical report at the end of the proposed research effort.
Carroll, Sean Michael; Chubiz, Lon M.; Agashe, Deepa; Marx, Christopher J.
2015-01-01
Bioengineering holds great promise to provide fast and efficient biocatalysts for methanol-based biotechnology, but necessitates proven methods to optimize physiology in engineered strains. Here, we highlight experimental evolution as an effective means for optimizing an engineered Methylobacterium extorquens AM1. Replacement of the native formaldehyde oxidation pathway with a functional analog substantially decreased growth in an engineered Methylobacterium, but growth rapidly recovered after six hundred generations of evolution on methanol. We used whole-genome sequencing to identify the basis of adaptation in eight replicate evolved strains, and examined genomic changes in light of other growth and physiological data. We observed great variety in the numbers and types of mutations that occurred, including instances of parallel mutations at targets that may have been “rationalized” by the bioengineer, plus other “illogical” mutations that demonstrate the ability of evolution to expose unforeseen optimization solutions. Notably, we investigated mutations to RNA polymerase, which provided a massive growth benefit but are linked to highly aberrant transcriptional profiles. Overall, we highlight the power of experimental evolution to present genetic and physiological solutions for strain optimization, particularly in systems where the challenges of engineering are too many or too difficult to overcome via traditional engineering methods. PMID:27682084
Programming cells by multiplex genome engineering and accelerated evolution.
Wang, Harris H; Isaacs, Farren J; Carr, Peter A; Sun, Zachary Z; Xu, George; Forest, Craig R; Church, George M
2009-08-13
The breadth of genomic diversity found among organisms in nature allows populations to adapt to diverse environments. However, genomic diversity is difficult to generate in the laboratory and new phenotypes do not easily arise on practical timescales. Although in vitro and directed evolution methods have created genetic variants with usefully altered phenotypes, these methods are limited to laborious and serial manipulation of single genes and are not used for parallel and continuous directed evolution of gene networks or genomes. Here, we describe multiplex automated genome engineering (MAGE) for large-scale programming and evolution of cells. MAGE simultaneously targets many locations on the chromosome for modification in a single cell or across a population of cells, thus producing combinatorial genomic diversity. Because the process is cyclical and scalable, we constructed prototype devices that automate the MAGE technology to facilitate rapid and continuous generation of a diverse set of genetic changes (mismatches, insertions, deletions). We applied MAGE to optimize the 1-deoxy-D-xylulose-5-phosphate (DXP) biosynthesis pathway in Escherichia coli to overproduce the industrially important isoprenoid lycopene. Twenty-four genetic components in the DXP pathway were modified simultaneously using a complex pool of synthetic DNA, creating over 4.3 billion combinatorial genomic variants per day. We isolated variants with more than fivefold increase in lycopene production within 3 days, a significant improvement over existing metabolic engineering techniques. Our multiplex approach embraces engineering in the context of evolution by expediting the design and evolution of organisms with new and improved properties.
Differential Draining of Parallel-Fed Propellant Tanks in Morpheus and Apollo Flight
NASA Technical Reports Server (NTRS)
Hurlbert, Eric; Guardado, Hector; Hernandez, Humberto; Desai, Pooja
2015-01-01
Parallel-fed propellant tanks are an advantageous configuration for many spacecraft. Parallel-fed tanks allow the center of gravity (cg) to be maintained over the engine(s), as opposed to serial-fed propellant tanks, which shift the cg as propellant is drained from one tank before the other. Parallel-fed tanks also allow for tank isolation if that is needed. Parallel tanks and feed systems have been used in several past vehicles, including the Apollo Lunar Module. The design of the feed system connecting the parallel tanks is critical to maintaining balance in the propellant tanks. The design must account for and minimize the effect of manufacturing variations that could cause delta-p or mass flowrate differences, which would lead to propellant imbalance; other sources of differential draining are also discussed. Fortunately, physics provides some self-correcting behaviors that tend to equalize any initial imbalance. Whether active control of the propellant level in each tank is required or can be avoided is also an important question to answer. In order to provide flight data on parallel-fed tanks and differential draining for cryogenic propellants (as well as any other fluid), a vertical test bed (flying lander) for terrestrial use was employed. The Morpheus vertical test bed is a parallel-fed propellant tank system that uses passive design to keep the propellant tanks balanced; the system is operated in blowdown. The Morpheus vehicle was instrumented with a capacitance level sensor in each propellant tank in order to measure the draining of propellants in over 34 tethered and 12 free flights. Morpheus did experience an approximately 20 lbm imbalance in one pair of tanks, and the cause of this imbalance is discussed. This paper covers the analysis, design, flight-simulation vehicle dynamic modeling, and flight test of the Morpheus parallel-fed propellant system. The Apollo LEM data is also examined in this summary report of the flight data.
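A back-of-the-envelope model of the self-correcting behavior noted above, under simplified assumptions: fixed total demand, quasi-steady flow, and a per-tank flow split proportional to sqrt(head) divided by line resistance. All numbers are invented; this is not the Morpheus feed system model.

```python
import numpy as np

A = 1.0                      # tank cross-section area (m^2)
R = np.array([1.00, 1.02])   # line resistances (a 2% manufacturing mismatch)
h = np.array([2.00, 1.90])   # liquid heights (m), starting 5% imbalanced
Q, dt = 0.05, 0.1            # total engine demand (m^3/s) and time step (s)

for step in range(600):
    w = np.sqrt(np.maximum(h, 0.0)) / R   # relative line conductances
    q = Q * w / w.sum()                   # per-tank share of the demand
    h = h - q * dt / A                    # fuller tank delivers more, drains faster
    if step % 150 == 0:
        print(f"t={step * dt:5.1f}s  h={h.round(4)}  imbalance={abs(h[0] - h[1]):.4f}")
```

Because the tank with the higher head carries more of the demand, the imbalance shrinks toward the small residual offset set by the resistance mismatch, which is the passive equalization the flight data was meant to characterize.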
NASA Astrophysics Data System (ADS)
Brunet, V.; Molton, P.; Bézard, H.; Deck, S.; Jacquin, L.
2012-01-01
This paper describes the results obtained during the European Union JEDI (JEt Development Investigations) project, carried out in cooperation between ONERA and Airbus. The aim of these studies was first to acquire a complete database of a modern-type engine jet installation set under a wall-to-wall swept wing in various transonic flow conditions. Interactions between the engine jet, the pylon, and the wing were studied thanks to 'advanced' measurement techniques. In parallel, accurate Reynolds-averaged Navier-Stokes (RANS) simulations were carried out, from simple ones with the Spalart-Allmaras model to more complex ones with the DRSM-SSG (Differential Reynolds Stress Model of Speziale-Sarkar-Gatski) turbulence model. In the end, Zonal Detached Eddy Simulations (Z-DES) were also performed to compare different simulation techniques. All numerical results are validated against the experimental database acquired in parallel. This complete and complex study of a modern civil aircraft engine installation yielded many improvements in understanding and in simulation methods. Furthermore, a setup for engine jet installation studies has been validated for possible future work in the S3Ch transonic research wind tunnel. The main conclusions are summed up in this paper.
Equation solvers for distributed-memory computers
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O.
1994-01-01
A large number of scientific and engineering problems require the rapid solution of large systems of simultaneous equations. The performance of parallel computers in this area now dwarfs traditional vector computers by nearly an order of magnitude. This talk describes the major issues involved in parallel equation solvers with particular emphasis on the Intel Paragon, IBM SP-1 and SP-2 processors.
The multi-queue model applied to random access protocol
NASA Astrophysics Data System (ADS)
Fan, Xinlong
2013-03-01
The connection of everything in a sensory and intelligent way is the goal of a smart environment. This paper introduces engineered cell-sensors into multi-agent systems to realize such an environment. The seamless interface of cells with the natural environment and their strong information-processing ability, together with the achievements of synthetic biology, make the construction of engineered cell-sensors possible. However, engineered cell-sensors are simple and unreliable computational entities, so how to combine them with digital devices is a key problem in realizing the smart environment. We give the abstract structure and interaction modes of engineered cell-sensors in order to introduce them into multi-agent systems. We believe that the introduction of engineered cell-sensors will push forward the development of the smart environment.
Parallel machine architecture and compiler design facilities
NASA Technical Reports Server (NTRS)
Kuck, David J.; Yew, Pen-Chung; Padua, David; Sameh, Ahmed; Veidenbaum, Alex
1990-01-01
The objective is to provide an integrated simulation environment for studying and evaluating various issues in designing parallel systems, including machine architectures, parallelizing compiler techniques, and parallel algorithms. The status of the Delta project (whose objective is to provide a facility for rapid prototyping of parallelizing compilers that can target different machine architectures) is summarized. Included are surveys of the program manipulation tools developed, the environmental software supporting Delta, and the compiler research projects in which Delta has played a role.
40 CFR 1033.130 - Instructions for engine remanufacturing or engine installation.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Instructions for engine remanufacturing or engine installation. 1033.130 Section 1033.130 Protection of Environment ENVIRONMENTAL... and Related Requirements § 1033.130 Instructions for engine remanufacturing or engine installation. (a...
40 CFR 1033.130 - Instructions for engine remanufacturing or engine installation.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Instructions for engine remanufacturing or engine installation. 1033.130 Section 1033.130 Protection of Environment ENVIRONMENTAL... and Related Requirements § 1033.130 Instructions for engine remanufacturing or engine installation. (a...
40 CFR 1033.130 - Instructions for engine remanufacturing or engine installation.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Instructions for engine remanufacturing or engine installation. 1033.130 Section 1033.130 Protection of Environment ENVIRONMENTAL... and Related Requirements § 1033.130 Instructions for engine remanufacturing or engine installation. (a...
40 CFR 1033.130 - Instructions for engine remanufacturing or engine installation.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Instructions for engine remanufacturing or engine installation. 1033.130 Section 1033.130 Protection of Environment ENVIRONMENTAL... and Related Requirements § 1033.130 Instructions for engine remanufacturing or engine installation. (a...
40 CFR 1033.130 - Instructions for engine remanufacturing or engine installation.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 33 2011-07-01 2011-07-01 false Instructions for engine remanufacturing or engine installation. 1033.130 Section 1033.130 Protection of Environment ENVIRONMENTAL... and Related Requirements § 1033.130 Instructions for engine remanufacturing or engine installation. (a...
Petascale turbulence simulation using a highly parallel fast multipole method on GPUs
NASA Astrophysics Data System (ADS)
Yokota, Rio; Barba, L. A.; Narumi, Tetsu; Yasuoka, Kenji
2013-03-01
This paper reports large-scale direct numerical simulations of homogeneous-isotropic fluid turbulence, achieving sustained performance of 1.08 petaflop/s on GPU hardware using single precision. The simulations use a vortex particle method to solve the Navier-Stokes equations, with a highly parallel fast multipole method (FMM) as numerical engine, and match the current record in mesh size for this application, a cube of 4096³ computational points solved with a spectral method. The standard numerical approach used in this field is the pseudo-spectral method, relying on the FFT algorithm as the numerical engine. The particle-based simulations presented in this paper quantitatively match the kinetic energy spectrum obtained with a pseudo-spectral method, using a trusted code. In terms of parallel performance, weak scaling results show the FMM-based vortex method achieving 74% parallel efficiency on 4096 processes (one GPU per MPI process, 3 GPUs per node of the TSUBAME-2.0 system). The FFT-based spectral method is able to achieve just 14% parallel efficiency on the same number of MPI processes (using only CPU cores), due to the all-to-all communication pattern of the FFT algorithm. The calculation time for one time step was 108 s for the vortex method and 154 s for the spectral method, under these conditions. Computing with 69 billion particles, this work exceeds by an order of magnitude the largest vortex-method calculations to date.
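For reference, the quantity the FMM accelerates is the all-pairs Biot-Savart sum over vortex particles; a direct O(N²) evaluation for 2D point vortices (with an invented smoothing parameter, and far smaller N than the paper's 69 billion) looks like:

```python
import numpy as np

rng = np.random.default_rng(2)

N = 1000
pos = rng.uniform(0.0, 1.0, (N, 2))       # particle positions
gamma = rng.normal(0.0, 1.0, N)           # vortex strengths (circulations)
eps2 = 1e-6                               # smoothing; avoids the self-singularity

# pairwise separations; entry (i, j) is x_i - x_j
dx = pos[:, 0][:, None] - pos[:, 0][None, :]
dy = pos[:, 1][:, None] - pos[:, 1][None, :]
r2 = dx * dx + dy * dy + eps2

# induced velocity at particle i: sum_j Gamma_j / (2*pi) * (-dy_ij, dx_ij) / r2_ij
u = (-dy / r2) @ gamma / (2.0 * np.pi)
v = (dx / r2) @ gamma / (2.0 * np.pi)
print("max induced speed:", np.hypot(u, v).max())
```

The FMM replaces this dense N×N interaction with a hierarchical far-field approximation, bringing the cost from O(N²) toward O(N), which is what makes billion-particle runs feasible.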
Compiler and Runtime Support for Programming in Adaptive Parallel Environments
1998-10-15
no other job is waiting for resources, and use a smaller number of processors when other jobs need resources. Setia et al. [15, 20] have shown that such...[15] Vijay K. Naik, Sanjeev Setia, and Mark Squillante. Performance analysis of job scheduling policies in parallel supercomputing environments. In...on networks of heterogeneous workstations. Technical Report CSE-94-012, Oregon Graduate Institute of Science and Technology, 1994. [20] Sanjeev Setia
Injector element characterization methodology
NASA Technical Reports Server (NTRS)
Cox, George B., Jr.
1988-01-01
Characterization of liquid rocket engine injector elements is an important part of the development process for rocket engine combustion devices. Modern nonintrusive instrumentation for flow velocity and spray droplet size measurement, and automated, computer-controlled test facilities allow rapid, low-cost evaluation of injector element performance and behavior. Application of these methods in rocket engine development, paralleling their use in gas turbine engine development, will reduce rocket engine development cost and risk. The Alternate Turbopump (ATP) Hot Gas Systems (HGS) preburner injector elements were characterized using such methods, and the methodology and some of the results obtained will be shown.
Boyle, Peter [Edinburgh, GB]; Christ, Norman [Irvington, NY]; Gara, Alan [Yorktown Heights, NY]; Kim, Changhoan [San Jose, CA]; Mawhinney, Robert [New York, NY]; Ohmacht, Martin [Yorktown Heights, NY]; Sugavanam, Krishnan [Yorktown Heights, NY]
2012-08-28
A list prefetch engine improves a performance of a parallel computing system. The list prefetch engine receives a current cache miss address. The list prefetch engine evaluates whether the current cache miss address is valid. If the current cache miss address is valid, the list prefetch engine compares the current cache miss address and a list address. A list address represents an address in a list. A list describes an arbitrary sequence of prior cache miss addresses. The prefetch engine prefetches data according to the list, if there is a match between the current cache miss address and the list address.
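A behavioral sketch of the mechanism the claim describes (list contents, prefetch depth, and the matching window below are illustrative; the patent specifies the actual hardware design):

```python
class ListPrefetchEngine:
    """Behavioral model of a list prefetch engine: replay a recorded list of
    prior cache-miss addresses, and when a new valid miss matches the list,
    prefetch the next few addresses the list predicts."""

    def __init__(self, miss_list, depth=4):
        self.miss_list = miss_list      # arbitrary sequence of prior misses
        self.cursor = 0                 # position in the replayed list
        self.depth = depth              # hypothetical lookahead/window size

    def on_cache_miss(self, addr):
        if addr is None:                # invalid miss address: do nothing
            return []
        # compare the current miss against the list near the cursor
        window = self.miss_list[self.cursor:self.cursor + self.depth]
        if addr in window:
            self.cursor += window.index(addr) + 1
            # match: issue prefetches for the addresses the list predicts next
            return self.miss_list[self.cursor:self.cursor + self.depth]
        return []                       # no match: no prefetch

engine = ListPrefetchEngine([0x100, 0x140, 0x1c0, 0x240, 0x300, 0x380])
print([hex(a) for a in engine.on_cache_miss(0x100)])
# -> ['0x140', '0x1c0', '0x240', '0x300']
```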
Extending the boundaries of reverse engineering
NASA Astrophysics Data System (ADS)
Lawrie, Chris
2002-04-01
In today's marketplace the potential of Reverse Engineering as a time-compression tool is commonly lost under its traditional definition. The term Reverse Engineering was coined at the advent of CMM machines and 3D CAD systems to describe the process of fitting surfaces to captured point data. Since these early beginnings, downstream hardware scanning and digitising systems have evolved in parallel with upstream demand, greatly increasing the potential of a point-cloud data set within engineering design and manufacturing processes. This paper discusses the issues surrounding Reverse Engineering at the turn of the millennium.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malony, Allen D; Shende, Sameer
The primary goal of the University of Oregon's DOE "competitiveness" project was to create performance technology that embodies and supports knowledge of performance data, analysis, and diagnosis in parallel performance problem solving. The target of our development activities was the TAU Performance System, and the technology accomplishments reported in this and prior reports have all been incorporated in the TAU open software distribution. In addition, the project has been committed to maintaining strong interactions with the DOE SciDAC Performance Engineering Research Institute (PERI) and Center for Technology for Advanced Scientific Component Software (TASCS). This collaboration has proved valuable for translation of our knowledge-based performance techniques to parallel application development and performance engineering practice. Our outreach has also extended to the DOE Advanced CompuTational Software (ACTS) collection and project. Throughout the project we have participated in the PERI and TASCS meetings, as well as the ACTS annual workshops.
Applications of Emerging Parallel Optical Link Technology to High Energy Physics Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chramowicz, J.; Kwan, S.; Prosser, A.
2011-09-01
Modern particle detectors depend upon optical fiber links to deliver event data to upstream trigger and data processing systems. Future detector systems can benefit from the development of dense arrangements of high-speed optical links emerging from the telecommunications and storage area network market segments. These links support data transfers in each direction at rates up to 120 Gbps in packages that minimize or even eliminate edge connector requirements. Emerging products include a class of devices known as optical engines, which permit assembly of the optical transceivers in close proximity to the electrical interfaces of ASICs and FPGAs that handle the data in parallel electrical format. Such assemblies will reduce the required printed circuit board area and minimize electromagnetic interference and susceptibility. We will present test results for some of these parallel components and report on the development of pluggable FPGA Mezzanine Cards equipped with optical engines to provide to collaborators on the Versatile Link Common Project for the HL-LHC at CERN.
Engineered plant biomass particles coated with biological agents
Dooley, James H.; Lanning, David N.
2014-06-24
Plant biomass particles coated with a biological agent such as a bacterium or seed, characterized by a length dimension (L) aligned substantially parallel to a grain direction and defining a substantially uniform distance along the grain, a width dimension (W) normal to L and aligned cross grain, and a height dimension (H) normal to W and L. In particular, the L×H dimensions define a pair of substantially parallel side surfaces characterized by substantially intact longitudinally arrayed fibers, the W×H dimensions define a pair of substantially parallel end surfaces characterized by crosscut fibers and end checking between fibers, and the L×W dimensions define a pair of substantially parallel top and bottom surfaces.
Optimal Design of Passive Power Filters Based on Pseudo-parallel Genetic Algorithm
NASA Astrophysics Data System (ADS)
Li, Pei; Li, Hongbo; Gao, Nannan; Niu, Lin; Guo, Liangfeng; Pei, Ying; Zhang, Yanyan; Xu, Minmin; Chen, Kerui
2017-05-01
The economic cost and the filter efficiency are taken together as the targets for optimizing the parameters of the passive filter. The method combines a pseudo-parallel genetic algorithm with an adaptive genetic algorithm: in the early stages the pseudo-parallel genetic algorithm is used to increase population diversity, and in the late stages the adaptive genetic algorithm is used to reduce the workload. In addition, the migration rate of the pseudo-parallel genetic algorithm is improved so that it adapts to population diversity. Simulation results show that the filter designed by the proposed method achieves a better filtering effect at lower economic cost and is suitable for engineering use.
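A minimal sketch of an island-model ("pseudo-parallel") genetic algorithm with a diversity-driven migration rate, in Python. The toy quadratic objective, ring migration topology, and all parameter values are illustrative assumptions, not the filter-design objective or settings used by the authors.

```python
import random

def evolve_island(pop, fitness, mut_rate):
    """One generation on a single island: binary tournament selection plus
    per-gene Gaussian mutation (minimizing the objective)."""
    new_pop = []
    for _ in range(len(pop)):
        a, b = random.sample(pop, 2)
        child = min(a, b, key=fitness)[:]
        for i in range(len(child)):
            if random.random() < mut_rate:
                child[i] += random.gauss(0, 0.1)
        new_pop.append(child)
    return new_pop

def diversity(pop):
    """Crude diversity measure: mean per-gene spread across the population."""
    n = len(pop[0])
    return sum(max(ind[i] for ind in pop) - min(ind[i] for ind in pop)
               for i in range(n)) / n

def pseudo_parallel_ga(fitness, islands=4, pop_size=20, genes=3, gens=100):
    pops = [[[random.uniform(-5, 5) for _ in range(genes)]
             for _ in range(pop_size)] for _ in range(islands)]
    for _ in range(gens):
        pops = [evolve_island(p, fitness, mut_rate=0.05) for p in pops]
        # Adaptive migration: migrate more aggressively as diversity collapses.
        div = sum(diversity(p) for p in pops) / islands
        migration_rate = max(0.05, min(0.5, 1.0 / (1.0 + div)))
        if random.random() < migration_rate:
            for i, p in enumerate(pops):        # ring topology: best moves on
                best = min(p, key=fitness)
                pops[(i + 1) % islands][random.randrange(pop_size)] = best[:]
    return min((min(p, key=fitness) for p in pops), key=fitness)

# Toy objective standing in for the cost/efficiency trade-off.
best = pseudo_parallel_ga(lambda x: sum(v * v for v in x))
```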
Execution environment for intelligent real-time control systems
NASA Technical Reports Server (NTRS)
Sztipanovits, Janos
1987-01-01
Modern telerobot control technology requires the integration of symbolic and non-symbolic programming techniques, different models of parallel computations, and various programming paradigms. The Multigraph Architecture, which has been developed for the implementation of intelligent real-time control systems is described. The layered architecture includes specific computational models, integrated execution environment and various high-level tools. A special feature of the architecture is the tight coupling between the symbolic and non-symbolic computations. It supports not only a data interface, but also the integration of the control structures in a parallel computing environment.
Scalable Performance Environments for Parallel Systems
NASA Technical Reports Server (NTRS)
Reed, Daniel A.; Olson, Robert D.; Aydt, Ruth A.; Madhyastha, Tara M.; Birkett, Thomas; Jensen, David W.; Nazief, Bobby A. A.; Totty, Brian K.
1991-01-01
As parallel systems expand in size and complexity, the absence of performance tools for these parallel systems exacerbates the already difficult problems of application program and system software performance tuning. Moreover, given the pace of technological change, we can no longer afford to develop ad hoc, one-of-a-kind performance instrumentation software; we need scalable, portable performance analysis tools. We describe an environment prototype based on the lessons learned from two previous generations of performance data analysis software. Our environment prototype contains a set of performance data transformation modules that can be interconnected in user-specified ways. It is the responsibility of the environment infrastructure to hide details of module interconnection and data sharing. The environment is written in C++ with the graphical displays based on X windows and the Motif toolkit. It allows users to interconnect and configure modules graphically to form an acyclic, directed data analysis graph. Performance trace data are represented in a self-documenting stream format that includes internal definitions of data types, sizes, and names. The environment prototype supports the use of head-mounted displays and sonic data presentation in addition to the traditional use of visual techniques.
Parallel evolutionary computation in bioinformatics applications.
Pinho, Jorge; Sobral, João Luis; Rocha, Miguel
2013-05-01
A large number of optimization problems within the field of Bioinformatics require methods able to handle their inherent complexity (e.g. NP-hard problems) and also demand increased computational effort. In this context, the use of parallel architectures is a necessity. In this work, we propose ParJECoLi, a Java-based library that offers a large set of metaheuristic methods (such as Evolutionary Algorithms) and also addresses the issue of their efficient execution on a wide range of parallel architectures. The proposed approach focuses on ease of use, making the adaptation to distinct parallel environments (multicore, cluster, grid) transparent to the user. Indeed, this work shows how the development of the optimization library can proceed independently of its adaptation to several architectures, making use of Aspect-Oriented Programming. The pluggable nature of the parallelism-related modules allows users to easily configure their environment, adding parallelism modules to the base source code when needed. The performance of the platform is validated with two case studies within biological model optimization. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
ERIC Educational Resources Information Center
Koszalka, Tiffany A.; Wu, Yiyan
2010-01-01
Changes in engineering practices have spawned changes in engineering education and prompted the use of distributed learning environments. A distributed collaborative engineering design (CED) course was designed to engage engineering students in learning about and solving engineering design problems. The CED incorporated an advanced interactive…
Axisymmetric computational fluid dynamics analysis of Saturn V/S1-C/F1 nozzle and plume
NASA Technical Reports Server (NTRS)
Ruf, Joseph H.
1993-01-01
An axisymmetric single-engine computational fluid dynamics calculation of the Saturn V/S1-C vehicle base region and F1 engine plume is described. There were two objectives of this work. The first was to calculate an axisymmetric approximation of the nozzle, plume, and base region flow fields of the S1-C/F1, relate/scale this to flight data, and apply this scaling factor to NLS/STME axisymmetric calculations from a parallel effort. The second was to assess the differences between F1 and STME plume shear layer development and concentration of combustible gases. This second piece of information was input/supporting data for assumptions made in the NLS2 base temperature scaling methodology from which the vehicle base thermal environments were being generated. The F1 calculations started at the main combustion chamber faceplate and incorporated the turbine exhaust dump/nozzle film coolant. The plume and base region calculations were made for 10,000-foot and 57,000-foot altitudes, at vehicle flight velocity and in a stagnant freestream. FDNS was implemented with a 14-species, 28-reaction finite-rate chemistry model plus a soot-burning model for the RP-1/LOX chemistry. Nozzle and plume flow fields are shown, and the plume shear layer constituents are compared to an STME plume. Conclusions are made about the validity and status of the analysis and of the NLS2 vehicle base thermal environment definition methodology.
Nous sommes tous des bacteries: implications for medicine, pharmacology and public health.
Triggle, David J
2012-12-15
As a species we humans are outnumbered by bacteria in both cell and gene count. This somewhat humbling observation is key to the increasing recognition that the long-standing symbiotic and commensal relations between Homo sapiens and bacteria are of great significance to basic human physiology and health. Knowledge of our human bacterial environment is contributing to an understanding of a variety of disorders including obesity and metabolic syndrome, cardiovascular disease, immunity, and neuronal development and behavior. The Human Microbiome Project is providing a genetic and ecological analysis and will serve as a parallel to the Human Genome Project. Exploration of the chemical space utilized by bacteria will contribute to the development of new small molecule therapeutic agents, including new antibiotics. And genetically re-engineered bacteria are proving to be of potential value as actual therapeutic entities. Our understanding of our bacterial world has the capability to transform radically our current approach to human health diverting it from an emphasis on acute treatments to living in healthy harmony with both our internal and external environments. Copyright © 2012 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Long, M. S.; Yantosca, R.; Nielsen, J. E; Keller, C. A.; Da Silva, A.; Sulprizio, M. P.; Pawson, S.; Jacob, D. J.
2015-01-01
The GEOS-Chem global chemical transport model (CTM), used by a large atmospheric chemistry research community, has been re-engineered to also serve as an atmospheric chemistry module for Earth system models (ESMs). This was done using an Earth System Modeling Framework (ESMF) interface that operates independently of the GEOS-Chem scientific code, permitting the exact same GEOS-Chem code to be used as an ESM module or as a standalone CTM. In this manner, the continual stream of updates contributed by the CTM user community is automatically passed on to the ESM module, which remains state of science and referenced to the latest version of the standard GEOS-Chem CTM. A major step in this re-engineering was to make GEOS-Chem grid independent, i.e., capable of using any geophysical grid specified at run time. GEOS-Chem data sockets were also created for communication between modules and with external ESM code. The grid-independent, ESMF-compatible GEOS-Chem is now the standard version of the GEOS-Chem CTM. It has been implemented as an atmospheric chemistry module into the NASA GEOS-5 ESM. The coupled GEOS-5-GEOS-Chem system was tested for scalability and performance with a tropospheric oxidant-aerosol simulation (120 coupled species, 66 transported tracers) using 48-240 cores and message-passing interface (MPI) distributed-memory parallelization. Numerical experiments demonstrate that the GEOS-Chem chemistry module scales efficiently for the number of cores tested, with no degradation as the number of cores increases. Although inclusion of atmospheric chemistry in ESMs is computationally expensive, the excellent scalability of the chemistry module means that the relative cost goes down with increasing number of cores in a massively parallel environment.
NASA Astrophysics Data System (ADS)
Eilert, Tobias; Beckers, Maximilian; Drechsler, Florian; Michaelis, Jens
2017-10-01
The analysis tool and software package Fast-NPS can be used to analyse smFRET data to obtain quantitative structural information about macromolecules in their natural environment. In the algorithm, a Bayesian model gives rise to a multivariate probability distribution describing the uncertainty of the structure determination. Since Fast-NPS aims to be an easy-to-use general-purpose analysis tool for a large variety of smFRET networks, we established an MCMC-based sampling engine that approximates the target distribution and requires no parameter specification by the user at all. For efficient local exploration we automatically adapt the multivariate proposal kernel according to the shape of the target distribution. In order to handle multimodality, the sampler is equipped with a parallel tempering scheme that is fully adaptive with respect to temperature spacing and number of chains. Since the molecular surrounding of a dye molecule affects its spatial mobility and thus the smFRET efficiency, we introduce dye models which can be selected for every dye molecule individually. These models allow the user to represent the smFRET network in great detail, leading to an increased localisation precision. Finally, a tool to validate the chosen model combination is provided.
Program files doi: http://dx.doi.org/10.17632/7ztzj63r68.1
Licensing provisions: Apache-2.0
Programming language: GUI in MATLAB (The MathWorks); core sampling engine in C++
Nature of problem: Sampling of highly diverse multivariate probability distributions in order to solve for macromolecular structures from smFRET data.
Solution method: MCMC algorithm with fully adaptive proposal kernel and parallel tempering scheme.
A design concept of parallel elasticity extracted from biological muscles for engineered actuators.
Chen, Jie; Jin, Hongzhe; Iida, Fumiya; Zhao, Jie
2016-08-23
Series elastic actuation that takes inspiration from biological muscle-tendon units has been extensively studied and used to address the challenges (e.g. energy efficiency, robustness) existing in purely stiff robots. However, there also exists another form of passive property in biological actuation, parallel elasticity within muscles themselves, and our knowledge of it is limited: for example, there is still no general design strategy for the elasticity profile. When we look at nature, on the other hand, there seems to be universal agreement in biological systems: experimental evidence has suggested that a concave-upward elasticity behaviour is exhibited within the muscles of animals. Seeking to draw possible design clues for elasticity in parallel with actuators, we use a simplified joint model to investigate the mechanisms behind this biologically universal preference of muscles. Actuation of the model is identified from general biological joints and further reduced with a specific focus on muscle elasticity aspects, for the sake of easy implementation. By examining various elasticity scenarios, one without elasticity and three with elasticity of different profiles, we find that parallel elasticity generally exerts contradictory influences on energy efficiency and disturbance rejection, due to the mechanical impedance shift thus caused. The trade-off analysis between them also reveals that concave parallel elasticity is able to achieve a more advantageous balance than linear and convex ones. It is expected that the results could contribute to our further understanding of muscle elasticity and provide a theoretical guideline on how to properly design parallel elasticity behaviours for engineering systems such as artificial actuators and robotic joints.
NASA Technical Reports Server (NTRS)
1972-01-01
The activities leading to a tentative concept selection for a pressure-fed engine and propulsion support are outlined. Multiple engine concepts were evaluated through parallel analyses of major engine components and systems. Booster vehicle coordination, tradeoffs, and technology/development aspects are included. The concept selected for further evaluation has a regeneratively cooled combustion chamber and nozzle in conjunction with an impinging-element injector. The propellants chosen are LOX/RP-1, and combustion-stabilizing baffles are used to assure dynamic combustion stability.
Shen, Wenfeng; Wei, Daming; Xu, Weimin; Zhu, Xin; Yuan, Shizhong
2010-10-01
Biological computations like electrocardiological modelling and simulation usually require high-performance computing environments. This paper introduces an implementation of parallel computation for computer simulation of electrocardiograms (ECGs) in a personal computer environment with an Intel CPU of Core (TM) 2 Quad Q6600 and a GPU of Geforce 8800GT, with software support by OpenMP and CUDA. It was tested in three parallelization device setups: (a) a four-core CPU without a general-purpose GPU, (b) a general-purpose GPU plus 1 core of CPU, and (c) a four-core CPU plus a general-purpose GPU. To effectively take advantage of a multi-core CPU and a general-purpose GPU, an algorithm based on load-prediction dynamic scheduling was developed and applied to setting (c). In the simulation with 1600 time steps, the speedup of the parallel computation as compared to the serial computation was 3.9 in setting (a), 16.8 in setting (b), and 20.0 in setting (c). This study demonstrates that a current PC with a multi-core CPU and a general-purpose GPU provides a good environment for parallel computations in biological modelling and simulation studies. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
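The load-prediction idea generalizes beyond ECG simulation: predict each device's throughput, split the next batch of work proportionally, then refine the prediction from measured execution times. Below is a hedged Python sketch of that scheduling loop, with thread-backed workers standing in for the CPU cores and GPU of the paper; all function and variable names are hypothetical.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def _run_and_time(fn, items):
    """Run a batch share on one worker and measure its wall time."""
    t0 = time.perf_counter()
    out = [fn(x) for x in items]
    return out, time.perf_counter() - t0

def schedule_by_load_prediction(tasks, workers, batch=64):
    """Split independent tasks among heterogeneous workers in proportion to
    their predicted throughput, refining the prediction after every batch."""
    rates = {name: 1.0 for name in workers}    # predicted tasks/second
    results = []
    for i in range(0, len(tasks), batch):
        chunk = tasks[i:i + batch]
        total = sum(rates.values())
        shares, start = {}, 0
        names = list(workers)
        for name in names[:-1]:                # proportional split of the chunk
            n = int(len(chunk) * rates[name] / total)
            shares[name] = chunk[start:start + n]
            start += n
        shares[names[-1]] = chunk[start:]      # remainder goes to the last worker
        with ThreadPoolExecutor(len(workers)) as pool:
            futs = {n: pool.submit(_run_and_time, workers[n], shares[n]) for n in names}
            for name, fut in futs.items():
                out, dt = fut.result()
                results.extend(out)
                if shares[name]:               # update the throughput prediction
                    rates[name] = len(shares[name]) / max(dt, 1e-9)
    return results

# Toy stand-ins for a CPU kernel and a GPU kernel.
workers = {"cpu": lambda x: x * x, "gpu": lambda x: x * x}
out = schedule_by_load_prediction(list(range(1000)), workers)
```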
Specification and Analysis of Parallel Machine Architecture
1990-03-17
Parallel Machine Architecture C.V. Ramamoorthy Computer Science Division Dept. of Electrical Engineering and Computer Science University of California...capacity. (4) Adaptive: The overhead in resolution of deadlocks, etc. should be in proportion to their frequency. (5) Avoid rollbacks: Rollbacks can be...snapshots of system state graphically at a rate proportional to simulation time. Some of the examples are as follows: (1) When the simulation clock of
Graph Partitioning for Parallel Applications in Heterogeneous Grid Environments
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Kumar, Shailendra; Das, Sajal K.; Biegel, Bryan (Technical Monitor)
2002-01-01
The problem of partitioning irregular graphs and meshes for parallel computations on homogeneous systems has been extensively studied. However, these partitioning schemes fail when the target system architecture exhibits heterogeneity in resource characteristics. With the emergence of technologies such as the Grid, it is imperative to study the partitioning problem taking into consideration the differing capabilities of such distributed heterogeneous systems. In our model, the heterogeneous system consists of processors with varying processing power and an underlying non-uniform communication network. We present in this paper a novel multilevel partitioning scheme for irregular graphs and meshes, that takes into account issues pertinent to Grid computing environments. Our partitioning algorithm, called MiniMax, generates and maps partitions onto a heterogeneous system with the objective of minimizing the maximum execution time of the parallel distributed application. For experimental performance study, we have considered both a realistic mesh problem from NASA as well as synthetic workloads. Simulation results demonstrate that MiniMax generates high quality partitions for various classes of applications targeted for parallel execution in a distributed heterogeneous environment.
Methods for design and evaluation of parallel computing systems (The PISCES project)
NASA Technical Reports Server (NTRS)
Pratt, Terrence W.; Wise, Robert; Haught, Mary Jo
1989-01-01
The PISCES project started in 1984 under the sponsorship of the NASA Computational Structural Mechanics (CSM) program. A PISCES 1 programming environment and parallel FORTRAN were implemented in 1984 for the DEC VAX (using UNIX processes to simulate parallel processes). This system was used for experimentation with parallel programs for scientific applications and AI (dynamic scene analysis) applications. PISCES 1 was ported to a network of Apollo workstations by N. Fitzgerald.
An intelligent processing environment for real-time simulation
NASA Technical Reports Server (NTRS)
Carroll, Chester C.; Wells, Buren Earl, Jr.
1988-01-01
The development of a highly efficient and thus truly intelligent processing environment for real-time general-purpose simulation of continuous systems is described. Such an environment can be created by mapping the simulation process directly onto the University of Alabama's OPERA architecture. To facilitate this effort, the field of continuous simulation is explored, highlighting areas in which efficiency can be improved. Areas in which parallel processing can be applied are also identified, and several general OPERA-type hardware configurations that support improved simulation are investigated. Three direct-execution parallel processing environments are introduced, each of which greatly improves efficiency by exploiting distinct areas of the simulation process. These suggested environments are candidate architectures around which a highly intelligent real-time simulation configuration can be developed.
Caudell, Thomas P; Xiao, Yunhai; Healy, Michael J
2003-01-01
eLoom is an open source graph simulation software tool, developed at the University of New Mexico (UNM), that enables users to specify and simulate neural network models. Its specification language and libraries enable users to construct and simulate arbitrary, potentially hierarchical network structures on serial and parallel processing systems. In addition, eLoom is integrated with UNM's Flatland, an open source virtual environments development tool, to provide real-time visualizations of the network structure and activity. Visualization is a useful method for understanding both learning and computation in artificial neural networks. Through animated 3D pictorial representations of the state and flow of information in the network, a better understanding of network functionality is achieved. ART-1, LAPART-II, MLP, and SOM neural networks are presented to illustrate eLoom's and Flatland's capabilities.
Parallel computing in genomic research: advances and applications
Ocaña, Kary; de Oliveira, Daniel
2015-01-01
Today’s genomic experiments have to process the so-called “biological big data,” which is now reaching the size of terabytes and petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analysis of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article presents a systematic review of the literature surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that can be considered by scientists when running their genomic experiments so as to benefit from parallelism techniques and HPC capabilities. PMID:26604801
Parallel algorithm of VLBI software correlator under multiprocessor environment
NASA Astrophysics Data System (ADS)
Zheng, Weimin; Zhang, Dong
2007-11-01
The correlator is the key signal processing equipment of a Very Long Baseline Interferometry (VLBI) synthetic aperture telescope. It receives the mass of data collected by the VLBI observatories and produces the visibility function of the target, which can be used for spacecraft positioning, baseline-length measurement, synthesis imaging, and other scientific applications. VLBI data correlation is a data-intensive and computation-intensive task. This paper presents the algorithms of two parallel software correlators under multiprocessor environments. A near-real-time correlator for spacecraft tracking adopts pipelining and thread-parallel technology, and runs on SMP (Symmetric Multiprocessor) servers. Another high-speed prototype correlator, using a mixed Pthreads and MPI (Message Passing Interface) parallel algorithm, is realized on a small Beowulf cluster platform. Both correlators have a flexible, scalable structure and are able to correlate data from 10 stations.
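As a rough Python analogue of the pipelined, thread-parallel structure described above (the actual correlators are compiled codes using Pthreads and MPI), the sketch below overlaps an FFT stage and a cross-multiplication stage through a bounded queue. The station count, block size, and single baseline are illustrative assumptions.

```python
import numpy as np, threading, queue

# Stage 1 ("F" step): Fourier-transform each station's data block.
# Stage 2 ("X" step): cross-multiply station pairs to form visibilities.
# The stages run in separate threads and overlap via a bounded queue,
# mirroring the pipelined/thread-parallel structure described above.

blocks_in, spectra_q = queue.Queue(), queue.Queue(maxsize=4)

def f_stage():
    while True:
        block = blocks_in.get()          # block shape: (n_stations, n_samples)
        if block is None:                # sentinel: propagate shutdown downstream
            spectra_q.put(None)
            return
        spectra_q.put(np.fft.rfft(block, axis=1))

def x_stage(visibilities):
    while True:
        spec = spectra_q.get()
        if spec is None:
            return
        # Correlate station 0 against station 1 (one baseline, for brevity).
        visibilities.append(spec[0] * np.conj(spec[1]))

vis = []
threads = [threading.Thread(target=f_stage),
           threading.Thread(target=x_stage, args=(vis,))]
for t in threads: t.start()
for _ in range(8):                       # feed 8 blocks of 2-station test data
    blocks_in.put(np.random.randn(2, 1024))
blocks_in.put(None)                      # sentinel: drain the pipeline
for t in threads: t.join()
```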
A Next-Generation Parallel File System Environment for the OLCF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dillow, David A; Fuller, Douglas; Gunasekaran, Raghul
2012-01-01
When deployed in 2008/2009, the Spider system at the Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) was the world's largest-scale Lustre parallel file system. Envisioned as a shared parallel file system capable of delivering both the bandwidth and capacity requirements of the OLCF's diverse computational environment, Spider has since become a blueprint for shared Lustre environments deployed worldwide. Designed to support the parallel I/O requirements of the Jaguar XT5 system and other smaller-scale platforms at the OLCF, the upgrade to the Titan XK6 heterogeneous system will begin to push the limits of Spider's original design by mid 2013. With a doubling in total system memory and a 10x increase in FLOPS, Titan will require both higher bandwidth and larger total capacity. Our goal is to provide a 4x increase in total I/O bandwidth, from over 240 GB/sec today to 1 TB/sec, and a doubling in total capacity. While aggregate bandwidth and total capacity remain important capabilities, an equally important goal in our efforts is dramatically increasing metadata performance, currently the Achilles heel of parallel file systems at leadership scale. We present in this paper an analysis of our current I/O workloads, our operational experiences with the Spider parallel file systems, the high-level design of our Spider upgrade, and our efforts in developing benchmarks that synthesize our performance requirements based on our workload characterization studies.
Brady, Mariea A; Waldman, Stephen D; Ethier, C Ross
2015-02-01
The unique mechanoelectrochemical environment of cartilage has motivated researchers to investigate the effect of multiple biophysical cues, including mechanical, magnetic, and electrical stimulation, on chondrocyte biology. It is well established that biophysical stimuli promote chondrocyte proliferation, differentiation, and maturation within "biological windows" of defined dose parameters, including mode, frequency, magnitude, and duration of stimuli (see companion review Part I: Cellular Response). However, the underlying molecular mechanisms and signal transduction pathways activated in response to multiple biophysical stimuli remain to be elucidated. Understanding the mechanisms of biophysical signal transduction will deepen knowledge of tissue organogenesis, remodeling, and regeneration, and aid in the treatment of pathologies such as osteoarthritis. Further, this knowledge will provide the tissue engineer with a potent toolset to manipulate and control cell fate and subsequently develop functional replacement cartilage. The aim of this article is to review chondrocyte signal transduction pathways in response to mechanical, magnetic, and electrical cues. Signal transduction does not occur along a single pathway; rather, a number of parallel pathways appear to be activated, with calcium signaling apparently common to all three types of stimuli, though there are different modes of activation. Current tissue engineering strategies, such as the development of "smart" functionalized biomaterials that enable the delivery of growth factors or integration of conjugated nanoparticles, may further benefit from targeting known signal transduction pathways in combination with external biophysical cues.
Prediction of dynamic behavior of mutant strains from limited wild-type data.
Song, Hyun-Seob; Ramkrishna, Doraiswami
2012-03-01
Metabolic engineering is the field of introducing genetic changes in organisms so as to modify their function towards synthesizing new products of high impact to society. However, engineered cells frequently have impaired growth rates thus seriously limiting the rate at which such products are made. The problem is attributable to inadequate understanding of how a metabolic network functions in a dynamic sense. Predictions of mutant strain behavior in the past have been based on steady state theories such as flux balance analysis (FBA), minimization of metabolic adjustment (MOMA), and regulatory on/off minimization (ROOM). Such predictions are restricted to product yields and cannot address productivity, which is of focal interest to applications. We demonstrate that our framework ( [Song and Ramkrishna, 2010] and [Song and Ramkrishna, 2011]), based on a “cybernetic” view of metabolic systems, makes predictions of the dynamic behavior of mutant strains of Escherichia coli from a limited amount of data obtained from the wild-type. Dynamic frameworks must necessarily address the issue of metabolic regulation, which the cybernetic approach does by postulating that metabolism is an optimal dynamic response of the organism to the environment in driving reactions towards ensuring survival. The predictions made in this paper are without parallel in the literature and lay the foundation for rational metabolic engineering. Copyright © 2012 Elsevier Inc. All rights reserved.
Hybrid-Vehicle Transmission System
NASA Technical Reports Server (NTRS)
Lupo, G.; Dotti, G.
1985-01-01
Continuously-variable transmission system for hybrid vehicles couples internal-combustion engine and electric motor section, either individually or in parallel, to power vehicle wheels during steering and braking.
Software Engineering for Scientific Computer Simulations
NASA Astrophysics Data System (ADS)
Post, Douglass E.; Henderson, Dale B.; Kendall, Richard P.; Whitney, Earl M.
2004-11-01
Computer simulation is becoming a very powerful tool for analyzing and predicting the performance of fusion experiments. Simulation efforts are evolving from including only a few effects to many effects, from small teams with a few people to large teams, and from workstations and small processor count parallel computers to massively parallel platforms. Successfully making this transition requires attention to software engineering issues. We report on the conclusions drawn from a number of case studies of large scale scientific computing projects within DOE, academia and the DoD. The major lessons learned include attention to sound project management including setting reasonable and achievable requirements, building a good code team, enforcing customer focus, carrying out verification and validation and selecting the optimum computational mathematics approaches.
Ti/Al Design/Cost Trade-Off Analysis
1978-10-01
evaluate the application of selected titanium aluminide alloys to both dynamic and static components of aircraft gas turbine engines. Mr. D. O. Nash...the development of advanced aircraft gas turbine engines, a continuing objective has been to develop lightweight, high-performance designs. A parallel... engines for the design/cost trade-off study are as follows: Dynamic Components * F101 Fourth-Stage Compressor Blade * J101 Low Pressure Turbine Blade
A Comparison of Parallelism in Interface Designs for Computer-Based Learning Environments
ERIC Educational Resources Information Center
Min, Rik; Yu, Tao; Spenkelink, Gerd; Vos, Hans
2004-01-01
In this paper we discuss an experiment that was carried out with a prototype, designed in conformity with the concept of parallelism and the Parallel Instruction theory (the PI theory). We designed this prototype with five different interfaces, and ran an empirical study in which 18 participants completed an abstract task. The five basic designs…
Collaborative engineering and design management for the Hobby-Eberly Telescope tracker upgrade
NASA Astrophysics Data System (ADS)
Mollison, Nicholas T.; Hayes, Richard J.; Good, John M.; Booth, John A.; Savage, Richard D.; Jackson, John R.; Rafal, Marc D.; Beno, Joseph H.
2010-07-01
The engineering and design of systems as complex as the Hobby-Eberly Telescope's new tracker require that multiple tasks be executed in parallel and overlapping efforts. When the design of individual subsystems is distributed among multiple organizations, teams, and individuals, challenges can arise with respect to managing design productivity and coordinating successful collaborative exchanges. This paper focuses on design management issues and current practices for the tracker design portion of the Hobby-Eberly Telescope Wide Field Upgrade project. The scope of the tracker upgrade requires engineering contributions and input from numerous fields including optics, instrumentation, electromechanics, software controls engineering, and site-operations. Successful system-level integration of tracker subsystems and interfaces is critical to the telescope's ultimate performance in astronomical observation. Software and process controls for design information and workflow management have been implemented to assist the collaborative transfer of tracker design data. The tracker system architecture and selection of subsystem interfaces has also proven to be a determining factor in design task formulation and team communication needs. Interface controls and requirements change controls will be discussed, and critical team interactions are recounted (a group-participation Failure Modes and Effects Analysis [FMEA] is one of special interest). This paper will be of interest to engineers, designers, and managers engaging in multi-disciplinary and parallel engineering projects that require coordination among multiple individuals, teams, and organizations.
Efficient computation of hashes
NASA Astrophysics Data System (ADS)
Lopes, Raul H. C.; Franqueira, Virginia N. L.; Hobson, Peter R.
2014-06-01
The sequential computation of hashes, which lies at the core of many distributed storage systems and is found, for example, in grid services, can hinder efficiency in service quality and even pose security challenges that can only be addressed by the use of parallel hash tree modes. The main contributions of this paper are, first, the identification of several efficiency and security challenges posed by the use of sequential hash computation based on the Merkle-Damgard engine. In addition, alternatives for the parallel computation of hash trees are discussed, and a prototype for a new parallel implementation of the Keccak function, the SHA-3 winner, is introduced.
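To make the contrast concrete: in a hash tree the leaf hashes are independent and can be computed in parallel, unlike the strictly sequential Merkle-Damgard chain. The Python sketch below is a generic Merkle construction over SHA3-256, not the Keccak tree mode or the authors' prototype; the leaf size and the domain-separation prefixes are assumptions (the 0x00/0x01 leaf/node prefixes follow a common convention, cf. RFC 6962).

```python
import hashlib
from concurrent.futures import ProcessPoolExecutor

LEAF_SIZE = 1 << 20  # 1 MiB leaves; the block size is a free parameter

def _leaf_hash(chunk):
    # Domain-separate leaves (0x00 prefix) from interior nodes (0x01 prefix).
    return hashlib.sha3_256(b"\x00" + chunk).digest()

def merkle_root(data):
    chunks = [data[i:i + LEAF_SIZE] for i in range(0, len(data), LEAF_SIZE)] or [b""]
    # Leaf hashes are independent, so they parallelize trivially -- this is
    # where tree modes beat the sequential Merkle-Damgard chain.
    with ProcessPoolExecutor() as pool:
        level = list(pool.map(_leaf_hash, chunks))
    # Combine pairwise up to the root (each tree level is also parallelizable).
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node on odd levels
        level = [hashlib.sha3_256(b"\x01" + level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

if __name__ == "__main__":
    print(merkle_root(b"x" * (8 * LEAF_SIZE)).hex())
```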
40 CFR 1045.140 - What is my engine's maximum engine power?
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 34 2013-07-01 2013-07-01 false What is my engine's maximum engine power? 1045.140 Section 1045.140 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM SPARK-IGNITION PROPULSION MARINE ENGINES AND VESSELS...
40 CFR 1045.140 - What is my engine's maximum engine power?
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 34 2012-07-01 2012-07-01 false What is my engine's maximum engine power? 1045.140 Section 1045.140 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM SPARK-IGNITION PROPULSION MARINE ENGINES AND VESSELS...
40 CFR 1045.140 - What is my engine's maximum engine power?
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 33 2011-07-01 2011-07-01 false What is my engine's maximum engine power? 1045.140 Section 1045.140 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM SPARK-IGNITION PROPULSION MARINE ENGINES AND VESSELS...
40 CFR 1045.140 - What is my engine's maximum engine power?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 32 2010-07-01 2010-07-01 false What is my engine's maximum engine power? 1045.140 Section 1045.140 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM SPARK-IGNITION PROPULSION MARINE ENGINES AND VESSELS...
Engineered plant biomass particles coated with bioactive agents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dooley, James H; Lanning, David N
Plant biomass particles coated with a bioactive agent such as a fertilizer or pesticide, characterized by a length dimension (L) aligned substantially parallel to a grain direction and defining a substantially uniform distance along the grain, a width dimension (W) normal to L and aligned cross grain, and a height dimension (H) normal to W and L. In particular, the L×H dimensions define a pair of substantially parallel side surfaces characterized by substantially intact longitudinally arrayed fibers, the W×H dimensions define a pair of substantially parallel end surfaces characterized by crosscut fibers and end checking between fibers, and the L×W dimensions define a pair of substantially parallel top and bottom surfaces.
Costa - Introduction to 2015 Annual Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Costa, James E.
Just as Sandia National Laboratories has two major locations (NM and CA), along with a number of smaller facilities across the nation, so too are its scientific, engineering, and computing resources distributed. As a part of Sandia's Institutional Computing Program, CA site-based Sandia computer scientists and engineers have been providing mission and research staff with local CA-resident expertise on computing options while also focusing on two growing high performance computing research problems. The first is how to increase system resilience to failure as machines grow larger, more complex, and heterogeneous. The second is how to ensure that computer hardware and configurations are optimized for specialized data analytical mission needs within the overall Sandia computing environment, including the HPC subenvironment. All of these activities support the larger Sandia effort in accelerating development and integration of high performance computing into national security missions. Sandia continues to both promote national R&D objectives, including the recent Presidential Executive Order establishing the National Strategic Computing Initiative, and work to ensure that the full range of computing services and capabilities are available for all mission responsibilities, from national security to energy to homeland defense.
Engineering approach for cost effective operation of industrial pump systems
NASA Astrophysics Data System (ADS)
Krickis, O.; Oleksijs, R.
2017-10-01
Power plant operators are pressed to operate major equipment such as centrifugal pumps in an economically effective way. Pump sets in a power plant's district heating network should be operated according to the prescriptions of the original equipment manufacturer, with these requirements then implemented in the plant's distributed control system. To operate industrial pump sets with few malfunctions, it is necessary to control the duty point of the pump sets in H-Q coordinates, which can be a complex task in some installations. Alternatively, pump operation control can be organized in H-n (head versus rpm) coordinates, using pressure transmitters in the pressure pipeline and the rpm value from the variable-speed drive. The safe operating range of the pump has to be limited by system parabolas, which prevent the duty point from falling outside the predefined operating area. This study demonstrates an engineering approach to developing safe pump operation control in the MATLAB/Simulink environment, which allows the pump's operation to be simulated at different capacities in a hydraulic system with a variable characteristic, and the conditions for efficient simultaneous operation of pumps in parallel connection to be predefined.
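A hedged numerical sketch of such an H-n safe-range check, in Python rather than MATLAB/Simulink: the pump curve is affinity-scaled with speed, and the measured head at a given rpm must lie between the duty heads on two limiting system parabolas. All constants and the quadratic curve shapes are illustrative assumptions, not values from the study.

```python
# Hypothetical pump and network constants (not from the paper):
A, B = 80.0, 2.5e-4        # pump curve  H = A*(n/N0)**2 - B*Q**2  [m, m/(m3/h)^2]
N0 = 1480.0                # nominal speed [rpm]
H_STAT = 12.0              # static head of the network [m]
K_MIN, K_MAX = 1.0e-4, 6.0e-4   # limiting system parabolas H = H_STAT + k*Q**2

def duty_head(n, k):
    """Head at the intersection of the pump curve (affinity-scaled to speed n)
    with the system parabola of resistance k."""
    r = n / N0
    q2 = (A * r * r - H_STAT) / (B + k)   # solve A*r^2 - B*Q^2 = H_STAT + k*Q^2
    if q2 <= 0:
        return None                        # pump cannot overcome the static head
    return H_STAT + k * q2

def head_in_safe_range(n, h_measured):
    """Safe-range check in H-n coordinates: the measured discharge head at
    speed n must lie between the two limiting system parabolas."""
    h_hi = duty_head(n, K_MAX)   # steepest admissible network -> highest head
    h_lo = duty_head(n, K_MIN)   # flattest admissible network -> lowest head
    if h_hi is None or h_lo is None:
        return False
    return h_lo <= h_measured <= h_hi

print(head_in_safe_range(1200.0, 40.0))   # True for these toy constants
```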
2009-01-01
The twenty-two monoclonal antibodies (mAbs) currently marketed in the U.S. have captured almost half of the top-20 U.S. therapeutic biotechnology sales for 2007. Eight of these products have annual sales each of more than $1 B, were developed in the relatively short average period of six years, qualified for FDA programs designed to accelerate drug approval, and their cost has been reimbursed liberally by payers. With growth of the product class driven primarily by advancements in protein engineering and the low probability of generic threats, mAbs are now the largest class of biological therapies under development. The high cost of these drugs and the lack of generic competition conflict with a financially stressed health system, making reimbursement by payers the major limiting factor to growth. Advances in mAb engineering are likely to result in more effective mAb drugs and an expansion of the therapeutic indications covered by the class. The parallel development of biomarkers for identifying the patient subpopulations most likely to respond to treatment may lead to a more cost-effective use of these drugs. To achieve the success of the current top-tier mAbs, companies developing new mAb products must adapt to a significantly more challenging commercial environment. PMID:20061824
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O. (Editor); Housner, Jerrold M. (Editor)
1993-01-01
Computing speed is leaping forward by several orders of magnitude each decade. Engineers and scientists gathered at a NASA Langley symposium to discuss these exciting trends as they apply to parallel computational methods for large-scale structural analysis and design. Among the topics discussed were: large-scale static analysis; dynamic, transient, and thermal analysis; domain decomposition (substructuring); and nonlinear and numerical methods.
Program For Parallel Discrete-Event Simulation
NASA Technical Reports Server (NTRS)
Beckman, Brian C.; Blume, Leo R.; Geiselman, John S.; Presley, Matthew T.; Wedel, John J., Jr.; Bellenot, Steven F.; Diloreto, Michael; Hontalas, Philip J.; Reiher, Peter L.; Weiland, Frederick P.
1991-01-01
User does not have to add any special logic to aid in synchronization. Time Warp Operating System (TWOS) computer program is special-purpose operating system designed to support parallel discrete-event simulation. Complete implementation of Time Warp mechanism. Supports only simulations and other computations designed for virtual time. Time Warp Simulator (TWSIM) subdirectory contains sequential simulation engine interface-compatible with TWOS. TWOS and TWSIM written in, and support simulations in, C programming language.
Garst, Andrew D; Bassalo, Marcelo C; Pines, Gur; Lynch, Sean A; Halweg-Edwards, Andrea L; Liu, Rongming; Liang, Liya; Wang, Zhiwen; Zeitoun, Ramsey; Alexander, William G; Gill, Ryan T
2017-01-01
Improvements in DNA synthesis and sequencing have underpinned comprehensive assessment of gene function in bacteria and eukaryotes. Genome-wide analyses require high-throughput methods to generate mutations and analyze their phenotypes, but approaches to date have been unable to efficiently link the effects of mutations in coding regions or promoter elements in a highly parallel fashion. We report that CRISPR-Cas9 gene editing in combination with massively parallel oligomer synthesis can enable trackable editing on a genome-wide scale. Our method, CRISPR-enabled trackable genome engineering (CREATE), links each guide RNA to homologous repair cassettes that both edit loci and function as barcodes to track genotype-phenotype relationships. We apply CREATE to site saturation mutagenesis for protein engineering, reconstruction of adaptive laboratory evolution experiments, and identification of stress tolerance and antibiotic resistance genes in bacteria. We provide preliminary evidence that CREATE will work in yeast. We also provide a webtool to design multiplex CREATE libraries.
Blocksome, Michael A [Rochester, MN]
2011-12-20
Methods, apparatus, and products are disclosed for determining when a set of compute nodes participating in a barrier operation on a parallel computer are ready to exit the barrier operation that includes, for each compute node in the set: initializing a barrier counter with no counter underflow interrupt; configuring, upon entering the barrier operation, the barrier counter with a value in dependence upon a number of compute nodes in the set; broadcasting, by a DMA engine on the compute node to each of the other compute nodes upon entering the barrier operation, a barrier control packet; receiving, by the DMA engine from each of the other compute nodes, a barrier control packet; modifying, by the DMA engine, the value for the barrier counter in dependence upon each of the received barrier control packets; exiting the barrier operation if the value for the barrier counter matches the exit value.
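A toy software analogue of this counter-based barrier in Python threads, with queues standing in for the network and the exit value fixed at zero; the class and method names are hypothetical, and no DMA hardware behavior is modeled.

```python
import threading, queue

class Node:
    """Each node seeds its counter with the group size, broadcasts one
    control packet on entry, decrements once per packet received, and
    exits when the counter reaches the exit value (0 in this sketch)."""
    def __init__(self, rank, inboxes):
        self.rank = rank
        self.inboxes = inboxes          # one queue per node (the 'network')

    def barrier(self):
        counter = len(self.inboxes)     # configured upon entering the barrier
        for q in self.inboxes:          # broadcast a barrier control packet
            q.put(self.rank)            # (including to ourselves, for symmetry)
        while counter != 0:
            self.inboxes[self.rank].get()   # receive a barrier control packet
            counter -= 1                # the 'DMA engine' modifies the counter

def worker(rank, inboxes, log):
    node = Node(rank, inboxes)
    log.append(f"{rank} entered")
    node.barrier()
    log.append(f"{rank} exited")

inboxes = [queue.Queue() for _ in range(4)]
log = []
threads = [threading.Thread(target=worker, args=(r, inboxes, log)) for r in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(log)   # every 'entered' precedes every 'exited'
```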
Clean catalytic combustor program
NASA Technical Reports Server (NTRS)
Ekstedt, E. E.; Lyon, T. F.; Sabla, P. E.; Dodds, W. J.
1983-01-01
A combustor program was conducted to evolve and to identify the technology needed for, and to establish the credibility of, using combustors with catalytic reactors in modern high-pressure-ratio aircraft turbine engines. Two selected catalytic combustor concepts were designed, fabricated, and evaluated. The combustors were sized for use in the NASA/General Electric Energy Efficient Engine (E3). One of the combustor designs was a basic parallel-staged double-annular combustor. The second design was also a parallel-staged combustor but employed reverse flow cannular catalytic reactors. Subcomponent tests of fuel injection systems and of catalytic reactors for use in the combustion system were also conducted. Very low-level pollutant emissions and excellent combustor performance were achieved. However, it was obvious from these tests that extensive development of fuel/air preparation systems and considerable advancement in the steady-state operating temperature capability of catalytic reactor materials will be required prior to the consideration of catalytic combustion systems for use in high-pressure-ratio aircraft turbine engines.
Big-BOE: Fusing Spanish Official Gazette with Big Data Technology.
Basanta-Val, Pablo; Sánchez-Fernández, Luis
2018-06-01
The proliferation of new data sources stemming from the adoption of open-data schemes, in combination with increasing computing capacity, has given rise to new types of analytics that process Internet-of-Things data with low-cost engines to speed up data processing using parallel computing. In this context, the article presents an initiative, called BIG-Boletín Oficial del Estado (BOE), designed to process the Spanish official government gazette (BOE) with state-of-the-art processing engines, to reduce computation time and to offer additional speedup for big data analysts. The goal of including a big data infrastructure is to be able to process different BOE documents in parallel with specific analytics, to search for several issues across different documents. The application infrastructure processing engine is described from an architectural perspective and from a performance perspective, showing evidence of how this type of infrastructure improves the performance of different types of simple analytics as several machines cooperate.
A parallel computing engine for a class of time critical processes.
Nabhan, T M; Zomaya, A Y
1997-01-01
This paper focuses on the efficient parallel implementation of numerically intensive systems over loosely coupled multiprocessor architectures. Such analytical models are of significant importance to many real-time systems that have to meet severe time constraints. A parallel computing engine (PCE) has been developed in this work for the efficient simplification and near-optimal scheduling of numerical models over the cooperating processors of a parallel computer. First, the analytical system is coded in its general form. The model is then simplified by using any available information (e.g., constant parameters). A task graph representing the interconnections among the different components (or equations) is generated. The graph can then be compressed to control the computation/communication requirements. The task scheduler employs a graph-based iterative scheme, based on the simulated annealing algorithm, to map the vertices of the task graph onto a Multiple-Instruction-stream Multiple-Data-stream (MIMD) type of architecture. The algorithm uses a nonanalytical cost function that properly considers the computation capability of the processors, the network topology, the communication time, and congestion possibilities. Moreover, the proposed technique is simple, flexible, and computationally viable. The efficiency of the algorithm is demonstrated by two case studies with good results.
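A compact Python sketch of the scheduling core: simulated annealing over task-to-processor mappings with a nonanalytical cost that mixes per-processor computation time and cross-processor communication. The cost weights, cooling schedule, and toy instance are assumptions, not the PCE's actual cost function.

```python
import math, random

def sa_map(task_graph, compute, n_procs, speed, steps=20000):
    """Simulated-annealing mapping of a task graph onto processors.
    task_graph: {(i, j): bytes} edges; compute: per-task work; speed: per-proc speed.
    The cost is a toy makespan proxy: max per-processor time plus a traffic term."""
    n_tasks = len(compute)
    place = [random.randrange(n_procs) for _ in range(n_tasks)]

    def cost(p):
        load = [0.0] * n_procs
        for t, w in enumerate(compute):
            load[p[t]] += w / speed[p[t]]          # computation time per processor
        comm = sum(b for (i, j), b in task_graph.items() if p[i] != p[j])
        return max(load) + 1e-6 * comm             # makespan proxy + congestion term

    c, temp = cost(place), 1.0
    for _ in range(steps):
        t = random.randrange(n_tasks)
        old = place[t]
        place[t] = random.randrange(n_procs)       # perturb: move one task
        c_new = cost(place)
        # Metropolis acceptance: take improvements, occasionally uphill moves.
        if c_new > c and random.random() >= math.exp((c - c_new) / temp):
            place[t] = old                          # reject the move
        else:
            c = c_new
        temp *= 0.9995                              # geometric cooling schedule
    return place, c

# Toy instance: 12 tasks, a ring of communication edges, 3 unequal processors.
compute = [1.0 + (i % 3) for i in range(12)]
edges = {(i, (i + 1) % 12): 100.0 for i in range(12)}
mapping, best_cost = sa_map(edges, compute, n_procs=3, speed=[1.0, 1.0, 0.5])
```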
Milestones in Software Engineering and Knowledge Engineering History: A Comparative Review
del Águila, Isabel M.; Palma, José; Túnez, Samuel
2014-01-01
We present a review of the historical evolution of software engineering, intertwining it with the history of knowledge engineering because “those who cannot remember the past are condemned to repeat it.” This retrospective represents a further step forward to understanding the current state of both types of engineerings; history has also positive experiences; some of them we would like to remember and to repeat. Two types of engineerings had parallel and divergent evolutions but following a similar pattern. We also define a set of milestones that represent a convergence or divergence of the software development methodologies. These milestones do not appear at the same time in software engineering and knowledge engineering, so lessons learned in one discipline can help in the evolution of the other one. PMID:24624046
A multiarchitecture parallel-processing development environment
NASA Technical Reports Server (NTRS)
Townsend, Scott; Blech, Richard; Cole, Gary
1993-01-01
A description is given of the hardware and software of a multiprocessor test bed - the second generation Hypercluster system. The Hypercluster architecture consists of a standard hypercube distributed-memory topology, with multiprocessor shared-memory nodes. By using standard, off-the-shelf hardware, the system can be upgraded to use rapidly improving computer technology. The Hypercluster's multiarchitecture nature makes it suitable for researching parallel algorithms in computational field simulation applications (e.g., computational fluid dynamics). The dedicated test-bed environment of the Hypercluster and its custom-built software allows experiments with various parallel-processing concepts such as message passing algorithms, debugging tools, and computational 'steering'. Such research would be difficult, if not impossible, to achieve on shared, commercial systems.
NASA Astrophysics Data System (ADS)
Tellman, B.; Sullivan, J.; Kettner, A.; Brakenridge, G. R.; Slayback, D. A.; Kuhn, C.; Doyle, C.
2016-12-01
There is an increasing need to understand flood vulnerability as the societal and economic effects of flooding increase. Risk models from insurance companies and flood models from hydrologists must be calibrated against flood observations in order to make future predictions that can improve planning and help societies reduce future disasters. Specifically, both traditional physically based flood models and data-driven techniques such as machine learning require spatial flood observations to validate model outputs and quantify uncertainty. A key missing dataset for flood model validation is a global, historical geo-database of flood event extents. Currently, the most advanced database of historical flood extent is hosted and maintained at the Dartmouth Flood Observatory (DFO), which has catalogued 4320 floods (1985-2015) but has mapped only 5% of them. We are addressing this data gap by mapping the inventory of floods in the DFO database to create a first-of-its-kind, comprehensive, global, and historical geospatial database of flood events. To do so, we combine water detection algorithms applied to MODIS and Landsat 5, 7, and 8 imagery in Google Earth Engine to map discrete flood events. The resulting database will be available in the Earth Engine Catalog for download by country, region, or time period. This dataset can be leveraged for new data-driven hydrologic modeling using machine learning algorithms in Earth Engine's highly parallelized computing environment, and we will show examples for New York and Senegal.
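A minimal sketch of per-event water mapping with the Earth Engine Python API. The Landsat collection ID and band names are real Earth Engine identifiers, but the NDWI thresholding, dates, and region are illustrative stand-ins for a DFO flood record, not the authors' production algorithm.

```python
import ee

ee.Initialize()

# Bounding box and date window standing in for one DFO flood event record.
region = ee.Geometry.Rectangle([-16.6, 13.9, -13.7, 16.7])   # e.g. Senegal River valley
event = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
         .filterBounds(region)
         .filterDate("2015-08-01", "2015-09-15")
         .median())

# NDWI (green vs. NIR); for Landsat 8 Level-2 these are SR_B3 and SR_B5.
ndwi = event.normalizedDifference(["SR_B3", "SR_B5"])
water = ndwi.gt(0.1).selfMask().rename("flood_water")         # assumed threshold

# Export the binary water mask; Earth Engine parallelizes the computation server-side.
task = ee.batch.Export.image.toDrive(
    image=water, description="flood_extent", region=region,
    scale=30, maxPixels=1e9)
task.start()
```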
2016-09-30
In parallel with the oil change interval study, an engineering evaluation of a handheld oil condition analyzer was conducted. Within the limitations...of the study of diesel engine powered AGE assets at two U.S. Air Force locations, assets monitored were not impacted by eliminating the 6-month oil...limitations of the study, conclusions can be made from the cumulative knowledge of analyzing crankcase lubricants of diesel engine powered AGE assets
Propulsion technology for an advanced subsonic transport
NASA Technical Reports Server (NTRS)
Beheim, M. A.; Antl, R. J.; Povolny, J. H.
1972-01-01
Engine design studies for future subsonic commercial transport aircraft were conducted in parallel with airframe studies. These studies surveyed a broad distribution of design variables, including aircraft configuration, payload, range, and speed, with particular emphasis on reducing noise and exhaust emissions without severe economic and performance penalties. The results indicated that an engine for an advanced transport would be similar to the currently emerging turbofan engines. Application of current technology in the areas of noise suppression and combustors imposed severe performance and economic penalties.
NASA Technical Reports Server (NTRS)
Rakow, A.
1983-01-01
The current arrangement of a Platecoil heat exchanger is examined, in which LN2 flows inside parallel tubes in counterflow to the test cell engine exhaust gases, which are drawn by the existing vacuum blowers through a box surrounding the plates. Because of inadequate performance, and on the basis of special test data, it was decided to redesign the system to accommodate an Apollo RCS engine.
Naming in a Programming Support Environment.
1984-02-01
and Control, 1974. 10. T. E. Cheatham. An Overview of the Harvard Program Development System. In Software Engineering Environments, H. Hunke, Ed., North-Holland Publishing Company, 1981, pp. 253-266. 11. T. E. Cheatham. Comparing Programming Support Environments. In Software Engineering Environments...Company, 1981, Third Edition 16. F. DeRemer and H. Kron. Programming-in-the-Large Versus Programming-in-the-Small. IEEE Transactions on Software Engineering
NASA Astrophysics Data System (ADS)
Olson, Richard F.
2013-05-01
Rendering of point scatterer based radar scenes for millimeter wave (mmW) seeker tests in real-time hardware-in-the-loop (HWIL) scene generation requires efficient algorithms and vector-friendly computer architectures for complex signal synthesis. New processor technology from Intel implements an extended 256-bit vector SIMD instruction set (AVX, AVX2) in a multi-core CPU design providing peak execution rates of hundreds of GigaFLOPS (GFLOPS) on one chip. Real world mmW scene generation code can approach peak SIMD execution rates only after careful algorithm and source code design. An effective software design will maintain high computing intensity emphasizing register-to-register SIMD arithmetic operations over data movement between CPU caches or off-chip memories. Engineers at the U.S. Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) applied two basic parallel coding methods to assess new 256-bit SIMD multi-core architectures for mmW scene generation in HWIL. These include use of POSIX threads built on vector library functions and more portable, high-level parallel code based on compiler technology (e.g. OpenMP pragmas and SIMD autovectorization). Since CPU technology is rapidly advancing toward high processor core counts and TeraFLOPS peak SIMD execution rates, it is imperative that coding methods be identified which produce efficient and maintainable parallel code. This paper describes the algorithms used in point scatterer target model rendering, the parallelization of those algorithms, and the execution performance achieved on an AVX multi-core machine using the two basic parallel coding methods. The paper concludes with estimates for scale-up performance on upcoming multi-core technology.
ERIC Educational Resources Information Center
Durruty, Ignacio; Ayude, María A.
2014-01-01
The case study discussed in this work is used in the chemical reaction engineering course, offered in the fifth year of the chemical engineering undergraduate program at the National University of Mar del Plata (UNMdP). A serial-parallel reaction system based on the anaerobic degradation of particulate-containing potato processing wastewater is presented.…
Adapting high-level language programs for parallel processing using data flow
NASA Technical Reports Server (NTRS)
Standley, Hilda M.
1988-01-01
EASY-FLOW, a very high-level data flow language, is introduced for the purpose of adapting programs written in a conventional high-level language to a parallel environment. The level of parallelism provided is of the large-grained variety in which parallel activities take place between subprograms or processes. A program written in EASY-FLOW is a set of subprogram calls as units, structured by iteration, branching, and distribution constructs. A data flow graph may be deduced from an EASY-FLOW program.
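EASY-FLOW's own syntax is not reproduced in this record, but the large-grained model it describes, subprogram calls ordered only by their data dependences, can be sketched with standard C++ futures. The subprogram names below are hypothetical stand-ins, not EASY-FLOW constructs.

```cpp
// Illustrative large-grained dataflow: subprograms A and B share no data,
// so they may run concurrently; C consumes both results, which is the
// only ordering constraint. The implied dataflow graph is {A, B} -> C.
#include <future>

int subprogA(int x) { return x + 1; }        // hypothetical subprogram
int subprogB(int x) { return x * 2; }        // hypothetical subprogram
int subprogC(int a, int b) { return a + b; } // joins the two data flows

int main() {
    auto fa = std::async(std::launch::async, subprogA, 10);
    auto fb = std::async(std::launch::async, subprogB, 10);
    return subprogC(fa.get(), fb.get());
}
```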
NASA Technical Reports Server (NTRS)
Hribar, Michelle R.; Frumkin, Michael; Jin, Haoqiang; Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)
1998-01-01
Over the past decade, high performance computing has evolved rapidly; systems based on commodity microprocessors have been introduced in quick succession from at least seven vendors/families. Porting codes to every new architecture is a difficult problem; in particular, here at NASA, there are many large CFD applications that are very costly to port to new machines by hand. The LCM ("Legacy Code Modernization") Project is the development of an integrated parallelization environment (IPE) which performs the automated mapping of legacy CFD (Fortran) applications to state-of-the-art high performance computers. While most projects to port codes focus on the parallelization of the code, we consider porting to be an iterative process consisting of several steps: 1) code cleanup, 2) serial optimization, 3) parallelization, 4) performance monitoring and visualization, 5) intelligent tools for automated tuning using performance prediction, and 6) machine specific optimization. The approach for building this parallelization environment is to build the components for each of the steps simultaneously and then integrate them together. The demonstration will exhibit our latest research in building this environment: 1) parallelizing tools and compiler evaluation, 2) code cleanup and serial optimization using automated scripts, 3) development of a code generator for performance prediction, 4) automated partitioning, and 5) automated insertion of directives. These demonstrations will exhibit the effectiveness of an automated approach for all the steps involved with porting and tuning a legacy code application for a new architecture.
NASA Technical Reports Server (NTRS)
Delaat, John C.; Merrill, Walter C.
1990-01-01
The objective of the Advanced Detection, Isolation, and Accommodation Program is to improve the overall demonstrated reliability of digital electronic control systems for turbine engines. For this purpose, an algorithm was developed which detects, isolates, and accommodates sensor failures by using analytical redundancy. The performance of this algorithm was evaluated on a real time engine simulation and was demonstrated on a full scale F100 turbofan engine. The real time implementation of the algorithm is described. The implementation used state-of-the-art microprocessor hardware and software, including parallel processing and high order language programming.
NASA Astrophysics Data System (ADS)
Gaševic, Dragan; Djuric, Dragan; Devedžic, Vladan
A relevant initiative from the software engineering community called Model Driven Engineering (MDE) is being developed in parallel with the Semantic Web (Mellor et al. 2003a). The MDE approach to software development suggests that one should first develop a model of the system under study, which is then transformed into the real thing (i.e., an executable software entity). The most important research initiative in this area is the Model Driven Architecture (MDA), which is being developed under the umbrella of the Object Management Group (OMG). This chapter describes the basic concepts of this software engineering effort.
A real-time MPEG software decoder using a portable message-passing library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwong, Man Kam; Tang, P.T. Peter; Lin, Biquan
1995-12-31
We present a real-time MPEG software decoder that uses message-passing libraries such as MPL, p4 and MPI. The parallel MPEG decoder currently runs on the IBM SP system but can be easily ported to other parallel machines. This paper discusses our parallel MPEG decoding algorithm as well as the parallel programming environment under which it runs. Several technical issues are discussed, including balancing of decoding speed, memory limitations, I/O capacities, and optimization of MPEG decoding components. This project shows that a real-time portable software MPEG decoder is feasible on a general-purpose parallel machine.
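The decoder's message-passing structure is not detailed in the abstract; a minimal MPI sketch of one plausible arrangement follows: a master rank dispatching independently decodable units to workers. The unit granularity, tags, and counts are this sketch's assumptions, not the paper's design.

```cpp
// Minimal MPI master/worker skeleton (run with at least 2 ranks):
// rank 0 hands out unit indices; workers "decode" them.
// Tag 1 carries work, tag 2 signals shutdown.
#include <mpi.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    const int nunits = 64;  // assumed number of decodable units
    if (rank == 0) {
        for (int u = 0; u < nunits; ++u)  // round-robin dispatch
            MPI_Send(&u, 1, MPI_INT, 1 + u % (size - 1), 1, MPI_COMM_WORLD);
        int stop = -1;
        for (int w = 1; w < size; ++w)
            MPI_Send(&stop, 1, MPI_INT, w, 2, MPI_COMM_WORLD);
    } else {
        MPI_Status st;
        int u;
        do {
            MPI_Recv(&u, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
            if (st.MPI_TAG == 1) { /* decode unit u here */ }
        } while (st.MPI_TAG != 2);
    }
    MPI_Finalize();
    return 0;
}
```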
Programming Probabilistic Structural Analysis for Parallel Processing Computer
NASA Technical Reports Server (NTRS)
Sues, Robert H.; Chen, Heh-Chyun; Twisdale, Lawrence A.; Chamis, Christos C.; Murthy, Pappu L. N.
1991-01-01
The ultimate goal of this research program is to make Probabilistic Structural Analysis (PSA) computationally efficient and hence practical for the design environment by achieving large scale parallelism. The paper identifies the multiple levels of parallelism in PSA, identifies methodologies for exploiting this parallelism, describes the development of a parallel stochastic finite element code, and presents results of two example applications. It is demonstrated that speeds within five percent of those theoretically possible can be achieved. A special-purpose numerical technique, the stochastic preconditioned conjugate gradient method, is also presented and demonstrated to be extremely efficient for certain classes of PSA problems.
Software development environments: Status and trends
NASA Technical Reports Server (NTRS)
Duffel, Larry E.
1988-01-01
Currently, software engineers are the essential integrating factor tying several components together. The components consist of process, methods, computers, tools, support environments, and the software engineers themselves. Today the engineers empower the tools, rather than the tools empowering the engineers. Key issues in software engineering are quality, managing the software engineering process, and productivity. A strategy to address these issues is to promote the evolution of software engineering from an ad hoc, labor-intensive activity to a managed, technology-supported discipline. This strategy may be implemented by putting the process under management control, adopting appropriate methods, inserting technology that provides automated support for the process and methods, collecting automated tools into an integrated environment, and educating the personnel.
NASA Astrophysics Data System (ADS)
Lee, Po-Chih; Lee, Jyh-Jone
2012-06-01
This paper presents the analysis of three parallel manipulators with Schoenflies motion. Each parallel manipulator possesses two limbs in structure, and the end-effector has three DOFs (degrees of freedom) in translational motion and one DOF in rotational motion about a given direction axis with respect to the world coordinate system. The three isoconstrained parallel manipulators have the structures denoted as C{u/u}UwHw-//-C{v/v}UwHw, CuR{u/u}Uhw-//-CvR{v/v}Uhw and CuPuUhw-//-CvPvUhw. The kinematic equations are first introduced for each manipulator. Then, the Jacobian matrix, singularity, workspace, and performance index for each mechanism are derived and analysed for the first time. The results can help engineers evaluate this kind of parallel robot for possible applications in industry where pick-and-place motion is required.
Dynamic Load-Balancing for Distributed Heterogeneous Computing of Parallel CFD Problems
NASA Technical Reports Server (NTRS)
Ecer, A.; Chien, Y. P.; Boenisch, T.; Akay, H. U.
2000-01-01
The developed methodology is aimed at improving the efficiency of executing block-structured algorithms on parallel, distributed, heterogeneous computers. The basic approach of these algorithms is to divide the flow domain into many subdomains called blocks, and solve the governing equations over these blocks. The dynamic load balancing problem is defined as the efficient distribution of the blocks among the available processors over a period of several hours of computations. In environments with computers of different architecture, operating systems, CPU speed, memory size, load, and network speed, balancing the loads and managing the communication between processors becomes crucial. Load balancing software tools for mutually dependent parallel processes have been created to efficiently utilize an advanced computation environment and algorithms. These tools are dynamic in nature because of the changes in the computer environment during execution time. More recently, these tools were extended to a second operating system: NT. In this paper, the problems associated with this application will be discussed. Also, the developed algorithms were combined with the load sharing capability of LSF to efficiently utilize workstation clusters for parallel computing. Finally, results will be presented on running a NASA-based code, ADPAC, to demonstrate the developed tools for dynamic load balancing.
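A hedged sketch of the kind of block-to-processor assignment such a tool must compute follows. The greedy cost heuristic is a stand-in, not the authors' algorithm, and the per-block costs and relative processor speeds are assumed inputs.

```cpp
// Greedy heuristic: repeatedly give the costliest unassigned block to the
// processor whose normalized load (load / speed) is currently smallest.
#include <algorithm>
#include <vector>

std::vector<int> assign_blocks(const std::vector<double>& block_cost,
                               const std::vector<double>& proc_speed) {
    std::vector<int> owner(block_cost.size());
    std::vector<double> load(proc_speed.size(), 0.0);
    // Visit blocks from most to least expensive.
    std::vector<std::size_t> order(block_cost.size());
    for (std::size_t i = 0; i < order.size(); ++i) order[i] = i;
    std::sort(order.begin(), order.end(), [&](std::size_t a, std::size_t b) {
        return block_cost[a] > block_cost[b];
    });
    for (std::size_t b : order) {
        std::size_t best = 0;
        for (std::size_t p = 1; p < load.size(); ++p)
            if (load[p] / proc_speed[p] < load[best] / proc_speed[best])
                best = p;
        load[best] += block_cost[b];
        owner[b] = static_cast<int>(best);
    }
    return owner;
}
```

In a dynamic setting, such an assignment would be recomputed as measured loads and network conditions change; this static version only illustrates the cost-balancing idea.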
NASA Astrophysics Data System (ADS)
Elbaz, Reouven; Torres, Lionel; Sassatelli, Gilles; Guillemin, Pierre; Bardouillet, Michel; Martinez, Albert
The bus between the System on Chip (SoC) and the external memory is one of the weakest points of computer systems: an adversary can easily probe this bus in order to read private data (data confidentiality concern) or to inject data (data integrity concern). The conventional way to protect data against such attacks and to ensure data confidentiality and integrity is to implement two dedicated engines: one performing data encryption and another data authentication. This approach, while secure, prevents parallelizability of the underlying computations. In this paper, we introduce the concept of Block-Level Added Redundancy Explicit Authentication (BL-AREA) and we describe a Parallelized Encryption and Integrity Checking Engine (PE-ICE) based on this concept. BL-AREA and PE-ICE have been designed to provide an effective solution to ensure both security services while allowing for full parallelization on processor read and write operations and optimizing the hardware resources. Compared to standard encryption which ensures only confidentiality, we show that PE-ICE additionally guarantees code and data integrity for less than 4% of run-time performance overhead.
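The added-redundancy idea behind BL-AREA, binding a known value (a nonce) into each plaintext block before encryption so integrity can be checked blockwise and independently on decryption, can be sketched conceptually as follows. The block layout, function names, and toy cipher are this sketch's assumptions, not PE-ICE's actual design.

```cpp
// Conceptual sketch of block-level added redundancy: a nonce is packed
// into each plaintext block before encryption; on read, recovering the
// expected nonce authenticates the block. Blocks are independent, so
// reads and writes parallelize.
#include <array>
#include <cstdint>
#include <optional>

using Block = std::array<std::uint64_t, 2>;  // [payload, nonce]

// Toy XOR "cipher" so the sketch compiles. A real block cipher diffuses
// every plaintext bit into every ciphertext bit, so any tampering would
// scramble the recovered nonce; this stand-in has no such property.
static const Block kKey = {0x0123456789abcdefULL, 0xfedcba9876543210ULL};
Block block_encrypt(const Block& p) { return {p[0] ^ kKey[0], p[1] ^ kKey[1]}; }
Block block_decrypt(const Block& c) { return {c[0] ^ kKey[0], c[1] ^ kKey[1]}; }

// Write path: bind payload and nonce together under encryption.
Block protect(std::uint64_t payload, std::uint64_t nonce) {
    return block_encrypt({payload, nonce});
}

// Read path: the payload is released only if the nonce checks out;
// otherwise the block was tampered with (integrity failure).
std::optional<std::uint64_t> verify(const Block& c, std::uint64_t nonce) {
    const Block p = block_decrypt(c);
    return (p[1] == nonce) ? std::optional<std::uint64_t>(p[0]) : std::nullopt;
}
```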
Optimizing SIEM Throughput on the Cloud Using Parallelization.
Alam, Masoom; Ihsan, Asif; Khan, Muazzam A; Javaid, Qaisar; Khan, Abid; Manzoor, Jawad; Akhundzada, Adnan; Khan, Muhammad Khurram; Farooq, Sajid
2016-01-01
Processing large amounts of data in real time to identify security issues poses several performance challenges, especially when hardware infrastructure is limited. Managed Security Service Providers (MSSP), mostly hosting their applications on the Cloud, receive events at a very high rate that varies from a few hundred to a couple of thousand events per second (EPS). It is critical to process this data efficiently, so that attacks can be identified quickly and the necessary response can be initiated. This paper evaluates the performance of a security framework, OSTROM, built on the Esper complex event processing (CEP) engine, under parallel and non-parallel computational frameworks. We explain three architectures under which Esper can be used to process events. We investigated the effect on throughput, memory, and CPU usage in each configuration setting. The results indicate that the performance of the engine is limited by the number of events coming in rather than by the queries being processed. The architecture where one quarter of the total events are submitted to each instance and all the queries are processed by all the units shows the best results in terms of throughput, memory, and CPU usage.
40 CFR 1065.125 - Engine intake air.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 33 2011-07-01 2011-07-01 false Engine intake air. 1065.125 Section 1065.125 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Equipment Specifications § 1065.125 Engine intake air. (a) Use the intake-air...
INVITED TOPICAL REVIEW: Parallel magnetic resonance imaging
NASA Astrophysics Data System (ADS)
Larkman, David J.; Nunes, Rita G.
2007-04-01
Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time-consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. Ways to recognize potential failure modes and their associated artefacts are shown. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed, and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed.
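For reference, the linear model underlying SENSE-type reconstruction and the g-factor noise amplification the review mentions can be written compactly. These are standard textbook forms, not quotations from the article:

```latex
% Aliased coil signals s arise from R superimposed pixels \rho via coil
% sensitivities C (one row per receiver coil, noise covariance \Psi):
s = C\rho, \qquad \hat{\rho} = (C^{H}\Psi^{-1}C)^{-1}C^{H}\Psi^{-1}s
% Noise amplification at pixel j (the g-factor), always at least 1:
g_j = \sqrt{\left[(C^{H}\Psi^{-1}C)^{-1}\right]_{jj}\left[C^{H}\Psi^{-1}C\right]_{jj}} \ge 1
% SNR penalty for reduction factor R:
\mathrm{SNR}^{\mathrm{SENSE}}_j = \mathrm{SNR}^{\mathrm{full}}_j / (g_j\sqrt{R})
```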
DMA shared byte counters in a parallel computer
Chen, Dong; Gara, Alan G.; Heidelberger, Philip; Vranas, Pavlos
2010-04-06
A parallel computer system is constructed as a network of interconnected compute nodes. Each of the compute nodes includes at least one processor, a memory and a DMA engine. The DMA engine includes a processor interface for interfacing with the at least one processor, DMA logic, a memory interface for interfacing with the memory, a DMA network interface for interfacing with the network, injection and reception byte counters, injection and reception FIFO metadata, and status and control registers. The injection FIFO metadata maintains the memory locations of the injection FIFOs, including each FIFO's current head and tail, and the reception FIFO metadata maintains the reception FIFOs' memory locations, including each FIFO's current head and tail. The injection byte counters and reception byte counters may be shared between messages.
NASA Technical Reports Server (NTRS)
Smith, Garrett; Phillips, Alan
2002-01-01
There are currently three dominant TSTO class architectures. These are Series Burn (SB), Parallel Burn with crossfeed (PBw/cf), and Parallel Burn without crossfeed (PBncf). The goal of this study was to determine what factors uniquely affect PBncf architectures, how each of these factors interact, and to determine from a performance perspective whether a PBncf vehicle could be competitive with a PBw/cf or SB vehicle using equivalent technology and assumptions. In all cases, performance was evaluated on a relative basis for a fixed payload and mission by comparing gross and dry vehicle masses of a closed vehicle. Propellant combinations studied were LOX:LH2 propelled orbiter and booster (HH) and LOX:kerosene booster with LOX:LH2 orbiter (KH). The study conclusions were: 1) a PBncf orbiter should be throttled as deeply as possible after launch until the staging point; 2) a detailed structural model is essential to accurate architecture analysis and evaluation; 3) a PBncf TSTO architecture is feasible for systems that stage at Mach 7: 3a) HH architectures can achieve a mass growth relative to PBw/cf of < 20%, and 3b) KH architectures can achieve a mass growth relative to Series Burn of < 20%; 4) center of gravity (CG) control will be a major issue for a PBncf vehicle, due to the low orbiter specific thrust-to-weight ratio and to the position of the orbiter required to align the nozzle heights at liftoff; 5) thrust-to-weight ratios of 1.3 at liftoff and between 1.0 and 0.9 when staging at Mach 7 appear to be close to ideal for PBncf vehicles; 6) performance for all vehicles studied is better when staged at Mach 7 instead of Mach 5. The study showed that a Series Burn architecture has the lowest gross mass for HH cases, and has the lowest dry mass for KH cases. The potential disadvantages of SB are the required use of an air-start for the orbiter engines and potential CG control issues. A Parallel Burn with crossfeed architecture solves both these problems, but the mechanics of a large bipropellant crossfeed system pose significant technical difficulties. Parallel Burn without crossfeed vehicles start both booster and orbiter engines on the ground and thus avoid both the risk of orbiter air-start and the complexity of a crossfeed system. The drawback is that the orbiter must use 20% to 35% of its propellant before reaching the staging point. This induces a weight penalty in the orbiter in order to carry additional propellant, which causes a further weight penalty in the booster to achieve the same staging point. One way to reduce the orbiter propellant consumption during the first stage is to throttle down the orbiter engines as much as possible. Another possibility is to use smaller or fewer engines. Throttling the orbiter engines soon after liftoff minimizes CG control problems due to a low orbiter liftoff thrust, but may result in an unnecessarily high orbiter thrust after staging. Reducing the number or size of engines may cause CG control problems and drift at launch. The study suggested possible methods to maximize performance of PBncf vehicle architectures in order to meet mission design requirements.
Engineering and Biology: Counsel for a Continued Relationship
Levy, Arnon; Siegal, Mark L.; Soyer, Orkun S.; Wagner, Andreas
2015-01-01
Biologists frequently draw on ideas and terminology from engineering. Evolutionary systems biology—with its circuits, switches, and signal processing—is no exception. In parallel with the frequent links drawn between biology and engineering, there is ongoing criticism against this cross-fertilization, using the argument that over-simplistic metaphors from engineering are likely to mislead us as engineering is fundamentally different from biology. In this article, we clarify and reconfigure the link between biology and engineering, presenting it in a more favorable light. We do so by, first, arguing that critics operate with a narrow and incorrect notion of how engineering actually works, and of what the reliance on ideas from engineering entails. Second, we diagnose and diffuse one significant source of concern about appeals to engineering, namely that they are inherently and problematically metaphorical. We suggest that there is plenty of fertile ground left for a continued, healthy relationship between engineering and biology. PMID:26085824
Multivariable speed synchronisation for a parallel hybrid electric vehicle drivetrain
NASA Astrophysics Data System (ADS)
Alt, B.; Antritter, F.; Svaricek, F.; Schultalbers, M.
2013-03-01
In this article, a new drivetrain configuration of a parallel hybrid electric vehicle is considered and a novel model-based control design strategy is given. In particular, the control design covers the speed synchronisation task during a restart of the internal combustion engine. The proposed multivariable synchronisation strategy is based on feedforward and decoupled feedback controllers. The performance and the robustness properties of the closed-loop system are illustrated by nonlinear simulation results.
Parallel Implementation of the Wideband DOA Algorithm on the IBM Cell BE Processor
2010-05-01
Abstract: The Multiple Signal Classification (MUSIC) algorithm is a powerful technique for determining the Direction of Arrival (DOA) of signals... Broadband Engine Processor (Cell BE). The process of adapting the serial MUSIC algorithm to the Cell BE will be analyzed in terms of parallelism and... using the Multiple Signal Classification (MUSIC) algorithm [4]: computation of the focus matrix; computation of the number of sources; separation of signal...
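The MUSIC spatial pseudospectrum at the heart of this computation has a standard form, supplied here for context since the excerpt above does not state it:

```latex
% E_n: noise-subspace eigenvectors of the array covariance matrix;
% a(\theta): array steering vector. Peaks of P(\theta) give the DOAs.
P_{\mathrm{MUSIC}}(\theta) = \frac{1}{a^{H}(\theta)\, E_{n} E_{n}^{H}\, a(\theta)}
```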
Parallel computation and the basis system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, G.R.
1993-05-01
A software package has been written that can facilitate efforts to develop powerful, flexible, and easy-to-use programs that can run in single-processor, massively parallel, and distributed computing environments. Particular attention has been given to the difficulties posed by a program consisting of many science packages that represent subsystems of a complicated, coupled system. Methods have been found to maintain independence of the packages by hiding data structures without increasing the communications costs in a parallel computing environment. Concepts developed in this work are demonstrated by a prototype program that uses library routines from two existing software systems, Basis and Parallel Virtual Machine (PVM). Most of the details of these libraries have been encapsulated in routines and macros that could be rewritten for alternative libraries that possess certain minimum capabilities. The prototype software uses a flexible master-and-slaves paradigm for parallel computation and supports domain decomposition with message passing for partitioning work among slaves. Facilities are provided for accessing variables that are distributed among the memories of slaves assigned to subdomains. The software is named PROTOPAR.
Expressing Parallelism with ROOT
NASA Astrophysics Data System (ADS)
Piparo, D.; Tejedor, E.; Guiraud, E.; Ganis, G.; Mato, P.; Moneta, L.; Valls Pla, X.; Canal, P.
2017-10-01
The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.
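A minimal sketch of two of the parallelism levels discussed, implicit multi-threading and the multi-process executor, is given below. It assumes a ROOT release that ships TProcessExecutor (roughly 6.08 or later); the workload, summing squares, is a stand-in for real analysis code.

```cpp
// Illustrative ROOT parallelism sketch: enable implicit multi-threading,
// then map a task over inputs with a pool of four worker processes.
#include <ROOT/TProcessExecutor.hxx>
#include <TROOT.h>
#include <numeric>
#include <vector>

int main() {
    ROOT::EnableImplicitMT();        // let ROOT internals use threads
    ROOT::TProcessExecutor pool(4);  // four worker processes
    std::vector<int> xs{1, 2, 3, 4, 5, 6, 7, 8};
    auto squares = pool.Map([](int i) { return i * i; }, xs);
    int total = std::accumulate(squares.begin(), squares.end(), 0);
    return total == 204 ? 0 : 1;     // 1 + 4 + 9 + ... + 64 = 204
}
```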
Parallel/distributed direct method for solving linear systems
NASA Technical Reports Server (NTRS)
Lin, Avi
1990-01-01
A new family of parallel schemes for directly solving linear systems is presented and analyzed. It is shown that these schemes exhibit near optimal performance and enjoy several important features: (1) for large enough linear systems, the design of the appropriate parallel algorithm is insensitive to the number of processors, as its performance grows monotonically with them; (2) it is especially good for large matrices, with dimensions large relative to the number of processors in the system; (3) it can be used in both distributed parallel computing environments and tightly coupled parallel computing systems; and (4) this set of algorithms can be mapped onto any parallel architecture without any major programming difficulties or algorithmic changes.
Creating Learning Environment Connecting Engineering Design and 3D Printing
NASA Astrophysics Data System (ADS)
Pikkarainen, Ari; Salminen, Antti; Piili, Heidi
Engineering education today requires continuous development in didactics, pedagogy and the practical methods used. 3D printing provides an excellent opportunity to connect different engineering areas in practice and to produce learning-by-doing applications. The 3D-printing technology used in this study is FDM (fused deposition modeling). FDM is currently the most used 3D-printing technology by commercial numbers, and the qualities of the technology make it popular especially in academic environments. To achieve the best possible result, students incorporate the principles of DFAM (design for additive manufacturing) into their engineering design studies together with 3D printing. This paper presents a plan for creating a learning environment for mechanical engineering students combining the aspects of engineering design, 3D-CAD learning and AM (additive manufacturing). As a result, process charts for carrying out the 3D printing process from a technological point of view, and a design process for AM from an engineering design point of view, were created. These charts are used in engineering design education. The learning environment is developed to work also as a platform for Bachelor theses, a work-training environment for students, a prototyping service centre for cooperation partners, and a source of information for mechanical engineering education at Lapland University of Applied Sciences.
NASA Technical Reports Server (NTRS)
Veres, Joseph
2001-01-01
This report outlines the detailed simulation of an aircraft turbofan engine. The objectives were to develop a detailed flow model of a full turbofan engine that runs overnight on parallel workstation clusters, and to develop an integrated system of codes for combustor design and analysis to enable a significant reduction in design time and cost. The model will initially simulate the 3-D flow in the primary flow path, including the flow and chemistry in the combustor, and ultimately result in a multidisciplinary model of the engine. The overnight 3-D simulation capability of the primary flow path in a complete engine will enable a significant reduction in the design and development time of gas turbine engines. In addition, the NPSS (Numerical Propulsion System Simulation) multidisciplinary integration and analysis are discussed.
STS-26 Discovery, OV-103, SSME (2019) installed in position number one at KSC
1988-01-10
S88-29076 (10 Jan 1988) --- KSC employees work together to carefully guide a 7,000 pound main engine into the number one position in Discovery's aft compartment. Because of the engine's weight and size, special handling equipment is needed to perform the installation. Discovery is currently being prepared for the upcoming STS-26 mission in bay 1 of the Orbiter Processing Facility. This engine, 2019, arrived at KSC on Jan. 6 and was installed Jan. 10. The other two engines are scheduled to be installed later this month. The shuttle's three main liquid fueled engines provide the main propulsion for the orbiter vehicle. The cluster of three engines operate in parallel with the solid rocket boosters during the initial ascent.
GREEN SUPERCOMPUTING IN A DESKTOP BOX
DOE Office of Scientific and Technical Information (OSTI.GOV)
HSU, CHUNG-HSING; FENG, WU-CHUN; CHING, AVERY
2007-01-17
The computer workstation, introduced by Sun Microsystems in 1982, was the tool of choice for scientists and engineers as an interactive computing environment for the development of scientific codes. However, by the mid-1990s, the performance of workstations began to lag behind high-end commodity PCs. This, coupled with the disappearance of BSD-based operating systems in workstations and the emergence of Linux as an open-source operating system for PCs, arguably led to the demise of the workstation as we knew it. Around the same time, computational scientists started to leverage PCs running Linux to create a commodity-based (Beowulf) cluster that provided dedicated computer cycles, i.e., supercomputing for the rest of us, as a cost-effective alternative to large supercomputers, i.e., supercomputing for the few. However, as the cluster movement has matured, with respect to cluster hardware and open-source software, these clusters have become much more like their large-scale supercomputing brethren - a shared (and power-hungry) datacenter resource that must reside in a machine-cooled room in order to operate properly. Consequently, the above observations, when coupled with the ever-increasing performance gap between the PC and cluster supercomputer, provide the motivation for a 'green' desktop supercomputer - a turnkey solution that provides an interactive and parallel computing environment with the approximate form factor of a Sun SPARCstation 1 'pizza box' workstation. In this paper, they present the hardware and software architecture of such a solution as well as its prowess as a developmental platform for parallel codes. In short, imagine a 12-node personal desktop supercomputer that achieves 14 Gflops on Linpack but sips only 185 watts of power at load, resulting in a performance-power ratio that is over 300% better than their reference SMP platform.
40 CFR 35.935-18 - Value engineering.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 1 2012-07-01 2012-07-01 false Value engineering. 35.935-18 Section 35.935-18 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GRANTS AND OTHER FEDERAL ASSISTANCE... engineering. A grantee must comply with the applicable value engineering requirements of § 35.926. ...
40 CFR 35.935-18 - Value engineering.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 1 2011-07-01 2011-07-01 false Value engineering. 35.935-18 Section 35.935-18 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GRANTS AND OTHER FEDERAL ASSISTANCE... engineering. A grantee must comply with the applicable value engineering requirements of § 35.926. ...
40 CFR 35.935-18 - Value engineering.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 1 2014-07-01 2014-07-01 false Value engineering. 35.935-18 Section 35.935-18 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GRANTS AND OTHER FEDERAL ASSISTANCE... engineering. A grantee must comply with the applicable value engineering requirements of § 35.926. ...
40 CFR 35.935-18 - Value engineering.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Value engineering. 35.935-18 Section 35.935-18 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GRANTS AND OTHER FEDERAL ASSISTANCE... engineering. A grantee must comply with the applicable value engineering requirements of § 35.926. ...
40 CFR 35.935-18 - Value engineering.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 1 2013-07-01 2013-07-01 false Value engineering. 35.935-18 Section 35.935-18 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GRANTS AND OTHER FEDERAL ASSISTANCE... engineering. A grantee must comply with the applicable value engineering requirements of § 35.926. ...
40 CFR 89.325 - Engine intake air temperature measurement.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Engine intake air temperature measurement. 89.325 Section 89.325 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... Test Equipment Provisions § 89.325 Engine intake air temperature measurement. (a) Engine intake air...
ICASE Computer Science Program
NASA Technical Reports Server (NTRS)
1985-01-01
The Institute for Computer Applications in Science and Engineering computer science program is discussed in outline form. Information is given on such topics as problem decomposition, algorithm development, programming languages, and parallel architectures.
al3c: high-performance software for parameter inference using Approximate Bayesian Computation.
Stram, Alexander H; Marjoram, Paul; Chen, Gary K
2015-11-01
The development of Approximate Bayesian Computation (ABC) algorithms for parameter inference which are both computationally efficient and scalable in parallel computing environments is an important area of research. Monte Carlo rejection sampling, a fundamental component of ABC algorithms, is trivial to distribute over multiple processors but is inherently inefficient. While development of algorithms such as ABC Sequential Monte Carlo (ABC-SMC) help address the inherent inefficiencies of rejection sampling, such approaches are not as easily scaled on multiple processors. As a result, current Bayesian inference software offerings that use ABC-SMC lack the ability to scale in parallel computing environments. We present al3c, a C++ framework for implementing ABC-SMC in parallel. By requiring only that users define essential functions such as the simulation model and prior distribution function, al3c abstracts the user from both the complexities of parallel programming and the details of the ABC-SMC algorithm. By using the al3c framework, the user is able to scale the ABC-SMC algorithm in parallel computing environments for his or her specific application, with minimal programming overhead. al3c is offered as a static binary for Linux and OS-X computing environments. The user completes an XML configuration file and C++ plug-in template for the specific application, which are used by al3c to obtain the desired results. Users can download the static binaries, source code, reference documentation and examples (including those in this article) by visiting https://github.com/ahstram/al3c. astram@usc.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
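The al3c plug-in interface itself is not reproduced here. The following self-contained C++ sketch shows only the underlying idea, ABC rejection sampling with a user-supplied prior and simulator, which ABC-SMC then refines; all names, distributions, and constants are illustrative assumptions, not al3c's API.

```cpp
// Illustrative ABC rejection sampler: draw theta from the prior, simulate
// data, and keep theta when the simulated summary is within eps of the
// observation. This is the trivially parallel kernel the text mentions.
#include <cmath>
#include <random>
#include <vector>

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> prior(0.0, 10.0);  // assumed prior
    std::normal_distribution<double> noise(0.0, 1.0);
    const double observed = 4.2, eps = 0.1;                   // assumed data
    std::vector<double> posterior;
    while (posterior.size() < 1000) {
        double theta = prior(rng);
        double simulated = theta + noise(rng);  // toy simulation model
        if (std::fabs(simulated - observed) < eps) posterior.push_back(theta);
    }
    return 0;  // 'posterior' now approximates p(theta | observed)
}
```

Because each trial is independent, the loop distributes trivially over processors; the inefficiency of rejection at small eps is exactly what the sequential ABC-SMC refinement described above addresses.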
Biomorphic architectures for autonomous Nanosat designs
NASA Technical Reports Server (NTRS)
Hasslacher, Brosl; Tilden, Mark W.
1995-01-01
Modern space tool design is the science of making a machine both massively complex and at the same time extremely robust and dependable. We propose a novel nonlinear control technique that produces capable, self-organizing, micron-scale space machines at low cost and in large numbers by parallel silicon assembly. Experiments using biomorphic architectures (with ideal space attributes) have produced a wide spectrum of survival-oriented machines that are reliably domesticated for work applications in specific environments. In particular, several one-chip satellite prototypes show interesting control properties that can be turned into numerous application-specific machines for autonomous, disposable space tasks. We believe that the real power of these architectures lies in their potential to self-assemble into larger, robust, loosely coupled structures. Assembly takes place at hierarchical space scales, with different attendant properties, allowing for inexpensive solutions to many daunting work tasks. The nature of biomorphic control, design, engineering options, and applications are discussed.
Design of a novel bioreactor and application in vascular tissue engineering
NASA Astrophysics Data System (ADS)
Zhang, Zhi-Xiong; Xi, Ting-Fei; Wang, Ying-Jun; Chen, Xiao-Song; Zhang, Jian; Wang, Chun-Ren; Gu, Yong-Quan; Chen, Liang; Li, Jian-Xin; Chen, Bing
2008-11-01
Detachment of endothelial cells (ECs) under high shear stress in the early period after transplantation results in thrombosis and occlusion. To solve this problem, we developed a novel bioreactor. The bioreactor mimics the formation of pulsatile flow under physiological conditions. Human umbilical vein ECs were seeded onto the lumen of living tissue conduits grown within the dog peritoneal cavity. The shear stress generated by the bioreactor was increased step by step from 1.5 ± 0.8 dyn/cm^2 to 5.3 ± 2.4 dyn/cm^2, and was applied to ECs after static culture for 2 days. The results showed that the completely confluent monolayer of ECs was elongated and oriented parallel to the flow direction. The bioreactor can provide a good environment for the formation of endothelium. Stepwise increases in shear stress can strengthen cell-cell and cell-extracellular matrix adhesion. The flow conditions of the bioreactor play a key role in determining the quality of the ECs lining.
Parallelization of Rocket Engine System Software (Press)
NASA Technical Reports Server (NTRS)
Cezzar, Ruknet
1996-01-01
The main goal is to assess parallelization requirements for the Rocket Engine Numeric Simulator (RENS) project, which, aside from gathering information on liquid-propelled rocket engines and setting forth requirements, involves a large FORTRAN-based package at NASA Lewis Research Center and TDK software developed by SUBR/UWF. The ultimate aim is to develop, test, integrate, and suitably deploy a family of software packages on various aspects and facets of rocket engines using liquid propellants. At present, all project efforts by the funding agency, NASA Lewis Research Center, and the HBCU participants are disseminated over the internet using world wide web home pages. Considering the obvious expense of actual field trials, the benefits of software simulators are potentially enormous. When realized, these benefits will be analogous to those provided by numerous CAD/CAM packages and flight-training simulators. According to the overall task assignments, Hampton University's role is to collect all available software, place them in a common format, assess and evaluate, define interfaces, and provide integration. Most importantly, HU's mission is to see to it that real-time performance is assured. This involves source code translations, porting, and distribution. The porting will be done in two phases: first, place all software on the Cray X-MP platform using FORTRAN. After testing and evaluation on the Cray X-MP, the code will be translated to C++ and ported to the parallel nCUBE platform. At present, we are evaluating another option of distributed processing over local area networks using Sun NFS, Ethernet, and TCP/IP. Considering the heterogeneous nature of the present software (e.g., it first started as an expert system using LISP machines) which now involves FORTRAN code, the effort is expected to be quite challenging.
Peng, C A; Palsson, B Ø
1996-06-05
Tissue function comprises a complex interplay between biological and physicochemical rate processes. The design of bioreactors for tissue engineering must account for these processes simultaneously in order to obtain a bioreactor that provides a uniform environment for tissue growth and development. In the present study we consider the effects of fluid flow and mass transfer on the growth of a tissue in a parallel-plate bioreactor configuration. The parenchymal cells grow on a preformed stromal (feeder) layer that secretes a growth factor that stimulates parenchymal stem cell replication and differentiation. The biological dynamics are described by a unilineage model that describes the replication and differentiation of the tissue stem cell. The physicochemical rates are described by the Navier-Stokes and convective-diffusion equations. The model equations are solved by a finite element method. Two dimensionless groups govern the behavior of the solution. One is the Graetz number (Gz), which describes the relative rates of convection and diffusion, and the other is a new dimensionless ratio (designated by P) that describes the interplay of growth factor production, diffusion, and stimulation. Four geometries (slab, gondola, diamond, and radial shapes) for the parallel-plate bioreactor are analyzed. The uniformity of cell growth is measured by a two-dimensional coefficient of variance. The concentration distribution of the stroma-derived growth factor was computed first, based on fluid flow and bioreactor geometry. Then the concomitant cell density distribution was obtained by integrating the calculated growth factor concentration with the parenchymal cell growth and unilineage differentiation process. The spatiotemporal cell growth patterns in four different bioreactor configurations were investigated under a variety of combinations of Gz (10^-1, 10^0, and 10^1) and P (10^-2, 10^-1, 10^0, 10^1, and 10^2). The results indicate that high cell density and uniformity can be achieved for parameter values of P = 0.01, ..., 0.1 and Gz = 0.1, ..., 1.0. Among the four geometries investigated, the radial-flow-type bioreactor provides the most uniform environment in which parenchymal cells can grow and differentiate ex vivo, due to the absence of walls parallel to the flow paths creating slow-flowing regions.
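For orientation, one common mass-transfer form of the Graetz number is given below; the choice of channel height as the characteristic length is this sketch's assumption, not necessarily the authors' exact definition:

```latex
% U: mean velocity, d_h: channel height scale, L: channel length,
% D: growth-factor diffusivity. Gz compares axial convection to
% transverse diffusion, matching its role in the abstract above.
Gz = \frac{d_h}{L}\, Re\, Sc = \frac{U\, d_h^{2}}{D\, L}
```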
Big Computing in Astronomy: Perspectives and Challenges
NASA Astrophysics Data System (ADS)
Pankratius, Victor
2014-06-01
Hardware progress in recent years has led to astronomical instruments gathering large volumes of data. In radio astronomy, for instance, the current generation of antenna arrays produces data at Tbits per second, and forthcoming instruments will expand these rates much further. As instruments increasingly become software-based, astronomers will be more exposed to computer science. This talk therefore outlines key challenges that arise at the intersection of computer science and astronomy and presents perspectives on how both communities can collaborate to overcome these challenges. Major problems are emerging because data rates are growing much faster than storage and transmission capacity, and because humans are cognitively overwhelmed when attempting to opportunistically scan through Big Data. As a consequence, the generation of scientific insight will become more dependent on automation and algorithmic instrument control. Intelligent data reduction will have to be considered across the entire acquisition pipeline. In this context, the presentation will outline the enabling role of machine learning and parallel computing.
Bio: Victor Pankratius is a computer scientist who joined MIT Haystack Observatory following his passion for astronomy. He is currently leading efforts to advance astronomy through cutting-edge computer science and parallel computing. Victor is also involved in projects such as ALMA Phasing to enhance the ALMA Observatory with Very Long Baseline Interferometry capabilities, the Event Horizon Telescope, as well as the Radio Array of Portable Interferometric Detectors (RAPID) to create an analysis environment using parallel computing in the cloud. He has an extensive track record of research in parallel multicore systems and software engineering, with contributions to auto-tuning, debugging, and empirical experiments studying programmers. Victor has worked with major industry partners such as Intel, Sun Labs, and Oracle. He holds a distinguished doctorate and a Habilitation degree in Computer Science from the University of Karlsruhe. Contact him at pankrat@mit.edu, victorpankratius.com, or Twitter @vpankratius.
Yokohama, Noriya
2013-07-01
This report is aimed at structuring the design of architectures and studying the performance of a parallel computing environment using a Monte Carlo simulation for particle therapy, run on a high performance computing (HPC) instance within a public cloud-computing infrastructure. Performance measurements showed approximately 28 times the speed of a single-threaded architecture, combined with improved stability. A study of methods for optimizing the system operations also indicated lower cost.
40 CFR 1039.640 - What special provisions apply to branded engines?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 32 2010-07-01 2010-07-01 false What special provisions apply to branded engines? 1039.640 Section 1039.640 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... ENGINES Special Compliance Provisions § 1039.640 What special provisions apply to branded engines? The...
Parallelized direct execution simulation of message-passing parallel programs
NASA Technical Reports Server (NTRS)
Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.
1994-01-01
As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution where one directly executes the application code, but uses a discrete-event simulator to model details of the presumed parallel machine, such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.
Artemis: Integrating Scientific Data on the Grid (Preprint)
2004-07-01
Theseus execution engine [Barish and Knoblock 03] to efficiently execute the generated datalog program. The Theseus execution engine has a wide... variety of operations to query databases, web sources, and web services. Theseus also contains a wide variety of relational operations, such as... selection, union, or projection. Furthermore, Theseus optimizes the execution of an integration plan by querying several data sources in parallel and
NASA Astrophysics Data System (ADS)
Chieng Chen, Vincent Lee
2015-04-01
A very warm welcome to all participants of the 9th Curtin University Technology, Science and Engineering (CUTSE) Conference 2014. This annual conference dates back to 2006, when the first Curtin University of Technology Science and Engineering (CUTSE) Conference was held at Curtin University, Miri, Sarawak. The CUTSE Conference was initially intended for Curtin's undergraduates, so that they could experience presenting their work in a conference environment. As time passed, and in the interest of wider knowledge dissemination, the CUTSE Conference was opened to the public. This year the Department of Mechanical Engineering has been given the honour of organizing the 9th CUTSE Conference. It has been a pleasure to watch CUTSE grow from strength to strength over the years. This year, our theme is "Discovering, Innovating and Engineering". We hope that it is in this spirit that CUTSE participants may align their respective work, so that we all aim for a greater and better realization of "Discovering, Innovating and Engineering". The 9th CUTSE Conference 2014 is an excellent avenue for researchers, engineers, scientists, academicians, professionals from industry, and students to share their research findings and initiate further collaborations in their respective fields. Parallel sessions in mechanical, electrical, computer, civil and chemical engineering as well as the sciences will be hosted over a period of two days. Each year, the conference attracts participation from a number of countries in addition to Malaysia and Australia. In addition, student participants will get the opportunity to present their research projects and gain valuable feedback from industry professionals. This year the Conference will be organised by the Department of Mechanical Engineering of Curtin Sarawak's School of Engineering and Science in collaboration with The Institution of Engineers Malaysia, Miri Branch. On behalf of the organizing committee, I would like to thank this year's sponsors and supporters. We appreciate your support for CUTSE 2014 and for research and development, and your foresight in nurturing cutting-edge research into industrial applications. CUTSE 2014 would not be possible without the dedicated work and efforts of the organizing committee, who worked tirelessly on all aspects of the conference organization. I thank you for your hard work and commitment towards making CUTSE 2014 a success. Selamat Datai (Welcome), and enjoy the conference. Dr Vincent Lee Chieng Chen, Organizing Chairperson, 9th CUTSE Conference 2014
Bhatt, Indu; Tripathi, Bhumi Nath
2011-01-01
Nanoparticles are materials with at least two dimensions between 1 and 100 nm. Mostly these nanoparticles are natural products, but their tremendous commercial use has boosted the artificial synthesis of these particles (engineered nanoparticles). Accelerated production and use of these engineered nanoparticles may cause their release into the environment and facilitate frequent interactions with the biotic and abiotic components of ecosystems. Despite remarkable commercial benefits, their presence in nature may cause hazardous biological effects. Therefore, a detailed understanding of their sources, release, interaction with the environment, and possible risk assessment would provide a basis for safer use of engineered nanoparticles with minimal or no hazardous impact on the environment. Keeping all these points in mind, the present review provides updated information on various aspects of engineered nanoparticles, e.g. sources, different types, synthesis, interaction with the environment, and possible strategies for risk management. Copyright © 2010 Elsevier Ltd. All rights reserved.
GLobal Integrated Design Environment (GLIDE): A Concurrent Engineering Application
NASA Technical Reports Server (NTRS)
McGuire, Melissa L.; Kunkel, Matthew R.; Smith, David A.
2010-01-01
The GLobal Integrated Design Environment (GLIDE) is a client-server software application purpose-built to mitigate issues associated with real-time data sharing in concurrent engineering environments and to facilitate discipline-to-discipline interaction between multiple engineers and researchers. GLIDE is implemented in multiple programming languages, utilizing standardized web protocols to enable secure parameter data sharing between engineers and researchers across the Internet in closed and/or widely distributed working environments. A well-defined, HyperText Transfer Protocol (HTTP) based Application Programming Interface (API) to the GLIDE client/server environment enables users to interact with GLIDE, and each other, within common and familiar tools. One such common tool, Microsoft Excel (Microsoft Corporation), paired with its add-in API for GLIDE, is discussed in this paper. The top-level examples given demonstrate how this interface improves the efficiency of the design process of a concurrent engineering study while reducing the potential errors associated with manually sharing information between study participants.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirata, So
2003-11-20
We develop a symbolic manipulation program and program generator (Tensor Contraction Engine or TCE) that automatically derives the working equations of a well-defined model of second-quantized many-electron theories and synthesizes efficient parallel computer programs on the basis of these equations. Provided an ansatz of a many-electron theory model, TCE performs valid contractions of creation and annihilation operators according to Wick's theorem, consolidates identical terms, and reduces the expressions into the form of multiple tensor contractions acted by permutation operators. Subsequently, it determines the binary contraction order for each multiple tensor contraction with the minimal operation and memory cost, factorizes common binary contractions (defines intermediate tensors), and identifies reusable intermediates. The resulting ordered list of binary tensor contractions, additions, and index permutations is translated into an optimized program that is combined with the NWChem and UTChem computational chemistry software packages. The programs synthesized by TCE take advantage of spin symmetry, Abelian point-group symmetry, and index permutation symmetry at every stage of calculations to minimize the number of arithmetic operations and storage requirement, adjust the peak local memory usage by index range tiling, and support parallel I/O interfaces and dynamic load balancing for parallel executions. We demonstrate the utility of TCE through automatic derivation and implementation of parallel programs for various models of configuration-interaction theory (CISD, CISDT, CISDTQ), many-body perturbation theory [MBPT(2), MBPT(3), MBPT(4)], and coupled-cluster theory (LCCD, CCD, LCCSD, CCSD, QCISD, CCSDT, and CCSDTQ).
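As a generic illustration of the factorization step described above (defining intermediates to lower operation cost; this example is a textbook case, not one of TCE's actual outputs), consider a quadratic coupled-cluster-like term:

```latex
% Naive evaluation of the left-hand contraction costs O(n^8); factoring
% through the intermediate I reduces it to two O(n^6) binary contractions.
X_{ij}^{ab} = \sum_{klcd} \langle kl \| cd \rangle\, t_{ij}^{cd}\, t_{kl}^{ab}
\quad\Longrightarrow\quad
I_{ij}^{kl} = \sum_{cd} \langle kl \| cd \rangle\, t_{ij}^{cd}, \qquad
X_{ij}^{ab} = \sum_{kl} I_{ij}^{kl}\, t_{kl}^{ab}
```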
Environmental Engineering in the Slovak Republic
NASA Astrophysics Data System (ADS)
Stevulova, N.; Balintova, M.; Zelenakova, M.; Estokova, A.; Vilcekova, S.
2017-10-01
The fundamental role of environmental engineering is to protect human population and environment from impacts of human activities and to ensure environmental quality. It relates to achieving the environmental sustainability goals through advanced technologies for pollutants removing from air, water and soil in order to minimize risk in ecosystem and ensuring favourable conditions for life of humans and organisms. Nowadays, a critical analysis of the environment quality and innovative approaches to problem solving in order to achieve sustainability in environmental engineering, are necessary. This article presents an overview of the quality of the environment and progress in environmental engineering in Slovakia and gives information regarding the environmental engineering education at Faculty of Civil Engineering at Technical University in Kosice.
pcircle - A Suite of Scalable Parallel File System Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
WANG, FEIYI
2015-10-01
Most software related to file systems is written for conventional local file systems; it is serial and cannot take advantage of a large-scale parallel file system. The "pcircle" software builds on top of ubiquitous MPI in cluster computing environments and a "work-stealing" pattern to provide a scalable, high-performance suite of file system tools. In particular, it implements parallel data copy and parallel data checksumming, with advanced features such as asynchronous progress reporting, checkpoint and restart, and integrity checking.
Acoustic simulation in architecture with parallel algorithm
NASA Astrophysics Data System (ADS)
Li, Xiaohong; Zhang, Xinrong; Li, Dan
2004-03-01
Given the complexity of architectural environments and the demands of real-time simulation of architectural acoustics, a parallel radiosity algorithm was developed. The distribution of sound energy in the scene is computed with this method. The impulse responses between sources and receivers in each frequency band, calculated with multiple processes, are then combined into the whole frequency response. Numerical experiments show that the parallel algorithm can improve the efficiency of acoustic simulation for complex scenes.
Image Processing Using a Parallel Architecture.
1987-12-01
This study developed a set of low-level image processing tools on a parallel computer that allows concurrent processing of images. In a parallel environment, the set of tools offers a significant reduction in the time required to perform some commonly used image processing operations. As a step toward developing such systems, a structured set of image processing tools was implemented using a parallel computer.
ERIC Educational Resources Information Center
Brownlee, Matt; Yerkes, Rita
2003-01-01
An emotionally safe environment helps campers participate in adventure activities. Staff development tips for creating a safe environment include using cooperative goal setting; using parallel training processes; developing working lesson plans that outline facilitation techniques for creating emotionally safe environments; and using co-created…
Optimization by nonhierarchical asynchronous decomposition
NASA Technical Reports Server (NTRS)
Shankar, Jayashree; Ribbens, Calvin J.; Haftka, Raphael T.; Watson, Layne T.
1992-01-01
Large scale optimization problems are tractable only if they are somehow decomposed. Hierarchical decompositions are inappropriate for some types of problems and do not parallelize well. Sobieszczanski-Sobieski has proposed a nonhierarchical decomposition strategy for nonlinear constrained optimization that is naturally parallel. Despite some successes on engineering problems, the algorithm as originally proposed fails on simple two-dimensional quadratic programs. The algorithm is carefully analyzed for quadratic programs, and a number of modifications are suggested to improve its robustness.
Advances in Parallel Computing and Databases for Digital Pathology in Cancer Research
2016-11-13
The report reviews these technologies and how they have been used in the past, with an interest in learning more about the needs of clinical pathologists for tasks such as image processing and correlation. High Performance Computing (HPC) paradigms such as the Message Passing Interface (MPI), together with libraries such as pMatlab [4] or bcMPI [5], can significantly reduce the need for deep knowledge of parallel computing.
Parallel Visualization Co-Processing of Overnight CFD Propulsion Applications
NASA Technical Reports Server (NTRS)
Edwards, David E.; Haimes, Robert
1999-01-01
An interactive visualization system, pV3, is being developed for the investigation of advanced computational methodologies employing visualization and parallel processing for the extraction of information contained in large-scale transient engineering simulations. Visual techniques for extracting information from the data in terms of cutting planes, iso-surfaces, particle tracing and vector fields are included in this system. This paper discusses improvements to the pV3 system developed under NASA's Affordable High Performance Computing project.
Multicore: Fallout from a Computing Evolution
Yelick, Kathy [Director, NERSC]
2017-12-09
July 22, 2008 Berkeley Lab lecture: Parallel computing used to be reserved for big science and engineering projects, but in two years that's all changed. Even laptops and hand-helds use parallel processors. Unfortunately, the software hasn't kept pace. Kathy Yelick, Director of the National Energy Research Scientific Computing Center at Berkeley Lab, describes the resulting chaos and the computing community's efforts to develop exciting applications that take advantage of tens or hundreds of processors on a single chip.
2008-05-30
Ref: Tribological behavior and graphitization of carbon nanotubes grown on 440C stainless steel. Tribol. Lett., 19(2):119-125, 2005. The experiments used a stainless steel parallel-plate configuration (Figure 1: exploded view of the stainless steel plate and lower fixture). Due to the radial variation of the local shear stress in the parallel-plate geometry, measurements are made using a force transducer mounted below the surface.
Ethics education in the consulting engineering environment: where do we start?
Elder, Keith E
2004-04-01
As a result of in-house discussions stimulated by previous Gonzaga engineering ethics conferences, Coffman Engineers began the implementation of what is to be a company-wide ethics training program. While preparing a curriculum aimed at consulting engineers, we found very little guidance as to how to proceed, as most of the available literature is oriented towards the academic environment. We consulted a number of resources that address the teaching of engineering ethics in higher education, but questioned their applicability to the consulting engineering environment. This lack of guidance led us to informal research into the ethical knowledge and attitudes of both consulting engineers and engineering students. Some of our findings were unexpected, and they suggest that a simpler approach to teaching ethics to working professionals might be preferred to that typically promoted in higher education.
NASA Technical Reports Server (NTRS)
Adamovsky, Grigory; Mackey, Jeffrey R.; Kren, Lawrence A.; Floyd, Bertram M.; Elam, Kristie A.; Martinez, Martel
2014-01-01
A High Temperature Fiber Optic Sensor (HTFOS) has been developed at NASA Glenn Research Center for aircraft engine applications. After fabrication and preliminary in-house performance evaluation, the HTFOS was tested in an engine environment at NASA Armstrong Flight Research Center. The engine tests enabled the performance of the HTFOS in real engine environments to be evaluated along with the ability of the sensor to respond to changes in the engine's operating condition. Data were collected prior to, during, and after each test in order to observe the change in temperature from ambient to each of the various test-point levels. An adequate amount of data was collected and analyzed to satisfy the research team that the HTFOS operated properly while the engine was running. Temperature measurements made by the HTFOS while the engine was running agreed with those anticipated.
Parallel block schemes for large scale least squares computations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Golub, G.H.; Plemmons, R.J.; Sameh, A.
1986-04-01
Large scale least squares computations arise in a variety of scientific and engineering problems, including geodetic adjustments and surveys, medical image analysis, molecular structures, partial differential equations and substructuring methods in structural engineering. In each of these problems, matrices often arise which possess a block structure which reflects the local connection nature of the underlying physical problem. For example, such super-large nonlinear least squares computations arise in geodesy. Here the coordinates of positions are calculated by iteratively solving overdetermined systems of nonlinear equations by the Gauss-Newton method. The US National Geodetic Survey will complete this year (1986) the readjustment of the North American Datum, a problem which involves over 540 thousand unknowns and over 6.5 million observations (equations). The observation matrix for these least squares computations has a block angular form with 161 diagonal blocks, each containing 3 to 4 thousand unknowns. In this paper parallel schemes are suggested for the orthogonal factorization of matrices in block angular form and for the associated backsubstitution phase of the least squares computations. In addition, a parallel scheme for the calculation of certain elements of the covariance matrix for such problems is described. It is shown that these algorithms are ideally suited for multiprocessors with three levels of parallelism such as the Cedar system at the University of Illinois. 20 refs., 7 figs.
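The structure being exploited can be shown in a few lines. The sketch below is a simplified serial analogue of such a scheme, not the paper's algorithm: it factorizes each diagonal block independently (the step that parallelizes), reduces the problem to the shared unknowns y, and then back-substitutes for each block's local unknowns. It assumes each block has full column rank and more rows than local columns.

```python
import numpy as np

def block_angular_lstsq(A_blocks, B_blocks, b_blocks):
    """min sum_i ||A_i x_i + B_i y - b_i||^2 for a block angular system."""
    reduced_B, reduced_b, factors = [], [], []
    for A, B, b in zip(A_blocks, B_blocks, b_blocks):
        Q, R = np.linalg.qr(A, mode='complete')  # independent per block
        n = A.shape[1]
        QtB, Qtb = Q.T @ B, Q.T @ b
        reduced_B.append(QtB[n:])  # rows that couple only the shared y
        reduced_b.append(Qtb[n:])
        factors.append((R[:n, :], QtB[:n], Qtb[:n]))
    # small dense problem in the shared unknowns y
    y, *_ = np.linalg.lstsq(np.vstack(reduced_B),
                            np.concatenate(reduced_b), rcond=None)
    # backsubstitution phase: each block's local unknowns, again independent
    xs = [np.linalg.solve(Rn, c - S @ y) for Rn, S, c in factors]
    return xs, y
```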
Managing the Software Development Process
NASA Technical Reports Server (NTRS)
Lubelczky, Jeffrey T.; Parra, Amy
1999-01-01
The goal of any software development project is to produce a product that is delivered on time, within the allocated budget, and with the capabilities expected by the customer; unfortunately, this goal is rarely achieved. However, a properly managed project in a mature software engineering environment can consistently achieve this goal. In this paper we provide an introduction to three project success factors: a properly managed project, a competent project manager, and a mature software engineering environment. We also present an overview of the benefits of a mature software engineering environment based on 24 years of data from the Software Engineering Lab, and suggest some first steps that an organization can take to begin benefiting from this environment. Because the depth and breadth of software engineering exceed the scope of this paper, various references are cited with the goal of raising awareness and encouraging further investigation into software engineering and project management practices.
Stochastic Stirling Engine Operating in Contact with Active Baths
NASA Astrophysics Data System (ADS)
Zakine, Ruben; Solon, Alexandre; Gingrich, Todd; van Wijland, Frédéric
2017-04-01
A Stirling engine made of a colloidal particle in contact with a nonequilibrium bath is considered and analyzed with the tools of stochastic energetics. We model the bath by a non-Gaussian persistent noise acting on the colloidal particle. Depending on the chosen definition of an isothermal transformation in this nonequilibrium setting, we find that either the energetics of the engine parallels that of its equilibrium counterpart or, in the simplest case, that it ends up being less efficient. Persistence, more than non-Gaussian effects, is responsible for this result.
NASA Technical Reports Server (NTRS)
Armstrong, Wilbur C.
1992-01-01
The piping in a liquid rocket can assume complex configurations due to multiple tanks, multiple engines, and structures that must be piped around. The capability to handle some of these complex configurations has been incorporated into the ADMIT code. The capability to modify the input online has also been implemented. The configurations allowed include multiple tanks, multiple engines, and the splitting of a pipe into unequal segments going to different (or the same) engines. The program handles the following element types: straight pipes, bends, inline accumulators, tuned stub accumulators, Helmholtz resonators, parallel resonators, pumps, split pipes, multiple tanks, and multiple engines.
NASA Technical Reports Server (NTRS)
Armstrong, Wilbur C.
1992-01-01
The piping in a liquid rocket can assume complex configurations due to multiple tanks, multiple engines, and structures that must be piped around. The capability to handle some of these complex configurations has been incorporated into the SSFREQ code. The capability to modify the input online has also been implemented. The configurations allowed include multiple tanks, multiple engines, and the splitting of a pipe into equal segments going to different (or the same) engines. The program handles the following element types: straight pipes, bends, inline accumulators, tuned stub accumulators, Helmholtz resonators, parallel resonators, pumps, split pipes, multiple tanks, and multiple engines.
Virtual environment display for a 3D audio room simulation
NASA Astrophysics Data System (ADS)
Chapin, William L.; Foster, Scott
1992-06-01
Recent developments in virtual 3D audio and synthetic aural environments have produced a complex acoustical room simulation. The acoustical simulation models a room with walls, ceiling, and floor of selected sound reflecting/absorbing characteristics and unlimited independent localizable sound sources. This non-visual acoustic simulation, implemented with four audio Convolvotrons™ by Crystal River Engineering and coupled to the listener with a Polhemus Isotrak™ tracking the listener's head position and orientation, and stereo headphones returning binaural sound, is quite compelling to most listeners with eyes closed. This immersive effect should be reinforced when properly integrated into a full, multi-sensory virtual environment presentation. This paper discusses the design of an interactive, visual virtual environment, complementing the acoustic model and specified to: 1) allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; 2) reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; 3) enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and 4) serve as a research testbed and technology transfer demonstration. The hardware/software designs of two demonstration systems, one installed and one portable, are discussed through the development of four iterative configurations. The installed system implements a head-coupled, wide-angle, stereo-optic tracker/viewer and multi-computer simulation control. The portable demonstration system implements a head-mounted, wide-angle, stereo-optic display; separate head and pointer electro-magnetic position trackers; a heterogeneous parallel graphics processing system; and object-oriented C++ program code.
Close Range Calibration of Long Focal Length Lenses in a Changing Environment
NASA Astrophysics Data System (ADS)
Robson, Stuart; MacDonald, Lindsay; Kyle, Stephen; Shortis, Mark R.
2016-06-01
University College London is currently developing a large-scale multi-camera system for dimensional control tasks in manufacturing, including part machining, assembly and tracking, as part of the Light Controlled Factory project funded by the UK Engineering and Physical Sciences Research Council. In parallel, as part of the EU LUMINAR project funded by the European Association of National Metrology Institutes, refraction models of the atmosphere in factory environments are being developed with the intent of modelling and eliminating the effects of temperature and other variations. The accuracy requirements for both projects are extremely demanding, so accordingly improvements in the modelling of both camera imaging and the measurement environment are essential. At the junction of these two projects lies close range camera calibration. The accurate and reliable calibration of cameras across a realistic range of atmospheric conditions in the factory environment is vital in order to eliminate systematic errors. This paper demonstrates the challenge of experimentally isolating environmental effects at the level of a few tens of microns. Longer lines of sight promote the use and calibration of a near-perfect perspective projection from a Kern 75mm lens with maximum radial distortion of the order of 0.5µm. Coordination of a reference target array, representing a manufactured part, is achieved to better than 0.1mm at a standoff of 8m. More widely, results contribute to better sensor understanding, improved mathematical modelling of factory environments and more reliable coordination of targets to 0.1mm and better over large volumes.
NASA Technical Reports Server (NTRS)
McLaughlin, Brian J.; Barrett, Larry K.
2012-01-01
Common practice in the development of simulation systems is meeting all user requirements within a single instantiation. The Joint Polar Satellite System (JPSS) presents a unique challenge: establishing a simulation environment that meets the needs of a diverse user community while also spanning a multi-mission environment over decades of operation. In response, the JPSS Flight Vehicle Test Suite (FVTS) is architected with an extensible infrastructure that supports the operation of multiple observatory simulations for a single mission, and for multiple missions, within a common system perimeter. For the JPSS-1 satellite, flight observatory simulations of multiple fidelities are necessary to support the distinct user communities consisting of the Common Ground System development team, the Common Ground System Integration & Test team, and the Mission Rehearsal Team/Mission Operations Team. These key requirements present several challenges to FVTS development. First, the FVTS must ensure all critical user requirements are satisfied by at least one fidelity instance of the observatory simulation. Second, the FVTS must allow for tailoring of the system instances to function in diverse operational environments, from the high-security operations environment at the NOAA Satellite Operations Facility (NSOF) to the ground system factory floor. Finally, the FVTS must provide the ability to execute sustaining engineering activities on a subset of the system without impacting system availability to parallel users. The FVTS approach of allowing for multiple-fidelity copies of observatory simulations represents a unique concept in simulator capability development and corresponds to the JPSS Ground System goals of establishing a capability that is flexible, extensible, and adaptable.
NASA Astrophysics Data System (ADS)
Moya, J. L.; Skocypec, R. D.; Thomas, R. K.
1993-09-01
Over the past 40 years, Sandia National Laboratories (SNL) has been actively engaged in research to improve the ability to accurately predict the response of engineered systems to abnormal thermal and structural environments. These engineered systems contain very hazardous materials. Assessing the degree of safety and risk afforded the public and the environment by these engineered systems is therefore of utmost importance. The ability to accurately predict the response of these systems to accidents (to abnormal environments) is required to assess the degree of safety. Before the effect of the abnormal environment on these systems can be determined, it is necessary to ascertain the nature of the environment, which in turn requires the ability to physically characterize and numerically simulate the abnormal environment. Historically, SNL has demonstrated the level of safety provided by these engineered systems by either of two approaches: a purely regulatory approach, or a probabilistic risk assessment (PRA). This paper addresses the latter of the two approaches.
NASA Astrophysics Data System (ADS)
Beh, Kian Lim
2000-10-01
This study was designed to explore the effect of a typical traditional method of instruction in physics on the formation of useful mental models among college students for problem-solving, using simple electric circuits as a context. The study also aimed at providing a comprehensive description of the understanding of electric circuits among novices and experts. To achieve these objectives, two research approaches were employed: (1) a student survey to collect data from 268 physics students; and (2) an interview protocol to collect data from 23 physics students and 24 experts (including 10 electrical engineering graduates, 4 practicing electrical engineers, 2 secondary school physics teachers, 8 physics lecturers, and 4 electrical engineers). Among the major findings are: (1) most students do not possess accurate models of simple electric circuits as presented implicitly in physics textbooks; (2) most students display good procedural understanding for solving simple problems concerning electric circuits but have no in-depth conceptual understanding in terms of practical knowledge of current, voltage, resistance, and circuit connections; (3) most students encounter difficulty in discerning parallel connections that are drawn in a non-conventional format; (4) after a year of college physics, students show significant improvement in areas including practical knowledge of current and voltage, ability to compute effective resistance and capacitance, ability to identify circuit connections, and ability to solve problems; however, no significant improvement was found in practical knowledge of resistance or in the ability to connect circuits; and (5) the differences and similarities between the physics students and the experts include: (a) novices perceive parallel circuits more in terms of 'branch', 'current', and 'resistors with the same resistance', while experts perceive parallel circuits more in terms of 'node', 'voltage', and 'less resistance'; and (b) both novices and experts use phrases such as 'side-by-side' and 'one on top of the other' in describing parallel circuits, which emphasize the geometry of the standard circuit drawing when describing parallel resistors.
Distributed computations in a dynamic, heterogeneous Grid environment
NASA Astrophysics Data System (ADS)
Dramlitsch, Thomas
2003-06-01
In order to face the rapidly increasing need for computational resources of various scientific and engineering applications, one has to think of new ways to make more efficient use of the world's current computational resources. In this respect, the growing speed of wide area networks has made a new kind of distributed computing possible: metacomputing or (distributed) Grid computing. This is a rather new and uncharted field in computational science. The rapidly increasing speed of networks even outperforms the average increase of processor speed: processor speeds double on average every 18 months, whereas network bandwidths double every 9 months. Due to this development of local and wide area networks, Grid computing will certainly play a key role in the future of parallel computing. This type of distributed computing, however, differs from traditional parallel computing in many ways, since it has to deal with many problems not occurring in classical parallel computing, for example heterogeneity, authentication, and slow networks, to mention only a few. Some of those problems, e.g. the allocation of distributed resources along with the provision of information about these resources to the application, have already been addressed by the Globus software. Unfortunately, as far as we know, hardly any application or middleware software takes advantage of this information, since most parallelizing algorithms for finite differencing codes are implicitly designed for single supercomputer or cluster execution. We show that although it is possible to apply classical parallelizing algorithms in a Grid environment, in most cases the observed efficiency of the executed code is very poor. In this work we close this gap. In our thesis, we: show that an execution of classical parallel codes in Grid environments is possible but very slow; analyze this poor performance, pinpoint bottlenecks in communication, and remove unnecessary overhead and other causes of low performance; develop new and advanced parallelization algorithms that are aware of the Grid environment, in order to generalize the traditional parallelization schemes; implement and test these new methods, replacing and comparing them with the classical ones; and introduce dynamic strategies that automatically adapt the running code to the nature of the underlying Grid environment. The higher the performance one can achieve for a single application by manual tuning for a Grid environment, the lower the chance that those changes are widely applicable to other programs. In our analysis, as well as in our implementation, we tried to keep the balance between high performance and generality. None of our changes directly affects code on the application level, which makes our algorithms applicable to a whole class of real-world applications. The implementation of our work is done within the Cactus framework using the Globus toolkit, since we think that these are the most reliable and advanced programming frameworks for supporting computations in Grid environments. On the other hand, however, we tried to be as general as possible, i.e. all methods and algorithms discussed in this thesis are independent of Cactus or Globus. The ever denser and faster networking of computers and computing centres over high-speed networks enables a new kind of distributed scientific computing, in which geographically far-apart computing capacities can be combined into one whole.
This emerging virtual supercomputer, itself composed of several large machines, can be used to compute problems for which the individual machines are too small. The problems that cannot be solved numerically with today's computing capacities span all areas of modern science, from astrophysics, molecular physics, bioinformatics and meteorology to number theory and fluid dynamics, to name only a few fields. Depending on the nature of the problem and the solution method, such "meta-computations" are more or less difficult. In general, such computations become harder and less efficient the more communication there is between the individual processes (or processors). This is because the bandwidths between two processors on the same large machine or cluster are two to four orders of magnitude higher, and the latencies correspondingly lower, than between processors hundreds of kilometres apart. Nevertheless, a time is now beginning in which it is possible to carry out computations on such virtual supercomputers even with communication-intensive programs. A large class of communication- and computation-intensive programs is the one concerned with solving differential equations by means of finite differences. It is precisely this class of programs, and its operation on a virtual supercomputer, that this dissertation treats. Methods for carrying out such distributed computations more efficiently are developed, analysed and implemented. The emphasis is on analysing existing, classical parallelization algorithms and extending them so that they use available information about machines and networks (e.g. as provided by the Globus Toolkit) for more efficient parallelization. As far as we know, such additional information is hardly used in relevant programs, since the majority of parallelization algorithms were implicitly developed for execution on single large machines or clusters.
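One simple ingredient of such Grid-aware parallelization, sizing each host's share of the domain by its measured speed instead of splitting evenly, can be sketched as follows (an illustrative heuristic, not the thesis's actual algorithm):

```python
def weighted_partition(n_points, speeds):
    """Split a 1-D domain of n_points across hosts in proportion to measured
    per-host speeds (points/s), so all hosts finish a time step together."""
    total = sum(speeds)
    shares = [int(n_points * s / total) for s in speeds]
    shares[-1] += n_points - sum(shares)  # absorb the rounding remainder
    return shares

# e.g. a 1e6-point grid over two fast cluster nodes and one slow remote node
print(weighted_partition(1_000_000, [450.0, 450.0, 100.0]))
```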
46 CFR 111.05-13 - Grounding connection.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING ELECTRIC SYSTEMS-GENERAL REQUIREMENTS Equipment Ground, Ground Detection, and Grounded Systems § 111.05-13 Grounding... power sources operating in parallel in the system. ...
Rankine cycle waste heat recovery system
Ernst, Timothy C.; Nelson, Christopher R.
2015-09-22
A waste heat recovery (WHR) system connects a working fluid to fluid passages formed in an engine block and/or a cylinder head of an internal combustion engine, forming an engine heat exchanger. The fluid passages are formed near high temperature areas of the engine, subjecting the working fluid to sufficient heat energy to vaporize the working fluid while the working fluid advantageously cools the engine block and/or cylinder head, improving fuel efficiency. The location of the engine heat exchanger downstream from an EGR boiler and upstream from an exhaust heat exchanger provides an optimal position of the engine heat exchanger with respect to the thermodynamic cycle of the WHR system, giving priority to cooling of EGR gas. The configuration of valves in the WHR system provides the ability to select a plurality of parallel flow paths for optimal operation.
NASA Astrophysics Data System (ADS)
Tolba, Khaled Ibrahim; Morgenthal, Guido
2018-01-01
This paper presents an analysis of the scalability and efficiency of a simulation framework based on the vortex particle method. The code is applied to the numerical aerodynamic analysis of line-like structures. The numerical code runs on multicore CPU and GPU architectures using the OpenCL framework. The focus of this paper is the analysis of the parallel efficiency and scalability of the method applied to an engineering test case, specifically the aeroelastic response of a long-span bridge girder at the construction stage. The target is to assess the optimal configuration and the required computer architecture, such that it becomes feasible to efficiently utilise the method within the computational resources available to a regular engineering office. The simulations and the scalability analysis are performed on a regular gaming-type computer.
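For readers reproducing this kind of study, strong-scaling speedup and parallel efficiency are computed directly from wall-clock timings at a fixed problem size; the helper below (with made-up example timings, not the paper's data) shows the convention:

```python
def strong_scaling(timings):
    """timings: {device_count: wall_seconds} at fixed problem size.
    Returns speedup and efficiency relative to the smallest device count."""
    base_n = min(timings)
    base_t = timings[base_n]
    return {n: {'speedup': base_t / t,
                'efficiency': (base_t / t) * base_n / n}
            for n, t in sorted(timings.items())}

# hypothetical wall times for one aeroelastic run on 1, 2, 4 and 8 devices
print(strong_scaling({1: 3600.0, 2: 1900.0, 4: 1050.0, 8: 640.0}))
```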
Surrogates for numerical simulations; optimization of eddy-promoter heat exchangers
NASA Technical Reports Server (NTRS)
Patera, Anthony T.
1993-01-01
Although the advent of fast and inexpensive parallel computers has rendered numerous previously intractable calculations feasible, many numerical simulations remain too resource-intensive to be directly inserted in engineering optimization efforts. An attractive alternative to direct insertion considers models for computational systems: the expensive simulation is invoked only to construct and validate a simplified input-output model; this simplified input-output model then serves as a simulation surrogate in subsequent engineering optimization studies. A simple 'Bayesian-validated' statistical framework for the construction, validation, and purposive application of static computer simulation surrogates is presented. As an example, dissipation-transport optimization of laminar-flow eddy-promoter heat exchangers is considered: parallel spectral element Navier-Stokes calculations serve to construct and validate surrogates for the flowrate and Nusselt number; these surrogates then represent the originating Navier-Stokes equations in the ensuing design process.
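A minimal construct-then-validate loop in the spirit of this framework might look as follows. The sketch fits a cheap polynomial surrogate to an expensive 1-D function and checks it on held-out random inputs; the plain max-error tolerance here is an assumption standing in for the paper's Bayesian validation criterion.

```python
import numpy as np

def build_surrogate(expensive, lo, hi, n_train=30, n_val=20, deg=3, tol=0.05):
    """Fit a polynomial surrogate to an expensive 1-D simulation, then
    validate on held-out random inputs with a simple max-error test."""
    rng = np.random.default_rng(0)
    x_train = np.linspace(lo, hi, n_train)
    coeffs = np.polyfit(x_train, [expensive(x) for x in x_train], deg)
    x_val = rng.uniform(lo, hi, n_val)           # validation sample
    worst = max(abs(np.polyval(coeffs, x) - expensive(x)) for x in x_val)
    surrogate = lambda x: np.polyval(coeffs, x)  # cheap stand-in
    return surrogate, worst <= tol

# the surrogate (not the expensive solver) is what the optimizer then calls
model, validated = build_surrogate(np.tanh, lo=-2.0, hi=2.0)
```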
Diagnostics of wear in aeronautical systems
NASA Technical Reports Server (NTRS)
Wedeven, L. D.
1979-01-01
The use of appropriate diagnostic tools for aircraft oil-wetted components is reviewed, noting that they can reduce direct operating costs through reduced unscheduled maintenance, particularly in helicopter engine and transmission systems where bearing failures are a significant cost factor. Engine and transmission wear modes are described, and diagnostic methods for oil and wear particle analysis, the spectrometric oil analysis program, chip detectors, ferrography, in-line oil monitors and radioactive isotope tagging are discussed, noting that they are effective over a limited range of particle sizes but complement each other when used in parallel. Fine filtration can potentially increase time between overhauls, but reduces the effectiveness of conventional oil monitoring techniques, so alternative diagnostic techniques must be used. It is concluded that the development of a diagnostic system should be parallel with, and integral to, the development of a mechanical system.
Engineering rules for evaluating the efficiency of multiplexing traffic streams
NASA Astrophysics Data System (ADS)
Klincewicz, John G.
2004-09-01
It is common, either for a telecommunications service provider or for a corporate enterprise, to have multiple data networks. For example, both an IP network and an ATM or Frame Relay network could be in operation to serve different applications. This can result in parallel transport links between the same two locations, each carrying data traffic under a different protocol. In this paper, we consider some practical engineering rules, for particular situations, to evaluate whether or not it is advantageous to combine these parallel traffic streams onto a single transport link. Combining the streams requires additional overhead (a so-called "cell tax") but, in at least some situations, can result in more efficient use of modular transport capacity. Simple graphs can be used to summarize the analysis. Some interesting, and perhaps unexpected, observations can be made.
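The trade-off reduces to simple arithmetic over modular link sizes. The toy model below uses assumed numbers (155 Mb/s modules and a 10% cell tax, not the paper's figures) and shows both outcomes the paper alludes to: cases where combining saves links and cases where it does not.

```python
import math

def links_needed(streams_mbps, link_mbps=155.0, cell_tax=0.10, combine=True):
    """Modular links required: combined streams pay the overhead but share
    capacity; separate streams each round up to whole links."""
    if combine:
        total = sum(streams_mbps) * (1 + cell_tax)
        return math.ceil(total / link_mbps)
    return sum(math.ceil(s / link_mbps) for s in streams_mbps)

# two 90 Mb/s streams: 2 links either way, so combining buys nothing here;
# three 40 Mb/s streams: 3 separate links vs ceil(132/155) = 1 combined link.
for streams in ([90, 90], [40, 40, 40]):
    print(streams, links_needed(streams, combine=False),
          links_needed(streams, combine=True))
```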
NASA Technical Reports Server (NTRS)
Simpkin, W. E.
1982-01-01
An approximately 0.25-scale model of the transition section of a tandem fan variable cycle engine nacelle was tested in the NASA Lewis Research Center 10-by-10 foot wind tunnel. Two 12-inch, tip-turbine driven fans were used to simulate a tandem fan engine. Three testing modes simulated a V/STOL tandem fan airplane. Parallel mode has two separate propulsion streams for maximum low-speed performance: a front inlet, fan, and downward-vectorable nozzle form one stream, while an auxiliary top inlet provides air to the aft fan, supplying the core engine and aft vectorable nozzle. Closing the front nozzle and top inlet and removing a blocker door separating the two streams configures the tandem fan for series mode operation as a typical aircraft propulsion system. Transition mode operation is formed by intermediate settings of the front nozzle, blocker door, and top inlet. Emphasis was on the total pressure recovery and flow distortion at the aft fan face. A range of fan flow rates was tested at tunnel airspeeds from 0 to 240 knots and angles of attack from -10 to 40 deg for all three modes. In addition to the model variables for the three modes, variants of the top inlet were tested in the parallel mode only. These lip variables were aft-lip boundary-layer bleed holes and a three-position turning vane. A bellmouth extension of the top inlet side lips was also tested in parallel mode.
Trace: a high-throughput tomographic reconstruction engine for large-scale datasets.
Bicer, Tekin; Gürsoy, Doğa; Andrade, Vincent De; Kettimuthu, Rajkumar; Scullin, William; Carlo, Francesco De; Foster, Ian T
2017-01-01
Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis. We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source. Our experimental evaluations show that our optimizations and parallelization techniques can provide 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to <5 min per iteration. The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.
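The "replicated reconstruction object" idea, private per-thread accumulators merged once at the end instead of fine-grained locking on a shared grid, can be sketched as below; backproject here is a hypothetical callback standing in for Trace's actual kernels, and the threading is illustrative rather than a faithful model of its MPI-plus-threads design.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def reconstruct(sinogram_rows, grid_shape, backproject, n_threads=8):
    """backproject(row, grid) adds one row's contribution into grid in place."""
    replicas = [np.zeros(grid_shape) for _ in range(n_threads)]

    def worker(tid):
        for row in sinogram_rows[tid::n_threads]:  # static striping of rows
            backproject(row, replicas[tid])        # no locks: private replica

    with ThreadPoolExecutor(n_threads) as pool:
        list(pool.map(worker, range(n_threads)))
    return np.sum(replicas, axis=0)  # single merge of the replicas at the end
```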
A Comparative Propulsion System Analysis for the High-Speed Civil Transport
NASA Technical Reports Server (NTRS)
Berton, Jeffrey J.; Haller, William J.; Senick, Paul F.; Jones, Scott M.; Seidel, Jonathan A.
2005-01-01
Six of the candidate propulsion systems for the High-Speed Civil Transport are the turbojet, turbine bypass engine, mixed flow turbofan, variable cycle engine, Flade engine, and the inverting flow valve engine. A comparison of these propulsion systems by NASA's Glenn Research Center, paralleling studies within the aircraft industry, is presented. This report describes the Glenn Aeropropulsion Analysis Office's contribution to the High-Speed Research Program's 1993 and 1994 propulsion system selections. A parametric investigation of each propulsion cycle's primary design variables is analytically performed. Performance, weight, and geometric data are calculated for each engine. The resulting engines are then evaluated on two airframer-derived supersonic commercial aircraft for a 5000 nautical mile, Mach 2.4 cruise design mission. The effects of takeoff noise, cruise emissions, and cycle design rules are examined.
Exploring Fuel-Saving Potential of Long-Haul Truck Hybridization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Zhiming; LaClair, Tim J.; Smith, David E.
2015-10-01
We report our comparisons of the simulated fuel economy for parallel, series, and dual-mode hybrid electric long-haul trucks, in addition to a conventional powertrain configuration, powered by a commercial 2010-compliant 15-L diesel engine over a freeway-dominated heavy-duty truck driving cycle. The driving cycle was obtained by measurement during normal driving conditions. The results indicated that both parallel and dual-mode hybrid powertrains were capable of improving fuel economy by 7% to 8%, but there was no significant fuel economy benefit for the series hybrid truck because of internal inefficiencies in energy exchange. When reduced aerodynamic drag and tire rolling resistance were combined with hybridization, there was a synergistic fuel economy benefit for appropriate hybrids that increased the fuel economy benefit to more than 15%. Long-haul hybrid trucks with reduced aerodynamic drag and rolling resistance offered lower peak engine loads, better kinetic energy recovery, and reduced average engine power demand. Therefore, it is expected that hybridization with load-reduction technologies offers important potential fuel energy savings for future long-haul trucks.
NASA Technical Reports Server (NTRS)
Fijany, Amir
1993-01-01
In this paper, parallel O(log N) algorithms for the dynamic simulation of a single closed-chain rigid multibody system, specialized to the case of a robot manipulator in contact with the environment, are developed.
Zhan, X.
2005-01-01
A parallel Fortran-MPI (Message Passing Interface) software package for numerical inversion of the Laplace transform based on a Fourier series method is developed to meet the need of solving intensive computational problems involving oscillatory water-level responses to hydraulic tests in a groundwater environment. The software is a parallel version of ACM (Association for Computing Machinery) Transactions on Mathematical Software (TOMS) Algorithm 796. Running 38 test examples indicated that implementing MPI techniques on a distributed memory architecture speeds up the processing and improves efficiency. Applications to oscillatory water levels in a well during aquifer tests are presented to illustrate how this package can be applied to solve complicated environmental problems involving differential and integral equations. The package is free and is easy to use for people with little or no previous experience in using MPI who wish to get off to a quick start in parallel computing. © 2004 Elsevier Ltd. All rights reserved.
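A simplified analogue of the Fourier series inversion underlying Algorithm 796 is easy to state. The sketch below uses the Dubner-Abate form without the convergence acceleration a production code would add, so it needs many terms for modest accuracy; the choice of period and damping parameter here is a common rule of thumb, not the algorithm's.

```python
import numpy as np

def invert_laplace(F, t, n_terms=4000, tol=1e-8):
    """Dubner-Abate style Fourier series inversion of F(s) at time t."""
    T = 2.0 * t                       # half-period of the approximating series
    gamma = -np.log(tol) / (2.0 * T)  # damping: keeps wraparound error ~ tol
    k = np.arange(1, n_terms + 1)
    s = gamma + 1j * k * np.pi / T
    series = 0.5 * np.real(F(gamma + 0j)) + np.sum(
        np.real(F(s) * np.exp(1j * k * np.pi * t / T)))
    return np.exp(gamma * t) / T * series

# sanity check against a known pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t)
print(invert_laplace(lambda s: 1.0 / (s + 1.0), t=1.0), np.exp(-1.0))
```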
Performance of a parallel code for the Euler equations on hypercube computers
NASA Technical Reports Server (NTRS)
Barszcz, Eric; Chan, Tony F.; Jesperson, Dennis C.; Tuminaro, Raymond S.
1990-01-01
The performance of hypercubes was evaluated on a computational fluid dynamics problem, and consideration was given to the parallel environment issues that must be addressed, such as algorithm changes, implementation choices, programming effort, and programming environment. The evaluation focuses on a widely used fluid dynamics code, FLO52, which solves the two-dimensional steady Euler equations describing flow around an airfoil. The code development experience is described, including interacting with the operating system, utilizing the message-passing communication system, and making the code modifications necessary to increase parallel efficiency. Results from two hypercube parallel computers (a 16-node iPSC/2 and a 512-node NCUBE/ten) are discussed and compared. In addition, a mathematical model of the execution time was developed as a function of several machine and algorithm parameters. This model accurately predicts the actual run times obtained and is used to explore the performance of the code in interesting but physically realizable regions of the parameter space. Based on this model, predictions about future hypercubes are made.
Tobler, Ray; Hermisson, Joachim; Schlötterer, Christian
2015-01-01
Thermal stress is a pervasive selective agent in natural populations that impacts organismal growth, survival, and reproduction. Drosophila melanogaster exhibits a variety of putatively adaptive phenotypic responses to thermal stress in natural and experimental settings; however, accompanying assessments of fitness are typically lacking. Here, we quantify changes in fitness and known thermal tolerance traits in replicated experimental D. melanogaster populations following more than 40 generations of evolution to either cyclic cold or hot temperatures. By evaluating fitness for both evolved populations alongside a reconstituted starting population, we show that the evolved populations were the best adapted within their respective thermal environments. More strikingly, the evolved populations exhibited increased fitness in both environments and improved resistance to both acute heat and cold stress. This unexpected parallel response appeared to be an adaptation to the rapid temperature changes that drove the cycling thermal regimes, as parallel fitness changes were not observed when tested in a constant thermal environment. Our results add to a small, but growing group of studies that demonstrate the importance of fluctuating temperature changes for thermal adaptation and highlight the need for additional work in this area. PMID:26080903
XMOS XC-2 Development Board for Mechanical Control and Data Collection
NASA Technical Reports Server (NTRS)
Jarnot, Robert F.; Bowden, William J.
2011-01-01
The scanning microwave limb sounder (SMLS) will use technological improvements in low-noise mixers to provide precise data on the Earth's atmospheric composition with high spatial resolution. This project focuses on the design and implementation of a real-time control system needed for airborne engineering tests of the SMLS. The system must coordinate the actuation of optical components using four motors with encoder readback, while collecting synchronized telemetric data from a GPS receiver and a 3-axis gyrometric system. A graphical user interface for testing the control system was also designed using Python. Although the system could have been implemented with an FPGA (field-programmable gate array)-based setup, a processor development kit manufactured by XMOS was chosen. The XMOS architecture allows parallel execution of multiple tasks on separate threads, making it ideal for this application. It is easily programmed using XC (a subset of C). The necessary communication interfaces, including Ethernet, were implemented in software, with significant cost and time reduction compared to an FPGA-based approach. A simple approach to control the chopper, calibration mirror, and gimbal for the airborne SMLS was needed. The XMOS board allows for multiple threads and real-time data acquisition. The XC-2 development kit is an attractive choice for synchronized, real-time, event-driven applications. XMOS is based on the transputer microprocessor architecture developed for parallel computing, which is being revamped in this new platform. The XMOS device has multiple cores capable of running parallel applications on separate threads. The threads communicate with each other via user-defined channels capable of transmitting data within the device. XMOS provides a C-based development environment using XC, which eliminates the need for the custom tool kits associated with FPGA programming. The XC-2 has four cores and the necessary hardware for Ethernet I/O.
Tradeoffs Between Synchronization, Communication, and Work in Parallel Linear Algebra Computations
2014-01-25
Demmel, Electrical Engineering and Computer Sciences, University of California at Berkeley, Technical Report No. UCB/EECS-2014-8, http://www.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-8.html, January 25, 2014.
NASA Technical Reports Server (NTRS)
Ali, Moonis; Whitehead, Bruce; Gupta, Uday K.; Ferber, Harry
1989-01-01
This paper describes an expert system which is designed to perform automatic data analysis, identify anomalous events, and determine the characteristic features of these events. We have employed both artificial intelligence and neural net approaches in the design of this expert system. The artificial intelligence approach is useful because it provides (1) the use of human experts' knowledge of sensor behavior and faulty engine conditions in interpreting data; (2) the use of engine design knowledge and physical sensor locations in establishing relationships among the events of multiple sensors; (3) the use of stored analysis of past data of faulty engine conditions; and (4) the use of knowledge-based reasoning in distinguishing sensor failure from actual faults. The neural network approach appears promising because neural nets (1) can be trained on extremely noisy data and produce classifications which are more robust under noisy conditions than other classification techniques; (2) avoid the necessity of noise removal by digital filtering and therefore avoid the need to make assumptions about frequency bands or other signal characteristics of anomalous behavior; (3) can, in effect, generate their own feature detectors based on the characteristics of the sensor data used in training; and (4) are inherently parallel and therefore are potentially implementable in special-purpose parallel hardware.
40 CFR 89.916 - Emergency-vessel exemption for marine engines below 37 kW.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 21 2013-07-01 2013-07-01 false Emergency-vessel exemption for marine engines below 37 kW. 89.916 Section 89.916 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... ENGINES Exemption Provisions § 89.916 Emergency-vessel exemption for marine engines below 37 kW. The...
40 CFR 89.916 - Emergency-vessel exemption for marine engines below 37 kW.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 21 2012-07-01 2012-07-01 false Emergency-vessel exemption for marine engines below 37 kW. 89.916 Section 89.916 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... ENGINES Exemption Provisions § 89.916 Emergency-vessel exemption for marine engines below 37 kW. The...
40 CFR 89.916 - Emergency-vessel exemption for marine engines below 37 kW.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 20 2014-07-01 2013-07-01 true Emergency-vessel exemption for marine engines below 37 kW. 89.916 Section 89.916 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... ENGINES Exemption Provisions § 89.916 Emergency-vessel exemption for marine engines below 37 kW. The...
40 CFR 89.916 - Emergency-vessel exemption for marine engines below 37 kW.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 20 2011-07-01 2011-07-01 false Emergency-vessel exemption for marine engines below 37 kW. 89.916 Section 89.916 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... ENGINES Exemption Provisions § 89.916 Emergency-vessel exemption for marine engines below 37 kW. The...
40 CFR 89.916 - Emergency-vessel exemption for marine engines below 37 kW.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Emergency-vessel exemption for marine engines below 37 kW. 89.916 Section 89.916 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... ENGINES Exemption Provisions § 89.916 Emergency-vessel exemption for marine engines below 37 kW. The...
Similarity constraints in testing of cooled engine parts
NASA Technical Reports Server (NTRS)
Colladay, R. S.; Stepka, F. S.
1974-01-01
A study is made of the effect of testing cooled parts of current and advanced gas turbine engines at the reduced temperature and pressure conditions which maintain similarity with the engine environment. Some of the problems facing the experimentalist in evaluating heat transfer and aerodynamic performance when hardware is tested at conditions other than the actual engine environment are considered. Low-temperature and low-pressure test environments can simulate the performance of actual-size prototype engine hardware within the tolerance of experimental accuracy if appropriate similarity conditions are satisfied. Failure to adhere to these similarity constraints, because of test facility limitations or other reasons, can result in a number of serious errors in projecting the performance of test hardware to engine conditions.
NASA Technical Reports Server (NTRS)
Agrawal, Gagan; Sussman, Alan; Saltz, Joel
1993-01-01
Scientific and engineering applications often involve structured meshes. These meshes may be nested (for multigrid codes) and/or irregularly coupled (called multiblock or irregularly coupled regular mesh problems). A combined runtime and compile-time approach for parallelizing these applications on distributed memory parallel machines in an efficient and machine-independent fashion is described. A runtime library which can be used to port these applications to distributed memory machines was designed and implemented; the library is currently implemented on several different systems. To further ease the task of application programmers, methods were developed for integrating this runtime library with compilers for HPF-like parallel programming languages. How this runtime library was integrated with the Fortran 90D compiler being developed at Syracuse University is discussed. Experimental results demonstrate the efficacy of the approach: a multiblock Navier-Stokes solver template and a multigrid code were experimented with, and the results show that the primitives have low runtime communication overheads. Further, the compiler-parallelized codes perform within 20 percent of the codes parallelized by manually inserting calls to the runtime library.
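The kind of communication such a runtime library automates is the ghost-cell (halo) exchange between neighboring mesh blocks. A minimal 1-D version with mpi4py, assuming one ghost layer and a simple line topology of ranks (not the library's actual interface), looks like this:

```python
import numpy as np
from mpi4py import MPI

def exchange_ghosts(u, comm):
    """u: this rank's block with one ghost cell at each end (u[0], u[-1])."""
    rank, size = comm.Get_rank(), comm.Get_size()
    left = rank - 1 if rank > 0 else MPI.PROC_NULL
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL
    # ship first interior cell left while receiving the right ghost cell
    comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
    # ship last interior cell right while receiving the left ghost cell
    comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
```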
NASA Astrophysics Data System (ADS)
Han, Xu; Xie, Guangping; Laflen, Brandon; Jia, Ming; Song, Guiju; Harding, Kevin G.
2015-05-01
In the real application environment of field engineering, a large variety of metrology tools is required by the technician to inspect part profile features. However, some of these tools are burdensome and address only a single application or measurement. In other cases, standard tools lack the capability of accessing irregular profile features. Customers in field engineering want the next generation of metrology devices to have the ability to replace the many current tools with one single device. This paper describes a method based on the ring optical gage concept for the measurement of numerous kinds of profile features useful to the field technician. The ring optical system is composed of a collimated laser, a conical mirror and a CCD camera. To be useful for a wide range of applications, the ring optical system requires profile feature extraction algorithms and data manipulation directed toward real-world applications in field operation. The paper discusses such practical applications as measuring non-ideal round holes with off-centered and oblique axes. The algorithms needed to analyze other features, such as the width of gaps, the radius of transition fillets, the fall of step surfaces, and surface parallelism, are also discussed. With the assistance of image processing and geometric algorithms, these features can be extracted with reasonable performance. Tailoring the feature extraction analysis to this specific gage offers the potential for a wider application base beyond simple inner-diameter measurements. The paper presents experimental results that are compared with standard gages to prove the performance and feasibility of the analysis in real-world field engineering. Potential accuracy improvement methods, a new dual-ring design and future work are discussed at the end of this paper.
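Several of the listed features reduce to fitting geometric primitives to extracted edge points. For example, a hole's center and radius can be recovered with the classic least-squares (Kasa) circle fit shown below, a standard technique rather than the authors' specific algorithm:

```python
import numpy as np

def fit_circle(x, y):
    """Kasa least-squares circle fit: linearise x^2 + y^2 = 2ax + 2by + c,
    solve for the centre (a, b), then recover the radius from c."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    return (a, b), np.sqrt(c + a**2 + b**2)

# noisy edge points on a circle of radius 2 centred at (1, -3)
theta = np.linspace(0, 2 * np.pi, 200)
rng = np.random.default_rng(1)
x = 1 + 2 * np.cos(theta) + 0.01 * rng.standard_normal(200)
y = -3 + 2 * np.sin(theta) + 0.01 * rng.standard_normal(200)
print(fit_circle(x, y))
```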
Dror, Adi; Shemesh, Einav; Dayan, Natali
2014-01-01
The abilities of enzymes to catalyze reactions in nonnatural environments of organic solvents have opened new opportunities for enzyme-based industrial processes. However, the main drawback of such processes is that most enzymes have a limited stability in polar organic solvents. In this study, we employed protein engineering methods to generate a lipase for enhanced stability in methanol, which is important for biodiesel production. Two protein engineering approaches, random mutagenesis (error-prone PCR) and structure-guided consensus, were applied in parallel on an unexplored lipase gene from Geobacillus stearothermophilus T6. A high-throughput colorimetric screening assay was used to evaluate lipase activity after an incubation period in high methanol concentrations. Both protein engineering approaches were successful in producing variants with elevated half-life values in 70% methanol. The best variant of the random mutagenesis library, Q185L, exhibited 23-fold-improved stability, yet its methanolysis activity was decreased by one-half compared to the wild type. The best variant from the consensus library, H86Y/A269T, exhibited 66-fold-improved stability in methanol along with elevated thermostability (+4.3°C) and a 2-fold-higher fatty acid methyl ester yield from soybean oil. Based on in silico modeling, we suggest that the Q185L substitution facilitates a closed lid conformation that limits access for both the methanol and substrate excess into the active site. The enhanced stability of H86Y/A269T was a result of formation of new hydrogen bonds. These improved characteristics make this variant a potential biocatalyst for biodiesel production. PMID:24362426
Human Systems Integration in Practice: Constellation Lessons Learned
NASA Technical Reports Server (NTRS)
Zumbado, Jennifer Rochlis
2012-01-01
NASA's Constellation program provided a unique testbed for Human Systems Integration (HSI) as a fundamental element of the systems engineering process. Constellation was the first major program to have HSI mandated by NASA's Human Rating document. Proper HSI is critical to the success of any project that relies on humans to function as operators, maintainers, or controllers of a system. HSI improves mission, system and human performance, significantly reduces lifecycle costs, lowers risk and minimizes re-design. Successful HSI begins with sufficient project schedule dedicated to the generation of human systems requirements, but it is by no means solely a requirements management process. It requires a top-down systems engineering process that recognizes, throughout the organization, human factors as a technical discipline equal to traditional engineering disciplines, with authority for the overall system, partnered with a bottom-up mechanism for human-centered design and technical issue resolution. The Constellation Human Systems Integration Group (HSIG) was a part of the Systems Engineering and Integration (SE&I) organization within the program office, and existed alongside similar groups such as Flight Performance, Environments & Constraints, and Integrated Loads, Structures and Mechanisms. While the HSIG successfully managed, via influence leadership, a down-and-in Community of Practice to facilitate technical integration and issue resolution, it lacked parallel top-down authority to drive integrated design. This presentation discusses how HSI was applied to NASA's Constellation program, the lessons learned and best practices it revealed, and recommendations to future NASA program and project managers on how to accomplish this critical function.
Simulating Effects of High Angle of Attack on Turbofan Engine Performance
NASA Technical Reports Server (NTRS)
Liu, Yuan; Claus, Russell W.; Litt, Jonathan S.; Guo, Ten-Huei
2013-01-01
A method of investigating the effects of high angle of attack (AOA) flight on turbofan engine performance is presented. The methodology involves combining a suite of diverse simulation tools. Three-dimensional, steady-state computational fluid dynamics (CFD) software is used to model the change in performance of a commercial aircraft-type inlet and fan geometry due to various levels of AOA. Parallel compressor theory is then applied to assimilate the CFD data with a zero-dimensional, nonlinear, dynamic turbofan engine model. The combined model shows that high AOA operation degrades fan performance and, thus, negatively impacts compressor stability margins and engine thrust. In addition, the engine response to high AOA conditions is shown to be highly dependent upon the type of control system employed.
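To make the parallel compressor idea concrete, the following minimal Python sketch splits the fan annulus into sectors, applies a flow deficit to the distorted sector, and mass-flow-averages the result. The quadratic speedline and all coefficients are illustrative assumptions, not data from the paper.

```python
# A minimal sketch of parallel compressor theory, assuming a hypothetical
# quadratic fan speedline; names and coefficients are illustrative only.
import numpy as np

def pressure_ratio(m_dot, c=1.6, k=0.002):
    """Hypothetical fan speedline: PR falls off quadratically with flow."""
    return c - k * m_dot**2

def parallel_compressor(m_total, distortion=0.1, sectors=2):
    """Split the annulus into equal sectors; the distorted sector sees
    reduced inlet conditions, modeled here as a flow deficit."""
    flows = np.full(sectors, m_total / sectors)
    flows[0] *= (1.0 - distortion)           # spoiled (high-AOA) sector
    prs = pressure_ratio(flows * sectors)    # each sector runs the same map
    # Mass-flow-averaged overall pressure ratio
    return np.average(prs, weights=flows)

for d in (0.0, 0.05, 0.10, 0.20):
    print(f"distortion {d:4.2f} -> overall PR {parallel_compressor(10.0, d):.4f}")
```

In this toy map the spoiled sector moves up its speedline toward the stall side, which is the mechanism by which distortion erodes stability margin in the full model.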
Dispersion of turbojet engine exhaust in flight
NASA Technical Reports Server (NTRS)
Holdeman, J. D.
1973-01-01
The dispersion of the exhaust of turbojet engines into the atmosphere is estimated by using a model developed for the mixing of a round jet with a parallel flow. The analysis is appropriate for determining the spread and dilution of the jet exhaust from the engine exit until it is entrained in the aircraft trailing vortices. Chemical reactions are not expected to be important and are not included in the flow model. Calculations of the dispersion of the exhaust plumes of three aircraft turbojet engines with and without afterburning at typical flight conditions are presented. Calculated average concentrations for the exhaust plume from a single engine jet fighter are shown to be in good agreement with measurements made in the aircraft wake during flight.
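As a rough illustration of the kind of estimate such a model produces, the sketch below applies a classical free-jet similarity law (centerline concentration decaying inversely with axial distance). The decay constant and exit diameter are assumed placeholders; the paper's round-jet-in-parallel-flow model is more detailed.

```python
# A toy far-field estimate of exhaust dilution, assuming the classical
# free-jet similarity law (centerline concentration ~ 1/x); the decay
# constant B and the engine-exit diameter below are illustrative assumptions.
def centerline_dilution(x, d_exit, B=5.0):
    """Ratio of centerline concentration to exit concentration at axial
    distance x; valid only in the self-similar far field."""
    xd = x / d_exit
    return min(1.0, B / xd) if xd > 0 else 1.0

d_exit = 0.6  # nozzle exit diameter, m (assumed)
for x in (1.0, 5.0, 20.0, 100.0):
    print(f"x = {x:6.1f} m  ->  C/C_exit = {centerline_dilution(x, d_exit):.3f}")
```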
PUP: An Architecture to Exploit Parallel Unification in Prolog
1988-03-01
environment stacking model similar to the Warren Abstract Machine [23] since it has been shown to be superior to other known models (see [21]). The storage...execute in groups of independent operations. Unifications belonging to different groups may not overlap. Also unification operations belonging to the...since all parallel operations on the unification units must complete before any of the units can start executing the next group of parallel
Fast adaptive composite grid methods on distributed parallel architectures
NASA Technical Reports Server (NTRS)
Lemke, Max; Quinlan, Daniel
1992-01-01
The fast adaptive composite (FAC) grid method is compared with the asynchronous fast adaptive composite (AFAC) method under a variety of conditions, including vectorization and parallelization. Results are given for distributed memory multiprocessor architectures (SUPRENUM, Intel iPSC/2 and iPSC/860). It is shown that the good performance of AFAC and its superiority over FAC in a parallel environment is a property of the algorithm and not dependent on peculiarities of any machine.
Teaching English Engineering Terminology in a Hypermedia Environment.
ERIC Educational Resources Information Center
Stamison-Atmatzidi, M.; And Others
1995-01-01
Discusses a hypermedia prototype system constituting a hypermedia dictionary environment and a database of field-specific reading passages with related exercises, for utilization in the teaching of English engineering terminology in foreign language environments. (eight references) (CK)
NASA Astrophysics Data System (ADS)
Duan, Pengfei; Lei, Wenping
2017-11-01
A number of disciplines (mechanics, structures, thermal, and optics) are needed to design and build a space camera. Separate design models are normally constructed with each discipline's CAD/CAE tools. Design and analysis are conducted largely in parallel, subject to the requirements levied on each discipline, and technical interaction between the different disciplines is limited and infrequent. As a result, a unified view of the space camera design across discipline boundaries is not directly possible in this approach, and generating one would require a large, manual, error-prone process. A collaborative environment built on an abstract model and performance templates allows engineering data and CAD/CAE results to be shared across these discipline boundaries within a common interface, helping to attain rapid multivariate design and to evaluate optical performance directly under environmental loads. A small interdisciplinary engineering team from the Beijing Institute of Space Mechanics and Electricity has recently conducted a Structural/Thermal/Optical (STOP) analysis of a space camera with this collaborative environment. STOP analysis evaluates the changes in image quality that arise from structural deformations when the thermal environment of the camera changes throughout its orbit. STOP analyses were conducted for four different test conditions applied during final thermal vacuum (TVAC) testing of the payload on the ground. The STOP simulation process begins with importing an integrated CAD model of the camera geometry into the collaborative environment, within which (1) independent thermal and structural meshes are generated; (2) the thermal mesh and relevant engineering data for material properties and thermal boundary conditions are used to compute temperature distributions at nodal points in both the thermal and structural meshes with Thermal Desktop, a COTS thermal design and analysis code; (3) thermally induced structural deformations of the camera are then evaluated in Nastran, an industry-standard code for structural design and analysis; (4) thermal and structural results are next imported into SigFit, another COTS tool that computes deformation and best-fit rigid-body displacements for the optical surfaces; and (5) SigFit creates a modified optical prescription that is imported into CODE V for evaluation of optical performance impacts. The integrated STOP analysis was validated using TVAC test data. For the four TVAC tests, the relative errors between simulated and measured temperatures at the measurement points were around 5%, and in some test conditions as low as 1%. For image quality (MTF), the relative error between simulation and test was 8.3% in the worst condition; all others were below 5%. This validation showed that the collaborative design and simulation environment can carry out the integrated STOP analysis of a space camera efficiently. Furthermore, the collaborative environment allows an interdisciplinary analysis that formerly might take several months to be completed in two or three weeks, which is well suited to concept studies in the early stages of projects.
cljam: a library for handling DNA sequence alignment/map (SAM) with parallel processing.
Takeuchi, Toshiki; Yamada, Atsuo; Aoki, Takashi; Nishimura, Kunihiro
2016-01-01
Next-generation sequencing can determine DNA bases, and the results of sequence alignments are generally stored in files in the Sequence Alignment/Map (SAM) format or its compressed binary version (BAM). SAMtools is a typical tool for dealing with files in the SAM/BAM format. SAMtools has various functions, including detection of variants, visualization of alignments, indexing, extraction of parts of the data and loci, and conversion of file formats. It is written in C and executes quickly. However, SAMtools requires additional implementation effort, for example with OpenMP (Open Multi-Processing) libraries, to be used in parallel. Given the accumulation of next-generation sequencing data, a simple parallelization program that can support cloud and PC cluster environments is required. We have developed cljam using the Clojure programming language, which simplifies parallel programming, to handle SAM/BAM data. Cljam can run in a Java runtime environment (e.g., Windows, Linux, Mac OS X) with Clojure. Cljam can process and analyze SAM/BAM files in parallel and at high speed. The execution time with cljam is almost the same as with SAMtools, and the cljam code, written in Clojure, has fewer lines than other similar tools.
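The parallelization pattern cljam exploits can be sketched in a few lines; the example below uses Python's standard multiprocessing module rather than Clojure, a deliberately simplistic tab-split SAM parser, and an assumed input file name.

```python
# A minimal sketch of data-parallel SAM processing, using Python's
# standard library rather than Clojure/cljam; the SAM parsing here is
# deliberately simplistic and the file name "sample.sam" is assumed.
from multiprocessing import Pool

def mapped_read(line):
    """Count a record if its FLAG (field 2) lacks the 0x4 'unmapped' bit."""
    fields = line.split("\t")
    return 0 if int(fields[1]) & 0x4 else 1

def count_mapped(lines):
    return sum(mapped_read(l) for l in lines)

if __name__ == "__main__":
    with open("sample.sam") as f:                          # assumed input file
        records = [l for l in f if not l.startswith("@")]  # skip header lines
    chunks = [records[i::4] for i in range(4)]             # 4-way split
    with Pool(4) as pool:
        print("mapped reads:", sum(pool.map(count_mapped, chunks)))
```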
The Design of Future Airbreathing Engine Systems within an Intelligent Synthesis Environment
NASA Technical Reports Server (NTRS)
Malone, J. B.; Housner, J. M.; Lytle, J. K.
1999-01-01
This paper describes a new initiative proposed by the National Aeronautics and Space Administration (NASA). The purpose of this initiative is to develop a future design environment for engineering and science mission synthesis for use by NASA scientists and engineers. This new initiative is called the Intelligent Synthesis Environment (ISE). The paper describes the mission of NASA, future aerospace system characteristics, the current engineering design process, the ISE concept, and concludes with a description of possible ISE applications for the design of air-breathing propulsion systems.
Parallel-Vector Algorithm For Rapid Structural Analysis
NASA Technical Reports Server (NTRS)
Agarwal, Tarun R.; Nguyen, Duc T.; Storaasli, Olaf O.
1993-01-01
New algorithm developed to overcome deficiency of skyline storage scheme by use of variable-band storage scheme. Exploits both parallel and vector capabilities of modern high-performance computers. Gives engineers and designers opportunity to include more design variables and constraints during optimization of structures. Enables use of more refined finite-element meshes to obtain improved understanding of complex behaviors of aerospace structures leading to better, safer designs. Not only attractive for current supercomputers but also for next generation of shared-memory supercomputers.
Multicore: Fallout From a Computing Evolution (LBNL Summer Lecture Series)
Yelick, Kathy [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC)
2018-05-07
Summer Lecture Series 2008: Parallel computing used to be reserved for big science and engineering projects, but in two years that's all changed. Even laptops and hand-helds use parallel processors. Unfortunately, the software hasn't kept pace. Kathy Yelick, Director of the National Energy Research Scientific Computing Center at Berkeley Lab, describes the resulting chaos and the computing community's efforts to develop exciting applications that take advantage of tens or hundreds of processors on a single chip.
Jones, Ryan J. R.; Shinde, Aniketa; Guevarra, Dan; ...
2015-01-05
Many energy technologies require electrochemical stability or preactivation of functional materials. Due to the long experiment duration required for either electrochemical preactivation or evaluation of operational stability, parallel screening is required to enable high-throughput experimentation. We found that imposing operational electrochemical conditions on a library of materials in parallel creates several opportunities for experimental artifacts. We discuss the electrochemical engineering principles and operational parameters that mitigate artifacts in the parallel electrochemical treatment system. We also demonstrate the effects of resistive losses within the planar working electrode through a combination of finite element modeling and illustrative experiments. Operation of the parallel-plate, membrane-separated electrochemical treatment system is demonstrated by exposing a composition library of mixed metal oxides to oxygen evolution conditions in 1 M sulfuric acid for 2 h. This application is particularly important because the electrolysis and photoelectrolysis of water are promising future energy technologies inhibited by the lack of highly active, acid-stable catalysts containing only earth-abundant elements.
Ahn, Young Kwan; Lee, Hyung Jin; Kim, Yoon Young
2017-08-30
Conical refraction, which is quite well-known in electromagnetic waves, has not been explored well in elastic waves due to the lack of proper natural elastic media. Here, we propose and design a unique anisotropic elastic metamaterial slab that realizes conical refraction for horizontally incident longitudinal or transverse waves; the single-mode wave is split into two oblique coupled longitudinal-shear waves. As an interesting application, we carried out an experiment of parallel translation of an incident elastic wave system through the anisotropic metamaterial slab. The parallel translation can be useful for ultrasonic non-destructive testing of a system hidden by obstacles. While the parallel translation resembles light refraction through a parallel plate without angle deviation between entry and exit beams, this wave behavior cannot be achieved without the engineered metamaterial because an elastic wave incident upon a dissimilar medium is always split at different refraction angles into two different modes, longitudinal and shear.
NASA Technical Reports Server (NTRS)
Weed, Richard Allen; Sankar, L. N.
1994-01-01
An increasing amount of research activity in computational fluid dynamics has been devoted to the development of efficient algorithms for parallel computing systems. The increasing performance-to-price ratio of engineering workstations has led to research on procedures for implementing a parallel computing system composed of distributed workstations. This thesis proposal outlines an ongoing research program to develop efficient strategies for performing three-dimensional flow analysis on distributed computing systems. The PVM parallel programming interface was used to modify an existing three-dimensional flow solver, the TEAM code developed by Lockheed for the Air Force, to function as a parallel flow solver on clusters of workstations. Steady flow solutions were generated for three different wing and body geometries to validate the code and evaluate code performance. The proposed research will extend the parallel code development to determine the most efficient strategies for unsteady flow simulations.
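A minimal modern analogue of the PVM workstation-cluster approach can be written with mpi4py in place of PVM: each rank owns a slab of a 1-D field and exchanges halo cells with its neighbors before each relaxation step. The grid size and update rule below are illustrative, not the TEAM code's.

```python
# A minimal sketch of domain-decomposed relaxation with halo exchange,
# using mpi4py instead of PVM. Assumes mpi4py and numpy are installed;
# run with e.g. `mpiexec -n 4 python solver.py`.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 100
u = np.zeros(n_local + 2)           # interior cells plus two halo cells
u[1:-1] = rank                      # arbitrary initial data

left  = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for _ in range(50):
    # exchange halo cells with neighboring ranks (no-op at domain ends)
    comm.Sendrecv(u[1:2],   dest=left,  recvbuf=u[-1:], source=right)
    comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[:1],  source=left)
    u[1:-1] = 0.5 * (u[:-2] + u[2:])   # Jacobi relaxation step

print(f"rank {rank}: mean = {u[1:-1].mean():.4f}")
```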
NASA Technical Reports Server (NTRS)
Shapiro, Linda G.; Tanimoto, Steven L.; Ahrens, James P.
1996-01-01
The goal of this task was to create a design and prototype implementation of a database environment that is particularly suited to handling the image, vision, and scientific data associated with NASA's EOC Amazon project. The focus was on a data model and query facilities designed to execute efficiently on parallel computers. A key feature of the environment is an interface that allows a scientist to specify high-level directives about how query execution should occur.
Automotive applications of supercomputers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ginsberg, M.
1987-01-01
These proceedings compile papers on supercomputers in the automobile industry. Titles include: An automotive engineer's guide to the effective use of scalar, vector, and parallel computers; fluid mechanics, finite elements, and supercomputers; and Automotive crashworthiness performance on a supercomputer.
Parallel language constructs for tensor product computations on loosely coupled architectures
NASA Technical Reports Server (NTRS)
Mehrotra, Piyush; Vanrosendale, John
1989-01-01
Distributed memory architectures offer high levels of performance and flexibility, but have proven awkward to program. Current languages for nonshared memory architectures provide a relatively low-level programming environment and are poorly suited to modular programming and to the construction of libraries. A set of language primitives designed to allow the specification of parallel numerical algorithms at a higher level is described. The focus is on tensor product array computations, a simple but important class of numerical algorithms. The problem of programming 1-D kernel routines, such as parallel tridiagonal solvers, is addressed first; how such parallel kernels can be combined to form parallel tensor product algorithms is then examined.
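The tensor-product idea, a 1-D kernel swept along each dimension in turn, can be sketched as follows; the tridiagonal operator and grid size are illustrative assumptions, and the kernel uses SciPy's banded solver rather than a parallel implementation.

```python
# A minimal sketch of a tensor-product algorithm: a 1-D tridiagonal
# kernel applied along each dimension in turn (ADI fashion). The
# operator coefficients and grid size are illustrative.
import numpy as np
from scipy.linalg import solve_banded

def tridiag_solve(rhs):
    """Solve a diagonally dominant tridiagonal system along axis 0
    for every column of rhs."""
    n = rhs.shape[0]
    ab = np.zeros((3, n))
    ab[0, 1:] = -0.5           # superdiagonal
    ab[1, :]  = 2.0            # main diagonal
    ab[2, :-1] = -0.5          # subdiagonal
    return solve_banded((1, 1), ab, rhs)

f = np.random.rand(64, 64)
x = tridiag_solve(f)           # sweep in the first dimension
x = tridiag_solve(x.T).T       # sweep in the second dimension
print("tensor-product sweeps complete:", x.shape)
```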
Optimizing SIEM Throughput on the Cloud Using Parallelization
Alam, Masoom; Ihsan, Asif; Javaid, Qaisar; Khan, Abid; Manzoor, Jawad; Akhundzada, Adnan; Khan, M Khurram; Farooq, Sajid
2016-01-01
Processing large amounts of data in real time to identify security issues poses several performance challenges, especially when hardware infrastructure is limited. Managed Security Service Providers (MSSP), mostly hosting their applications on the Cloud, receive events at a very high rate that varies from a few hundred to a couple of thousand events per second (EPS). It is critical to process this data efficiently so that attacks can be identified quickly and the necessary response initiated. This paper evaluates the performance of a security framework, OSTROM, built on the Esper complex event processing (CEP) engine, under parallel and non-parallel computational frameworks. We explain three architectures under which Esper can be used to process events. We investigated the effect on throughput, memory, and CPU usage in each configuration setting. The results indicate that the performance of the engine is limited by the number of incoming events rather than by the queries being processed. The architecture in which one quarter of the total events is submitted to each instance and all queries are processed by all units shows the best results in terms of throughput, memory, and CPU usage. PMID:27851762
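The best-performing configuration reported above (a quarter of the events to each instance, every instance running the full query set) can be sketched as follows; the predicates stand in for Esper EPL queries, and the event fields and thresholds are illustrative assumptions.

```python
# A minimal sketch of the event fan-out architecture: the stream is
# split four ways and every worker runs the full query set on its share.
# Queries are plain Python predicates, not Esper EPL; fields are assumed.
from multiprocessing import Pool

QUERIES = {
    "failed_login": lambda e: e["type"] == "auth" and not e["ok"],
    "large_xfer":   lambda e: e["type"] == "net" and e["bytes"] > 10_000_000,
}

def run_all_queries(events):
    hits = {name: 0 for name in QUERIES}
    for e in events:
        for name, q in QUERIES.items():
            if q(e):
                hits[name] += 1
    return hits

if __name__ == "__main__":
    stream = [{"type": "auth", "ok": i % 7 != 0, "bytes": 0}
              for i in range(100_000)]              # synthetic event stream
    shards = [stream[i::4] for i in range(4)]       # 1/4 of events per instance
    with Pool(4) as pool:
        partials = pool.map(run_all_queries, shards)
    totals = {n: sum(p[n] for p in partials) for n in QUERIES}
    print(totals)
```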
Oh, Yunjung; Park, Junhong; Lee, Jong Tae; Seo, Jigu; Park, Sungwook
2017-10-01
The purpose of this study is to investigate possible improvements in ICEVs by implementing fuzzy logic-based parallel hard-type power hybrid systems. Two types of conventional ICEVs (gasoline and diesel) and two types of HEVs (gasoline-electric, diesel-electric) were generated using vehicle and powertrain simulation tools and a Matlab-Simulink application programming interface. For the gasoline and gasoline-electric HEV vehicles, the prediction accuracy of four types of LDV models was validated by comparative analysis against chassis dynamometer and OBD test data. The predicted results show strong correlation with the test data. The operating points of the internal combustion engines and electric motors are well controlled in the high-efficiency region, and battery SOC was well controlled within ±1.6%. For the diesel case, however, we generated a virtual diesel-electric HEV because no vehicle was available with engine and vehicle specifications similar to the ICE vehicle. Using a fuzzy logic-based parallel hybrid system in conventional ICEVs demonstrated that HEVs showed superior performance in terms of fuel consumption and CO2 emission in most driving modes. Copyright © 2017 Elsevier B.V. All rights reserved.
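A minimal sketch of a fuzzy-logic torque split for a parallel HEV is shown below, assuming triangular membership functions and a four-rule base; the shapes and rules are illustrative, not the controller used in the study.

```python
# A minimal sketch of fuzzy-logic torque splitting for a parallel HEV,
# assuming triangular membership functions; rules and shapes are
# illustrative placeholders.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def motor_fraction(soc, demand):
    """Fraction of demanded torque assigned to the electric motor."""
    low_soc, high_soc = tri(soc, 0.0, 0.2, 0.5), tri(soc, 0.4, 0.8, 1.01)
    low_dem, high_dem = tri(demand, -0.01, 0.0, 0.5), tri(demand, 0.4, 1.0, 1.01)
    # Rules: high SOC & low demand -> mostly motor; low SOC -> mostly engine.
    rules = [(min(high_soc, low_dem), 0.9),
             (min(high_soc, high_dem), 0.5),
             (min(low_soc, low_dem), 0.2),
             (min(low_soc, high_dem), 0.0)]
    num = sum(w * out for w, out in rules)     # weighted-average defuzzification
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0

for soc, dem in [(0.9, 0.1), (0.9, 0.8), (0.15, 0.1), (0.15, 0.9)]:
    print(f"SOC={soc:.2f} demand={dem:.2f} -> motor share {motor_fraction(soc, dem):.2f}")
```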
NASA Technical Reports Server (NTRS)
Kopasakis, George; Connolly, Joseph W.; Cheng, Larry
2015-01-01
This paper covers the development of stage-by-stage and parallel flow path compressor modeling approaches for a Variable Cycle Engine. The stage-by-stage compressor modeling approach is an extension of a technique for lumped-volume dynamics and performance characteristic modeling. It was developed to improve the accuracy of axial compressor dynamics over lumped-volume dynamics modeling. The stage-by-stage compressor model presented here is formulated into a parallel flow path model that includes both axial and rotational dynamics. This is done to enable the study of compressor and propulsion system dynamic performance under flow distortion conditions. The approaches utilized here are generic and should be applicable to the modeling of any axial flow compressor design for accurate time domain simulations. The objective of this work is as follows: given the parameters describing the conditions of atmospheric disturbances, and utilizing the derived formulations, directly compute the transfer function poles and zeros describing these disturbances for acoustic velocity, temperature, pressure, and density. Time domain simulations of representative atmospheric turbulence can then be developed by utilizing these computed transfer functions together with the disturbance frequencies of interest.
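The final step described above, turning computed poles and zeros into a time-domain disturbance signal, can be sketched with SciPy; the pole/zero values below are placeholders, not the paper's derived formulations.

```python
# A minimal sketch: given poles and zeros for an atmospheric-disturbance
# transfer function, produce a time-domain signal. Values are placeholders.
import numpy as np
from scipy import signal

# Hypothetical low-pass disturbance shaping filter: one zero, two poles.
zeros, poles, gain = [-50.0], [-5.0, -20.0], 100.0
tf = signal.ZerosPolesGain(zeros, poles, gain).to_tf()

t = np.linspace(0.0, 2.0, 2001)
white = np.random.default_rng(0).standard_normal(t.size)  # driving noise
_, y, _ = signal.lsim((tf.num, tf.den), U=white, T=t)     # filtered output

print("simulated disturbance: mean %.3f, std %.3f" % (y.mean(), y.std()))
```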
Parallel hyperbolic PDE simulation on clusters: Cell versus GPU
NASA Astrophysics Data System (ADS)
Rostrup, Scott; De Sterck, Hans
2010-12-01
Increasingly, high-performance computing is looking towards data-parallel computational devices to enhance computational performance. Two technologies that have received significant attention are IBM's Cell Processor and NVIDIA's CUDA programming model for graphics processing unit (GPU) computing. In this paper we investigate the acceleration of parallel hyperbolic partial differential equation simulation on structured grids with explicit time integration on clusters with Cell and GPU backends. The message passing interface (MPI) is used for communication between nodes at the coarsest level of parallelism. Optimizations of the simulation code at the several finer levels of parallelism that the data-parallel devices provide are described in terms of data layout, data flow and data-parallel instructions. Optimized Cell and GPU performance are compared with reference code performance on a single x86 central processing unit (CPU) core in single and double precision. We further compare the CPU, Cell and GPU platforms on a chip-to-chip basis, and compare performance on single cluster nodes with two CPUs, two Cell processors or two GPUs in a shared memory configuration (without MPI). We finally compare performance on clusters with 32 CPUs, 32 Cell processors, and 32 GPUs using MPI. Our GPU cluster results use NVIDIA Tesla GPUs with GT200 architecture, but some preliminary results on recently introduced NVIDIA GPUs with the next-generation Fermi architecture are also included. This paper provides computational scientists and engineers who are considering porting their codes to accelerator environments with insight into how structured grid based explicit algorithms can be optimized for clusters with Cell and GPU accelerators. It also provides insight into the speed-up that may be gained on current and future accelerator architectures for this class of applications.
Program summary. Program title: SWsolver. Catalogue identifier: AEGY_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGY_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GPL v3. No. of lines in distributed program, including test data, etc.: 59 168. No. of bytes in distributed program, including test data, etc.: 453 409. Distribution format: tar.gz. Programming language: C, CUDA. Computer: parallel computing clusters; individual compute nodes may consist of x86 CPU, Cell processor, or x86 CPU with attached NVIDIA GPU accelerator. Operating system: Linux. Has the code been vectorised or parallelized?: Yes; tested on 1-128 x86 CPU cores, 1-32 Cell processors, and 1-32 NVIDIA GPUs. RAM: tested on problems requiring up to 4 GB per compute node. Classification: 12. External routines: MPI, CUDA, IBM Cell SDK. Nature of problem: MPI-parallel simulation of the shallow water equations using a high-resolution 2D hyperbolic equation solver on regular Cartesian grids for x86 CPU, Cell processor, and NVIDIA GPU using CUDA. Solution method: SWsolver provides three implementations of a high-resolution 2D shallow water equation solver on regular Cartesian grids, for CPU, Cell processor, and NVIDIA GPU; each implementation uses MPI to divide work across a parallel computing cluster. Additional comments: sub-program numdiff is used for the test run.
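In the spirit of SWsolver's scheme, a 1-D shallow-water step with a Lax-Friedrichs update can be written in a few lines of numpy; the grid, time step, and dam-break initial state are illustrative, and the production code is 2-D C/CUDA rather than Python.

```python
# A minimal 1-D shallow-water step in the spirit of SWsolver's 2-D
# scheme, written in numpy; grid size, time step, and initial dam-break
# state are illustrative. g is gravitational acceleration.
import numpy as np

g, n, dx, dt = 9.81, 200, 1.0, 0.02
h = np.where(np.arange(n) < n // 2, 2.0, 1.0)    # dam-break depth profile
hu = np.zeros(n)                                  # momentum h*u

def flux(h, hu):
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h * h])

for _ in range(100):
    q = np.array([h, hu])
    f = flux(h, hu)
    # Lax-Friedrichs update on interior cells; boundaries stay fixed.
    q_new = 0.5 * (q[:, :-2] + q[:, 2:]) - dt / (2 * dx) * (f[:, 2:] - f[:, :-2])
    h[1:-1], hu[1:-1] = q_new[0], q_new[1]

print(f"max depth {h.max():.3f}, min depth {h.min():.3f}")
```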
Dose dependence of helium bubble formation in nano-engineered SiC at 700 °C
Chen, Chien -Hung; Zhang, Yanwen; Wang, Yongqiang; ...
2016-02-03
Knowledge of radiation-induced helium bubble nucleation and growth in SiC is essential for applications in fusion and fission environments. Here we report the evolution of microstructure in nano-engineered (NE) 3C SiC, pre-implanted with helium, under heavy ion irradiation at 700 °C up to doses of 30 displacements per atom (dpa). Elastic recoil detection analysis confirms that the as-implanted helium depth profile does not change under irradiation to 30 dpa at 700 °C. While the helium bubble size distribution becomes narrower with increasing dose, the average size of bubbles remains unchanged and the density of bubbles increases somewhat with dose. These results are consistent with a long helium bubble incubation process under continued irradiation at 700 °C up to 30 dpa, similar to that reported under dual and triple beam irradiation at much higher temperatures. The formation of bubbles at this low temperature is enhanced by the nano-layered stacking fault structure in the NE SiC, which enhances point defect mobility parallel to the stacking faults. Here, this stacking fault structure is stable at 700 °C up to 30 dpa and suppresses the formation of dislocation loops normally observed under these irradiation conditions.
Scolnik, Pablo A
2009-01-01
The twenty-two monoclonal antibodies (mAbs) currently marketed in the U.S. have captured almost half of the top-20 U.S. therapeutic biotechnology sales for 2007. Eight of these products have annual sales each of more than $1 B, were developed in the relatively short average period of six years, qualified for FDA programs designed to accelerate drug approval, and their cost has been reimbursed liberally by payers. With growth of the product class driven primarily by advancements in protein engineering and the low probability of generic threats, mAbs are now the largest class of biological therapies under development. The high cost of these drugs and the lack of generic competition conflict with a financially stressed health system, setting reimbursement by payers as the major limiting factor to growth. Advances in mAb engineering are likely to result in more effective mAb drugs and an expansion of the therapeutic indications covered by the class. The parallel development of biomarkers for identifying the patient subpopulations most likely to respond to treatment may lead to a more cost-effective use of these drugs. To achieve the success of the current top-tier mAbs, companies developing new mAb products must adapt to a significantly more challenging commercial environment.
EMERGE: Engineered Materials that Create Environments for ReGeneration via Electric Field
2016-10-01
Report front matter; award number W81XWH-14-1-0542, reported 23 Sep 2016. Cited work: "Recruitment of multiple cell lines by collagen-synthetic copolymer matrices in corneal regeneration," Biomaterials (2004).
Programming parallel architectures: The BLAZE family of languages
NASA Technical Reports Server (NTRS)
Mehrotra, Piyush
1988-01-01
Programming multiprocessor architectures is a critical research issue. An overview is given of the various approaches to programming these architectures that are currently being explored. It is argued that two of these approaches, interactive programming environments and functional parallel languages, are particularly attractive since they remove much of the burden of exploiting parallel architectures from the user. Also described is recent work by the author in the design of parallel languages. Research on languages for both shared and nonshared memory multiprocessors is described, as well as the relations of this work to other current language research projects.
Rocket Engine Nozzle Side Load Transient Analysis Methodology: A Practical Approach
NASA Technical Reports Server (NTRS)
Shi, John J.
2005-01-01
At sea level, a phenomenon common to all rocket engines, especially for a highly over-expanded nozzle, during ignition and shutdown is flow separation as the plume fills and empties the nozzle. Since the flow separates randomly, it generates side loads, i.e., non-axial forces. Since rocket engines are designed to produce axial thrust to power the vehicle, it is not desirable for them to be excited by non-axial input forcing functions. In the past, several engine failures were attributed to side loads. During the development stage, in order to design and size the rocket engine components and to reduce risk, the local dynamic environments as well as the dynamic interface loads have to be defined. The methodology developed here is a way to determine the peak loads and shock environments for new engine components. In the past it was not feasible to predict the shock environments, e.g., shock response spectra, from one engine to another, because they are not scalable. This problem has now been resolved, and the shock environments can be defined in the early stage of new engine development. Additional information is included in the original extended abstract.
Development of natural gas rotary engines
NASA Astrophysics Data System (ADS)
Mack, J. R.
1991-08-01
Development of natural gas-fueled rotary engines was pursued on the parallel paths of converting Mazda automotive engines and of establishing technology and demonstrating a test model of a larger John Deere Technologies Incorporated (JDTI) rotary engine with a power capability of 250 HP per power section, for future production of multi-rotor engines with power ratings of 250, 500, and 1000 HP and upward. Mazda engines were converted to natural gas and characterized in the laboratory, followed by nearly 12,000 hours of testing in three different field installations. To develop technology for the larger JDTI engine, laboratory and engine materials testing was accomplished. Extensive combustion analysis computer codes were modified, verified, and utilized to predict engine performance, to guide parameters for actual engine design, and to identify further improvements. A single-rotor test engine of 5.8 liter displacement was designed for natural gas operation based on the JDTI 580 engine series. This engine was built and tested. It ran well and essentially achieved predicted performance. Lean combustion and low NOx emission were demonstrated.
40 CFR 87.60 - Testing engines.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 21 2013-07-01 2013-07-01 false Testing engines. 87.60 Section 87.60 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF AIR... gearbox-mounted components required to drive aircraft systems. (d) Test engines must reach a steady...
40 CFR 87.60 - Testing engines.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 20 2014-07-01 2013-07-01 true Testing engines. 87.60 Section 87.60 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF AIR... gearbox-mounted components required to drive aircraft systems. (d) Test engines must reach a steady...
40 CFR 87.62 - Test procedure (propulsion engines).
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 20 2011-07-01 2011-07-01 false Test procedure (propulsion engines). 87.62 Section 87.62 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... Exhaust Gaseous Emissions (Aircraft and Aircraft Gas Turbine Engines) § 87.62 Test procedure (propulsion...
40 CFR 87.62 - Test procedure (propulsion engines).
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Test procedure (propulsion engines). 87.62 Section 87.62 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... Exhaust Gaseous Emissions (Aircraft and Aircraft Gas Turbine Engines) § 87.62 Test procedure (propulsion...
40 CFR 87.62 - Test procedure (propulsion engines).
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 21 2012-07-01 2012-07-01 false Test procedure (propulsion engines). 87.62 Section 87.62 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) Definitions. Test Procedures § 87.62 Test procedure (propulsion engines). Link to an...
40 CFR 1042.640 - Special provisions for branded engines.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Special provisions for branded engines. 1042.640 Section 1042.640 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... Special Compliance Provisions § 1042.640 Special provisions for branded engines. The following provisions...
Design of object-oriented distributed simulation classes
NASA Technical Reports Server (NTRS)
Schoeffler, James D. (Principal Investigator)
1995-01-01
Distributed simulation of aircraft engines as part of a computer-aided design package is being developed by NASA Lewis Research Center for the aircraft industry. The project is called NPSS, an acronym for 'Numerical Propulsion Simulation System'. NPSS is a flexible object-oriented simulation of aircraft engines requiring high computing speed. It is desirable to run the simulation on a distributed computer system with multiple processors executing portions of the simulation in parallel. The purpose of this research was to investigate object-oriented structures such that individual objects could be distributed. The set of classes used in the simulation must be designed to facilitate parallel computation. Since the portions of the simulation carried out in parallel are not independent of one another, there is a need for communication among the parallel executing processors, which in turn implies a need for their synchronization. Communication and synchronization can lead to decreased throughput as parallel processors wait for data or synchronization signals from other processors. As a result of this research, the following have been accomplished. The design and implementation of a set of simulation classes which result in a distributed simulation control program have been completed. The design is based upon the MIT 'Actor' model of a concurrent object and uses 'connectors' to structure dynamic connections between simulation components. Connectors may be dynamically created according to the distribution of objects among machines at execution time without any programming changes. Measurements of the basic performance have been carried out, with the result that the communication overhead of the distributed design is swamped by the computation time of modules unless modules have very short execution times per iteration or time step. An analytical performance model based upon queuing network theory has been designed and implemented. Its application to realistic configurations has not been carried out.
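The 'connector' idea can be sketched with in-process queues standing in for distributed messaging; the component names and payloads below are illustrative, not NPSS classes.

```python
# A minimal sketch of the connector pattern: a dynamically created link
# carries boundary-condition data between two concurrently running
# simulation components. Queues stand in for distributed messaging.
from queue import Queue
import threading

class Connector:
    """Dynamically created link carrying data between two components."""
    def __init__(self):
        self.q = Queue()
    def send(self, value):
        self.q.put(value)
    def receive(self):
        return self.q.get()

def compressor(out: Connector):
    for step in range(3):
        out.send({"step": step, "exit_pressure": 1.5 + 0.1 * step})

def combustor(inp: Connector):
    for _ in range(3):
        bc = inp.receive()            # boundary condition from upstream
        print("combustor got", bc)

link = Connector()                    # wired at run time, no code changes
threads = [threading.Thread(target=compressor, args=(link,)),
           threading.Thread(target=combustor, args=(link,))]
for th in threads: th.start()
for th in threads: th.join()
```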
NASA Astrophysics Data System (ADS)
de Faria Scheidt, Rafael; Vilain, Patrícia; Dantas, M. A. R.
2014-10-01
Petroleum reservoir engineering is a complex and interesting field that requires a large amount of computational resources to achieve successful results. Usually, software environments for this field are developed without accounting for the interactions and extensibility required by reservoir engineers. In this paper, we present a research work characterized by the design and implementation of a software product line model for a real distributed reservoir engineering environment. Experimental results indicate the successful utilization of this approach for the design of a distributed software architecture. In addition, all components of the proposal provided greater visibility of the organization and processes for the reservoir engineers.
Aesthetics and ethics in engineering: insights from Polanyi.
Dias, Priyan
2011-06-01
Polanyi insisted that scientific knowledge was intensely personal in nature, though held with universal intent. His insights regarding the personal values of beauty and morality in science are first enunciated. These are then explored for their relevance to engineering. It is shown that the practice of engineering is also governed by aesthetics and ethics. For example, Polanyi's three spheres of morality in science, those of the individual scientist, the scientific community, and the wider society, have parallel entities in engineering. The existence of shared values in engineering is also demonstrated: in aesthetics, through an example that shows convergence of practitioner opinion to solutions that represent accepted models of aesthetics; and in ethics, through the recognition that many professional engineering institutions hold that the safety of the public supersedes the interests of the client. Such professional consensus can be seen as justification for studying engineering aesthetics and ethics as inter-subjective disciplines.
Programmable stream prefetch with resource optimization
Boyle, Peter; Christ, Norman; Gara, Alan; Mawhinney, Robert; Ohmacht, Martin; Sugavanam, Krishnan
2013-01-08
A stream prefetch engine performs data retrieval in a parallel computing system. The engine receives a load request from at least one processor. The engine evaluates whether a first memory address requested in the load request is present and valid in a table. If the first memory address is present and valid in the table, the engine checks whether valid data corresponding to the first memory address exists in an array. If there is not yet valid data corresponding to the first memory address in the array, the engine increments the prefetching depth of the first stream that the first memory address belongs to and fetches the cache line associated with the first memory address from the at least one cache memory device. The engine determines whether prefetching of additional data is needed for the first stream within its prefetching depth. The engine prefetches the additional data if the prefetching is needed.
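A minimal sketch of the claimed prefetch logic follows, with the stream table and prefetch array reduced to Python dictionaries; the line size, stream-identification rule, and depth cap are illustrative assumptions.

```python
# A minimal sketch of stream prefetching: on a miss, deepen the stream
# and fetch ahead within the new depth. Line size, stream identification,
# and the depth cap are illustrative placeholders.
LINE = 64

class StreamPrefetcher:
    def __init__(self, max_depth=8):
        self.streams = {}      # stream id -> current prefetch depth
        self.array = {}        # address -> valid prefetched data
        self.max_depth = max_depth

    def load(self, addr):
        sid = addr // (LINE * 16)              # crude stream identification
        if addr in self.array:                 # valid data already present
            return self.array[addr]
        depth = self.streams.get(sid, 1)
        self.streams[sid] = min(depth + 1, self.max_depth)  # deepen stream
        self.array[addr] = f"line@{addr}"      # fetch the demanded line
        for i in range(1, self.streams[sid]):  # prefetch ahead if needed
            nxt = addr + i * LINE
            self.array.setdefault(nxt, f"line@{nxt}")
        return self.array[addr]

pf = StreamPrefetcher()
for a in range(0, 64 * 6, 64):                 # a simple ascending stream
    pf.load(a)
print("resident lines:", len(pf.array), "stream depths:", pf.streams)
```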
Distributed computing feasibility in a non-dedicated homogeneous distributed system
NASA Technical Reports Server (NTRS)
Leutenegger, Scott T.; Sun, Xian-He
1993-01-01
The low cost and availability of clusters of workstations have led researchers to re-explore distributed computing using independent workstations. This approach may provide better cost/performance than tightly coupled multiprocessors. In practice, this approach often utilizes wasted cycles to run parallel jobs. The feasibility of such a non-dedicated parallel processing environment, assuming workstation processes have preemptive priority over parallel tasks, is addressed. An analytical model is developed to predict parallel job response times. Our model provides insight into how significantly workstation owner interference degrades parallel program performance. A new term, task ratio, which relates the parallel task demand to the mean service demand of nonparallel workstation processes, is introduced. It is proposed that the task ratio is a useful metric for determining how large the demand of a parallel application must be in order to make efficient use of a non-dedicated distributed system.
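A back-of-the-envelope sketch of the task-ratio bookkeeping is given below; the slowdown formula is a simple assumption (parallel tasks receive only the CPU share left by preemptive owner processes), not the paper's analytical queueing model.

```python
# A numeric sketch of the task-ratio idea. The response-time estimate is
# a back-of-the-envelope assumption, not the paper's queueing analysis.
def task_ratio(parallel_task_demand, mean_owner_demand):
    """Parallel task demand relative to the mean owner-process demand."""
    return parallel_task_demand / mean_owner_demand

def estimated_response(parallel_task_demand, owner_utilization):
    """Assume the task only gets the CPU share left by preemptive owners."""
    return parallel_task_demand / (1.0 - owner_utilization)

for demand in (0.1, 1.0, 10.0):
    r = task_ratio(demand, mean_owner_demand=1.0)
    t = estimated_response(demand, owner_utilization=0.3)
    print(f"task ratio {r:5.1f} -> est. response {t:6.2f}s (dedicated: {demand}s)")
```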
Rocket Engine Numerical Simulator (RENS)
NASA Technical Reports Server (NTRS)
Davidian, Kenneth O.
1997-01-01
Work is being done at three universities to help today's NASA engineers use the knowledge and experience of their Apollo-era predecessors in designing liquid rocket engines. Ground-breaking work is being done in important subject areas to create a prototype of the most important functions for the Rocket Engine Numerical Simulator (RENS). The goal of RENS is to develop an interactive, real-time application that engineers can utilize for comprehensive preliminary propulsion system design functions. RENS will employ computer science and artificial intelligence research in knowledge acquisition, computer code parallelization and objectification, expert system architecture design, and object-oriented programming. In 1995, a 3-year grant from the NASA Lewis Research Center was awarded to Dr. Douglas Moreman and Dr. John Dyer of Southern University at Baton Rouge, Louisiana, to begin acquiring knowledge in liquid rocket propulsion systems. Resources of the University of West Florida in Pensacola were enlisted to begin the process of eliciting knowledge from senior NASA engineers who are recognized experts in liquid rocket engine propulsion systems. Dr. John Coffey of the University of West Florida is utilizing his expertise in interviewing and concept mapping techniques to encode, classify, and integrate information obtained through personal interviews. The expertise extracted from the NASA engineers has been put into concept maps with supporting textual, audio, graphic, and video material. A fundamental concept map was delivered by the end of the first year of work, and the development of maps containing increasing amounts of information is continuing. In 1996, the Southern University/University of West Florida team conducted a 4-day group interview with a panel of five experts to discuss failures of the RL10 rocket engine in conjunction with the Centaur launch vehicle. The discussion was recorded on video and audio tape. Transcriptions of the entire proceedings and an abbreviated video presentation of the discussion highlights are under development. Also in 1996, two additional 3-year grants were awarded to conduct parallel efforts that would complement the work being done by Southern University and the University of West Florida. Dr. Prem Bhalla of Jackson State University in Jackson, Mississippi, is developing the architectural framework for RENS. By employing the Rational Rose tool and Booch Object Oriented Programming (OOP) technology, Dr. Bhalla is developing the basic structure of RENS by identifying and encoding propulsion system components, their individual characteristics, and cross-functionality and dependencies. Dr. Ruknet Cezzar of Hampton University, located in Hampton, Virginia, began working on the parallelization and objectification of rocket engine analysis and design codes. Dr. Cezzar will use the Turbo C++ OOP language to translate important liquid rocket engine computer codes from FORTRAN and permit their inclusion into the RENS framework being developed at Jackson State University. The Southern University/University of West Florida grant was extended by 1 year to coordinate the conclusion of all three efforts in 1999.
Semiannual report, 1 April - 30 September 1991
NASA Technical Reports Server (NTRS)
1991-01-01
The major categories of the current Institute for Computer Applications in Science and Engineering (ICASE) research program are: (1) numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; (2) control and parameter identification problems, with emphasis on effective numerical methods; (3) computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and (4) computer systems and software for parallel computers. Research in these areas is discussed.
Assessment of Self-Efficacy in Systems Engineering as an Indicator of Competency Level Achievement
2014-06-01
Front-matter residue: contents entry "C. Research on Self-Efficacy in Information Technology: A Parallel to Systems Engineering"; "1. Stakeholder Analysis..."; acronym list (EV: expectancy value theory; FA: factor analysis; IT: information technology; KSA: knowledge, skills, abilities; MIS: management information systems; NPS: ...). One item in particular reflected statistically significant pre- and post-survey results at p<.001: the student's ability to pick a technology for
Dynamic Imbalance Would Counter Offcenter Thrust
NASA Technical Reports Server (NTRS)
Mccanna, Jason
1994-01-01
Dynamic imbalance generated by offcenter thrust on rotating body eliminated by shifting some of mass of body to generate opposing dynamic imbalance. Technique proposed originally for spacecraft including massive crew module connected via long, lightweight intermediate structure to massive engine module, such that artificial gravitation in crew module generated by rotating spacecraft around axis parallel to thrust generated by engine. Also applicable to dynamic balancing of rotating terrestrial equipment to which offcenter forces applied.
A Higher-Order Trapezoidal Vector Vortex Panel for Subsonic Flow.
1980-12-01
Master's thesis presented to the faculty of the School of Engineering of the Air Force Institute of Technology, Air University, in partial fulfillment of the requirements for the degree of Master of Science, by Ronald E. Luther, B.S., Capt USAF, Graduate Aeronautical Engineering, December 1980; approved for public... The method also permits analysis of cranked leading and/or trailing edges. The root edge, tip edge, and all chordwise boundaries are parallel to the x-axis
40 CFR 1045.601 - What compliance provisions apply to these engines?
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 33 2011-07-01 2011-07-01 false What compliance provisions apply to these engines? 1045.601 Section 1045.601 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM SPARK-IGNITION PROPULSION MARINE ENGINES AND...
40 CFR 1045.230 - How do I select engine families?
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 34 2012-07-01 2012-07-01 false How do I select engine families? 1045.230 Section 1045.230 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM SPARK-IGNITION PROPULSION MARINE ENGINES AND VESSELS Certifying...
40 CFR 1045.601 - What compliance provisions apply to these engines?
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 34 2012-07-01 2012-07-01 false What compliance provisions apply to these engines? 1045.601 Section 1045.601 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM SPARK-IGNITION PROPULSION MARINE ENGINES AND...
40 CFR 1045.230 - How do I select engine families?
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 33 2011-07-01 2011-07-01 false How do I select engine families? 1045.230 Section 1045.230 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM SPARK-IGNITION PROPULSION MARINE ENGINES AND VESSELS Certifying...
40 CFR 1045.110 - How must my engines diagnose malfunctions?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 32 2010-07-01 2010-07-01 false How must my engines diagnose malfunctions? 1045.110 Section 1045.110 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM SPARK-IGNITION PROPULSION MARINE ENGINES AND VESSELS...
40 CFR 1045.640 - What special provisions apply to branded engines?
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 33 2014-07-01 2014-07-01 false What special provisions apply to branded engines? 1045.640 Section 1045.640 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM SPARK-IGNITION PROPULSION MARINE ENGINES AND...
40 CFR 1045.110 - How must my engines diagnose malfunctions?
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 34 2013-07-01 2013-07-01 false How must my engines diagnose malfunctions? 1045.110 Section 1045.110 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM SPARK-IGNITION PROPULSION MARINE ENGINES AND VESSELS...
40 CFR 1045.230 - How do I select engine families?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 32 2010-07-01 2010-07-01 false How do I select engine families? 1045.230 Section 1045.230 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM SPARK-IGNITION PROPULSION MARINE ENGINES AND VESSELS Certifying...
40 CFR 1045.640 - What special provisions apply to branded engines?
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 33 2011-07-01 2011-07-01 false What special provisions apply to branded engines? 1045.640 Section 1045.640 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM SPARK-IGNITION PROPULSION MARINE ENGINES AND...
40 CFR 1045.601 - What compliance provisions apply to these engines?
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 34 2013-07-01 2013-07-01 false What compliance provisions apply to these engines? 1045.601 Section 1045.601 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM SPARK-IGNITION PROPULSION MARINE ENGINES AND...
40 CFR 1045.601 - What compliance provisions apply to these engines?
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 33 2014-07-01 2014-07-01 false What compliance provisions apply to these engines? 1045.601 Section 1045.601 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM SPARK-IGNITION PROPULSION MARINE ENGINES AND...
40 CFR 1045.601 - What compliance provisions apply to these engines?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 32 2010-07-01 2010-07-01 false What compliance provisions apply to these engines? 1045.601 Section 1045.601 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM SPARK-IGNITION PROPULSION MARINE ENGINES AND...
40 CFR 1045.230 - How do I select engine families?
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 33 2014-07-01 2014-07-01 false How do I select engine families? 1045.230 Section 1045.230 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM SPARK-IGNITION PROPULSION MARINE ENGINES AND VESSELS Certifying...
40 CFR 1045.230 - How do I select engine families?
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 34 2013-07-01 2013-07-01 false How do I select engine families? 1045.230 Section 1045.230 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM SPARK-IGNITION PROPULSION MARINE ENGINES AND VESSELS Certifying...
40 CFR 1045.110 - How must my engines diagnose malfunctions?
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 33 2014-07-01 2014-07-01 false How must my engines diagnose malfunctions? 1045.110 Section 1045.110 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM SPARK-IGNITION PROPULSION MARINE ENGINES AND VESSELS...
40 CFR 1045.110 - How must my engines diagnose malfunctions?
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 34 2012-07-01 2012-07-01 false How must my engines diagnose malfunctions? 1045.110 Section 1045.110 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM SPARK-IGNITION PROPULSION MARINE ENGINES AND VESSELS...
40 CFR 1045.640 - What special provisions apply to branded engines?
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 34 2012-07-01 2012-07-01 false What special provisions apply to branded engines? 1045.640 Section 1045.640 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM SPARK-IGNITION PROPULSION MARINE ENGINES AND...
40 CFR 1045.110 - How must my engines diagnose malfunctions?
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 33 2011-07-01 2011-07-01 false How must my engines diagnose malfunctions? 1045.110 Section 1045.110 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM SPARK-IGNITION PROPULSION MARINE ENGINES AND VESSELS...
40 CFR 1045.640 - What special provisions apply to branded engines?
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 34 2013-07-01 2013-07-01 false What special provisions apply to branded engines? 1045.640 Section 1045.640 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM SPARK-IGNITION PROPULSION MARINE ENGINES AND...
40 CFR 90.309 - Engine intake air temperature measurement.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Engine intake air temperature measurement. 90.309 Section 90.309 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... Emission Test Equipment Provisions § 90.309 Engine intake air temperature measurement. (a) The measurement...
Simulation/Emulation Techniques: Compressing Schedules With Parallel (HW/SW) Development
NASA Technical Reports Server (NTRS)
Mangieri, Mark L.; Hoang, June
2014-01-01
NASA has always been in the business of balancing new technologies and techniques to achieve human space travel objectives. NASA's Kedalion engineering analysis lab has been validating and using many contemporary avionics HW/SW development and integration techniques, which represent new paradigms to NASA's heritage culture. Kedalion has validated many of the Orion HW/SW engineering techniques borrowed from the adjacent commercial aircraft avionics solution space, inserting new techniques and skills into the Multi-Purpose Crew Vehicle (MPCV) Orion program. Using contemporary agile techniques, commercial-off-the-shelf (COTS) products, early rapid prototyping, in-house expertise and tools, and extensive use of simulators and emulators, NASA has achieved cost-effective paradigms that are currently serving the Orion program effectively. Elements of long-lead custom hardware on the Orion program have necessitated early use of simulators and emulators in advance of deliverable hardware to achieve parallel design and development on a compressed schedule.
High-performance parallel analysis of coupled problems for aircraft propulsion
NASA Technical Reports Server (NTRS)
Felippa, C. A.; Farhat, C.; Lanteri, S.; Maman, N.; Piperno, S.; Gumaste, U.
1994-01-01
This research program deals with the application of high-performance computing methods to the analysis of complete jet engines. We initiated this program by applying the two-dimensional parallel aeroelastic codes to the interior gas flow problem of a bypass jet engine. The fluid mesh generation, domain decomposition, and solution capabilities were successfully tested. We then focused attention on methodology for the partitioned analysis of the interaction of the gas flow with a flexible structure and with the fluid mesh motion that results from these structural displacements. This is treated by a new arbitrary Lagrangian-Eulerian (ALE) technique that models the fluid mesh motion as that of a fictitious mass-spring network. New partitioned analysis procedures to treat this coupled three-component problem are developed. These procedures involve delayed corrections and subcycling. Preliminary results on stability, accuracy, and MPP computational efficiency are reported.
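The fictitious mass-spring idea for fluid mesh motion can be sketched in 1-D: interior nodes relax toward the spring-weighted mean of their neighbors after a boundary (structural) displacement. Equal spring stiffnesses are assumed here for simplicity; practical codes typically weight stiffness by inverse edge length.

```python
# A minimal 1-D sketch of spring-analogy mesh motion: after the structure
# moves the right boundary, interior nodes relax toward the mean of their
# neighbors. Equal spring stiffness is an assumption made for simplicity.
import numpy as np

x = np.linspace(0.0, 1.0, 11)       # initial mesh nodes
x[-1] += 0.05                        # structure moves the right boundary

for _ in range(500):                 # Jacobi relaxation of the spring net
    x[1:-1] = 0.5 * (x[:-2] + x[2:])  # equal-stiffness springs

print("deformed mesh:", np.round(x, 3))
```

After relaxation the boundary displacement is distributed smoothly over the interior nodes, which is exactly what the ALE scheme needs to keep the fluid mesh valid as the structure deforms.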
Plouchart, Diane; Guizard, Guillaume; Latrille, Eric
2018-01-01
Continuous cultures in chemostats have proven their value in microbiology, microbial ecology, systems biology and bioprocess engineering, among others. In these systems, microbial growth and ecosystem performance can be quantified under stable and defined environmental conditions. This is essential when linking microbial diversity to ecosystem function. Here, a new system to test this link in anaerobic, methanogenic microbial communities is introduced. Rigorously replicated experiments, or a suitable experimental design, typically require operating several chemostats in parallel. However, this is labor intensive, especially when measuring biogas production. Commercial solutions for multiplying reactors performing continuous anaerobic digestion exist, but they are expensive and use comparably large reactor volumes, requiring the preparation of substantial amounts of media. Here, a flexible Lab-scale Automated and Multiplexed Anaerobic Chemostat system (LAMACs) with a working volume of 200 mL is introduced. Sterile feeding, biomass wasting and pressure monitoring are automated. One module containing six reactors fits the typical dimensions of a lab bench. Thanks to automation, the time required for reactor operation and maintenance is reduced compared to traditional lab-scale systems. Several modules can be used together; parallel operation of 30 reactors has been demonstrated so far. The chemostats are autoclavable, and parameters such as reactor volume, flow rates and operating temperature can be freely set. The robustness of the system was tested in a two-month experiment in which three inocula in four replicates, i.e., twelve continuous digesters, were monitored. Statistically significant differences in biogas production between inocula were observed. In anaerobic digestion, biogas production, and consequently pressure development in a closed environment, is a proxy for ecosystem performance. The precision of the pressure measurement is thus crucial; maximum and minimum rates of gas production could be measured with the same precision. LAMACs is a tool that enables us to put into practice the often-demanded replication and rigorous testing in microbial ecology as well as bioprocess engineering. PMID:29518106
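The pressure-to-biogas conversion underlying these measurements follows directly from the ideal gas law, n = pV/(RT), applied to the headspace pressure rise between two readings. The sketch below uses made-up numbers, not the LAMACs calibration, to show the arithmetic.

R = 8.314  # gas constant, J mol^-1 K^-1

def biogas_rate(p1_pa, p2_pa, headspace_m3, temp_k, dt_h):
    """Moles of gas produced per hour, from the ideal gas law n = pV/(RT)
    applied to a headspace pressure rise in a closed reactor."""
    dn = (p2_pa - p1_pa) * headspace_m3 / (R * temp_k)
    return dn / dt_h

# Example: a 5 kPa rise in a 100 mL headspace at 37 degC over one hour
print(biogas_rate(101325.0, 106325.0, 1.0e-4, 310.15, 1.0))  # ~1.9e-4 mol/h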
NASA Astrophysics Data System (ADS)
Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.
2015-12-01
Powerful numerical codes capable of modeling complex coupled processes of physics and chemistry have been developed for predicting the fate of CO2 in reservoirs, as well as its potential impacts on groundwater and subsurface environments. However, they are often computationally demanding when solving highly nonlinear models at sufficient spatial and temporal resolution. Geological heterogeneity and uncertainties further increase the challenges in modeling work: two-phase flow simulations in heterogeneous media usually require much longer computational time than in homogeneous media, and uncertainties in reservoir properties may necessitate stochastic simulations with multiple realizations. Recently, massively parallel supercomputers with thousands of processors have become available to the scientific and engineering communities. Such supercomputers may attract attention from geoscientists and reservoir engineers for solving large, nonlinear models at higher resolution within a reasonable time. To make them a useful tool, however, several practical obstacles must be tackled so that general-purpose reservoir simulators can use large numbers of processors effectively. We have implemented massively parallel versions of two TOUGH2-family codes (the multiphase flow simulator TOUGH2 and the chemically reactive transport simulator TOUGHREACT) on two different types (vector and scalar) of supercomputers with a thousand to tens of thousands of processors. After completing the implementation and extensive tune-up on the supercomputers, computational performance was measured for three simulations with multimillion-cell grid models, including a simulation of the dissolution-diffusion-convection process, which requires high spatial and temporal resolution to capture the growth of small convective fingers of CO2-dissolved water into larger ones at reservoir scale. The performance measurements confirmed that both simulators exhibit excellent scalability, showing almost linear speedup with the number of processors up to more than ten thousand cores. This generally allows us to perform coupled multiphysics (THC) simulations on high-resolution geologic models with multimillion-cell grids in practical time (e.g., less than a second per time step).
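The scalability figures quoted here correspond to standard strong-scaling metrics. The following sketch, with placeholder timings rather than the paper's measurements, shows how speedup S(p) = t(p_ref)/t(p) and efficiency E(p) = S(p) * p_ref / p are computed relative to a reference core count.

def strong_scaling(timings, p_ref):
    """timings: dict {core count: wall time in s}; returns, per core
    count, speedup t(p_ref)/t(p) and efficiency speedup * p_ref / p."""
    t_ref = timings[p_ref]
    return {p: (t_ref / t, (t_ref / t) * p_ref / p)
            for p, t in sorted(timings.items())}

# Hypothetical wall times for one simulated time step
timings = {128: 40.0, 1024: 5.4, 10240: 0.62}
for p, (s, e) in strong_scaling(timings, 128).items():
    print(f"{p:6d} cores: speedup {s:7.1f}, efficiency {e:5.2f}")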
Natural Environments Definition for Design
NASA Technical Reports Server (NTRS)
Justh, H. L.; Altino, K. M.; Decker, R. K.; Koehler, H. M.; Leahy, F. B.; Minow, J. I.; Roberts, B. C.; Suggs, R. M.; Suggs, R. J.; White, P. W.;
2016-01-01
Planning for future National Aeronautics and Space Administration (NASA) missions will encompass a variety of operational and engineering activities that involve a multitude of issues, constraints, and influences derived from the natural environment. This Technical Memorandum (TM) presents a definition of the natural environment, i.e., a description in engineering handbook format of models and data specifically selected to support the architecture development, engineering design, and technology development for NASA's Exploration Systems Development (ESD) initiatives.
Further two-dimensional code development for Stirling space engine components
NASA Technical Reports Server (NTRS)
Ibrahim, Mounir; Tew, Roy C.; Dudenhoefer, James E.
1990-01-01
The development of multidimensional models of Stirling engine components is described. Two-dimensional parallel plate models of an engine regenerator and a cooler were used to study heat transfer under conditions of laminar, incompressible oscillating flow. Substantial differences in the nature of the temperature variations over the cycle were observed for the cooler as contrasted with the regenerator. When the two-dimensional cooler model was used to calculate a heat transfer coefficient, it yielded a very different result from that calculated using steady-flow correlations. Simulation results for the regenerator and the cooler are presented.
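The contrast with steady-flow correlations can be seen from the defining relation h(t) = q_w(t)/(T_wall - T_bulk(t)): in oscillating flow the temperature difference varies out of phase with the wall heat flux over the cycle and can cross zero, so a cycle-averaged coefficient behaves quite differently from a steady one. The sketch below uses synthetic signals, not the paper's simulation output.

import numpy as np

t = np.linspace(0.0, 1.0, 400)            # one cycle, normalized time
q_wall = 500.0 * np.sin(2 * np.pi * t)    # wall heat flux, W/m^2 (synthetic)
dT = 10.0 * np.sin(2 * np.pi * t - 0.6)   # wall-to-bulk dT, K, phase-shifted

# Mask near-zero temperature differences, where h is undefined
h_inst = q_wall / np.where(np.abs(dT) > 0.5, dT, np.nan)
h_cycle = np.nanmean(h_inst)              # crude cycle-averaged coefficient
print(f"cycle-averaged h ~ {h_cycle:.1f} W/m^2/K")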
NASA Astrophysics Data System (ADS)
Mizumoto, Ikuro; Tsunematsu, Junpei; Fujii, Seiya
2016-09-01
In this paper, a design method for an output feedback control system with a simple feedforward input is proposed for a combustion model of a diesel engine, based on the almost strictly positive real (ASPR) property of the controlled system. A parallel feedforward compensator (PFC) design scheme that renders the resulting augmented controlled system ASPR is also proposed, in order to design a stable output feedback control system for the considered combustion model. The effectiveness of the proposed method is confirmed through numerical simulations.
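The structure of such a PFC-augmented loop can be sketched in a few lines: the controller acts on the augmented output y_a = y + y_pfc, where the PFC is here taken as a simple first-order filter. The plant, gains, and feedforward term below are placeholders chosen only to make the loop runnable; they are not the paper's combustion model or design.

import numpy as np

def simulate(T=10.0, dt=1e-3, k=5.0, d=0.2, tau=0.05, u_ff=0.3):
    """Output feedback u = -k*(y + y_pfc) + u_ff on a placeholder
    second-order plant, with a first-order PFC tau*y_pfc' = -y_pfc + d*u."""
    A = np.array([[0.0, 1.0], [-4.0, -0.4]])   # placeholder plant dynamics
    B = np.array([0.0, 1.0])
    C = np.array([1.0, 0.0])
    x = np.zeros(2)
    y_pfc = 0.0
    ys = []
    for _ in range(int(T / dt)):
        y = C @ x
        u = -k * (y + y_pfc) + u_ff            # feedback on the augmented output
        x = x + dt * (A @ x + B * u)           # explicit Euler step
        y_pfc = y_pfc + dt * (-y_pfc + d * u) / tau
        ys.append(y)
    return np.array(ys)

print(simulate()[-1])   # the output settles near a constant set by u_ff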
Parallel volume ray-casting for unstructured-grid data on distributed-memory architectures
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu
1995-01-01
As computing technology continues to advance, computational modeling of scientific and engineering problems produces data of increasing complexity: large in size and unstructured in shape. Volume visualization of such data is a challenging problem. This paper proposes a distributed parallel solution that makes ray-casting volume rendering of unstructured-grid data practical. Both the data and the rendering process are distributed among processors. At each processor, ray-casting of local data is performed independently of the other processors. The global image compositing processes, which require interprocessor communication, are overlapped with the local ray-casting processes to achieve maximum parallel efficiency. This algorithm differs from previous ones in four ways: it is completely distributed, less view-dependent, reasonably scalable, and flexible. Without using dynamic load balancing, test results on the Intel Paragon using from two to 128 processors show, on average, about 60% parallel efficiency.
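The compositing step that merges each processor's partial ray results is the standard front-to-back "over" operator; a minimal sketch for one pixel follows. The data distribution, the overlap with local ray-casting, and the load-balancing behavior described in the abstract are not reproduced here.

import numpy as np

def composite_over(partials):
    """partials: (color, alpha) pairs for one pixel's ray segments,
    ordered front to back along the ray."""
    color, alpha = np.zeros(3), 0.0
    for c, a in partials:
        color += (1.0 - alpha) * a * np.asarray(c, dtype=float)
        alpha += (1.0 - alpha) * a
        if alpha > 0.999:                # early ray termination
            break
    return color, alpha

# Two partial segments, e.g. from two processors' subvolumes
print(composite_over([([1.0, 0.0, 0.0], 0.4), ([0.0, 0.0, 1.0], 0.5)]))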
Job Management Requirements for NAS Parallel Systems and Clusters
NASA Technical Reports Server (NTRS)
Saphir, William; Tanner, Leigh Ann; Traversat, Bernard
1995-01-01
A job management system is a critical component of a production supercomputing environment, permitting oversubscribed resources to be shared fairly and efficiently. Job management systems that were originally designed for traditional vector supercomputers are not appropriate for the distributed-memory parallel supercomputers that are becoming increasingly important in the high performance computing industry. Newer job management systems offer new functionality but do not solve fundamental problems. We address some of the main issues in resource allocation and job scheduling we have encountered on two parallel computers - a 160-node IBM SP2 and a cluster of 20 high performance workstations located at the Numerical Aerodynamic Simulation facility. We describe the requirements for resource allocation and job management that are necessary to provide a production supercomputing environment on these machines, prioritizing according to difficulty and importance, and advocating a return to fundamental issues.
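The space-sharing allocation problem the paper addresses can be illustrated with a toy queue: jobs request whole nodes, the head of the queue starts first-come-first-served, and smaller jobs may backfill into leftover nodes. This sketch is not the NAS systems' scheduler and ignores the reservations and run-time estimates a production scheduler would need.

def schedule(total_nodes, queue):
    """queue: (job_id, nodes_requested) pairs in arrival order; start
    each job whose request fits the free nodes, letting later small
    jobs backfill past a blocked head-of-queue job."""
    free = total_nodes
    started, waiting = [], []
    for job, need in queue:
        if need <= free:
            free -= need
            started.append(job)          # FCFS start, or backfill into the gap
        else:
            waiting.append(job)          # stays queued until nodes free up
    return started, waiting, free

# 160 nodes, as on the SP2 mentioned above; job sizes are made up
print(schedule(160, [("A", 100), ("B", 80), ("C", 32), ("D", 16)]))
# -> A starts, B waits, C and D backfill onto the remaining 60 nodes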
Schmideder, Andreas; Cremer, Johannes H; Weuster-Botz, Dirk
2016-11-01
In general, fed-batch processes are applied for recombinant protein production with Escherichia coli (E. coli). However, state-of-the-art methods for identifying suitable reaction conditions suffer from severe drawbacks: direct transfer of process information from parallel batch studies is often defective, and sequential fed-batch studies are time-consuming and cost-intensive. In this study, continuously operated stirred-tank reactors on a milliliter scale were applied to identify suitable reaction conditions for fed-batch processes. Isopropyl β-d-1-thiogalactopyranoside (IPTG) induction strategies were varied in parallel-operated stirred-tank bioreactors to study the effects on the continuous production of the recombinant protein photoactivatable mCherry (PAmCherry) with E. coli. The best-performing induction strategies were transferred from the continuous processes on a milliliter scale to liter-scale fed-batch processes. Inducing recombinant protein expression by dynamically increasing the IPTG concentration to 100 µM led to an increase in product concentration of 21% (8.4 g L⁻¹) compared to an established high-performance production process using the most frequently applied induction strategy, a single addition of 1000 µM IPTG. Thus, identifying feasible reaction conditions for fed-batch processes in parallel continuous studies on a milliliter scale was shown to be a powerful, novel method for accelerating bioprocess design in a cost-reducing manner. © 2016 American Institute of Chemical Engineers. Biotechnol. Prog., 32:1426-1435, 2016.
Space vehicle engine and heat shield environment review. Volume 1: Engineering analysis
NASA Technical Reports Server (NTRS)
Mcanelly, W. B.; Young, C. T. K.
1973-01-01
Methods for predicting the base heating characteristics of a multiple-rocket-engine installation are discussed. The environmental data are applied to the design of an adequate protection system for the engine components. The methods for predicting the base region thermal environment are categorized as: (1) scale model testing, (2) extrapolation of previous and related flight test results, and (3) semiempirical analytical techniques.
Using A Model-Based Systems Engineering Approach For Exploration Medical System Development
NASA Technical Reports Server (NTRS)
Hanson, A.; Mindock, J.; McGuire, K.; Reilly, J.; Cerro, J.; Othon, W.; Rubin, D.; Urbina, M.; Canga, M.
2017-01-01
NASA's Human Research Program's Exploration Medical Capabilities (ExMC) element is defining the medical system needs for exploration class missions. ExMC's Systems Engineering (SE) team will play a critical role in successful design and implementation of the medical system into exploration vehicles. The team's mission is to "Define, develop, validate, and manage the technical system design needed to implement exploration medical capabilities for Mars and test the design in a progression of proving grounds." Development of the medical system is being conducted in parallel with exploration mission architecture and vehicle design development. Successful implementation of the medical system in this environment will require a robust systems engineering approach to enable technical communication across communities and to create a common mental model of the emergent engineering and medical systems. Model-Based Systems Engineering (MBSE) improves shared understanding of system needs and constraints between stakeholders and offers a common language for analysis. The ExMC SE team is using MBSE techniques to define operational needs, decompose requirements and architecture, and identify medical capabilities needed to support human exploration. Systems Modeling Language (SysML) is the specific language the SE team is utilizing, within an MBSE approach, to model the medical system functional needs, requirements, and architecture. Modeling methods are being developed through the practice of MBSE within the team, and tools are being selected to support metadata exchange as integration points to other system models are identified. Use of MBSE is supporting the development of relationships across disciplines and NASA Centers to build trust and enable teamwork, enhance visibility of team goals, foster a culture of unbiased learning and serving, and be responsive to customer needs. The MBSE approach to medical system design offers a paradigm shift toward greater integration between the vehicle and the medical system, and it directly supports the transition of Earth-reliant ISS operations to the Earth-independent operations envisioned for Mars. Here, we describe the methods and approach to building this integrated model.
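SysML models are not executable code, but the traceability idea, requirements allocated to functions and functions to components, can be suggested with a tiny in-memory model; all element names below are hypothetical and are not drawn from the ExMC model.

from dataclasses import dataclass, field

@dataclass(eq=False)
class Element:
    name: str
    satisfies: list = field(default_factory=list)   # upstream elements

reqs = [Element("R1: monitor crew health"), Element("R2: stow medications")]
func = Element("F1: acquire vitals", satisfies=[reqs[0]])
comp = Element("C1: medical workstation", satisfies=[func])

# Completeness check: list requirements no function or component satisfies
covered = {e for elem in (func, comp) for e in elem.satisfies}
print([r.name for r in reqs if r not in covered])   # -> ['R2: stow medications']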