Science.gov

Sample records for advanced computational infrastructure

  1. Computational Infrastructure for Nuclear Astrophysics

    SciTech Connect

    Smith, Michael S.; Hix, W. Raphael; Bardayan, Daniel W.; Blackmon, Jeffery C.; Lingerfelt, Eric J.; Scott, Jason P.; Nesaraja, Caroline D.; Chae, Kyungyuk; Guidry, Michael W.; Koura, Hiroyuki; Meyer, Richard A.

    2006-07-12

    A Computational Infrastructure for Nuclear Astrophysics has been developed to streamline the inclusion of the latest nuclear physics data in astrophysics simulations. The infrastructure consists of a platform-independent suite of computer codes that is freely available online at nucastrodata.org. Features of, and future plans for, this software suite are given.

  2. Advanced Artificial Science. The development of an artificial science and engineering research infrastructure to facilitate innovative computational modeling, analysis, and application to interdisciplinary areas of scientific investigation.

    SciTech Connect

    Saffer, Shelley I.

    2014-12-01

    This is the final report for DOE award DE-SC0001132, "Advanced Artificial Science: the development of an artificial science and engineering research infrastructure to facilitate innovative computational modeling, analysis, and application to interdisciplinary areas of scientific investigation." This document describes the achievement of the project's goals and the resulting research made possible by this award.

  3. @neurIST: infrastructure for advanced disease management through integration of heterogeneous data, computing, and complex processing services.

    PubMed

    Benkner, Siegfried; Arbona, Antonio; Berti, Guntram; Chiarini, Alessandro; Dunlop, Robert; Engelbrecht, Gerhard; Frangi, Alejandro F; Friedrich, Christoph M; Hanser, Susanne; Hasselmeyer, Peer; Hose, Rod D; Iavindrasana, Jimison; Köhler, Martin; Iacono, Luigi Lo; Lonsdale, Guy; Meyer, Rodolphe; Moore, Bob; Rajasekaran, Hariharan; Summers, Paul E; Wöhrer, Alexander; Wood, Steven

    2010-11-01

    The increasing volume of data describing human disease processes and the growing complexity of understanding, managing, and sharing such data present a huge challenge for clinicians and medical researchers. This paper presents the @neurIST system, which provides an infrastructure for biomedical research while aiding clinical care, by bringing together heterogeneous data and complex processing and computing services. Although @neurIST targets the investigation and treatment of cerebral aneurysms, the system's architecture is generic enough that it could be adapted to the treatment of other diseases. Innovations in @neurIST include confining the patient data pertaining to aneurysms inside a single environment that offers clinicians the tools to analyze and interpret patient data and make use of knowledge-based guidance in planning their treatment. Medical researchers gain access to a critical mass of aneurysm-related data due to the system's ability to federate distributed information sources. A semantically mediated grid infrastructure ensures that both clinicians and researchers are able to seamlessly and securely access and work on data distributed across multiple sites, while also providing computing resources on demand for computationally intensive simulations in treatment planning and research.

  4. Advanced computing

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Advanced concepts in hardware, software and algorithms are being pursued for application in next generation space computers and for ground based analysis of space data. The research program focuses on massively parallel computation and neural networks, as well as optical processing and optical networking which are discussed under photonics. Also included are theoretical programs in neural and nonlinear science, and device development for magnetic and ferroelectric memories.

  5. Scientific computing infrastructure and services in Moldova

    NASA Astrophysics Data System (ADS)

    Bogatencov, P. P.; Secrieru, G. V.; Degteariov, N. V.; Iliuha, N. P.

    2016-09-01

    In recent years, distributed information processing and high-performance computing technologies (HPC, distributed Cloud and Grid computing infrastructures) for solving complex tasks with high demands on computing resources have been actively developing. In Moldova, work on creating high-performance and distributed computing infrastructures started relatively recently, through participation in the implementation of a number of international projects. Research teams from Moldova participated in a series of regional and pan-European projects that allowed them to begin forming the national heterogeneous computing infrastructure, gain access to regional and European computing resources, and expand the range of tasks and application areas they can address.

  6. A Computing Infrastructure for Supporting Climate Studies

    NASA Astrophysics Data System (ADS)

    Yang, C.; Bambacus, M.; Freeman, S. M.; Huang, Q.; Li, J.; Sun, M.; Xu, C.; Wojcik, G. S.; Cahalan, R. F.; NASA Climate @ Home Project Team

    2011-12-01

    Climate change is one of the major challenges facing the planet in the 21st century. Scientists build many models to simulate the past and to predict climate change over the coming decades and century. Most of the models run at low resolution, with some targeting high resolution in linkage to practical climate change preparedness. To calibrate and validate the models, millions of model runs are needed to find the best simulation and configuration. This paper introduces the NASA Climate@Home project, an effort to build a supercomputer based on advanced computing technologies such as cloud computing, grid computing, and others. The Climate@Home computing infrastructure includes several aspects: 1) a cloud computing platform manages potential spikes in access to the centralized components, such as the grid computing server that dispatches model runs and collects results; 2) a grid computing engine, developed on the MapReduce model, dispatches models and model configurations and collects simulation results and contribution statistics; 3) a portal serves as the entry point to the project, providing management, sharing, and data exploration for end users; 4) scientists can access customized tools to configure model runs and visualize model results; 5) the public can follow the latest project news on Twitter and Facebook. This paper introduces the latest progress of the project; the operational system will be demonstrated during the AGU Fall Meeting. It also discusses how this technology can become a trailblazer for other climate studies and relevant sciences, and shares how the challenges in computation and software integration were solved.
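
    The abstract above describes a MapReduce-style engine that dispatches model runs and configurations and collects results and contribution statistics. The minimal Python sketch below illustrates that dispatch/collect pattern only; run_model, its "skill" score, and the use of a local process pool in place of distributed volunteer nodes are hypothetical stand-ins, not the actual Climate@Home engine.

        # Illustrative map/reduce dispatch sketch; run_model and its "skill"
        # score are invented stand-ins for a real climate model run.
        from concurrent.futures import ProcessPoolExecutor
        from statistics import mean

        def run_model(config):
            """Map step: execute one model run for a given configuration."""
            resolution, sensitivity = config
            skill = 1.0 / (1.0 + abs(sensitivity - 3.0) + 0.1 * resolution)
            return {"config": config, "skill": skill}

        def reduce_results(results):
            """Reduce step: aggregate statistics and pick the best configuration."""
            best = max(results, key=lambda r: r["skill"])
            return {"runs": len(results),
                    "mean_skill": round(mean(r["skill"] for r in results), 3),
                    "best": best}

        if __name__ == "__main__":
            configs = [(res, sens) for res in (1, 2, 4) for sens in (2.0, 3.0, 4.5)]
            with ProcessPoolExecutor() as pool:  # stands in for volunteer nodes
                results = list(pool.map(run_model, configs))
            print(reduce_results(results))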

  7. Cloud computing can simplify HIT infrastructure management.

    PubMed

    Glaser, John

    2011-08-01

    Software as a Service (SaaS), built on cloud computing technology, is emerging as the forerunner in IT infrastructure because it helps healthcare providers reduce capital investments. Cloud computing leads to predictable, monthly, fixed operating expenses for hospital IT staff. Outsourced cloud computing facilities are state-of-the-art data centers boasting some of the most sophisticated networking equipment on the market. The SaaS model helps hospitals safeguard against technology obsolescence, minimizes maintenance requirements, and simplifies management.

  8. Energy Theft in the Advanced Metering Infrastructure

    NASA Astrophysics Data System (ADS)

    McLaughlin, Stephen; Podkuiko, Dmitry; McDaniel, Patrick

    Global energy generation and delivery systems are transitioning to a new computerized "smart grid". One of the principal components of the smart grid is an advanced metering infrastructure (AMI). AMI replaces analog meters with computerized systems that report usage over digital communication interfaces, e.g., phone lines. However, with this infrastructure comes new risk. In this paper, we consider the means by which adversaries may defraud the electrical grid by manipulating AMI systems. We document the methods adversaries will use to attempt to manipulate energy usage data, and validate the viability of these attacks by performing penetration testing on commodity devices. Through these activities, we demonstrate that not only is theft still possible in AMI systems, but that current AMI devices introduce a myriad of new vectors for achieving it.

  9. Building an infrastructure for urgent computing.

    SciTech Connect

    Beckman, P.; Beschastnikh, I.; Nadella, S.; Trebon, N.; Mathematics and Computer Science; Univ. of Chicago; Univ. of Washington

    2007-01-01

    Large-scale scientific computing is playing an ever-increasing role in critical decision-making and dynamic, event-driven systems. While some computation can simply wait in a job queue until resources become available, key decisions concerning life-threatening situations must be made quickly. A computation to predict the flow of airborne contaminants from a ruptured railcar must guide evacuation rapidly, before the results are meaningless. Although not as urgent, on-demand computing is often required to leverage a scientific opportunity. For example, a burst of data from seismometers could trigger urgent computation that then redirects instruments to focus data collection on specific regions, before the opportunity is lost. This paper describes the challenges of building an infrastructure to support urgent computing. We focus on three areas: the needs and requirements of an urgent computing system, a prototype urgent computing system called SPRUCE currently deployed on the TeraGrid, and future technologies and directions for urgent computing. Since the days of the first supercomputers, computation has been playing an ever-increasing role in science.
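
    The core mechanism of urgent computing as described above is letting deadline-critical jobs move ahead of routine batch work. A minimal sketch of such an urgency-aware queue follows; the two-level priority scheme and function names are illustrative assumptions, not the actual SPRUCE interface, which is built around authorization tokens and scheduler integration.

        # Urgency-aware job queue sketch (illustrative only).
        import heapq
        import itertools

        URGENT, NORMAL = 0, 1          # lower value is served first
        _counter = itertools.count()   # FIFO tie-break within a priority level
        _queue = []

        def submit(job_name, priority=NORMAL):
            heapq.heappush(_queue, (priority, next(_counter), job_name))

        def next_job():
            return heapq.heappop(_queue)[2] if _queue else None

        submit("nightly-climatology")
        submit("seismic-reprocessing")
        submit("contaminant-plume-forecast", priority=URGENT)  # ruptured-railcar case
        print(next_job())  # the urgent forecast runs first, despite arriving last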

  10. Integrity Management Infrastructure for Trusted Computing

    NASA Astrophysics Data System (ADS)

    Munetoh, Seiji; Nakamura, Megumi; Yoshihama, Sachiko; Kudo, Michiharu

    Computer security concerns have been rapidly increasing because of repeated security breaches and leakages of sensitive personal information. Such security breaches are mainly caused by inappropriate management of PCs, so maintaining the integrity of the platform configuration is essential, and verifying the integrity of the computer platform and software becomes correspondingly significant. To address these problems, the Trusted Computing Group (TCG) has developed various specifications that are used to measure the integrity of the platform based on hardware trust. In trusted computing, integrity data for each component running on the platform are recorded in the security chip and securely checked through remote attestation. The infrastructure working group in the TCG is defining an Integrity Management Infrastructure in which Platform Trust Services (PTS) is a new key component that handles Integrity Reports. Deployed on the target platform, the PTS is a service component that collects and measures the runtime integrity of that platform in a secure way. The PTS can also be used to validate the Integrity Reports. We introduce the notion of the Platform Validation Authority, a trusted third party, which verifies the composition of the integrity measurements of the target platform in the Integrity Reports. The Platform Validation Authority complements the role of the current Certificate Authority in the Public Key Infrastructure, which attests to the integrity of the user identity as well as to related artifacts such as digital signatures. In this paper, we cover the research topics in this new area, the relevant technologies and open issues of trusted computing, and the details of our PTS implementation.
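
    The hardware-rooted measurement chain described above rests on the TCG "extend" operation: a platform configuration register (PCR) in the security chip is never written directly, only folded together with each new measurement, and a verifier replays the recorded event log to check the final value. Below is a minimal sketch of the TPM 1.2-era SHA-1 semantics; the event log entries are invented examples.

        # Sketch of the TCG PCR-extend hash chain (TPM 1.2, SHA-1 semantics):
        # PCR_new = SHA1(PCR_old || SHA1(measured_component))
        import hashlib

        def extend(pcr, measurement):
            digest = hashlib.sha1(measurement).digest()
            return hashlib.sha1(pcr + digest).digest()

        pcr = bytes(20)  # PCRs start zeroed at boot
        event_log = [b"bootloader", b"kernel", b"init"]  # invented entries
        for component in event_log:
            pcr = extend(pcr, component)

        # A remote verifier replays the attested event log and compares.
        replayed = bytes(20)
        for component in event_log:
            replayed = extend(replayed, component)
        assert replayed == pcr
        print(pcr.hex())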

  11. National Computational Infrastructure for Lattice Gauge Theory

    SciTech Connect

    Brower, Richard C.

    2014-04-15

    This report covers the SciDAC-2 project "The Secret Life of Quarks: National Computational Infrastructure for Lattice Gauge Theory" from March 15, 2011 through March 14, 2012. The objective of this project is to construct the software needed to study quantum chromodynamics (QCD), the theory of the strong interactions of sub-atomic physics, and other strongly coupled gauge field theories anticipated to be of importance in the energy regime made accessible by the Large Hadron Collider (LHC). It builds upon the successful efforts of the SciDAC-1 project National Computational Infrastructure for Lattice Gauge Theory, in which a QCD Applications Programming Interface (QCD API) was developed that enables lattice gauge theorists to make effective use of a wide variety of massively parallel computers. This project serves the entire USQCD Collaboration, which consists of nearly all the high energy and nuclear physicists in the United States engaged in the numerical study of QCD and related strongly interacting quantum field theories. All software developed in it is publicly available, and can be downloaded from a link on the USQCD Collaboration web site, or directly from the GitHub repositories via the link http://usqcd-software.github.io

  12. Computational infrastructure for law enforcement. Final report

    SciTech Connect

    Lades, M.; Kunz, C.; Strikos, I.

    1997-02-01

    This project planned to demonstrate the leverage of enhanced computational infrastructure for law enforcement by demonstrating the face recognition capability at LLNL. The project implemented a face finder module extending the segmentation capabilities of the current face recognition system so that it was capable of processing different image formats and sizes, and created a pilot network-accessible image database for the demonstration of face recognition capabilities. The project was funded at $40k (2 man-months) for a feasibility study. It investigated several essential components of a networked face recognition system which could help identify, apprehend, and convict criminals.

  13. National Computational Infrastructure for Lattice Gauge Theory

    SciTech Connect

    Reed, Daniel A.

    2008-05-30

    In this document we describe work done under the SciDAC-1 project National Computational Infrastructure for Lattice Gauge Theory. The objective of this project was to construct the computational infrastructure needed to study quantum chromodynamics (QCD). Nearly all high energy and nuclear physicists in the United States working on the numerical study of QCD are involved in the project, as are Brookhaven National Laboratory (BNL), Fermi National Accelerator Laboratory (FNAL), and Thomas Jefferson National Accelerator Facility (JLab). A list of the senior participants is given in Appendix A.2. The project includes the development of community software for the effective use of terascale computers, and the research and development of commodity clusters optimized for the study of QCD. The software developed as part of this effort is publicly available, and is being widely used by physicists in the United States and abroad. The prototype clusters built with SciDAC-1 funds have been used to test the software, and are available to lattice gauge theorists in the United States on a peer-reviewed basis.

  14. German contributions to the CMS computing infrastructure

    NASA Astrophysics Data System (ADS)

    Scheurer, A.; German CMS Community

    2010-04-01

    The CMS computing model anticipates various hierarchically linked tier centres to counter the challenges posed by the enormous amounts of data which will be collected by the CMS detector at the Large Hadron Collider, LHC, at CERN. During the past years, various computing exercises were performed to test the readiness of the computing infrastructure, the Grid middleware and the experiment's software for the startup of the LHC, which took place in September 2008. In Germany, several tier sites were set up to allow for an efficient and reliable way to simulate possible physics processes as well as to reprocess, analyse and interpret the numerous stored collision events of the experiment. It will be shown that the German computing sites played an important role during the experiment's preparation phase and during data-taking of CMS and that, therefore, scientific groups in Germany will be ready to compete for discoveries in this new era of particle physics. This presentation focuses on the German Tier-1 centre GridKa, located at Forschungszentrum Karlsruhe, and the German CMS Tier-2 federation DESY/RWTH, with installations at the University of Aachen and the research centre DESY. In addition, various local computing resources in Aachen, Hamburg and Karlsruhe are briefly introduced as well. It will be shown that an excellent cooperation between the different German institutions and physicists led to well established computing sites which cover all parts of the CMS computing model. Therefore, the following topics are discussed and the achieved goals and the gained knowledge are depicted: data management and distribution among the different tier sites, Grid-based Monte Carlo production at the Tier-2s, as well as Grid-based and locally submitted inhomogeneous user analyses at the Tier-3s. Another important task is to ensure proper and reliable operation 24 hours a day, especially during data-taking. For this purpose, the meta-monitoring tool "HappyFace", which was

  15. Interfaces for Advanced Computing.

    ERIC Educational Resources Information Center

    Foley, James D.

    1987-01-01

    Discusses the coming generation of supercomputers that will have the power to make elaborate "artificial realities" that facilitate user-computer communication. Illustrates these technological advancements with examples of the use of head-mounted monitors which are connected to position and orientation sensors, and gloves that track finger and…

  16. RF Technologies for Advancing Space Communication Infrastructure

    NASA Technical Reports Server (NTRS)

    Romanofsky, Robert R.; Bibyk, Irene K.; Wintucky, Edwin G.

    2006-01-01

    This paper will address key technologies under development at the NASA Glenn Research Center designed to provide architecture-level impacts. Specifically, we will describe deployable antennas, a new type of phased array antenna, and novel power amplifiers. The evaluation of architectural influence can be conducted from two perspectives: the architecture can be analyzed from the top down, to determine the areas where technology improvements will be most beneficial, or from the bottom up, where each technology's performance advancement can affect the overall architecture's performance. This paper will take the latter approach, focusing on some technology improvement challenges and addressing architecture impacts. For example, using data rate as a performance metric, future exploration scenarios are expected to demand data rates possibly exceeding 1 Gbps. To support these advancements in a Mars scenario, as an example, Ka-band and antenna aperture sizes on the order of 10 meters will be required on Mars areostationary platforms. Key technical challenges for a large deployable antenna include maximizing the ratio of deployed-to-packaged volume, minimizing areal density, maintaining RMS surface accuracy to within 1/20 of a wavelength or better, and developing reflector rigidization techniques. Moreover, the high frequencies and large apertures manifest a problem new to microwave engineers but familiar to optical communications specialists: pointing. The fine beam widths and long ranges dictate the need for electronic or mechanical feed articulation to compensate for spacecraft attitude control limitations.

  17. Network and computing infrastructure for scientific applications in Georgia

    NASA Astrophysics Data System (ADS)

    Kvatadze, R.; Modebadze, Z.

    2016-09-01

    The status of the network and computing infrastructure and the services available to the research and education community of Georgia are presented. The Research and Educational Networking Association - GRENA provides the following network services: Internet connectivity, network services, cyber security, technical support, etc. Computing resources used by the research teams are located at GRENA and at major state universities. The GE-01-GRENA site is included in the European Grid Infrastructure. The paper also contains information about the programs of the Learning Center and the research and development projects in which GRENA is participating.

  18. Advanced Wireless Power Transfer Vehicle and Infrastructure Analysis (Presentation)

    SciTech Connect

    Gonder, J.; Brooker, A.; Burton, E.; Wang, J.; Konan, A.

    2014-06-01

    This presentation discusses current research at NREL on advanced wireless power transfer vehicle and infrastructure analysis. The potential benefits of E-roadway include more electrified driving miles from battery electric vehicles, plug-in hybrid electric vehicles, or even properly equipped hybrid electric vehicles (i.e., more electrified miles could be obtained from a given battery size, or electrified driving miles could be maintained while using smaller and less expensive batteries, thereby increasing cost competitiveness and potential market penetration). The system optimization aspect is key given the potential impact of this technology on the vehicles, the power grid and the road infrastructure.

  19. An Advanced Decision Support Tool for Electricity Infrastructure Operations

    SciTech Connect

    Chen, Yousu; Huang, Zhenyu; Wong, Pak C.; Mackey, Patrick S.; Allwardt, Craig H.; Ma, Jian; Greitzer, Frank L.

    2010-01-31

    Electricity infrastructure, as one of the most critical infrastructures in the U.S., plays an important role in modern societies. Its failure would lead to significant disruption of people's lives, industry and commercial activities, and result in massive economic losses. Reliable operation of electricity infrastructure is an extremely challenging task because human operators need to consider thousands of possible configurations in near real-time to choose the best option and operate the network effectively. In today's practice, electricity infrastructure operation is largely based on operators' experience with very limited real-time decision support, resulting in inadequate management of complex predictions and the inability to anticipate, recognize, and respond to situations caused by human errors, natural disasters, or cyber attacks. Therefore, a systematic approach is needed to manage the complex operational paradigms and choose the best option in a near-real-time manner. This paper proposes an advanced decision support tool for electricity infrastructure operations. The tool turns large amounts of data into actionable information to help operators monitor power grid status in real time; performs trend analysis to identify system trends at the regional or system level, helping operators foresee and discern emergencies; performs clustering analysis to help operators identify the relationships between system configurations and affected assets; and supports interactive evaluation of alternative remedial actions to help operators make effective and timely decisions. This tool can provide significant decision support for electricity infrastructure operations and lead to better reliability in power grids. This paper presents examples with actual electricity infrastructure data to demonstrate the capability of this tool.
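
    To make the trend-analysis function concrete, the sketch below flags a sustained deviation of a smoothed telemetry stream from its nominal value. It is a toy illustration under assumed names and thresholds (moving_average, detect_sag, a 5-sample window), not the analytics actually implemented in the tool.

        # Toy trend detection on a per-unit bus-voltage stream.
        def moving_average(series, window=5):
            return [sum(series[i - window + 1:i + 1]) / window
                    for i in range(window - 1, len(series))]

        def detect_sag(series, nominal=1.0, tolerance=0.05, window=5):
            """Return indices where the smoothed signal leaves the deadband."""
            smoothed = moving_average(series, window)
            return [i + window - 1 for i, v in enumerate(smoothed)
                    if abs(v - nominal) > tolerance]

        voltages = [1.0, 1.0, 0.99, 1.0, 0.98, 0.95, 0.92, 0.90, 0.91, 0.93]
        print(detect_sag(voltages))  # indices of a developing voltage sag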

  1. Towards a Production Volunteer Computing Infrastructure for HEP

    NASA Astrophysics Data System (ADS)

    Høimyr, N.; Marquina, M.; Asp, T.; Jones, P.; Gonzalez, A.; Field, L.

    2015-12-01

    Following the successful introduction of virtualisation into volunteer computing for theory simulations back in 2011, the use of volunteer computing with BOINC and CernVM has been extended to cover simulations for the LHC experiments ATLAS, CMS and LHCb. This paper describes the status of the BOINC volunteer computing platform at CERN used for LHC@home and how it has been designed to address a heterogeneous environment of different user communities with different computing infrastructures. The aim of the recent developments is to provide a volunteer computing platform that the experiments can build upon to exploit opportunistic resources. Furthermore, new developments of common solutions to span user authentication domains are explained.

  2. Advanced Computing for Medicine.

    ERIC Educational Resources Information Center

    Rennels, Glenn D.; Shortliffe, Edward H.

    1987-01-01

    Discusses contributions that computers and computer networks are making to the field of medicine. Emphasizes the computer's speed in storing and retrieving data. Suggests that doctors may soon be able to use computers to advise on diagnosis and treatment. (TW)

  3. SAGA: A standardized access layer to heterogeneous Distributed Computing Infrastructure

    NASA Astrophysics Data System (ADS)

    Merzky, Andre; Weidner, Ole; Jha, Shantenu

    2015-09-01

    Distributed Computing Infrastructure is characterized by interfaces that are heterogeneous, both syntactically and semantically. SAGA represents the most comprehensive community effort to date to address this heterogeneity by defining a simple, uniform access layer. In this paper, we describe the basic concepts underpinning its design and development. We also discuss RADICAL-SAGA, the most widely used implementation of SAGA.
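
    The point of the uniform access layer is that job submission looks the same regardless of the backend; the URL scheme selects the adaptor. Below is a minimal RADICAL-SAGA sketch along the lines of its documented job API, assuming the radical.saga package is installed; treat the exact calls as indicative rather than authoritative.

        # Minimal RADICAL-SAGA job submission sketch (pip install radical.saga).
        import radical.saga as saga

        js = saga.job.Service("fork://localhost")  # swap URL for ssh:// or a batch system
        jd = saga.job.Description()
        jd.executable = "/bin/echo"
        jd.arguments = ["hello from SAGA"]

        job = js.create_job(jd)
        job.run()
        job.wait()
        print(job.state)  # e.g. Done
        js.close()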

  4. Autonomic Management of Application Workflows on Hybrid Computing Infrastructure

    DOE PAGES

    Kim, Hyunjoo; el-Khamra, Yaakoub; Rodero, Ivan; Jha, Shantenu; Parashar, Manish

    2011-01-01

    In this paper, we present a programming and runtime framework that enables the autonomic management of complex application workflows on hybrid computing infrastructures. The framework is designed to address system and application heterogeneity and dynamics to ensure that application objectives and constraints are satisfied. The need for such autonomic system and application management is becoming critical as computing infrastructures become increasingly heterogeneous, integrating different classes of resources from high-end HPC systems to commodity clusters and clouds. For example, the framework presented in this paper can be used to provision the appropriate mix of resources based on application requirements and constraints. The framework also monitors the system/application state and adapts the application and/or resources to respond to changing requirements or environment. To demonstrate the operation of the framework and to evaluate its ability, we employ a workflow used to characterize an oil reservoir executing on a hybrid infrastructure composed of TeraGrid nodes and Amazon EC2 instances of various types. Specifically, we show how different application objectives such as acceleration, conservation and resilience can be effectively achieved while satisfying deadline and budget constraints, using an appropriate mix of dynamically provisioned resources. Our evaluations also demonstrate that public clouds can be used to complement and reinforce the scheduling and usage of traditional high performance computing infrastructure.
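
    At its simplest, the provisioning decision sketched in the abstract trades a free but bounded HPC allocation against metered cloud capacity under deadline and budget constraints. The toy function below captures only that trade-off; all rates and prices are invented, and it is not the paper's policy engine.

        # Toy resource-mix decision under deadline and budget constraints.
        def provision(tasks, deadline_h, budget_usd,
                      hpc_rate=10, cloud_rate=4,   # tasks/hour per allocation (assumed)
                      cloud_cost=0.34):            # USD per node-hour (assumed)
            """Fill the free HPC allocation first; burst to cloud if needed."""
            hpc_tasks = min(tasks, int(hpc_rate * deadline_h))
            overflow = tasks - hpc_tasks
            if overflow == 0:
                return {"hpc": hpc_tasks, "cloud": 0, "cost": 0.0}
            nodes = -(-overflow // int(cloud_rate * deadline_h))  # ceiling division
            cost = nodes * cloud_cost * deadline_h
            if cost > budget_usd:
                raise ValueError("deadline not reachable within budget")
            return {"hpc": hpc_tasks, "cloud": overflow, "cost": round(cost, 2)}

        print(provision(tasks=120, deadline_h=6, budget_usd=10.0))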

  5. Building a High Performance Computing Infrastructure for Novosibirsk Scientific Center

    NASA Astrophysics Data System (ADS)

    Adakin, A.; Belov, S.; Chubarov, D.; Kalyuzhny, V.; Kaplin, V.; Kuchin, N.; Lomakin, S.; Nikultsev, V.; Sukharev, A.; Zaytsev, A.

    2011-12-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including the Budker Institute of Nuclear Physics (BINP), the Institute of Computational Technologies (ICT), and the Institute of Computational Mathematics and Mathematical Geophysics (ICM&MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, there are currently several computing facilities hosted by NSC institutes, each optimized for a particular set of tasks, of which the largest are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM&MG), and the Grid Computing Facility of BINP. Recently a dedicated optical network with an initial bandwidth of 10 Gbps connecting these three facilities was built in order to make it possible to share the computing resources among the research communities of the participating institutes, thus providing a common platform for building the computing infrastructure for various scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technologies based on the XEN and KVM platforms. The solution implemented was tested thoroughly within the computing environment of the KEDR detector experiment, which is being carried out at BINP, and is foreseen to be applied to the use cases of other HEP experiments in the near future.

  6. High-throughput neuroimaging-genetics computational infrastructure

    PubMed Central

    Dinov, Ivo D.; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Hobel, Sam; Vespa, Paul; Woo Moon, Seok; Van Horn, John D.; Franco, Joseph; Toga, Arthur W.

    2014-01-01

    Many contemporary neuroscientific investigations face significant challenges in terms of data management, computational processing, data mining, and results interpretation. These four pillars define the core infrastructure necessary to plan, organize, orchestrate, validate, and disseminate novel scientific methods, computational resources, and translational healthcare findings. Data management includes protocols for data acquisition, archival, query, transfer, retrieval, and aggregation. Computational processing involves the software, hardware, and networking infrastructure required to handle large amounts of heterogeneous neuroimaging, genetics, clinical, and phenotypic data and meta-data. Data mining refers to the process of automatically extracting data features, characteristics, and associations which are not readily visible by human exploration of the raw dataset. Results interpretation includes scientific visualization, community validation, and reproducibility of findings. In this manuscript we describe the novel high-throughput neuroimaging-genetics computational infrastructure available at the Institute for Neuroimaging and Informatics (INI) and the Laboratory of Neuro Imaging (LONI) at the University of Southern California (USC). INI and LONI include ultra-high-field and standard-field MRI brain scanners along with an imaging-genetics database for storing the complete provenance of the raw and derived data and meta-data. In addition, the institute provides a large number of software tools for image and shape analysis, mathematical modeling, genomic sequence processing, and scientific visualization. A unique feature of this architecture is the Pipeline environment, which integrates data management, processing, transfer, and visualization. Through its client-server architecture, the Pipeline environment provides a graphical user interface for designing, executing, monitoring, validating, and disseminating complex protocols that utilize

  7. Advanced Computing for Science.

    ERIC Educational Resources Information Center

    Hut, Piet; Sussman, Gerald Jay

    1987-01-01

    Discusses some of the contributions that high-speed computing is making to the study of science. Emphasizes the use of computers in exploring complicated systems without the simplification required in traditional methods of observation and experimentation. Provides examples of computer assisted investigations in astronomy and physics. (TW)

  8. Workflow4Metabolomics: a collaborative research infrastructure for computational metabolomics

    PubMed Central

    Giacomoni, Franck; Le Corguillé, Gildas; Monsoor, Misharl; Landi, Marion; Pericard, Pierre; Pétéra, Mélanie; Duperier, Christophe; Tremblay-Franco, Marie; Martin, Jean-François; Jacob, Daniel; Goulitquer, Sophie; Thévenot, Etienne A.; Caron, Christophe

    2015-01-01

    Summary: The complex, rapidly evolving field of computational metabolomics calls for collaborative infrastructures where the large volume of new algorithms for data pre-processing, statistical analysis and annotation can be readily integrated whatever the language, evaluated on reference datasets and chained to build ad hoc workflows for users. We have developed Workflow4Metabolomics (W4M), the first fully open-source and collaborative online platform for computational metabolomics. W4M is a virtual research environment built upon the Galaxy web-based platform technology. It enables ergonomic integration, exchange and running of individual modules and workflows. Alternatively, the whole W4M framework and computational tools can be downloaded as a virtual machine for local installation. Availability and implementation: http://workflow4metabolomics.org homepage enables users to open a private account and access the infrastructure. W4M is developed and maintained by the French Bioinformatics Institute (IFB) and the French Metabolomics and Fluxomics Infrastructure (MetaboHUB). Contact: contact@workflow4metabolomics.org

  9. Integrating multiple scientific computing needs via a Private Cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Brunetti, R.; Lusso, S.; Vallero, S.

    2014-06-01

    In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It makes it possible to dynamically and efficiently allocate resources to any application and to tailor the virtual machines according to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily and with minimal downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 site and a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC, as well as several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.

  10. Analysis of CERN computing infrastructure and monitoring data

    NASA Astrophysics Data System (ADS)

    Nieke, C.; Lassnig, M.; Menichetti, L.; Motesnitsalis, E.; Duellmann, D.

    2015-12-01

    Optimizing a computing infrastructure on the scale of the LHC requires a quantitative understanding of a complex network of many different resources and services. For this purpose the CERN IT department and the LHC experiments collect a multitude of logs and performance probes, which are already successfully used for short-term analysis (e.g. operational dashboards) within each group. The IT analytics working group has been created with the goal of bringing together data sources from different services and different abstraction levels and implementing a suitable infrastructure for mid- to long-term statistical analysis. It further provides a forum for joint optimization across single-service boundaries and for the exchange of analysis methods and tools. To simplify access to the collected data, we implemented an automated repository for cleaned and aggregated data sources based on the Hadoop ecosystem. This contribution describes some of the challenges encountered, such as dealing with heterogeneous data formats and selecting an efficient storage format for MapReduce and external access, and describes the repository user interface. Using this infrastructure we were able to quantitatively analyze the relationship between the CPU/wall fraction, the latency/throughput constraints of network and disk, and the effective job throughput. In this contribution we first describe the design of the shared analysis infrastructure and then present a summary of first analysis results from the combined data sources.
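
    One metric the abstract singles out is the CPU/wall fraction of jobs, aggregated across cleaned log sources. A minimal aggregation sketch follows; the record layout and field names are hypothetical stand-ins for the cleaned data sources, and the real repository computes this at scale on the Hadoop ecosystem.

        # Per-site CPU efficiency (cpu_time / wall_time) from job records.
        from collections import defaultdict

        jobs = [  # hypothetical cleaned job records
            {"site": "CERN-P1", "cpu_s": 3400, "wall_s": 3600},
            {"site": "CERN-P1", "cpu_s": 1800, "wall_s": 3600},
            {"site": "T1-DE",   "cpu_s": 7000, "wall_s": 7200},
        ]

        totals = defaultdict(lambda: [0, 0])
        for j in jobs:
            totals[j["site"]][0] += j["cpu_s"]
            totals[j["site"]][1] += j["wall_s"]

        for site, (cpu, wall) in sorted(totals.items()):
            print(f"{site}: cpu/wall = {cpu / wall:.2f}")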

  11. Advanced Electrical, Optical and Data Communication Infrastructure Development

    SciTech Connect

    Simon Cobb

    2011-04-30

    The implementation of electrical and IT infrastructure systems at the North Carolina Center for Automotive Research, Inc. (NCCAR) has achieved several key objectives in terms of system functionality, operational safety and potential for ongoing research and development. Key conclusions include: (1) the proven ability to operate a high speed wireless data network over a large 155 acre area; (2) node to node wireless transfers from access points are possible at speeds of more than 50 mph while maintaining high volume bandwidth; (3) triangulation of electronic devices/users is possible in areas with overlapping multiple access points, while outdoor areas with reduced overlap of access point coverage have considerably reduced triangulation accuracy; (4) wireless networks can be adversely affected by tree foliage; pine needles are a particular challenge due to the needle length relative to the transmission frequency/wavelength; and (5) future research will use the project video surveillance and wireless systems to further develop automated image tracking functionality for the benefit of advanced vehicle safety monitoring and autonomous vehicle control through 'vehicle-to-vehicle' and 'vehicle-to-infrastructure' communications. A specific advantage realized from this IT implementation at NCCAR is that NC State University is implementing a similar wireless network across Centennial Campus, Raleigh, NC in 2011 and has benefited from lessons learned during this project. Consequently, students, researchers and members of the public will be able to benefit from a large scale IT implementation with features and improvements derived from this NCCAR project.

  12. Infrastructure sensing.

    PubMed

    Soga, Kenichi; Schooling, Jennifer

    2016-08-01

    Design, construction, maintenance and upgrading of civil engineering infrastructure requires fresh thinking to minimize use of materials, energy and labour. This can only be achieved by understanding the performance of the infrastructure, both during its construction and throughout its design life, through innovative monitoring. Advances in sensor systems offer intriguing possibilities to radically alter methods of condition assessment and monitoring of infrastructure. In this paper, it is hypothesized that the future of infrastructure relies on smarter information; the rich information obtained from embedded sensors within infrastructure will act as a catalyst for new design, construction, operation and maintenance processes for integrated infrastructure systems linked directly with user behaviour patterns. Some examples of emerging sensor technologies for infrastructure sensing are given. They include distributed fibre-optics sensors, computer vision, wireless sensor networks, low-power micro-electromechanical systems, energy harvesting and citizens as sensors.

  13. HPC Infrastructure for Solid Earth Simulation on Parallel Computers

    NASA Astrophysics Data System (ADS)

    Nakajima, K.; Chen, L.; Okuda, H.

    2004-12-01

    Recently, various types of parallel computers with various architectures and processing elements (PE) have emerged, including PC clusters and the Earth Simulator. Moreover, users can easily access these computer resources through the network in a Grid environment. It is well known that thorough tuning is required for programmers to achieve excellent performance on each computer. The method for tuning strongly depends on the type of PE and architecture. Optimization by tuning is very tough work, especially for developers of applications. Moreover, parallel programming using a message passing library such as MPI is another big task for application programmers. In the GeoFEM project (http://gefeom.tokyo.rist.or.jp), the authors have developed a parallel FEM platform for solid earth simulation on the Earth Simulator, which supports parallel I/O, parallel linear solvers and parallel visualization. This platform can efficiently hide complicated procedures for parallel programming and optimization on vector processors from application programmers. This type of infrastructure is very useful: source code developed on a single-processor PC is easily optimized for a massively parallel computer by linking it to the parallel platform installed on the target computer. This parallel platform, called HPC Infrastructure, will provide dramatic efficiency, portability and reliability in the development of scientific simulation codes. For example, the source code is expected to be less than 10,000 lines, and porting legacy codes to a parallel computer takes 2 or 3 weeks. The original GeoFEM platform supports only I/O, linear solvers and visualization. In the present work, further development for adaptive mesh refinement (AMR) and dynamic load-balancing (DLB) has been carried out. In this presentation, examples of large-scale solid earth simulation using the Earth Simulator will be demonstrated. Moreover, recent results of a parallel computational steering tool using an

  14. Advanced computer languages

    SciTech Connect

    Bryce, H.

    1984-05-03

    If software is to become an equal partner in the so-called fifth generation of computers, which of course it must, programming languages and the human interface will need to clear some high hurdles. Again, the solutions being sought turn to cerebral emulation: here, the way that human beings understand language. The result would be natural or English-like languages that would allow a person to communicate with a computer much as he or she does with another person. In the discussion the authors look at fourth-level and fifth-level languages used in meeting the goal of AI. The higher-level languages aim to be non-procedural. Applications of LISP and Forth to natural language interfaces are described, as well as programs such as the Natural Link technology package, written in C.

  15. The High-Performance Computing and Communications program, the national information infrastructure and health care.

    PubMed

    Lindberg, D A; Humphreys, B L

    1995-01-01

    The High-Performance Computing and Communications (HPCC) program is a multiagency federal effort to advance the state of computing and communications and to provide the technologic platform on which the National Information Infrastructure (NII) can be built. The HPCC program supports the development of high-speed computers, high-speed telecommunications, related software and algorithms, education and training, and information infrastructure technology and applications. The vision of the NII is to extend access to high-performance computing and communications to virtually every U.S. citizen so that the technology can be used to improve the civil infrastructure, lifelong learning, energy management, health care, etc. Development of the NII will require resolution of complex economic and social issues, including information privacy. Health-related applications supported under the HPCC program and NII initiatives include connection of health care institutions to the Internet; enhanced access to gene sequence data; the "Visible Human" Project; and test-bed projects in telemedicine, electronic patient records, shared informatics tool development, and image systems.

  16. Performance evaluation of cognitive radio in advanced metering infrastructure communication

    NASA Astrophysics Data System (ADS)

    Hiew, Yik-Kuan; Mohd Aripin, Norazizah; Din, Norashidah Md

    2016-03-01

    Smart grid is an intelligent electricity grid system. A reliable two-way communication system is required to transmit both critical and non-critical smart grid data. However, it is difficult to locate a huge chunk of dedicated spectrum for smart grid communications. Hence, cognitive radio based communication is applied. Cognitive radio allows smart grid users to access licensed spectrum opportunistically, with the constraint of not causing harmful interference to licensed users. In this paper, a cognitive radio based smart grid communication framework is proposed. The smart grid framework consists of a Home Area Network (HAN) and an Advanced Metering Infrastructure (AMI), while the AMI is made up of a Neighborhood Area Network (NAN) and a Wide Area Network (WAN). In this paper, the authors report findings for AMI communication only. The AMI is the smart grid domain comprising smart meters, a data aggregator unit, and a billing center. Meter data are collected by smart meters and transmitted to the data aggregator unit using a cognitive 802.11 technique; the data aggregator unit then relays the data to the billing center using cognitive WiMAX and TV white space. The performance of cognitive radio in AMI communication is investigated using Network Simulator 2. Simulation results show that cognitive radio improves the latency and throughput performance of the AMI. Besides this, cognitive radio also improves the spectrum utilization efficiency of the WiMAX band from 5.92% to 9.24% and the duty cycle of the TV band from 6.6% to 10.77%.

  17. The Computing and Data Grid Approach: Infrastructure for Distributed Science Applications

    NASA Technical Reports Server (NTRS)

    Johnston, William E.

    2002-01-01

    With the advent of Grids - infrastructure for using and managing widely distributed computing and data resources in the science environment - there is now an opportunity to provide a standard, large-scale computing, data, instrument, and collaboration environment for science that spans many different projects and provides the required infrastructure and services in a relatively uniform and supportable way. Grid technology has evolved over the past several years to provide the services and infrastructure needed for building 'virtual' systems and organizations. We argue that Grid technology provides an excellent basis for the creation of the integrated environments that can combine the resources needed to support the large-scale science projects located at multiple laboratories and universities. We present some science case studies that indicate that a paradigm shift in the process of science will come about as a result of Grids providing transparent and secure access to advanced and integrated information and technologies infrastructure: powerful computing systems, large-scale data archives, scientific instruments, and collaboration tools. These changes will be in the form of services that can be integrated with the user's work environment, and that enable uniform and highly capable access to these computers, data, and instruments, regardless of the location or exact nature of these resources. These services will integrate transient-use resources like computing systems, scientific instruments, and data caches (e.g., as they are needed to perform a simulation or analyze data from a single experiment); persistent-use resources, such as databases, data catalogues, and archives; and collaborators, whose involvement will continue for the lifetime of a project or longer. While we largely address large-scale science in this paper, Grids, particularly when combined with Web Services, will address a broad spectrum of science scenarios, both large and small scale.

  18. Center for Advanced Computational Technology

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    2000-01-01

    The Center for Advanced Computational Technology (ACT) was established to serve as a focal point for diverse research activities pertaining to application of advanced computational technology to future aerospace systems. These activities include the use of numerical simulations, artificial intelligence methods, multimedia and synthetic environments, and computational intelligence, in the modeling, analysis, sensitivity studies, optimization, design and operation of future aerospace systems. The Center is located at NASA Langley and is an integral part of the School of Engineering and Applied Science of the University of Virginia. The Center has four specific objectives: 1) conduct innovative research on applications of advanced computational technology to aerospace systems; 2) act as pathfinder by demonstrating to the research community what can be done (high-potential, high-risk research); 3) help in identifying future directions of research in support of the aeronautical and space missions of the twenty-first century; and 4) help in the rapid transfer of research results to industry and in broadening awareness among researchers and engineers of the state-of-the-art in applications of advanced computational technology to the analysis, design, prototyping and operations of aerospace and other high-performance engineering systems. In addition to research, Center activities include helping in the planning and coordination of the activities of a multi-center team of NASA and JPL researchers who are developing an intelligent synthesis environment for future aerospace systems; organizing workshops and national symposia; as well as writing state-of-the-art monographs and NASA special publications on timely topics.

  19. Recent advances in computational aerodynamics

    NASA Astrophysics Data System (ADS)

    Agarwal, Ramesh K.; Desse, Jerry E.

    1991-04-01

    The current state of the art in computational aerodynamics is described. Recent advances in the discretization of surface geometry, grid generation, and flow simulation algorithms have led to flowfield predictions for increasingly complex and realistic configurations. As a result, computational aerodynamics is emerging as a crucial enabling technology for the development and design of flight vehicles. Examples illustrating the current capability for the prediction of aircraft, launch vehicle and helicopter flowfields are presented. Unfortunately, accurate modeling of turbulence remains a major difficulty in the analysis of viscosity-dominated flows. In the future, inverse design methods, multidisciplinary design optimization methods, artificial intelligence technology and massively parallel computer technology will be incorporated into computational aerodynamics, opening up greater opportunities for improved product design at substantially reduced costs.

  1. Advanced flight computer. Special study

    NASA Technical Reports Server (NTRS)

    Coo, Dennis

    1995-01-01

    This report documents a special study to define a 32-bit radiation hardened, SEU tolerant flight computer architecture, and to investigate current or near-term technologies and development efforts that contribute to the Advanced Flight Computer (AFC) design and development. An AFC processing node architecture is defined. Each node may consist of a multi-chip processor as needed. The modular, building block approach uses VLSI technology and packaging methods that demonstrate a feasible AFC module in 1998 that meets the AFC goals. The defined architecture and approach demonstrate a clear low-risk, low-cost path to the 1998 production goal, with intermediate prototypes in 1996.

  2. The Einstein Toolkit: a community computational infrastructure for relativistic astrophysics

    NASA Astrophysics Data System (ADS)

    Löffler, Frank; Faber, Joshua; Bentivegna, Eloisa; Bode, Tanja; Diener, Peter; Haas, Roland; Hinder, Ian; Mundim, Bruno C.; Ott, Christian D.; Schnetter, Erik; Allen, Gabrielle; Campanelli, Manuela; Laguna, Pablo

    2012-06-01

    We describe the Einstein Toolkit, a community-driven, freely accessible computational infrastructure intended for use in numerical relativity, relativistic astrophysics, and other applications. The toolkit, developed by a collaboration involving researchers from multiple institutions around the world, combines a core set of components needed to simulate astrophysical objects such as black holes, compact objects, and collapsing stars, as well as a full suite of analysis tools. The Einstein Toolkit is currently based on the Cactus framework for high-performance computing and the Carpet adaptive mesh refinement driver. It implements spacetime evolution via the BSSN evolution system and general relativistic hydrodynamics in a finite-volume discretization. The toolkit is under continuous development and contains many new code components that have been publicly released for the first time and are described in this paper. We discuss the motivation behind the release of the toolkit, the philosophy underlying its development, and the goals of the project. A summary of the implemented numerical techniques is included, as are results of numerical tests covering a variety of sample astrophysical problems.

  3. Advanced Decentralized Water/Energy Network Design for Sustainable Infrastructure

    EPA Science Inventory

    In order to provide a water infrastructure that is more sustainable into and beyond the 21st century, drinking water distribution systems and wastewater collection systems must account for our diminishing water supply, increasing demands, climate change, energy cost and availabil...

  4. Using cloud computing infrastructure with CloudBioLinux, CloudMan, and Galaxy.

    PubMed

    Afgan, Enis; Chapman, Brad; Jadan, Margita; Franke, Vedran; Taylor, James

    2012-06-01

    Cloud computing has revolutionized availability and access to computing and storage resources, making it possible to provision a large computational infrastructure with only a few clicks in a Web browser. However, those resources are typically provided in the form of low-level infrastructure components that need to be procured and configured before use. In this unit, we demonstrate how to utilize cloud computing resources to perform open-ended bioinformatic analyses, with fully automated management of the underlying cloud infrastructure. By combining three projects, CloudBioLinux, CloudMan, and Galaxy, into a cohesive unit, we have enabled researchers to gain access to more than 100 preconfigured bioinformatics tools and gigabytes of reference genomes on top of the flexible cloud computing infrastructure. The protocol demonstrates how to set up the available infrastructure and how to use the tools via a graphical desktop interface, a parallel command-line interface, and the Web-based Galaxy interface.
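
    Beneath CloudMan's web interface, the "few clicks" ultimately reduce to launching a machine image on the cloud provider. A hedged boto3 sketch of that single step is shown below; the AMI ID, key pair, and instance type are placeholders, not the actual CloudBioLinux image, and CloudMan normally performs this step on the user's behalf.

        # Launch one instance from a machine image with boto3 (placeholders only).
        import boto3

        ec2 = boto3.resource("ec2", region_name="us-east-1")
        instances = ec2.create_instances(
            ImageId="ami-00000000",   # placeholder for a CloudBioLinux AMI
            InstanceType="m5.large",
            MinCount=1,
            MaxCount=1,
            KeyName="my-keypair",     # assumed to exist in the account
        )
        print(instances[0].id)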

  5. Advanced communication infrastructure for pre-hospital EMS care.

    PubMed

    Orthner, Helmuth; Mazza, Giovanni Giorgio; Shenvi, Rohit; Battles, Marcie

    2008-11-06

    The traditional communication infrastructure of the pre-hospital Emergency Medical System (EMS) is limited to voice communication using radio or cell phone technologies. With the emergence of 3rd Generation wireless networks (3G) and enhanced mobile devices capable of data communication (e.g., mobile tablets, PDAs with cell phones, or cell phones with PDA capabilities), voice communication can be enhanced with interactive data messaging and perhaps even with interactive video communication. Video, however, requires substantially more bandwidth, which 4th Generation (4G) systems promise to deliver but whose availability is still limited. We present an infrastructure that allows dynamic selection of the best data transport mode in the pre-hospital EMS environment.
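
    The paper's actual selection logic is not reproduced here; the sketch below merely illustrates how a client might pick the best available transport from periodic link measurements. All names and thresholds are hypothetical.

        from dataclasses import dataclass

        @dataclass
        class Link:
            name: str
            up: bool
            bandwidth_kbps: float   # measured uplink bandwidth
            latency_ms: float       # measured round-trip latency

        def best_transport(links, need_video=False):
            """Pick the best usable link; video demands far more bandwidth."""
            min_bw = 768.0 if need_video else 32.0   # hypothetical thresholds
            usable = [l for l in links if l.up and l.bandwidth_kbps >= min_bw]
            if not usable:
                return None  # fall back to voice-only radio
            # Prefer high bandwidth, then low latency
            return max(usable, key=lambda l: (l.bandwidth_kbps, -l.latency_ms))

        links = [Link("3G", True, 384.0, 180.0), Link("4G", False, 5000.0, 60.0)]
        choice = best_transport(links, need_video=True)
        print(choice.name if choice else "voice fallback")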

  6. CARMA: Management infrastructure and middleware for multi-paradigm computing

    NASA Astrophysics Data System (ADS)

    Troxel, Ian A.

    A recent trend toward multi-paradigm systems that couple General-Purpose Processors (GPPs) and other special-purpose devices such as Field-Programmable Gate Arrays (FPGAs) has shown impressive speedups. In fact, some researchers see such systems as vital to extending an anticipated slowing of Moore's law. However, extracting the full potential of large-scale, multi-paradigm, High-Performance Computing (HPC) systems has proven difficult due in part to a lack of traditional job management services upon which HPC users have come to rely. In this dissertation, the Comprehensive Approach to Reconfigurable Management Architecture (CARMA) is introduced and investigated. CARMA attempts to address the issue of large-scale, multi-paradigm system utilization by (1) providing a scalable, modular framework standard upon which other scholarly work can build; (2) conducting feasibility and tradeoff analysis of key components to determine strategies for reducing overhead and increasing performance and functionality; and (3) deploying initial prototypes with baseline services to receive feedback from the research community. In order to serve the needs of a wide range of HPC users, versions of CARMA tailored to two unique processing paradigms have been deployed and studied on an embedded space system and Beowulf cluster. Several scholarly contributions have been made by developing and analyzing the first comprehensive framework to date for job and resource management and services. CARMA is a first step toward developing an open standard with a modular design that will spur inventiveness and foster industry, government and scholarly collaboration much like Condor, Globus and the Open Systems Interconnect model have done for the cluster, grid and networking environments. Developing and evaluating components to fill out the CARMA infrastructure provided key insight into how future multi-paradigm runtime management services should be structured. In addition, deploying initial prototypes

  7. Advances in infrastructure support for flat panel display manufacturing

    NASA Astrophysics Data System (ADS)

    Bardsley, James N.; Ciesinski, Michael F.; Pinnel, M. Robert

    1997-07-01

    The success of the US display industry, both in providing high-performance displays for the US Department of Defense at reasonable cost and in capturing a significant share of the global civilian market, depends on maintaining technological leadership and on building efficient manufacturing capabilities. The US Display Consortium (USDC) was set up in 1993 by the US Government and private industry to guide the development of the infrastructure needed to support the manufacturing of flat panel displays. This mainly involves the supply of equipment and materials, but also includes the formation of partnerships and the training of a skilled labor force. Examples are given of successful development projects, some involving USDC participation, others through independent efforts of its member companies. These examples show that US-based companies can achieve leadership positions in this young and rapidly growing global market.

  8. The computing and data infrastructure to interconnect EEE stations

    NASA Astrophysics Data System (ADS)

    Noferini, F.

    2016-07-01

    The Extreme Energy Event (EEE) experiment is devoted to the search for high-energy cosmic rays through a network of telescopes installed in about 50 high schools distributed throughout the Italian territory. This project requires a peculiar data management infrastructure to collect data registered in stations very far from each other and to allow a coordinated analysis. Such an infrastructure is realized at INFN-CNAF, which operates a Cloud facility based on the OpenStack open-source Cloud framework and provides Infrastructure as a Service (IaaS) for its users. In 2014 EEE started to use it for collecting, monitoring and reconstructing the data acquired in all the EEE stations. For the synchronization between the stations and the INFN-CNAF infrastructure we used BitTorrent Sync, free peer-to-peer software designed to optimize data synchronization between distributed nodes. All data folders are synchronized with the central repository in real time to allow an immediate reconstruction of the data and their publication on a monitoring webpage. We present the architecture and the functionalities of this data management system, which provides a flexible environment for the specific needs of the EEE project.
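
    BitTorrent Sync's own interface is not shown here; the sketch below only illustrates the server-side pattern that the synchronized folders enable, namely noticing newly arrived station files and triggering reconstruction once per file. The paths and the reconstruction command are hypothetical.

        import glob
        import subprocess
        import time

        SEEN = set()

        def poll_station_folders(pattern="/data/eee/*/RUN_*.dat"):
            """Reconstruct any newly synchronized raw-data file exactly once."""
            for path in glob.glob(pattern):
                if path not in SEEN:
                    SEEN.add(path)
                    # hypothetical reconstruction executable
                    subprocess.run(["eee_reconstruct", path], check=False)

        while True:
            poll_station_folders()
            time.sleep(60)  # the real system reacts in near real time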

  9. Elements of a computational infrastructure for social simulation.

    PubMed

    Birkin, Mark; Procter, Rob; Allan, Rob; Bechhofer, Sean; Buchan, Iain; Goble, Carole; Hudson-Smith, Andy; Lambert, Paul; De Roure, David; Sinnott, Richard

    2010-08-28

    Applications of simulation modelling in social science domains are varied and increasingly widespread. The effective deployment of simulation models depends on access to diverse datasets, the use of analysis capabilities, the ability to visualize model outcomes and to capture, share and re-use simulations as evidence in research and policy-making. We describe three applications of e-social science that promote social simulation modelling, data management and visualization. An example is outlined in which the three components are brought together in a transport planning context. We discuss opportunities and benefits for the combination of these and other components into an e-infrastructure for social simulation and review recent progress towards the establishment of such an infrastructure.

  10. Advances in computational solvation thermodynamics

    NASA Astrophysics Data System (ADS)

    Wyczalkowski, Matthew A.

    The aim of this thesis is to develop improved methods for calculating the free energy, entropy and enthalpy of solvation from molecular simulations. Solvation thermodynamics of model compounds provides quantitative measurements used to analyze the stability of protein conformations in aqueous milieus. Solvation free energies govern the favorability of the solvation process, while entropy and enthalpy decompositions give insight into the molecular mechanisms by which the process occurs. Computationally, a coupling parameter lambda modulates solute-solvent interactions to simulate an insertion process, and multiple lengthy simulations at a fixed lambda value are typically required for free energy calculations to converge; entropy and enthalpy decompositions generally take 10-100 times longer. This thesis presents three advances which accelerate the convergence of such calculations: (1) Development of entropy and enthalpy estimators which combine data from multiple simulations; (2) Optimization of lambda schedules, or the set of parameter values associated with each simulation; (3) Validation of Hamiltonian replica exchange, a technique which swaps lambda values between two otherwise independent simulations. Taken together, these techniques promise to increase the accuracy and precision of free energy, entropy and enthalpy calculations. Improved estimates, in turn, can be used to investigate the validity and limits of existing solvation models and refine force field parameters, with the goal of understanding better the collapse transition and aggregation behavior of polypeptides.
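
    As a concrete instance of the third technique, a Hamiltonian replica exchange step at a common temperature accepts a swap of configurations between two lambda values with the standard Metropolis probability min(1, exp(-Delta)). A generic sketch (ours, not the thesis code):

        import math
        import random

        def try_swap(u_i, u_j, x_i, x_j, beta):
            """Metropolis acceptance for exchanging configurations between two
            replicas with potentials u_i and u_j (lambda_i and lambda_j) at the
            same inverse temperature beta."""
            delta = beta * (u_i(x_j) + u_j(x_i) - u_i(x_i) - u_j(x_j))
            if delta <= 0 or random.random() < math.exp(-delta):
                return x_j, x_i   # swap accepted
            return x_i, x_j       # swap rejected

        # Toy example: harmonic potentials with lambda-dependent stiffness
        u1 = lambda x: 0.5 * 1.0 * x * x
        u2 = lambda x: 0.5 * 2.0 * x * x
        print(try_swap(u1, u2, 1.0, 0.5, beta=1.0))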

  11. A Distributed Computing Infrastructure for Computational Thermodynamic Calculations of Solid-Liquid Phase Equilibria

    NASA Astrophysics Data System (ADS)

    Ghiorso, M. S.; Kress, V. C.

    2004-12-01

    Software tools like MELTS (Ghiorso and Sack, 1995, CMP 119:197) and its derivatives (Ghiorso et al., 2002, G3 3:10.1029/2001GC000217) are sophisticated calculators used by geoscientists to quantify the chemistry of melt production, transport and storage. These tools utilize computational thermodynamics to evaluate the equilibrium state of the system under specified external conditions by minimizing a suitably constructed thermodynamic potential. Like any thermodynamically based tool, the principal advantage in employing these techniques to model igneous processes is the intrinsic ability to couple the chemistry and energetics of the evolution of the system in a self-consistent and rigorous formalism. Access to MELTS is normally accomplished via a standalone X11-based executable or as a Java-based web applet. The latter is a dedicated client-server application rooted at the University of Chicago. Our ongoing objective is the development of a distributed computing infrastructure to provide "MELTS-like" computations on demand to remote network users by utilizing a language independent client-server protocol based on CORBA. The advantages of this model are numerous. First, the burden of implementing and executing MELTS computations is centralized with a software implementation optimized to a compute cluster dedicated for that purpose. Improvements and updates to MELTS software are handled locally on the server side without intervention of the user, and the server model lessens the burden of supporting the computational code on a variety of hardware and OS platforms. Second, the client hardware platform does not incur the computational cost of performing a MELTS simulation and the remote user can focus on the task of incorporating results into their model. Third, the client user can write software in a computer language of their choosing and procedural calls to the MELTS library can be executed transparently over the network as if a local language-compatible library of
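
    A CORBA client of such a service would look roughly like the sketch below, written against omniORBpy; the MELTS stub module and its Calculator interface with an equilibrate operation are hypothetical stand-ins, since the real stubs would be generated from the service's IDL with omniidl.

        import sys
        from omniORB import CORBA
        import MELTS  # hypothetical IDL-generated stub module

        orb = CORBA.ORB_init(sys.argv, CORBA.ORB_ID)
        ior = open("melts.ior").read()         # object reference published by the server
        obj = orb.string_to_object(ior)
        calc = obj._narrow(MELTS.Calculator)   # hypothetical interface

        # A procedural call executed transparently over the network, as if the
        # MELTS library were linked locally (temperature in K, pressure in bar).
        state = calc.equilibrate(1473.15, 1000.0)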

  12. Building Efficient Wireless Infrastructures for Pervasive Computing Environments

    ERIC Educational Resources Information Center

    Sheng, Bo

    2010-01-01

    Pervasive computing is an emerging concept that thoroughly brings computing devices and the consequent technology into people's daily life and activities. Most of these computing devices are very small, sometimes even "invisible", and often embedded into the objects surrounding people. In addition, these devices usually are not isolated, but…

  13. An Infrastructure for Web-Based Computer Assisted Learning

    ERIC Educational Resources Information Center

    Joy, Mike; Muzykantskii, Boris; Rawles, Simon; Evans, Michael

    2002-01-01

    We describe an initiative under way at Warwick to provide a technical foundation for computer aided learning and computer-assisted assessment tools, which allows a rich dialogue sensitive to individual students' response patterns. The system distinguishes between dialogues for individual problems and the linking of problems. This enables a subject…

  14. A secure communications infrastructure for high-performance distributed computing

    SciTech Connect

    Foster, I.; Koenig, G.; Tuecke, S.

    1997-08-01

    Applications that use high-speed networks to connect geographically distributed supercomputers, databases, and scientific instruments may operate over open networks and access valuable resources. Hence, they can require mechanisms for ensuring integrity and confidentiality of communications and for authenticating both users and resources. Security solutions developed for traditional client-server applications do not provide direct support for the program structures, programming tools, and performance requirements encountered in these applications. The authors address these requirements via a security-enhanced version of the Nexus communication library, which they use to provide secure versions of parallel libraries and languages, including the Message Passing Interface. These tools permit a fine degree of control over what, where, and when security mechanisms are applied. In particular, a single application can mix secure and nonsecure communication, allowing the programmer to make fine-grained security/performance tradeoffs. The authors present performance results that quantify the performance of their infrastructure.
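
    Nexus itself predates today's toolkits, but the per-channel security choice it offered can be illustrated with a loose modern analogue: the sketch below opens one TLS-protected channel and one plaintext channel from the same program, making the security/performance trade-off explicit per connection. Host names and ports are placeholders.

        import socket
        import ssl

        def open_channel(host, port, secure):
            """Open a plain or TLS-wrapped TCP channel to (host, port)."""
            sock = socket.create_connection((host, port))
            if secure:
                ctx = ssl.create_default_context()
                sock = ctx.wrap_socket(sock, server_hostname=host)
            return sock

        # Control traffic goes over TLS; bulk data uses a plain socket,
        # trading security for throughput on a per-channel basis.
        control = open_channel("control.example.org", 443, secure=True)
        bulk = open_channel("data.example.org", 9999, secure=False)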

  15. Elastic Cloud Computing Infrastructures in the Open Cirrus Testbed Implemented via Eucalyptus

    NASA Astrophysics Data System (ADS)

    Baun, Christian; Kunze, Marcel

    Cloud computing realizes the advantages and overcomes some restrictions of the grid computing paradigm. Elastic infrastructures can easily be created and managed by cloud users. In order to accelerate research on data center management and cloud services, the OpenCirrus(TM) research testbed has been started by HP, Intel and Yahoo!. Although commercial cloud offerings are proprietary, Open Source solutions exist in the field of IaaS with Eucalyptus, PaaS with AppScale and at the applications layer with Hadoop MapReduce. This paper examines the I/O performance of cloud computing infrastructures implemented with Eucalyptus in contrast to Amazon S3.
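
    A measurement of the kind reported in the paper can be sketched with boto3, whose endpoint_url parameter lets the same code target Amazon S3 or an S3-compatible private cloud endpoint; the endpoint, bucket, and object size below are placeholders, and this is our illustration rather than the authors' benchmark harness.

        import os
        import time
        import boto3

        # Point the client either at Amazon S3 or at an S3-compatible
        # private cloud endpoint (URL and bucket are placeholders).
        s3 = boto3.client("s3", endpoint_url="https://storage.example.org")

        payload = os.urandom(64 * 1024 * 1024)  # 64 MiB test object
        t0 = time.time()
        s3.put_object(Bucket="benchmark", Key="testobj", Body=payload)
        t1 = time.time()
        s3.get_object(Bucket="benchmark", Key="testobj")["Body"].read()
        t2 = time.time()

        mib = len(payload) / 2**20
        print(f"write {mib/(t1-t0):.1f} MiB/s, read {mib/(t2-t1):.1f} MiB/s")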

  16. Advancing manufacturing through computational chemistry

    SciTech Connect

    Noid, D.W.; Sumpter, B.G.; Tuzun, R.E.

    1995-12-31

    The capabilities of nanotechnology and computational chemistry are reaching a point of convergence. New computer hardware and novel computational methods have created opportunities to test proposed nanometer-scale devices, investigate molecular manufacturing, and model and predict properties of new materials. Experimental methods are also beginning to provide new capabilities that make the possibility of manufacturing various devices with atomic precision tangible. In this paper, we will discuss some of the novel computational methods we have used in molecular dynamics simulations of polymer processes, neural network predictions of new materials, and simulations of proposed nano-bearings and fluid dynamics in nano-sized devices.

  17. Managing a tier-2 computer centre with a private cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Bagnasco, Stefano; Berzano, Dario; Brunetti, Riccardo; Lusso, Stefano; Vallero, Sara

    2014-06-01

    In a typical scientific computing centre, several applications coexist and share a single physical infrastructure. An underlying Private Cloud infrastructure eases the management and maintenance of such heterogeneous applications (such as multipurpose or application-specific batch farms, Grid sites, interactive data analysis facilities and others), allowing dynamic allocation of resources to any application. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques. Such infrastructures are being deployed in some large centres (see e.g. the CERN Agile Infrastructure project), but with several open-source tools reaching maturity this is becoming viable also for smaller sites. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, that hosts a full-fledged WLCG Tier-2 centre, an Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The private cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem and the OpenWRT Linux distribution (used for network virtualization); a future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and OCCI.

  18. Quantum chromodynamics with advanced computing

    SciTech Connect

    Kronfeld, Andreas S.; /Fermilab

    2008-07-01

    We survey results in lattice quantum chromodynamics from groups in the USQCD Collaboration. The main focus is on physics, but many aspects of the discussion are aimed at an audience of computational physicists.

  19. Aerodynamic Analyses Requiring Advanced Computers, Part 1

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Papers are presented which deal with results of theoretical research on aerodynamic flow problems requiring the use of advanced computers. Topics discussed include: viscous flows, boundary layer equations, turbulence modeling and Navier-Stokes equations, and internal flows.

  20. Aerodynamic Analyses Requiring Advanced Computers, part 2

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Papers given at the conference present the results of theoretical research on aerodynamic flow problems requiring the use of advanced computers. Topics discussed include two-dimensional configurations, three-dimensional configurations, transonic aircraft, and the space shuttle.

  1. Bringing Advanced Computational Techniques to Energy Research

    SciTech Connect

    Mitchell, Julie C

    2012-11-17

    Please find attached our final technical report for the BACTER Institute award. BACTER was created as a graduate and postdoctoral training program for the advancement of computational biology applied to questions of relevance to bioenergy research.

  2. The Mediated Museum: Computer-Based Technology and Museum Infrastructure.

    ERIC Educational Resources Information Center

    Sterman, Nanette T.; Allen, Brockenbrough S.

    1991-01-01

    Describes the use of computer-based tools and techniques in museums. The integration of realia with media-based advice and interpretation is described, electronic replicas of ancient Greek vases in the J. Paul Getty Museum are explained, examples of mediated exhibits are presented, and the use of hypermedia is discussed. (five references) (LRW)

  3. Advanced algorithm for orbit computation

    NASA Technical Reports Server (NTRS)

    Szebehely, V.

    1983-01-01

    Computational and analytical techniques which simplify the solution of complex problems in orbit mechanics, astrodynamics, and celestial mechanics were developed. The major tool of the simplification is the substitution of transformations in place of numerical or analytical integrations. In this way the rather complicated equations of orbit mechanics can sometimes be reduced to linear equations representing harmonic oscillators with constant coefficients.
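
    A classical example of such a transformation (our illustration; the report's own examples may differ) is the Levi-Civita regularization of the planar Kepler problem, which turns the singular equation of motion into a harmonic oscillator with constant coefficients:

        % Planar Kepler problem in complex notation:
        \ddot{z} = -\,\mu\,\frac{z}{|z|^{3}}, \qquad z \in \mathbb{C}.
        % Levi-Civita substitution: z = u^{2}, with fictitious time s defined
        % by dt = |z|\,ds; for fixed energy h = \tfrac{1}{2}|\dot{z}|^{2} - \mu/|z| < 0
        % the equation of motion becomes linear:
        u'' - \frac{h}{2}\,u = 0, \qquad \omega = \sqrt{-h/2},
        % i.e. a harmonic oscillator with constant coefficients, solvable in
        % closed form instead of by integrating the singular original equation.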

  4. 77 FR 62231 - DOE/Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-12

    .../Advanced Scientific Computing Advisory Committee AGENCY: Office of Science, Department of Energy. ACTION: Notice of Open Meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific Computing...: Melea Baker, Office of Advanced Scientific Computing Research; SC-21/Germantown Building;...

  5. 76 FR 31945 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-02

    ... Advanced Scientific Computing Advisory Committee AGENCY: Department of Energy, Office of Science. ACTION... the Advanced Scientific Computing Advisory Committee (ASCAC). The Federal Advisory Committee Act (Pub... INFORMATION CONTACT: Melea Baker, Office of Advanced Scientific Computing Research; SC-21/Germantown...

  6. Advanced Biomedical Computing Center (ABCC) | DSITP

    Cancer.gov

    The Advanced Biomedical Computing Center (ABCC), located in Frederick Maryland (MD), provides HPC resources for both NIH/NCI intramural scientists and the extramural biomedical research community. Its mission is to provide HPC support, to engage in collaborative research, and to conduct in-house research in various areas of computational biology and biomedical research.

  7. Advances and trends in computational structural mechanics

    NASA Technical Reports Server (NTRS)

    Noor, A. K.

    1986-01-01

    Recent developments in computational structural mechanics are reviewed with reference to computational needs for future structures technology, advances in computational models for material behavior, discrete element technology, assessment and control of numerical simulations of structural response, hybrid analysis, and techniques for large-scale optimization. Research areas in computational structural mechanics which have high potential for meeting future technological needs are identified. These include prediction and analysis of the failure of structural components made of new materials, development of computational strategies and solution methodologies for large-scale structural calculations, and assessment of reliability and adaptive improvement of response predictions.

  8. Opportunities in computational mechanics: Advances in parallel computing

    SciTech Connect

    Lesar, R.A.

    1999-02-01

    In this paper, the authors will discuss recent advances in computing power and the prospects for using these new capabilities for studying plasticity and failure. They will first review the new capabilities made available with parallel computing. They will discuss how these machines perform and how well their architecture might work on materials issues. Finally, they will give some estimates on the size of problems possible using these computers.

  9. Disaster Management Based on Urgent Computing and Dynamic Data Driven Application Services over European e-Infrastructures

    NASA Astrophysics Data System (ADS)

    Schiffers, Michael; Kranzlmüller, Dieter

    2014-05-01

    The study of natural hazards in Hydro-Meteorological Research (HMR) requires the execution of various meteorological, hydrological, and hydraulic models -- either standalone or as well-orchestrated chains (workflows) -- in order to better understand the natural phenomena leading to hazardous events. When hazards turn into disasters, exposure and vulnerability models need to be considered in addition for appropriate emergency response and recovery. While it is possible to mitigate the impact of disasters through education, policies, infrastructure resilience, and real-time decision support systems, the need to go beyond the pure communication of timely information by the advanced use of optimization and simulation technology is increasingly accepted. In general, the focus of today's decision-support systems for disaster management is on pure situational awareness (i.e., communicating to decision makers the situation in the field as accurately as possible). Experience from recent disaster cases, however, underpins the necessity of a novel, holistic approach to disaster management which not only needs to factor in the dependability of high performance computing resources and current advances in computational sciences (simulation, optimization), but also the dynamic integration of widely distributed (currently collected and archival) data into executing simulations to steer both the application model and further measurements (instrumentation and control). The first paradigm is commonly subsumed under "urgent computing (UC)", the latter one under "dynamic data driven application services (DDDAS)". Both paradigms suffer from severe research challenges. In the talk we first introduce both paradigms using several disaster cases as examples. We then deduce from these cases the inherent research questions and present a brief survey of the state-of-the-art. The main part of the talk, however, will be devoted to the presentation of a "disaster management infrastructure

  10. National Computational Infrastructure for Lattice Gauge Theory SciDAC-2 Closeout Report Indiana University Component

    SciTech Connect

    Gottlieb, Steven Arthur; DeTar, Carleton; Toussaint, Doug

    2014-07-24

    This is the closeout report for the Indiana University portion of the National Computational Infrastructure for Lattice Gauge Theory project supported by the United States Department of Energy under the SciDAC program. It includes information about activities at Indiana University, the University of Arizona, and the University of Utah, as those three universities coordinated their activities.

  11. Network Computing Infrastructure to Share Tools and Data in Global Nuclear Energy Partnership

    NASA Astrophysics Data System (ADS)

    Kim, Guehee; Suzuki, Yoshio; Teshima, Naoya

    CCSE/JAEA (Center for Computational Science and e-Systems/Japan Atomic Energy Agency) integrated a prototype system of a network computing infrastructure for sharing tools and data to support the U.S. and Japan collaboration in GNEP (Global Nuclear Energy Partnership). We focused on three technical issues in applying our information process infrastructure: accessibility, security, and usability. In designing the prototype system, we integrated and improved both network and Web technologies. For the accessibility issue, we adopted SSL-VPN (Secure Sockets Layer-Virtual Private Network) technology for access beyond firewalls. For the security issue, we developed an authentication gateway based on the PKI (Public Key Infrastructure) authentication mechanism to strengthen security. Also, we set a fine-grained access control policy for shared tools and data and used a shared-key encryption method to protect tools and data against leakage to third parties. For the usability issue, we chose Web browsers as the user interface and developed a Web application providing functions to support the sharing of tools and data. By using the WebDAV (Web-based Distributed Authoring and Versioning) function, users can manipulate shared tools and data through a Windows-like folder environment. We implemented the prototype system on the Grid infrastructure for atomic energy research, AEGIS (Atomic Energy Grid Infrastructure), developed by CCSE/JAEA. The prototype system was applied for trial use in the first period of GNEP.
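
    Because WebDAV is an open extension of HTTP, the folder-like access described above can be reproduced with any HTTP client. A minimal sketch with the Python requests library, using the standard PROPFIND method to list a shared collection (the URL and credentials are placeholders):

        import requests

        # List the contents of a shared WebDAV collection
        # (Depth: 1 = immediate children only).
        resp = requests.request(
            "PROPFIND",
            "https://gnep.example.org/dav/shared-tools/",
            headers={"Depth": "1"},
            auth=("alice", "secret"),   # placeholder credentials
        )
        resp.raise_for_status()
        print(resp.text)  # XML multistatus response describing each entry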

  12. Advanced e-Infrastructures for Civil Protection applications: the CYCLOPS Project

    NASA Astrophysics Data System (ADS)

    Mazzetti, P.; Nativi, S.; Verlato, M.; Ayral, P. A.; Fiorucci, P.; Pina, A.; Oliveira, J.; Sorani, R.

    2009-04-01

    During the full cycle of the emergency management, Civil Protection operative procedures involve many actors belonging to several institutions (civil protection agencies, public administrations, research centers, etc.) playing different roles (decision-makers, data and service providers, emergency squads, etc.). In this context the sharing of information is a vital requirement to make correct and effective decisions. Therefore a European-wide technological infrastructure providing a distributed and coordinated access to different kinds of resources (data, information, services, expertise, etc.) could enhance existing Civil Protection applications and even enable new ones. Such European Civil Protection e-Infrastructure should be designed taking into account the specific requirements of Civil Protection applications and the state-of-the-art in the scientific and technological disciplines which could make the emergency management more effective. In the recent years Grid technologies have reached a mature state providing a platform for secure and coordinated resource sharing between the participants collected in the so-called Virtual Organizations. Moreover the Earth and Space Sciences Informatics provide the conceptual tools for modeling the geospatial information shared in Civil Protection applications during its entire lifecycle. Therefore a European Civil Protection e-infrastructure might be based on a Grid platform enhanced with Earth Sciences services. In the context of the 6th Framework Programme the EU co-funded Project CYCLOPS (CYber-infrastructure for CiviL protection Operative ProcedureS), ended in December 2008, has addressed the problem of defining the requirements and identifying the research strategies and innovation guidelines towards an advanced e-Infrastructure for Civil Protection. Starting from the requirement analysis CYCLOPS has proposed an architectural framework for a European Civil Protection e-Infrastructure. This architectural framework has

  13. Role of HPC in Advancing Computational Aeroelasticity

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.

    2004-01-01

    On behalf of the High Performance Computing and Modernization Program (HPCMP) and NASA Advanced Supercomputing Division (NAS) a study is conducted to assess the role of supercomputers on computational aeroelasticity of aerospace vehicles. The study is mostly based on the responses to a web based questionnaire that was designed to capture the nuances of high performance computational aeroelasticity, particularly on parallel computers. A procedure is presented to assign a fidelity-complexity index to each application. Case studies based on major applications using HPCMP resources are presented.

  14. Advances and trends in computational structures technology

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Venneri, S. L.

    1990-01-01

    The major goals of computational structures technology (CST) are outlined, and recent advances in CST are examined. These include computational material modeling, stochastic-based modeling, computational methods for articulated structural dynamics, strategies and numerical algorithms for new computing systems, multidisciplinary analysis and optimization. The role of CST in the future development of structures technology and the multidisciplinary design of future flight vehicles is addressed, and the future directions of CST research in the prediction of failures of structural components, the solution of large-scale structural problems, and quality assessment and control of numerical simulations are discussed.

  15. America's most computer advanced healthcare facilities.

    PubMed

    1993-02-01

    Healthcare Informatics polled industry experts for nominations for this listing of America's Most Computer-Advanced Healthcare Facilities. Nominations were reviewed for extent of departmental automation, leading-edge applications, advanced point-of-care technologies, and networking communications capabilities. Additional consideration was given to smaller facilities automated beyond "normal expectations." Facility representatives who believe their organizations should be included in our next listing, please contact Healthcare Informatics for a nomination form.

  16. Advances and Challenges in Computational Plasma Science

    SciTech Connect

    W.M. Tang; V.S. Chan

    2005-01-03

    Scientific simulation, which provides a natural bridge between theory and experiment, is an essential tool for understanding complex plasma behavior. Recent advances in simulations of magnetically-confined plasmas are reviewed in this paper with illustrative examples chosen from associated research areas such as microturbulence, magnetohydrodynamics, and other topics. Progress has been stimulated in particular by the exponential growth of computer speed along with significant improvements in computer technology.

  17. High-Performance Computing for Advanced Smart Grid Applications

    SciTech Connect

    Huang, Zhenyu; Chen, Yousu

    2012-07-06

    The power grid is becoming far more complex as a result of the grid evolution meeting an information revolution. Due to the penetration of smart grid technologies, the grid is evolving at an unprecedented speed and the information infrastructure is fundamentally improved with a large number of smart meters and sensors that produce several orders of magnitude larger amounts of data. How to pull data in, perform analysis, and put information out in a real-time manner is a fundamental challenge in smart grid operation and planning. The future power grid requires high performance computing to be one of the foundational technologies in developing the algorithms and tools for the significantly increased complexity. New techniques and computational capabilities are required to meet the demands for higher reliability and better asset utilization, including advanced algorithms and computing hardware for large-scale modeling, simulation, and analysis. This chapter summarizes the computational challenges in the smart grid and the need for high performance computing, and presents examples of how high performance computing might be used for future smart grid operation and planning.

  18. Computational Biology, Advanced Scientific Computing, and Emerging Computational Architectures

    SciTech Connect

    2007-06-27

    This CRADA was established at the start of FY02 with $200 K from IBM and matching funds from DOE to support post-doctoral fellows in collaborative research between International Business Machines and Oak Ridge National Laboratory to explore effective use of emerging petascale computational architectures for the solution of computational biology problems. 'No cost' extensions of the CRADA were negotiated with IBM for FY03 and FY04.

  19. Advanced simulation for analysis of critical infrastructure : abstract cascades, the electric power grid, and Fedwire.

    SciTech Connect

    Glass, Robert John, Jr.; Stamber, Kevin Louis; Beyeler, Walter Eugene

    2004-08-01

    Critical Infrastructures are formed by a large number of components that interact within complex networks. As a rule, infrastructures contain strong feedbacks either explicitly through the action of hardware/software control, or implicitly through the action/reaction of people. Individual infrastructures influence others and grow, adapt, and thus evolve in response to their multifaceted physical, economic, cultural, and political environments. Simply put, critical infrastructures are complex adaptive systems. In the Advanced Modeling and Techniques Investigations (AMTI) subgroup of the National Infrastructure Simulation and Analysis Center (NISAC), we are studying infrastructures as complex adaptive systems. In one of AMTI's efforts, we are focusing on cascading failure as can occur with devastating results within and between infrastructures. Over the past year we have synthesized and extended the large variety of abstract cascade models developed in the field of complexity science and have started to apply them to specific infrastructures that might experience cascading failure. In this report we introduce our comprehensive model, Polynet, which simulates cascading failure over a wide range of network topologies, interaction rules, and adaptive responses as well as multiple interacting and growing networks. We first demonstrate Polynet for the classical Bak, Tang, and Wiesenfeld or BTW sand-pile in several network topologies. We then apply Polynet to two very different critical infrastructures: the high voltage electric power transmission system which relays electricity from generators to groups of distribution-level consumers, and Fedwire which is a Federal Reserve service for sending large-value payments between banks and other large financial institutions. For these two applications, we tailor interaction rules to represent appropriate unit behavior and consider the influence of random transactions within two stylized networks: a regular homogeneous array and a
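
    For reference, the Bak-Tang-Wiesenfeld sand-pile mentioned above fits in a few lines: grains are dropped on a grid, and any site holding four or more grains topples, shedding one grain to each neighbour and possibly triggering a cascade. A minimal sketch (ours, not Polynet):

        import random

        import numpy as np

        def add_grain_and_relax(grid):
            """Drop one grain at a random site, then topple until stable.
            Returns the avalanche size (number of topplings)."""
            n = grid.shape[0]
            i, j = random.randrange(n), random.randrange(n)
            grid[i, j] += 1
            topplings = 0
            unstable = [(i, j)] if grid[i, j] >= 4 else []
            while unstable:
                i, j = unstable.pop()
                if grid[i, j] < 4:
                    continue
                grid[i, j] -= 4
                topplings += 1
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    a, b = i + di, j + dj
                    if 0 <= a < n and 0 <= b < n:   # grains fall off the edge
                        grid[a, b] += 1
                        if grid[a, b] >= 4:
                            unstable.append((a, b))
            return topplings

        grid = np.zeros((50, 50), dtype=int)
        sizes = [add_grain_and_relax(grid) for _ in range(20000)]
        # After a transient, avalanche sizes follow a heavy-tailed distribution.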

  20. Advanced networks and computing in healthcare

    PubMed Central

    Ackerman, Michael

    2011-01-01

    As computing and network capabilities continue to rise, it becomes increasingly important to understand the varied applications for using them to provide healthcare. The objective of this review is to identify key characteristics and attributes of healthcare applications involving the use of advanced computing and communication technologies, drawing upon 45 research and development projects in telemedicine and other aspects of healthcare funded by the National Library of Medicine over the past 12 years. Only projects publishing in the professional literature were included in the review. Four projects did not publish beyond their final reports. In addition, the authors drew on their first-hand experience as project officers, reviewers and monitors of the work. Major themes in the corpus of work were identified, characterizing key attributes of advanced computing and network applications in healthcare. Advanced computing and network applications are relevant to a range of healthcare settings and specialties, but they are most appropriate for solving a narrower range of problems in each. Healthcare projects undertaken primarily to explore potential have also demonstrated effectiveness and depend on the quality of network service as much as bandwidth. Many applications are enabling, making it possible to provide service or conduct research that previously was not possible or to achieve outcomes in addition to those for which projects were undertaken. Most notable are advances in imaging and visualization, collaboration and sense of presence, and mobility in communication and information-resource use. PMID:21486877

  1. Predictive Dynamic Security Assessment through Advanced Computing

    SciTech Connect

    Huang, Zhenyu; Diao, Ruisheng; Jin, Shuangshuang; Chen, Yousu

    2014-11-30

    Traditional dynamic security assessment is limited by several factors and thus falls short in providing real-time information to be predictive for power system operation. These factors include the steady-state assumption of current operating points, static transfer limits, and low computational speed. This paper addresses these factors and frames predictive dynamic security assessment. The primary objective of predictive dynamic security assessment is to enhance the functionality and computational process of dynamic security assessment through the use of high-speed phasor measurements and the application of advanced computing technologies for faster-than-real-time simulation. This paper presents algorithms, computing platforms, and simulation frameworks that constitute the predictive dynamic security assessment capability. Examples of phasor application and fast computation for dynamic security assessment are included to demonstrate the feasibility and speed enhancement for real-time applications.

  2. Recent advances in computer image generation simulation.

    PubMed

    Geltmacher, H E

    1988-11-01

    An explosion in flight simulator technology over the past 10 years is revolutionizing U.S. Air Force (USAF) operational training. The single, most important development has been in computer image generation. However, other significant advances are being made in simulator handling qualities, real-time computation systems, and electro-optical displays. These developments hold great promise for achieving high fidelity combat mission simulation. This article reviews the progress to date and predicts its impact, along with that of new computer science advances such as very high speed integrated circuits (VHSIC), on future USAF aircrew simulator training. Some exciting possibilities are multiship, full-mission simulators at replacement training units, miniaturized unit level mission rehearsal training simulators, onboard embedded training capability, and national scale simulator networking.

  3. WRF4G project: Adaptation of WRF Model to Distributed Computing Infrastructures

    NASA Astrophysics Data System (ADS)

    Cofino, Antonio S.; Fernández Quiruelas, Valvanuz; García Díez, Markel; Blanco Real, Jose C.; Fernández, Jesús

    2013-04-01

    Nowadays Grid computing is a powerful computational tool ready to be used by the scientific community in different areas (such as biomedicine, astrophysics, climate, etc.). However, the use of these distributed computing infrastructures (DCIs) is not yet common practice in climate research, and only a few teams and applications in this area take advantage of them. Thus, the first objective of this project is to popularize the use of this technology in the atmospheric sciences area. In order to achieve this objective, one of the most widely used applications has been taken (WRF, a limited-area model, successor of the MM5 model), which has a user community formed by more than 8000 researchers worldwide. This community develops its research activity in different areas and could benefit from the advantages of Grid resources (case study simulations, regional hind-cast/forecast, sensitivity studies, etc.). WRF output is also used as input by many in the energy and natural hazards communities, which will therefore also benefit. However, Grid infrastructures have some drawbacks for the execution of applications that make an intensive use of CPU and memory for a long period of time. This makes it necessary to develop a specific framework (middleware). This middleware encapsulates the application and provides appropriate services for the monitoring and management of the jobs and the data. Thus, the second objective of the project consists of the development of a generic adaptation of WRF for the Grid (WRF4G), to be distributed as open source and to be integrated in the official WRF development cycle. The use of this WRF adaptation should be transparent and useful to face any of the previously described studies, and avoid any of the problems of the Grid infrastructure. Moreover it should simplify the access to Grid infrastructures for the research teams, and also free them from the technical and computational aspects of the use of the Grid. Finally, in order to

  4. Application of large-scale computing infrastructure for diverse environmental research applications using GC3Pie

    NASA Astrophysics Data System (ADS)

    Maffioletti, Sergio; Dawes, Nicholas; Bavay, Mathias; Sarni, Sofiane; Lehning, Michael

    2013-04-01

    The Swiss Experiment platform (SwissEx: http://www.swiss-experiment.ch) provides a distributed storage and processing infrastructure for environmental research experiments. The aim of the second phase project (the Open Support Platform for Environmental Research, OSPER, 2012-2015) is to develop the existing infrastructure to provide scientists with an improved workflow. This improved workflow will include pre-defined, documented and connected processing routines. A large-scale computing and data facility is required to provide reliable and scalable access to data for analysis, and it is desirable that such an infrastructure be free of traditional data handling methods. Such an infrastructure has been developed using the cloud-based part of the Swiss national infrastructure SMSCG (http://www.smscg.ch) and Academic Cloud. The infrastructure under construction supports two main usage models: 1) Ad-hoc data analysis scripts: These scripts are simple processing scripts, written by the environmental researchers themselves, which can be applied to large data sets via the high power infrastructure. Examples of this type of script are spatial statistical analysis scripts (R-based scripts), mostly computed on raw meteorological and/or soil moisture data. These provide processed output in the form of a grid, a plot, or a kml. 2) Complex models: A more intense data analysis pipeline centered (initially) around the physical process model, Alpine3D, and the MeteoIO plugin; depending on the data set, this may require a tightly coupled infrastructure. SMSCG already supports Alpine3D executions as both regular grid jobs and as virtual software appliances. A dedicated appliance with the Alpine3D specific libraries has been created and made available through the SMSCG infrastructure. The analysis pipelines are activated and supervised by simple control scripts that, depending on the data fetched from the meteorological stations, launch new instances of the Alpine3D appliance
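
    GC3Pie drives such runs by wrapping each one as an Application object handed to an execution engine. A minimal sketch along the lines of the GC3Pie tutorials (the R script, its inputs, and the output names are placeholders for an ad-hoc analysis script of the first usage model):

        import time

        import gc3libs

        # Wrap one ad-hoc analysis script as a GC3Pie application.
        app = gc3libs.Application(
            arguments=["Rscript", "spatial_stats.R"],   # placeholder script
            inputs=["spatial_stats.R"],
            outputs=["result.csv"],
            output_dir="./results",
            stdout="stdout.txt",
        )

        engine = gc3libs.create_engine()
        engine.add(app)
        while app.execution.state != gc3libs.Run.State.TERMINATED:
            engine.progress()   # submit, monitor, and retrieve output
            time.sleep(5)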

  5. Designing and Operating Through Compromise: Architectural Analysis of CKMS for the Advanced Metering Infrastructure

    SciTech Connect

    Duren, Mike; Aldridge, Hal; Abercrombie, Robert K; Sheldon, Frederick T

    2013-01-01

    Compromises attributable to the Advanced Persistent Threat (APT) highlight the necessity for constant vigilance. The APT provides a new perspective on security metrics (e.g., statistics-based cyber security) and quantitative risk assessments. We consider design principles and models/tools that provide high assurance for energy delivery systems (EDS) operations regardless of the state of compromise. Cryptographic keys must be securely exchanged, then held and protected on either end of a communications link. This is challenging for a utility with numerous substations that must secure the intelligent electronic devices (IEDs) that may comprise a complex control system of systems. For example, distribution and management of keys among the millions of intelligent meters within the Advanced Metering Infrastructure (AMI) is being implemented as part of the National Smart Grid initiative. Without a means for a secure cryptographic key management system (CKMS), no cryptographic solution can be widely deployed to protect the EDS infrastructure from cyber-attack. We consider 1) how security modeling is applied to key management and cyber security concerns on a continuous basis from design through operation, 2) how trusted models and key management architectures greatly impact failure scenarios, and 3) how hardware-enabled trust is a critical element in detecting, surviving, and recovering from attack.
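
    The paper discusses architecture rather than code, but one common building block of such a CKMS is deriving a distinct key per meter from a protected master key, so that compromising one device does not expose the rest. A sketch with HKDF from the Python cryptography package; the labels are hypothetical and a production system would keep the master key in an HSM:

        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.kdf.hkdf import HKDF

        def derive_meter_key(master_key: bytes, meter_id: str) -> bytes:
            """Derive a per-meter 256-bit key; compromise of one meter key
            does not expose the master key or other meters' keys."""
            hkdf = HKDF(
                algorithm=hashes.SHA256(),
                length=32,
                salt=b"ami-ckms-v1",        # hypothetical domain separator
                info=meter_id.encode(),     # binds the key to one device
            )
            return hkdf.derive(master_key)

        key = derive_meter_key(b"\x00" * 32, "meter-0042")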

  6. Geospatial-enabled Data Exploration and Computation through Data Infrastructure Building Blocks

    NASA Astrophysics Data System (ADS)

    Song, C. X.; Biehl, L. L.; Merwade, V.; Villoria, N.

    2015-12-01

    Geospatial data are present everywhere today with the proliferation of location-aware computing devices and sensors. This is especially true in the scientific community where large amounts of data are driving research and education activities in many domains. Collaboration over geospatial data, for example, in modeling, data analysis and visualization, must still overcome the barriers of specialized software and expertise among other challenges. The GABBs project aims at enabling broader access to geospatial data exploration and computation by developing spatial data infrastructure building blocks that leverage capabilities of end-to-end application service and virtualized computing framework in HUBzero. Funded by NSF Data Infrastructure Building Blocks (DIBBS) initiative, GABBs provides a geospatial data architecture that integrates spatial data management, mapping and visualization and will make it available as open source. The outcome of the project will enable users to rapidly create tools and share geospatial data and tools on the web for interactive exploration of data without requiring significant software development skills, GIS expertise or IT administrative privileges. This presentation will describe the development of geospatial data infrastructure building blocks and the scientific use cases that help drive the software development, as well as seek feedback from the user communities.

  7. Simulation methods for advanced scientific computing

    SciTech Connect

    Booth, T.E.; Carlson, J.A.; Forster, R.A.

    1998-11-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The objective of the project was to create effective new algorithms for solving N-body problems by computer simulation. The authors concentrated on developing advanced classical and quantum Monte Carlo techniques. For simulations of phase transitions in classical systems, they produced a framework generalizing the famous Swendsen-Wang cluster algorithms for Ising and Potts models. For spin-glass-like problems, they demonstrated the effectiveness of an extension of the multicanonical method for the two-dimensional, random bond Ising model. For quantum mechanical systems, they generated a new method to compute the ground-state energy of systems of interacting electrons. They also improved methods to compute excited states when the diffusion quantum Monte Carlo method is used and to compute longer time dynamics when the stationary phase quantum Monte Carlo method is used.
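
    For concreteness, one Swendsen-Wang sweep for the 2D Ising model, the classical algorithm the project generalized, joins aligned neighbours into clusters with bond probability 1 - exp(-2*beta*J) and flips each cluster with probability 1/2. A textbook sketch (not the LANL code):

        import math
        import random

        import numpy as np

        def swendsen_wang_sweep(spins, beta, J=1.0):
            """One cluster update of the 2D Ising model (periodic boundaries)."""
            n = spins.shape[0]
            parent = list(range(n * n))

            def find(a):
                while parent[a] != a:
                    parent[a] = parent[parent[a]]   # path compression
                    a = parent[a]
                return a

            def union(a, b):
                parent[find(a)] = find(b)

            p_bond = 1.0 - math.exp(-2.0 * beta * J)
            for i in range(n):
                for j in range(n):
                    for di, dj in ((1, 0), (0, 1)):          # each bond once
                        a, b = (i, j), ((i + di) % n, (j + dj) % n)
                        if spins[a] == spins[b] and random.random() < p_bond:
                            union(a[0] * n + a[1], b[0] * n + b[1])

            # Flip every cluster with probability 1/2
            flip = {}
            for i in range(n):
                for j in range(n):
                    root = find(i * n + j)
                    if root not in flip:
                        flip[root] = random.random() < 0.5
                    if flip[root]:
                        spins[i, j] *= -1
            return spins

        spins = np.random.choice([-1, 1], size=(32, 32))
        for _ in range(100):
            swendsen_wang_sweep(spins, beta=0.44)   # near the critical point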

  8. Research Institute for Advanced Computer Science

    NASA Technical Reports Server (NTRS)

    Gross, Anthony R. (Technical Monitor); Leiner, Barry M.

    2000-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center. It currently operates under a multiple year grant/cooperative agreement that began on October 1, 1997 and is up for renewal in the year 2002. Ames has been designated NASA's Center of Excellence in Information Technology. In this capacity, Ames is charged with the responsibility to build an Information Technology Research Program that is preeminent within NASA. RIACS serves as a bridge between NASA Ames and the academic community, and RIACS scientists and visitors work in close collaboration with NASA scientists. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. RIACS research focuses on the three cornerstones of information technology research necessary to meet the future challenges of NASA missions: (1) Automated Reasoning for Autonomous Systems. Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth; (2) Human-Centered Computing. Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities; (3) High Performance Computing and Networking. Advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to data analysis of large datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply information technology research to a

  9. Airborne Advanced Reconfigurable Computer System (ARCS)

    NASA Technical Reports Server (NTRS)

    Bjurman, B. E.; Jenkins, G. M.; Masreliez, C. J.; Mcclellan, K. L.; Templeman, J. E.

    1976-01-01

    A fault-tolerant digital computer subsystem concept was defined, and the potential benefits and costs of such a subsystem were assessed when used as the central element of a new transport's flight control system. The derived advanced reconfigurable computer system (ARCS) is a triple-redundant computer subsystem that automatically reconfigures, under multiple fault conditions, from triplex to duplex to simplex operation, with redundancy recovery if the fault condition is transient. The study included criteria development covering factors at the aircraft's operation level that would influence the design of a fault-tolerant system for commercial airline use. A new reliability analysis tool was developed for evaluating redundant, fault-tolerant system availability and survivability; and a stringent digital system software design methodology was used to achieve design/implementation visibility.
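
    The report describes the design only in prose; the toy sketch below illustrates the reconfiguration idea in the simplest terms: vote among the active channels, retire a channel that disagrees persistently, and degrade from triplex to duplex to simplex, while a strike counter that resets on agreement lets transient faults clear. All thresholds are hypothetical, and in this simplification a retired channel never returns.

        class ReconfigurableVoter:
            """Toy majority voter that degrades triplex -> duplex -> simplex."""

            def __init__(self, fault_limit=3):
                self.active = [0, 1, 2]      # channel indices still trusted
                self.strikes = [0, 0, 0]     # consecutive disagreements
                self.fault_limit = fault_limit

            def vote(self, outputs):
                """outputs: per-channel values; returns the voted value."""
                values = [outputs[c] for c in self.active]
                voted = max(set(values), key=values.count)   # majority value
                for c in list(self.active):
                    if outputs[c] != voted:
                        self.strikes[c] += 1
                        if self.strikes[c] >= self.fault_limit and len(self.active) > 1:
                            self.active.remove(c)   # reconfigure: retire channel
                    else:
                        self.strikes[c] = 0         # transient fault recovered
                return voted

        v = ReconfigurableVoter()
        print(v.vote({0: 7, 1: 7, 2: 9}))  # -> 7; channel 2 accrues a strike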

  10. Advanced Neuropsychological Diagnostics Infrastructure (ANDI): A Normative Database Created from Control Datasets

    PubMed Central

    de Vent, Nathalie R.; Agelink van Rentergem, Joost A.; Schmand, Ben A.; Murre, Jaap M. J.; Huizenga, Hilde M.

    2016-01-01

    In the Advanced Neuropsychological Diagnostics Infrastructure (ANDI), datasets of several research groups are combined into a single database containing scores on neuropsychological tests from healthy participants. For the most popular neuropsychological tests, the quantity and range of these data surpass those of traditional normative data, thereby enabling more accurate neuropsychological assessment. Because of the unique structure of the database, it facilitates normative comparison methods that were not feasible before, in particular those in which entire profiles of scores are evaluated. In this article, we describe the steps that were necessary to combine the separate datasets into a single database. These steps involve matching variables from multiple datasets, removing outlying values, determining the influence of demographic variables, and finding appropriate transformations to normality. Also, a brief description of the current contents of the ANDI database is given. PMID:27812340
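
    The database steps are described in the article itself; the sketch below only illustrates the general pattern that such norms support, regression-based correction for demographic variables followed by a normative z-score. Column names and the toy control sample are hypothetical.

        import numpy as np
        import pandas as pd

        def normative_z(norms: pd.DataFrame, score: float, age: float, edu: float):
            """Z-score of one patient's test score against regression-based norms.
            norms holds healthy controls' columns: score, age, edu."""
            X = np.column_stack([np.ones(len(norms)), norms["age"], norms["edu"]])
            beta, *_ = np.linalg.lstsq(X, norms["score"], rcond=None)
            residuals = norms["score"] - X @ beta
            expected = beta @ np.array([1.0, age, edu])
            return (score - expected) / residuals.std(ddof=X.shape[1])

        controls = pd.DataFrame({
            "score": [28, 30, 26, 29, 27, 25, 30, 28],
            "age":   [60, 55, 70, 58, 66, 72, 50, 63],
            "edu":   [12, 16, 10, 14, 12, 8, 18, 12],
        })
        print(normative_z(controls, score=22, age=68, edu=10))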

  11. Software Infrastructure for Computer-aided Drug Discovery and Development, a Practical Example with Guidelines.

    PubMed

    Moretti, Loris; Sartori, Luca

    2016-09-01

    In the field of Computer-Aided Drug Discovery and Development (CADDD) the proper software infrastructure is essential for everyday investigations. The creation of such an environment should be carefully planned and implemented with certain features in order to be productive and efficient. Here we describe a solution to integrate standard computational services into a functional unit that empowers modelling applications for drug discovery. This system allows users with various levels of expertise to run in silico experiments automatically and without the burden of file formatting for different software, managing the actual computation, keeping track of the activities and graphical rendering of the structural outcomes. To showcase the potential of this approach, performances of five different docking programs on an HIV-1 protease test set are presented. PMID:27546042

  12. The GMOS cyber(e)-infrastructure: advanced services for supporting science and policy.

    PubMed

    Cinnirella, S; D'Amore, F; Bencardino, M; Sprovieri, F; Pirrone, N

    2014-03-01

    The need for coordinated, systematized and catalogued databases on mercury in the environment is of paramount importance, as improved information can help the assessment of the effectiveness of measures established to phase out and ban mercury. Long-term monitoring sites have been established in a number of regions and countries for the measurement of mercury in ambient air and wet deposition. Long-term measurements of mercury concentration in biota have also produced a huge amount of information, but such initiatives are far from constituting a global, systematic and interoperable approach. To address these weaknesses, the on-going Global Mercury Observation System (GMOS) project (www.gmos.eu) established a coordinated global observation system for mercury and also retrieved historical data (www.gmos.eu/sdi). To manage such a large amount of information, a technological infrastructure was planned. This high-performance back-end resource, associated with sophisticated client applications, enables data storage, computing services, telecommunications networks and all services necessary to support the activity. This paper reports the architecture definition of the GMOS Cyber(e)-Infrastructure and the services developed to support science and policy, including the United Nations Environment Programme. It finally describes new possibilities in data analysis and data management through client applications. PMID:24249682

  14. A COSPAR/ILWS roadmap towards advanced space weather science to protect society's technological infrastructure

    NASA Astrophysics Data System (ADS)

    Schrijver, K.; Kauristie, K.

    2014-12-01

    With the rapid development of the technological infrastructure upon which modern society depends comes a growing appreciation of the hazards presented by the phenomena around our home planet that we call space weather. The complexity of the coupled Sun-Earth system, the sparseness by which it can be covered by remote-sensing and in-situ instrumentation, and the costs of the required observational and computational infrastructure warrant an international approach with feasible, affordable solutions. COSPAR and the steering committee of the International Living With a Star program tasked a multi-disciplinary, international team with the development of a roadmap with the goal of demonstrably improving our observational capabilities for, scientific understanding of, and ability to forecast the various aspects of space weather. We summarize the roadmap, its top-priority recommendations to achieve its goals, and their underlying rationale. More information on the roadmap, including the team's full membership, can be found at http://www.lmsal.com/~schryver/COSPARrm.

  16. Running WRF on various distributed computing infrastructures through a standard-based Science Gateway

    NASA Astrophysics Data System (ADS)

    Barbera, Roberto; Bruno, Riccardo; La Rocca, Giuseppe; Markussen Lunde, Torleif; Pehrson, Bjorn

    2014-05-01

    The Weather Research and Forecasting (WRF) modelling system is a widely used meso-scale numerical weather prediction system designed to serve both atmospheric research and operational forecasting needs. WRF has a large worldwide community of more than 20,000 users in 130 countries, and it has been specifically designed to be a state-of-the-art atmospheric simulation system that is portable and runs efficiently on available parallel computing platforms. Although WRF can be executed in many different environments, ranging from a single core in a stand-alone machine up to the most sophisticated HPC platforms, there are no solutions yet that match the e-Science paradigm, in which software, data and users are "linked" together by the network as components of distributed computing infrastructures. The topmost component of the typical e-Science model consists of Science Gateways, defined as community-developed sets of tools, applications, and data collections that are normally integrated via a portal to provide access to a distributed infrastructure. One of the many available Science Gateway solutions is the Catania Science Gateway Framework (CSGF - www.catania-science-gateways.it), whose most descriptive keywords are standards adoption and interoperability. The support of standards such as SAGA and SAML allows any CSGF user to seamlessly access and use both Grid and Cloud-based resources. In this work we present the CSGF and how it has been used in the context of the eI4Africa project (www.ei4africa.eu) to implement the Africa Grid Science Gateway (http://sgw.africa-grid.org), which allows users to execute WRF simulations on various kinds of distributed computing infrastructures at the same time, including the EGI Federated Cloud.
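
    For illustration, a minimal job submission under the SAGA standard, as it might look in the saga-python binding (the endpoint URL and WRF launcher path are placeholders; only the URL changes when retargeting from an SSH cluster to Grid or Cloud resources):

        import saga  # saga-python, an implementation of the OGF SAGA standard

        # Describe the WRF run once, independently of where it will execute.
        jd = saga.job.Description()
        jd.executable = "/opt/wrf/run_wrf.sh"   # placeholder WRF launcher
        jd.total_cpu_count = 64

        # Only this endpoint URL changes between infrastructures; the job
        # description above stays the same.
        js = saga.job.Service("ssh://cluster.example.org")
        job = js.create_job(jd)
        job.run()
        job.wait()
        print("final state:", job.state)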

  17. Collaborative mobile sensing and computing for civil infrastructure condition assessment: framework and applications

    NASA Astrophysics Data System (ADS)

    Chen, Jianfei; Chen, ZhiQiang

    2012-04-01

    Multi-function sensing and imaging devices, GPS, communication and computing devices are ubiquitously used in the field by engineers in civil engineering and emergency response practice. Field engineers, however, still have difficulty balancing ever-increasing data-collection demands against the capacity for real-time data processing and knowledge sharing. In addition, field engineers usually work collaboratively over a geospatially large area, yet the existing sensing and computing modalities used in the field are not designed to accommodate this condition. In this paper, we present a solution framework of collaborative mobile sensing and computing (CMSC) for civil infrastructure condition assessment, with Android-based mobile devices as the basic nodes in the framework and the potential to add other auxiliary imaging and sensing devices to the network. Difficulties in the mixed C++ and Java programming that is critical to realizing the framework are discussed. With a few prototypes illustrated in this paper, we envisage that the proposed CMSC framework will enable seamless integration of sensing, imaging, real-time processing and knowledge discovery in future engineer-centered field reconnaissance and civil infrastructure condition assessment.

  18. a Holistic Approach for Inspection of Civil Infrastructures Based on Computer Vision Techniques

    NASA Astrophysics Data System (ADS)

    Stentoumis, C.; Protopapadakis, E.; Doulamis, A.; Doulamis, N.

    2016-06-01

    This work examines the 2D recognition and 3D modelling of concrete tunnel cracks through visual cues. At present, the structural integrity inspection of large-scale infrastructures is mainly performed through visual observations by human inspectors, who identify structural defects, rate them and then categorize their severity. The approach described here targets minimal human intervention, towards autonomous inspection of civil infrastructures. The shortfalls of existing approaches to crack assessment are addressed by proposing a novel detection scheme. Although efforts have been made in the field, synergies among the proposed techniques are still missing. The holistic approach of this paper exploits state-of-the-art pattern recognition and stereo-matching techniques in order to build accurate 3D crack models. The innovation lies in the hybrid approach to CNN detector initialization and in the use of the modified census transformation for stereo matching, along with a binary fusion of two state-of-the-art optimization schemes. The described approach manages to deal with images of harsh radiometry, including severe radiometric differences within the stereo pair. The effectiveness of this workflow is evaluated on a real dataset gathered in highway and railway tunnels. Promisingly, the computer vision workflow described in this work can be transferred, with adaptations, to other infrastructure such as pipelines, bridges and large industrial facilities that need continuous condition assessment during their operational life cycle.
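
    The modified census transform mentioned above is compact enough to sketch: each pixel is encoded as a bit string recording which neighbours exceed the window mean (the classic transform compares against the centre pixel), and stereo matching costs then become Hamming distances between codes. A rough NumPy version, for illustration only:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def modified_census(img: np.ndarray, win: int = 5) -> np.ndarray:
            """Bit-encode each pixel's window against the window mean."""
            r = win // 2
            h, w = img.shape
            img = img.astype(np.float64)
            mean = uniform_filter(img, size=win, mode="nearest")  # per-pixel window mean
            padded = np.pad(img, r, mode="edge")
            code = np.zeros((h, w), dtype=np.uint64)
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    if dy == 0 and dx == 0:
                        continue
                    nbr = padded[r + dy:r + dy + h, r + dx:r + dx + w]
                    code = (code << np.uint64(1)) | (nbr > mean).astype(np.uint64)
            return code  # stereo cost: Hamming distance between left/right codes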

  19. 78 FR 6087 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-29

    ... Advanced Scientific Computing Advisory Committee AGENCY: Department of Energy, Office of Science. ACTION: Notice of open meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific Computing..., Office of Advanced Scientific Computing Research; SC-21/Germantown Building; U. S. Department of...

  20. 75 FR 9887 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-04

    ... Advanced Scientific Computing Advisory Committee AGENCY: Department of Energy, Office of Science. ACTION: Notice of open meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific Computing... Baker, Office of Advanced Scientific Computing Research; SC-21/Germantown Building; U.S. Department...

  1. 76 FR 9765 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-22

    ... Advanced Scientific Computing Advisory Committee AGENCY: Office of Science, Department of Energy. ACTION: Notice of open meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific Computing..., Office of Advanced Scientific Computing Research, SC-21/Germantown Building, U.S. Department of...

  2. 78 FR 41046 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-09

    ... Advanced Scientific Computing Advisory Committee AGENCY: Department of Energy, Office of Science. ACTION... hereby given that the Advanced Scientific Computing Advisory Committee will be renewed for a two-year... (DOE), on the Advanced Scientific Computing Research Program managed by the Office of...

  3. 75 FR 64720 - DOE/Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-20

    .../Advanced Scientific Computing Advisory Committee AGENCY: Department of Energy, Office of Science. ACTION: Notice of open meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific Computing... Baker, Office of Advanced Scientific Computing Research; SC-21/Germantown Building; U. S. Department...

  4. 76 FR 41234 - Advanced Scientific Computing Advisory Committee Charter Renewal

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-13

    ... Advanced Scientific Computing Advisory Committee Charter Renewal AGENCY: Department of Energy, Office of... Administration, notice is hereby given that the Advanced Scientific Computing Advisory Committee will be renewed... concerning the Advanced Scientific Computing program in response only to charges from the Director of...

  5. 78 FR 56871 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-16

    ... Advanced Scientific Computing Advisory Committee AGENCY: Office of Science, Department of Energy. ACTION: Notice of open meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific Computing... Baker, Office of Advanced Scientific Computing Research; SC-21/Germantown Building; U.S. Department...

  6. 77 FR 45345 - DOE/Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-31

    .../Advanced Scientific Computing Advisory Committee AGENCY: Office of Science, Department of Energy. ACTION: Notice of open meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific Computing... Baker, Office of Advanced Scientific Computing Research; SC-21/Germantown Building; U.S. Department...

  7. 75 FR 43518 - Advanced Scientific Computing Advisory Committee; Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-26

    ... Advanced Scientific Computing Advisory Committee; Meeting AGENCY: Office of Science, DOE. ACTION: Notice of open meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific Computing Advisory..., Office of Advanced Scientific Computing Research; SC-21/Germantown Building; U. S. Department of...

  8. 77 FR 12823 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-02

    ... Advanced Scientific Computing Advisory Committee AGENCY: Department of Energy, Office of Science. ACTION: Notice of Open Meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific Computing..., Office of Advanced Scientific Computing Research, SC-21/Germantown Building, U.S. Department of...

  9. Computational Design of Advanced Nuclear Fuels

    SciTech Connect

    Savrasov, Sergey; Kotliar, Gabriel; Haule, Kristjan

    2014-06-03

    The objective of the project was to develop a method for theoretical understanding of nuclear fuel materials whose physical and thermophysical properties can be predicted from first principles using a novel dynamical mean field method for electronic structure calculations. We concentrated our study on uranium, plutonium, their oxides, nitrides, carbides, as well as some rare earth materials whose 4f electrons provide a simplified framework for understanding complex behavior of the f electrons. We addressed the issues connected to the electronic structure, lattice instabilities, phonon and magnon dynamics as well as thermal conductivity. This allowed us to evaluate characteristics of advanced nuclear fuel systems using computer based simulations and avoid costly experiments.

  10. Advances in computed tomography imaging technology.

    PubMed

    Ginat, Daniel Thomas; Gupta, Rajiv

    2014-07-11

    Computed tomography (CT) is an essential tool in diagnostic imaging for evaluating many clinical conditions. In recent years, there have been several notable advances in CT technology that already have had or are expected to have a significant clinical impact, including extreme multidetector CT, iterative reconstruction algorithms, dual-energy CT, cone-beam CT, portable CT, and phase-contrast CT. These techniques and their clinical applications are reviewed and illustrated in this article. In addition, emerging technologies that address deficiencies in these modalities are discussed.

  11. ATCA for Machines-- Advanced Telecommunications Computing Architecture

    SciTech Connect

    Larsen, R.S.; /SLAC

    2008-04-22

    The Advanced Telecommunications Computing Architecture is a new industry open standard for electronics instrument modules and shelves being evaluated for the International Linear Collider (ILC). It is the first industrial standard designed for High Availability (HA). ILC availability simulations have shown clearly that the capabilities of ATCA are needed in order to achieve acceptable integrated luminosity. The ATCA architecture looks attractive for beam instruments and detector applications as well. This paper provides an overview of ongoing R&D including application of HA principles to power electronics systems.

  12. An introduction to NASA's advanced computing program: Integrated computing systems in advanced multichip modules

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi; Alkalai, Leon

    1996-01-01

    Recent changes within NASA's space exploration program favor the design, implementation, and operation of low cost, lightweight, small and micro spacecraft with multiple launches per year. In order to meet the future needs of these missions with regard to the use of spacecraft microelectronics, NASA's advanced flight computing (AFC) program is currently considering industrial cooperation and advanced packaging architectures. In relation to this, the AFC program is reviewed, considering the design and implementation of NASA's AFC multichip module.

  13. 77 FR 40586 - Draft NIST Interagency Report (NISTIR) 7823, Advanced Metering Infrastructure Smart Meter...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-10

    ... 7823 includes numerous stakeholders in the Smart Grid space, particularly customers, Smart Meter... Metering Infrastructure Smart Meter Upgradeability Test Framework; Request for Comments AGENCY: National... Metering Infrastructure Smart Meter Upgradeability Test Framework (Draft NISTIR 7823). This draft...

  14. A simple grid implementation with Berkeley Open Infrastructure for Network Computing using BLAST as a model

    PubMed Central

    Pinthong, Watthanai; Muangruen, Panya

    2016-01-01

    The development of high-throughput technologies such as next-generation sequencing allows thousands of experiments to be performed simultaneously while reducing resource requirements. Consequently, a massive amount of experimental data is now generated rapidly. Nevertheless, the data are not readily usable or meaningful until they are further analysed and interpreted. Due to the size of the data, a high-performance computer (HPC) is required for the analysis and interpretation. However, HPC systems are expensive and difficult to access. Other means, such as cloud computing services and grid computing systems, have been developed to give researchers the power of HPC without the need to purchase and maintain one. In this study, we implemented grid computing in a computer training center environment using the Berkeley Open Infrastructure for Network Computing (BOINC) as a job distributor and data manager, combining all desktop computers to virtualize an HPC. Fifty desktop computers were used to set up a grid system during the off-hours. To test the performance of the grid system, we adapted the Basic Local Alignment Search Tool (BLAST) to the BOINC system. Sequencing results from the Illumina platform were aligned to the human genome database by BLAST on the grid system, and the results and processing times were compared to those from a single desktop computer and an HPC. The estimated durations of a BLAST analysis of 4 million sequence reads on a desktop PC, an HPC and the grid system were 568, 24 and 5 days, respectively. Thus, the grid implementation of BLAST with BOINC is an efficient alternative to an HPC for sequence alignment. The grid implementation with BOINC also helped tap unused computing resources during the off-hours and could easily be modified for other available bioinformatics software. PMID:27547555

  15. A simple grid implementation with Berkeley Open Infrastructure for Network Computing using BLAST as a model.

    PubMed

    Pinthong, Watthanai; Muangruen, Panya; Suriyaphol, Prapat; Mairiang, Dumrong

    2016-01-01

    The development of high-throughput technologies such as next-generation sequencing allows thousands of experiments to be performed simultaneously while reducing resource requirements. Consequently, a massive amount of experimental data is now generated rapidly. Nevertheless, the data are not readily usable or meaningful until they are further analysed and interpreted. Due to the size of the data, a high-performance computer (HPC) is required for the analysis and interpretation. However, HPC systems are expensive and difficult to access. Other means, such as cloud computing services and grid computing systems, have been developed to give researchers the power of HPC without the need to purchase and maintain one. In this study, we implemented grid computing in a computer training center environment using the Berkeley Open Infrastructure for Network Computing (BOINC) as a job distributor and data manager, combining all desktop computers to virtualize an HPC. Fifty desktop computers were used to set up a grid system during the off-hours. To test the performance of the grid system, we adapted the Basic Local Alignment Search Tool (BLAST) to the BOINC system. Sequencing results from the Illumina platform were aligned to the human genome database by BLAST on the grid system, and the results and processing times were compared to those from a single desktop computer and an HPC. The estimated durations of a BLAST analysis of 4 million sequence reads on a desktop PC, an HPC and the grid system were 568, 24 and 5 days, respectively. Thus, the grid implementation of BLAST with BOINC is an efficient alternative to an HPC for sequence alignment. The grid implementation with BOINC also helped tap unused computing resources during the off-hours and could easily be modified for other available bioinformatics software.
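
    The work-unit creation step implied by this setup is straightforward to sketch (names and chunking policy are illustrative; each grid node would then run BLAST, e.g. blastn against the human genome database, on its own chunk):

        from pathlib import Path

        def split_fasta(fasta: Path, n_units: int, outdir: Path) -> list[Path]:
            # Naive FASTA parse: every record starts with '>'.
            records = [">" + r for r in fasta.read_text().split(">") if r]
            outdir.mkdir(parents=True, exist_ok=True)
            units = []
            for i in range(n_units):
                path = outdir / f"workunit_{i:04d}.fasta"
                path.write_text("".join(records[i::n_units]))  # round-robin split
                units.append(path)
            return units

        # e.g. spread 4 million reads across 50 machines' work queues:
        # units = split_fasta(Path("reads.fasta"), 50, Path("workunits"))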

  17. AstroCloud, a Cyber-Infrastructure for Astronomy Research: Cloud Computing Environments

    NASA Astrophysics Data System (ADS)

    Li, C.; Wang, J.; Cui, C.; He, B.; Fan, D.; Yang, Y.; Chen, J.; Zhang, H.; Yu, C.; Xiao, J.; Wang, C.; Cao, Z.; Fan, Y.; Hong, Z.; Li, S.; Mi, L.; Wan, W.; Wang, J.; Yin, S.

    2015-09-01

    AstroCloud is a cyber-infrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) under funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences). Based on CloudStack, an open-source cloud platform, we set up the cloud computing environment for the AstroCloud project. It consists of five distributed nodes across the mainland of China. Users can access and analyse data in this cloud computing environment. Based on GlusterFS, we built a scalable cloud storage system in which each user has a private space that can be shared among different virtual machines and desktop systems. With this environment, astronomers can easily access astronomical data collected by different telescopes and data centers, and data producers can archive their datasets safely.

  18. Interactions among human behavior, social networks, and societal infrastructures: A Case Study in Computational Epidemiology

    NASA Astrophysics Data System (ADS)

    Barrett, Christopher L.; Bisset, Keith; Chen, Jiangzhuo; Eubank, Stephen; Lewis, Bryan; Kumar, V. S. Anil; Marathe, Madhav V.; Mortveit, Henning S.

    Human behavior, social networks, and civil infrastructures are closely intertwined. Understanding their co-evolution is critical for designing public policies and decision support for disaster planning. For example, human behaviors and the day-to-day activities of individuals create dense social interactions that are characteristic of modern urban societies. These dense social networks provide a perfect fabric for fast, uncontrolled disease propagation. Conversely, people’s behavior in response to public policies, and their perception of how the crisis is unfolding as the disease outbreak proceeds, can dramatically alter the normally stable social interactions. Effective planning and response strategies must take these complicated interactions into account. In this chapter, we describe a computer-simulation-based approach to studying these issues, using public health and computational epidemiology as an illustrative example. We also formulate game-theoretic and stochastic optimization problems that capture many of the problems we study empirically.
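
    A toy version of the simulation idea, with a synthetic small-world graph standing in for the detailed activity-based social networks the chapter constructs (all parameters are arbitrary):

        import random
        import networkx as nx

        def sir_step(g, state, beta, gamma):
            """One day of a discrete SIR epidemic on a contact network."""
            new = dict(state)
            for node, s in state.items():
                if s == "I":
                    for nbr in g.neighbors(node):  # infection travels along edges
                        if state[nbr] == "S" and random.random() < beta:
                            new[nbr] = "I"
                    if random.random() < gamma:    # independent recovery
                        new[node] = "R"
            return new

        g = nx.watts_strogatz_graph(1000, 6, 0.1)  # stand-in contact network
        state = {n: "S" for n in g}
        state[0] = "I"                             # a single index case
        for _ in range(100):
            state = sir_step(g, state, beta=0.05, gamma=0.1)
        print(sum(s != "S" for s in state.values()), "ever infected")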

  19. Problem-Oriented Simulation Packages and Computational Infrastructure for Numerical Studies of Powerful Gyrotrons

    NASA Astrophysics Data System (ADS)

    Damyanova, M.; Sabchevski, S.; Zhelyazkov, I.; Vasileva, E.; Balabanova, E.; Dankov, P.; Malinov, P.

    2016-05-01

    Powerful gyrotrons are necessary as sources of strong microwaves for electron cyclotron resonance heating (ECRH) and electron cyclotron current drive (ECCD) of magnetically confined plasmas in various reactors (most notably ITER) for controlled thermonuclear fusion. Adequate physical models and efficient problem-oriented software packages are essential tools for numerical studies, analysis, optimization and computer-aided design (CAD) of such high-performance gyrotrons operating in a CW mode and delivering output power of the order of 1-2 MW. In this report we present the current status of our simulation tools (physical models, numerical codes, pre- and post-processing programs, etc.) as well as the computational infrastructure on which they are being developed, maintained and executed.

  20. A roadmap towards advanced space weather science to protect society's technological infrastructure: Panel Discussion 1

    NASA Astrophysics Data System (ADS)

    Schrijver, Carolus; Kauristie, Kirsti

    This single 90-minute slot will follow on from the morning plenary presentation of the roadmap, providing an opportunity for further discussion of the panel’s findings with an invited panel of key stakeholders. --- As mankind’s technological capabilities grow, society constructs a rapidly deepening insight into the workings of the universe at large, being guided by exploring space near to our home. But at the same time our societal dependence on technology increases and with that comes a growing appreciation of the challenges presented by the phenomena that occur in that space around our home planet: Magnetic explosions on the Sun and their counterparts in the geomagnetic field can in extreme cases endanger our all-pervasive electrical infrastructure. Powerful space storms occasionally lower the reliability of the globe-spanning satellite navigation systems and interrupt radio communications. Energetic particle storms lead to malfunctions and even failures in satellites that are critical to the flow of information in the globally connected economies. These and other Sun-driven effects on Earth’s environment, collectively known as space weather, resemble some other natural hazards in the sense that they pose a risk for the safe and efficient functioning of society that needs to be understood, quantified, and - ultimately - mitigated against. The complexity of the coupled Sun-Earth system, the sparseness by which it can be covered by remote-sensing and in-situ instrumentation, and the costs of the required observational and computational infrastructure warrant a well-planned and well-coordinated approach with cost-efficient solutions. Our team is tasked with the development of a roadmap with the goal of demonstrably improving our observational capabilities, scientific understanding, and the ability to forecast. This paper summarizes the accomplishments of the roadmap team in identifying the highest-priority challenges to achieve these goals.

  1. A roadmap towards advanced space weather science to protect society's technological infrastructure: Panel Discussion 2

    NASA Astrophysics Data System (ADS)

    Schrijver, Carolus; Kauristie, Kirsti

    This single 90-minute slot will follow on from the morning plenary presentation of the roadmap, providing an opportunity for further discussion of the panel’s findings with an invited panel of key stakeholders. --- As mankind’s technological capabilities grow, society constructs a rapidly deepening insight into the workings of the universe at large, being guided by exploring space near to our home. But at the same time our societal dependence on technology increases and with that comes a growing appreciation of the challenges presented by the phenomena that occur in that space around our home planet: Magnetic explosions on the Sun and their counterparts in the geomagnetic field can in extreme cases endanger our all-pervasive electrical infrastructure. Powerful space storms occasionally lower the reliability of the globe-spanning satellite navigation systems and interrupt radio communications. Energetic particle storms lead to malfunctions and even failures in satellites that are critical to the flow of information in the globally connected economies. These and other Sun-driven effects on Earth’s environment, collectively known as space weather, resemble some other natural hazards in the sense that they pose a risk for the safe and efficient functioning of society that needs to be understood, quantified, and - ultimately - mitigated against. The complexity of the coupled Sun-Earth system, the sparseness by which it can be covered by remote-sensing and in-situ instrumentation, and the costs of the required observational and computational infrastructure warrant a well-planned and well-coordinated approach with cost-efficient solutions. Our team is tasked with the development of a roadmap with the goal of demonstrably improving our observational capabilities, scientific understanding, and the ability to forecast. This paper summarizes the accomplishments of the roadmap team in identifying the highest-priority challenges to achieve these goals.

  2. A roadmap towards advanced space weather science to protect society's technological infrastructure: Panel Discussion 3

    NASA Astrophysics Data System (ADS)

    Schrijver, Carolus; Kauristie, Kirsti

    This single 90-minute slot will follow on from the morning plenary presentation of the roadmap, providing an opportunity for further discussion of the panel’s findings with an invited panel of key stakeholders. --- As mankind’s technological capabilities grow, society constructs a rapidly deepening insight into the workings of the universe at large, being guided by exploring space near to our home. But at the same time our societal dependence on technology increases and with that comes a growing appreciation of the challenges presented by the phenomena that occur in that space around our home planet: Magnetic explosions on the Sun and their counterparts in the geomagnetic field can in extreme cases endanger our all-pervasive electrical infrastructure. Powerful space storms occasionally lower the reliability of the globe-spanning satellite navigation systems and interrupt radio communications. Energetic particle storms lead to malfunctions and even failures in satellites that are critical to the flow of information in the globally connected economies. These and other Sun-driven effects on Earth’s environment, collectively known as space weather, resemble some other natural hazards in the sense that they pose a risk for the safe and efficient functioning of society that needs to be understood, quantified, and - ultimately - mitigated against. The complexity of the coupled Sun-Earth system, the sparseness by which it can be covered by remote-sensing and in-situ instrumentation, and the costs of the required observational and computational infrastructure warrant a well-planned and well-coordinated approach with cost-efficient solutions. Our team is tasked with the development of a roadmap with the goal of demonstrably improving our observational capabilities, scientific understanding, and the ability to forecast. This paper summarizes the accomplishments of the roadmap team in identifying the highest-priority challenges to achieve these goals.

  3. A roadmap towards advanced space weather science to protect society's technological infrastructure

    NASA Astrophysics Data System (ADS)

    Schrijver, Carolus

    As mankind’s technological capabilities grow, society constructs a rapidly deepening insight into the workings of the universe at large, being guided by exploring space near to our home. But at the same time our societal dependence on technology increases and with that comes a growing appreciation of the challenges presented by the phenomena that occur in that space around our home planet: Magnetic explosions on the Sun and their counterparts in the geomagnetic field can in extreme cases endanger our all-pervasive electrical infrastructure. Powerful space storms occasionally lower the reliability of the globe-spanning satellite navigation systems and interrupt radio communications. Energetic particle storms lead to malfunctions and even failures in satellites that are critical to the flow of information in the globally connected economies. These and other Sun-driven effects on Earth’s environment, collectively known as space weather, resemble some other natural hazards in the sense that they pose a risk for the safe and efficient functioning of society that needs to be understood, quantified, and - ultimately - mitigated against. The complexity of the coupled Sun-Earth system, the sparseness by which it can be covered by remote-sensing and in-situ instrumentation, and the costs of the required observational and computational infrastructure warrant a well-planned and well-coordinated approach with cost-efficient solutions. Our team is tasked with the development of a roadmap with the goal of demonstrably improving our observational capabilities, scientific understanding, and the ability to forecast. This paper summarizes the accomplishments of the roadmap team in identifying the highest-priority challenges to achieve these goals.

  4. Advanced Refrigerant-Based Cooling Technologies for Information and Communication Infrastructure (ARCTIC)

    SciTech Connect

    Salamon, Todd

    2012-12-13

    efficiency and carbon footprint reduction for our nation's Information and Communications Technology (ICT) infrastructure. The specific objectives of the ARCTIC project focused on the following three areas: i) advanced research innovations that dramatically enhance the ability to deal with ever-increasing device heat densities and footprint reduction by bringing the liquid cooling much closer to the actual heat sources; ii) manufacturing optimization of key components; and iii) ensuring rapid market acceptance by reducing cost, thoroughly understanding system-level performance, and developing viable commercialization strategies. The project involved participants with expertise in all aspects of commercialization, including research & development, manufacturing, sales & marketing and end users. The team was led by Alcatel-Lucent and included subcontractors Modine and USHose.

  5. Advanced Scientific Computing Research Network Requirements

    SciTech Connect

    Bacon, Charles; Bell, Greg; Canon, Shane; Dart, Eli; Dattoria, Vince; Goodwin, Dave; Lee, Jason; Hicks, Susan; Holohan, Ed; Klasky, Scott; Lauzon, Carolyn; Rogers, Jim; Shipman, Galen; Skinner, David; Tierney, Brian

    2013-03-08

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In October 2012, ESnet and the Office of Advanced Scientific Computing Research (ASCR) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the ASCR program office. The requirements identified at the review are summarized in the Findings section, and are described in more detail in the body of the report.

  6. Feasibility of a Networked Air Traffic Infrastructure Validation Environment for Advanced NextGen Concepts

    NASA Technical Reports Server (NTRS)

    McCormack, Michael J.; Gibson, Alec K.; Dennis, Noah E.; Underwood, Matthew C.; Miller, Lana B.; Ballin, Mark G.

    2013-01-01

    Next Generation Air Transportation System (NextGen) applications reliant upon aircraft data links such as Automatic Dependent Surveillance-Broadcast (ADS-B) offer a sweeping modernization of the National Airspace System (NAS), but the aviation stakeholder community has not yet established a positive business case for equipage, and message content standards remain in flux. It is necessary to transition promising Air Traffic Management (ATM) Concepts of Operations (ConOps) from simulation environments to full-scale flight tests in order to validate user benefits and solidify message standards. However, flight tests are prohibitively expensive, and message standards for Commercial-off-the-Shelf (COTS) systems cannot support many advanced ConOps. It is therefore proposed to simulate future aircraft surveillance and communications equipage and employ an existing commercial data link to exchange data during dedicated flight tests. This capability, referred to as the Networked Air Traffic Infrastructure Validation Environment (NATIVE), would emulate aircraft data links such as ADS-B using in-flight Internet and easily installed test equipment. By utilizing low-cost equipment that is easy to install and certify for testing, advanced ATM ConOps can be validated, message content standards can be solidified, and new standards can be established through full-scale flight trials without unnecessary or expensive equipage or extensive flight test preparation. This paper presents results of a feasibility study of the NATIVE concept. To determine requirements, six NATIVE design configurations were developed for two NASA ConOps that rely on ADS-B. The performance characteristics of three existing in-flight Internet services were investigated to determine whether performance is adequate to support the concept. Next, a study of requisite hardware and software was conducted to examine whether and how the NATIVE concept might be realized. Finally, to determine a business case

  7. λ-augmented tree for robust data collection in Advanced Metering Infrastructure

    SciTech Connect

    Kamto, Joseph; Qian, Lijun; Li, Wei; Han, Zhu

    2016-01-01

    A tree multicast configuration of smart meters (SMs) can maintain connectivity and meet the latency requirements of the Advanced Metering Infrastructure (AMI). However, such a topology is extremely fragile, as any single failure suffices to break its connectivity. Moreover, the impact of an SM node failure can be more or less significant: a non-cut SM node has a limited local impact, whereas a cut SM node breaks the network's connectivity. In this work, we design a highly connected tree with a set of backup links to minimize the weakness of the tree topology of SMs. A topology repair scheme is proposed to address the impact of an SM node failure on the connectivity of the augmented tree network. It relies on a loop detection scheme to define the criticality of an SM node and specifically targets cut SM nodes by selecting a backup parent SM to cover their children. Detailed algorithms to create such an AMI tree and the related theoretical and complexity analysis are provided, with insightful simulation results: sufficient redundancy is provided to alleviate data loss at the cost of signaling overhead. It is observed, however, that a biconnected tree provides the best compromise between the two.
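
    The paper's notion of a cut SM node corresponds to a graph-theoretic articulation point, which makes the criticality test easy to sketch; the backup-link selection below is a deliberate simplification of the authors' scheme, for illustration only:

        import networkx as nx

        tree = nx.balanced_tree(2, 4)  # hypothetical AMI collection tree

        # Cut SM nodes are exactly the articulation points: removing one
        # disconnects the tree. Snapshot them before mutating the graph.
        cut_nodes = list(nx.articulation_points(tree))
        for sm in cut_nodes:
            children = [n for n in tree.neighbors(sm) if n > sm]  # BFS labels
            if len(children) >= 2:
                # Backup link so the children can still reach a backup parent
                # (and thus the head-end) if `sm` fails.
                tree.add_edge(children[0], children[1])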

  8. λ-augmented tree for robust data collection in Advanced Metering Infrastructure

    DOE PAGES

    Kamto, Joseph; Qian, Lijun; Li, Wei; Han, Zhu

    2016-01-01

    A tree multicast configuration of smart meters (SMs) can maintain connectivity and meet the latency requirements of the Advanced Metering Infrastructure (AMI). However, such a topology is extremely fragile, as any single failure suffices to break its connectivity. Moreover, the impact of an SM node failure can be more or less significant: a non-cut SM node has a limited local impact, whereas a cut SM node breaks the network's connectivity. In this work, we design a highly connected tree with a set of backup links to minimize the weakness of the tree topology of SMs. A topology repair scheme is proposed to address the impact of an SM node failure on the connectivity of the augmented tree network. It relies on a loop detection scheme to define the criticality of an SM node and specifically targets cut SM nodes by selecting a backup parent SM to cover their children. Detailed algorithms to create such an AMI tree and the related theoretical and complexity analysis are provided, with insightful simulation results: sufficient redundancy is provided to alleviate data loss at the cost of signaling overhead. It is observed, however, that a biconnected tree provides the best compromise between the two.

  9. A Comprehensive and Cost-Effective Computer Infrastructure for K-12 Schools

    NASA Technical Reports Server (NTRS)

    Warren, G. P.; Seaton, J. M.

    1996-01-01

    Since 1993, NASA Langley Research Center has been developing and implementing a low-cost Internet connection model, including system architecture, training, and support, to provide Internet access for an entire network of computers. This infrastructure allows local area networks that exceed 50 machines per school to independently access the complete functionality of the Internet by connecting to a central site, using state-of-the-art commercial modem technology, through a single standard telephone line. By locating high-cost resources at this central site and sharing these resources and their costs among the school districts throughout a region, a practical, efficient, and affordable infrastructure for providing scalable Internet connectivity has been developed. As the demand for faster Internet access grows, the model has a simple expansion path that eliminates the need to replace major system components and re-train personnel. Observations of typical Internet usage within an environment, particularly school classrooms, have shown that after an initial period of 'surfing,' the Internet traffic becomes repetitive. By automatically storing requested Internet information on a high-capacity networked disk drive at the local site (network-based disk caching), then updating this information only when it changes, well over 80 percent of the Internet traffic that leaves a location can be eliminated by retrieving the information from the local disk cache.
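
    In miniature, the network-based disk caching described above amounts to the following sketch (cache path hypothetical): serve repeat requests from local disk, and revalidate with a conditional request so content is re-fetched only when it changes.

        import email.utils
        import hashlib
        import urllib.error
        import urllib.request
        from pathlib import Path

        CACHE = Path("cache")  # hypothetical local cache directory
        CACHE.mkdir(exist_ok=True)

        def fetch(url: str) -> bytes:
            key = CACHE / hashlib.sha256(url.encode()).hexdigest()
            req = urllib.request.Request(url)
            if key.exists():  # revalidate instead of re-downloading
                stamp = email.utils.formatdate(key.stat().st_mtime, usegmt=True)
                req.add_header("If-Modified-Since", stamp)
            try:
                with urllib.request.urlopen(req) as resp:
                    data = resp.read()
                key.write_bytes(data)  # cache the fresh copy
                return data
            except urllib.error.HTTPError as err:
                if err.code == 304:   # unchanged upstream: serve local copy
                    return key.read_bytes()
                raise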

  10. OPENING REMARKS: Scientific Discovery through Advanced Computing

    NASA Astrophysics Data System (ADS)

    Strayer, Michael

    2006-01-01

    Good morning. Welcome to SciDAC 2006 and Denver. I share greetings from the new Undersecretary for Energy, Ray Orbach. Five years ago SciDAC was launched as an experiment in computational science. The goal was to form partnerships among science applications, computer scientists, and applied mathematicians to take advantage of the potential of emerging terascale computers. This experiment has been a resounding success. SciDAC has emerged as a powerful concept for addressing some of the biggest challenges facing our world. As significant as these successes were, I believe there is also significance in the teams that achieved them. In addition to their scientific aims these teams have advanced the overall field of computational science and set the stage for even larger accomplishments as we look ahead to SciDAC-2. I am sure that many of you are expecting to hear about the results of our current solicitation for SciDAC-2. I’m afraid we are not quite ready to make that announcement. Decisions are still being made and we will announce the results later this summer. Nearly 250 unique proposals were received and evaluated, involving literally thousands of researchers, postdocs, and students. These collectively requested more than five times our expected budget. This response is a testament to the success of SciDAC in the community. In SciDAC-2 our budget has been increased to about $70 million for FY 2007 and our partnerships have expanded to include the Environment and National Security missions of the Department. The National Science Foundation has also joined as a partner. These new partnerships are expected to expand the application space of SciDAC, and broaden the impact and visibility of the program. We have, with our recent solicitation, expanded to turbulence, computational biology, and groundwater reactive modeling and simulation. We are currently talking with the Department’s applied energy programs about risk assessment, optimization of complex systems - such

  11. 76 FR 45786 - Advanced Scientific Computing Advisory Committee; Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-01

    ... Advanced Scientific Computing Advisory Committee; Meeting AGENCY: Office of Science, Department of Energy... Computing Advisory Committee (ASCAC). Federal Advisory Committee Act (Pub. L. 92-463, 86 Stat. 770) requires... INFORMATION CONTACT: Melea Baker, Office of Advanced Scientific Computing Research; SC-21/Germantown...

  12. 75 FR 57742 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-22

    ... Advanced Scientific Computing Advisory Committee AGENCY: Department of Energy, Office of Science. ACTION... Scientific Computing Advisory Committee (ASCAC). Federal Advisory Committee Act (Pub. L. 92-463, 86 Stat. 770...: Melea Baker, Office of Advanced Scientific Computing Research; SC-21/Germantown Building;...

  13. 78 FR 64931 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-30

    ... Advanced Scientific Computing Advisory Committee AGENCY: Office of Science, Department of Energy. ACTION... Computing Advisory Committee (ASCAC). This meeting replaces the cancelled ASCAC meeting that was to be held... Advanced Scientific Computing Research; SC-21/Germantown Building; U. S. Department of Energy;...

  14. 76 FR 64330 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-18

    ... Advanced Scientific Computing Advisory Committee AGENCY: Department of Energy, Office of Science. ACTION... Computing Advisory Committee (ASCAC). The Federal Advisory Committee Act (Pub. L. 92-463, 86 Stat. 770..., Office of Advanced Scientific Computing Research; SC-21/Germantown Building; U. S. Department of...

  15. 78 FR 50404 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-19

    ... Advanced Scientific Computing Advisory Committee AGENCY: Office of Science, Department of Energy. ] ACTION... Scientific Computing Advisory Committee (ASCAC). The Federal Advisory Committee Act (Pub. L. 92-463, 86 Stat... INFORMATION CONTACT: Melea Baker, Office of Advanced Scientific Computing Research; SC-21/Germantown...

  16. Making Advanced Computer Science Topics More Accessible through Interactive Technologies

    ERIC Educational Resources Information Center

    Shao, Kun; Maher, Peter

    2012-01-01

    Purpose: Teaching advanced technical concepts in a computer science program to students of different technical backgrounds presents many challenges. The purpose of this paper is to present a detailed experimental pedagogy in teaching advanced computer science topics, such as computer networking, telecommunications and data structures using…

  17. High-Performance Compute Infrastructure in Astronomy: 2020 Is Only Months Away

    NASA Astrophysics Data System (ADS)

    Berriman, B.; Deelman, E.; Juve, G.; Rynge, M.; Vöckler, J. S.

    2012-09-01

    By 2020, astronomy will be awash with as much as 60 PB of public data. Full scientific exploitation of such massive volumes of data will require high-performance computing on server farms co-located with the data. Development of this computing model will be a community-wide enterprise that has profound cultural and technical implications. Astronomers must be prepared to develop environment-agnostic applications that support parallel processing. The community must investigate the applicability and cost-benefit of emerging technologies such as cloud computing to astronomy, and must engage the Computer Science community to develop science-driven cyberinfrastructure such as workflow schedulers and optimizers. We report here the results of collaborations between a science center, IPAC, and a Computer Science research institute, ISI. These collaborations may be considered pathfinders in developing a high-performance compute infrastructure in astronomy. These collaborations investigated two exemplar large-scale science-driver workflow applications: 1) Calculation of an infrared atlas of the Galactic Plane at 18 different wavelengths by placing data from multiple surveys on a common plate scale and co-registering all the pixels; 2) Calculation of an atlas of periodicities present in the public Kepler data sets, which currently contain 380,000 light curves. These products have been generated with two workflow applications, written in C for performance and designed to support parallel processing on multiple environments and platforms, but with different compute resource needs: the Montage image mosaic engine is I/O-bound, and the NASA Star and Exoplanet Database periodogram code is CPU-bound. Our presentation will report cost and performance metrics and lessons-learned for continuing development. Applicability of Cloud Computing: Commercial Cloud providers generally charge for all operations, including processing, transfer of input and output data, and for storage of data
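
    For a sense of the CPU-bound half of that workload, the per-light-curve computation being farmed out is essentially a periodogram; a toy stand-in using astropy (the actual periodogram service uses its own C implementation):

        import numpy as np
        from astropy.timeseries import LombScargle

        # Toy light curve: a 3.56-day periodic signal plus noise, irregularly sampled.
        rng = np.random.default_rng(0)
        t = np.sort(rng.uniform(0, 90, 2000))  # observation times in days
        flux = (1.0 + 0.01 * np.sin(2 * np.pi * t / 3.56)
                + 0.005 * rng.standard_normal(t.size))

        freq, power = LombScargle(t, flux).autopower()
        print("best period ~%.2f days" % (1 / freq[np.argmax(power)]))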

  18. Event heap: a coordination infrastructure for dynamic heterogeneous application interactions in ubiquitous computing environments

    DOEpatents

    Johanson, Bradley E.; Fox, Armando; Winograd, Terry A.; Hanrahan, Patrick M.

    2010-04-20

    An efficient and adaptive middleware infrastructure called the Event Heap system dynamically coordinates application interactions and communications in a ubiquitous computing environment, e.g., an interactive workspace, having heterogeneous software applications running on various machines and devices across different platforms. Applications exchange events via the Event Heap. Each event is characterized by a set of unordered, named fields. Events are routed by matching certain attributes in the fields. The source and target versions of each field are automatically set when an event is posted or used as a template. The Event Heap system implements a unique combination of features, both intrinsic to tuplespaces and specific to the Event Heap, including content based addressing, support for routing patterns, standard routing fields, limited data persistence, query persistence/registration, transparent communication, self-description, flexible typing, logical/physical centralization, portable client API, at most once per source first-in-first-out ordering, and modular restartability.
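
    The tuplespace idea at the heart of the Event Heap, content-based addressing over named fields, fits in a few lines; the sketch below illustrates only that matching idea, not the patented system:

        class EventHeap:
            """Toy event heap: events are dicts of named fields, and consumers
            receive events whose fields match a template (a subset of
            field/value pairs), as in tuplespace-style addressing."""
            def __init__(self):
                self._events = []

            def post(self, **fields):
                self._events.append(dict(fields))

            def match(self, template: dict) -> list:
                return [e for e in self._events
                        if all(e.get(k) == v for k, v in template.items())]

        heap = EventHeap()
        heap.post(type="ButtonPress", source="podium", target="projector")
        heap.post(type="SlideChange", source="laptop-3", slide=12)
        # Route by matching named fields rather than by explicit addressing.
        print(heap.match({"type": "ButtonPress"}))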

  19. New Evaluations and Computational Infrastructure for Management and Visualization of Nuclear Astrophysics Data

    SciTech Connect

    Nesaraja, C.D.; Smith, M.S.; Hix, W.R.; Lingerfelt, E.J.; Scott, J.P.; Bardayan, D.W.; Blackmon, J.C.; Chae, Kyungyuk; Guidry, M.W.; Ma, Zhanwen; Kozub, R.L.; Meyer, R.A.; Thomas, J.S.

    2005-05-24

    Recent measurements with radioactive beams at ORNL's Holifield Radioactive Ion Beam Facility (HRIBF) have prompted evaluations of the structure and reactions of unstable nuclei that play an important role in stellar explosions. The evaluation work focuses on reactions involving unstable nuclei and their associated level structures. To determine the astrophysical impact of these evaluations and other new nuclear physics results, it is vital to rapidly and accurately process and incorporate them into astrophysics models. We discuss the development of a new computational infrastructure that streamlines this process, and is available online at nucastrodata.org. This site also hyperlinks all available nuclear data sets relevant for nuclear astrophysics studies. Features of the suite and future developments are described.

  20. An APEL Tool Based CPU Usage Accounting Infrastructure for Large Scale Computing Grids

    NASA Astrophysics Data System (ADS)

    Jiang, Ming; Novales, Cristina Del Cano; Mathieu, Gilles; Casson, John; Rogers, William; Gordon, John

    APEL (Accounting Processor for Event Logs) is the fundamental tool of the CPU usage accounting infrastructure deployed within the WLCG and EGEE Grids. In these Grids, jobs are submitted by users to computing resources via a Grid Resource Broker (e.g. the gLite Workload Management System). As a log processing tool, APEL interprets Grid gatekeeper logs (e.g. Globus) and batch system logs (e.g. PBS, LSF, SGE and Condor) to produce CPU job accounting records identified with Grid identities. These records provide a complete description of the usage of computing resources by users' jobs. APEL publishes accounting records into an accounting record repository at a Grid Operations Centre (GOC) for access from a GUI web tool. The functions of log file parsing, record generation and publication are implemented by the APEL Parser, APEL Core, and APEL Publisher components, respectively. Within the distributed accounting infrastructure, accounting records are transported from APEL Publishers at Grid sites to either a regionalised accounting system or the central one, by choice, via a common ActiveMQ message broker network. This provides an open transport layer through which other accounting systems can publish relevant accounting data to the central accounting repository via the unified interface provided by an APEL Publisher, and it will also give regional/National Grid Initiative (NGI) Grids flexibility in their choice of accounting system. The robust and secure delivery of accounting record messages at the NGI level, and between NGI accounting instances and the central one, is achieved by using configurable APEL Publishers and an ActiveMQ message broker network.
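
    The log-processing step can be illustrated with a simplified, invented PBS-style accounting line (real APEL additionally joins such records with gatekeeper logs to attach a Grid identity before publishing):

        SAMPLE = ("04/01/2011 12:00:00;E;42.ce.example.org;user=alice "
                  "resources_used.cput=01:02:03 resources_used.walltime=01:10:00")

        def _seconds(hms: str) -> int:
            h, m, s = map(int, hms.split(":"))
            return 3600 * h + 60 * m + s

        def parse(line: str) -> dict:
            # PBS-style accounting line: timestamp;record_type;job_id;key=value ...
            _ts, _event, job_id, rest = line.split(";", 3)
            kv = dict(pair.split("=", 1) for pair in rest.split())
            return {"job_id": job_id,
                    "local_user": kv["user"],
                    "cpu_seconds": _seconds(kv["resources_used.cput"]),
                    "wall_seconds": _seconds(kv["resources_used.walltime"])}

        record = parse(SAMPLE)  # then published onward via the broker network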

  1. Education as eHealth Infrastructure: Considerations in Advancing a National Agenda for eHealth

    ERIC Educational Resources Information Center

    Hilberts, Sonya; Gray, Kathleen

    2014-01-01

    This paper explores the role of education as infrastructure in large-scale ehealth strategies--in theory, in international practice and in one national case study. Education is often invisible in the documentation of ehealth infrastructure. Nevertheless a review of international practice shows that there is significant educational investment made…

  2. Sustaining a Community Computing Infrastructure for Online Teacher Professional Development: A Case Study of Designing Tapped In

    NASA Astrophysics Data System (ADS)

    Farooq, Umer; Schank, Patricia; Harris, Alexandra; Fusco, Judith; Schlager, Mark

    Community computing has recently grown to become a major research area in human-computer interaction. One of the objectives of community computing is to support computer-supported cooperative work among distributed collaborators working toward shared professional goals in online communities of practice. A core issue in designing and developing community computing infrastructures — the underlying sociotechnical layer that supports communitarian activities — is sustainability. Many community computing initiatives fail because the underlying infrastructure does not meet end-user requirements; because the community is unable to maintain a critical mass of users consistently over time; because it generates insufficient social capital to support significant contributions by members of the community; or, as typically happens with funded initiatives, because financial and human capital resources become unavailable to further maintain the infrastructure. On the basis of more than 9 years of design experience with Tapped In, an online community of practice for education professionals, we present a case study that discusses four design interventions that have sustained the Tapped In infrastructure and its community to date. These interventions represent broader design strategies for developing online environments for professional communities of practice.

  3. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure.

    PubMed

    Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei

    2011-09-01

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed.
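
    A minimal sketch of the master/worker decomposition the abstract describes, using mpi4py: each rank simulates an independent slice of the histories with its own seed and the tallies are reduced to rank 0. The toy simulate() function is a stand-in for an EGS5 run, not the authors' code.

        # Run with e.g.: mpiexec -n 100 python mc_cloud_sketch.py
        from mpi4py import MPI
        import random

        def simulate(n_histories, seed):
            # Placeholder physics: counts "interacting" histories.
            rng = random.Random(seed)
            return sum(1 for _ in range(n_histories) if rng.random() < 0.3)

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        TOTAL = 1_000_000                      # 1 million electrons, as in the paper
        local = simulate(TOTAL // size, seed=rank)
        total = comm.reduce(local, op=MPI.SUM, root=0)

        if rank == 0:
            # Reported timings: 2.58 h serial vs 3.3 min on 100 nodes,
            # i.e. (2.58 * 60) / 3.3 ~= 47x speed-up.
            print("aggregated tally:", total)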

  4. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure

    NASA Astrophysics Data System (ADS)

    Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei

    2011-09-01

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. This work was presented in part at the 2010 Annual Meeting of the American Association of Physicists in Medicine (AAPM), Philadelphia, PA.

  5. Developing an Advanced Environment for Collaborative Computing

    NASA Technical Reports Server (NTRS)

    Becerra-Fernandez, Irma; Stewart, Helen; DelAlto, Martha; Knight, Chris

    1999-01-01

    Knowledge management in general tries to organize and make available important know-how, whenever and wherever it is needed. Today, organizations rely on decision-makers to produce "mission critical" decisions that are based on inputs from multiple domains. The ideal decision-maker has a profound understanding of the specific domains that influence the decision-making process, coupled with the experience that allows them to act quickly and decisively on the information. In addition, learning companies benefit by not repeating costly mistakes and by reducing time-to-market in Research & Development projects. Group decision-making tools can help companies make better decisions by capturing the knowledge from groups of experts. Furthermore, companies that capture their customers' preferences can improve their customer service, which translates to larger profits. Therefore collaborative computing provides a common communication space, improves sharing of knowledge, provides a mechanism for real-time feedback on the tasks being performed, helps to optimize processes, and results in a centralized knowledge warehouse. This paper presents the research directions of a project which seeks to augment an advanced collaborative web-based environment called Postdoc with workflow capabilities. Postdoc is a "government-off-the-shelf" document management software package developed at NASA-Ames Research Center (ARC).

  6. Large-Scale Data Collection Metadata Management at the National Computation Infrastructure

    NASA Astrophysics Data System (ADS)

    Wang, J.; Evans, B. J. K.; Bastrakova, I.; Ryder, G.; Martin, J.; Duursma, D.; Gohar, K.; Mackey, T.; Paget, M.; Siddeswara, G.

    2014-12-01

    Data Collection management has become an essential activity at the National Computation Infrastructure (NCI) in Australia. NCI's partners (CSIRO, Bureau of Meteorology, Australian National University, and Geoscience Australia), supported by the Australian Government and Research Data Storage Infrastructure (RDSI), have established a national data resource that is co-located with high-performance computing. This paper addresses the metadata management of these data assets over their lifetime. NCI manages 36 data collections (10+ PB) categorised as earth system sciences, climate and weather model data assets and products, earth and marine observations and products, geosciences, terrestrial ecosystem, water management and hydrology, astronomy, social science and biosciences. The data is largely sourced from NCI partners, the custodians of many of the national scientific records, and major research community organisations. The data is made available in a HPC and data-intensive environment - a ~56000 core supercomputer, virtual labs on a 3000 core cloud system, and data services. By assembling these large national assets, new opportunities have arisen to harmonise the data collections, making a powerful cross-disciplinary resource. To support the overall management, a Data Management Plan (DMP) has been developed to record the workflows, procedures, the key contacts and responsibilities. The DMP has fields that can be exported to the ISO19115 schema and to the collection level catalogue of GeoNetwork. The subset or file level metadata catalogues are linked with the collection level through parent-child relationship definition using UUID. A number of tools have been developed that support interactive metadata management, bulk loading of data, and support for computational workflows or data pipelines. NCI creates persistent identifiers for each of the assets. The data collection is tracked over its lifetime, and the recognition of the data providers, data owners, data
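
    The parent-child linkage mentioned above can be pictured with a few lines of Python: a collection-level record carries a UUID, and each file-level (granule) record points back to it. The field names and paths are illustrative, not the actual NCI or ISO 19115 schema.

        import uuid

        collection = {"uuid": str(uuid.uuid4()),
                      "title": "Example climate model output collection"}

        granules = [
            {"uuid": str(uuid.uuid4()),
             "parentIdentifier": collection["uuid"],        # child -> parent link
             "file": f"/g/data/example/output_{i:03d}.nc"}  # hypothetical path
            for i in range(3)
        ]

        # A catalogue can resolve any granule to its parent collection.
        assert all(g["parentIdentifier"] == collection["uuid"] for g in granules)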

  7. Stimulated dual-band infrared computed tomography: A tool to inspect the aging infrastructure

    SciTech Connect

    Del Grande, N.K.; Durbin, P.F.

    1995-06-27

    The authors have developed stimulated dual-band infrared (IR) computed tomography as a tool to inspect the aging infrastructure. The system has the potential to locate and quantify structural damage within airframes and bridge decks. Typically, dual-band IR detection methods improve the signal-to-noise ratio by a factor of ten, compared to single-band IR detection methods. They conducted a demonstration at Boeing using a uniform pulsed-heat source to stimulate IR images of hidden defects in the 727 fuselage. The dual-band IR camera and image processing system produced temperature, thermal inertia, and cooling-rate maps. In combination, these maps characterized the defect site, size, depth, thickness and type. The authors quantified the percent metal loss from corrosion above a threshold of 5%, with overall uncertainties of 3%. Also, they conducted a feasibility study of dual-band IR thermal imaging for bridge deck inspections. They determined the sites and relative concrete displacement of 2-in. and 4-in. deep delaminations from thin styrofoam implants in asphalt-covered concrete slabs. They demonstrated the value of dual-band IR computed tomography to quantify structural damage within flash-heated airframes and naturally-heated bridge decks.

  8. Stimulated dual-band infrared computed tomography: a tool to inspect the aging infrastructure

    NASA Astrophysics Data System (ADS)

    DelGrande, Nancy; Durbin, Philip F.

    1995-09-01

    We have developed stimulated dual-band infrared (IR) computed tomography as a tool to inspect the aging infrastructure. Our system has the potential to locate and quantify structural damage within airframes and bridge decks. Typically, dual-band IR detection methods improve the signal-to-noise ratio by a factor of ten, compared to single-band IR detection methods. We conducted a demonstration at Boeing using a uniform pulsed-heat source to stimulate IR images of hidden defects in the 727 fuselage. Our dual-band IR camera and image processing system produced temperature, thermal inertia, and cooling-rate maps. In combination, these maps characterized the defect site, size, depth, thickness, and type. We quantified the percent metal loss from corrosion above a threshold of 5%, with overall uncertainties of 3%. Also, we conducted a feasibility study of dual-band IR thermal imaging for bridge deck inspections. We determined the sites and relative concrete displacement of 2-in. and 4-in. deep delaminations from thin styrofoam implants in asphalt-covered concrete slabs. We demonstrated the value of dual-band IR computed tomography to quantify structural damage within flash-heated airframes and naturally heated bridge decks.

  9. TerraFERMA: Harnessing Advanced Computational Libraries in Earth Science

    NASA Astrophysics Data System (ADS)

    Wilson, C. R.; Spiegelman, M.; van Keken, P.

    2012-12-01

    Many important problems in Earth sciences can be described by non-linear coupled systems of partial differential equations. These "multi-physics" problems include thermo-chemical convection in Earth and planetary interiors, interactions of fluids and magmas with the Earth's mantle and crust and coupled flow of water and ice. These problems are of interest to a large community of researchers but are complicated to model and understand. Much of this complexity stems from the nature of multi-physics where small changes in the coupling between variables or constitutive relations can lead to radical changes in behavior, which in turn affect critical computational choices such as discretizations, solvers and preconditioners. To make progress in understanding such coupled systems requires a computational framework where multi-physics problems can be described at a high-level while maintaining the flexibility to easily modify the solution algorithm. Fortunately, recent advances in computational science provide a basis for implementing such a framework. Here we present the Transparent Finite Element Rapid Model Assembler (TerraFERMA), which leverages several advanced open-source libraries for core functionality. FEniCS (fenicsproject.org) provides a high level language for describing the weak forms of coupled systems of equations, and an automatic code generator that produces finite element assembly code. PETSc (www.mcs.anl.gov/petsc) provides a wide range of scalable linear and non-linear solvers that can be composed into effective multi-physics preconditioners. SPuD (amcg.ese.ic.ac.uk/Spud) is an application neutral options system that provides both human and machine-readable interfaces based on a single xml schema. Our software integrates these libraries and provides the user with a framework for exploring multi-physics problems. A single options file fully describes the problem, including all equations, coefficients and solver options. Custom compiled applications are
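
    To give a flavour of the high-level weak-form style that FEniCS provides (and that TerraFERMA builds on), the sketch below states and solves a Poisson problem in a few lines using the legacy dolfin API. This is a generic FEniCS example, not TerraFERMA itself.

        from dolfin import (UnitSquareMesh, FunctionSpace, TrialFunction,
                            TestFunction, Function, DirichletBC, Constant,
                            dot, grad, dx, solve)

        mesh = UnitSquareMesh(16, 16)
        V = FunctionSpace(mesh, "P", 1)
        u, v = TrialFunction(V), TestFunction(V)

        a = dot(grad(u), grad(v)) * dx          # bilinear form (weak Laplacian)
        L = Constant(1.0) * v * dx              # linear form, source f = 1
        bc = DirichletBC(V, Constant(0.0), "on_boundary")

        u_h = Function(V)
        solve(a == L, u_h, bc)                  # assemble and solve
        print("max u:", u_h.vector().max())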

  10. Advanced flight computers for planetary exploration

    NASA Technical Reports Server (NTRS)

    Stephenson, R. Rhoads

    1988-01-01

    Research concerning flight computers for use on interplanetary probes is reviewed. The history of these computers from the Viking mission to the present is outlined. The differences between ground commercial computers and computers for planetary exploration are listed. The development of a computer for the Mariner Mark II comet rendezvous asteroid flyby mission is described. Various aspects of recently developed computer systems are examined, including the Max real-time embedded computer, a hypercube distributed supercomputer, a SAR data processor, a processor for the High Resolution IR Imaging Spectrometer, and a robotic vision multiresolution pyramid machine for processing images obtained by a Mars Rover.

  11. [Design and study of parallel computing environment of Monte Carlo simulation for particle therapy planning using a public cloud-computing infrastructure].

    PubMed

    Yokohama, Noriya

    2013-07-01

    This report was aimed at structuring the design of architectures and studying performance measurement of a parallel computing environment using a Monte Carlo simulation for particle therapy using a high performance computing (HPC) instance within a public cloud-computing infrastructure. Performance measurements showed an approximately 28 times faster speed than seen with single-thread architecture, combined with improved stability. A study of methods of optimizing the system operations also indicated lower cost. PMID:23877155

  12. [Design and study of parallel computing environment of Monte Carlo simulation for particle therapy planning using a public cloud-computing infrastructure].

    PubMed

    Yokohama, Noriya

    2013-07-01

    This report was aimed at structuring the design of architectures and studying performance measurement of a parallel computing environment using a Monte Carlo simulation for particle therapy using a high performance computing (HPC) instance within a public cloud-computing infrastructure. Performance measurements showed an approximately 28 times faster speed than seen with single-thread architecture, combined with improved stability. A study of methods of optimizing the system operations also indicated lower cost.

  13. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING: APPLICATION OF COMPUTATIONAL BIOPHYSICAL TRANSPORT, COMPUTATIONAL CHEMISTRY, AND COMPUTATIONAL BIOLOGY

    EPA Science Inventory

    Computational toxicology (CompTox) leverages the significant gains in computing power and computational techniques (e.g., numerical approaches, structure-activity relationships, bioinformatics) realized over the last few years, thereby reducing costs and increasing efficiency i...

  14. Computing Advances in the Teaching of Chemistry.

    ERIC Educational Resources Information Center

    Baskett, W. P.; Matthews, G. P.

    1984-01-01

    Discusses three trends in computer-oriented chemistry instruction: (1) availability of interfaces to integrate computers with experiments; (2) impact of the development of higher resolution graphics and greater memory capacity; and (3) role of videodisc technology on computer assisted instruction. Includes program listings for auto-titration and…

  15. VASA: Interactive Computational Steering of Large Asynchronous Simulation Pipelines for Societal Infrastructure.

    PubMed

    Ko, Sungahn; Zhao, Jieqiong; Xia, Jing; Afzal, Shehzad; Wang, Xiaoyu; Abram, Greg; Elmqvist, Niklas; Kne, Len; Van Riper, David; Gaither, Kelly; Kennedy, Shaun; Tolone, William; Ribarsky, William; Ebert, David S

    2014-12-01

    We present VASA, a visual analytics platform consisting of a desktop application, a component model, and a suite of distributed simulation components for modeling the impact of societal threats such as weather, food contamination, and traffic on critical infrastructure such as supply chains, road networks, and power grids. Each component encapsulates a high-fidelity simulation model that together form an asynchronous simulation pipeline: a system of systems of individual simulations with a common data and parameter exchange format. At the heart of VASA is the Workbench, a visual analytics application providing three distinct features: (1) low-fidelity approximations of the distributed simulation components using local simulation proxies to enable analysts to interactively configure a simulation run; (2) computational steering mechanisms to manage the execution of individual simulation components; and (3) spatiotemporal and interactive methods to explore the combined results of a simulation run. We showcase the utility of the platform using examples involving supply chains during a hurricane as well as food contamination in a fast food restaurant chain. PMID:26356899
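
    The low-fidelity proxy idea in feature (1) can be sketched as follows: a cheap local surrogate screens interactive "what-if" configurations before the expensive distributed simulation is launched. All names and numbers here are invented for illustration.

        class TrafficProxy:
            """Crude linear surrogate of a full traffic simulation."""
            def estimate_delay(self, closures):
                return 5.0 + 12.0 * len(closures)       # minutes, made-up model

        class RemoteTrafficSim:
            def run(self, closures):
                ...  # dispatch to the distributed component (not shown)

        proxy = TrafficProxy()
        candidate = ["bridge_a", "route_9"]             # hypothetical closures
        if proxy.estimate_delay(candidate) < 45.0:      # steer cheaply first,
            RemoteTrafficSim().run(candidate)           # then commit the full run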

  16. Data Grids: a new computational infrastructure for data-intensive science.

    PubMed

    Avery, Paul

    2002-06-15

    Twenty-first-century scientific and engineering enterprises are increasingly characterized by their geographic dispersion and their reliance on large data archives. These characteristics bring with them unique challenges. First, the increasing size and complexity of modern data collections require significant investments in information technologies to store, retrieve and analyse them. Second, the increased distribution of people and resources in these projects has made resource sharing and collaboration across significant geographic and organizational boundaries critical to their success. In this paper I explore how computing infrastructures based on Data Grids offer data-intensive enterprises a comprehensive, scalable framework for collaboration and resource sharing. A detailed example of a Data Grid framework is presented for a Large Hadron Collider experiment, where a hierarchical set of laboratory and university resources comprising petaflops of processing power and a multi-petabyte data archive must be efficiently used by a global collaboration. The experience gained with these new information systems, providing transparent managed access to massive distributed data collections, will be applicable to large-scale, data-intensive problems in a wide spectrum of scientific and engineering disciplines, and eventually in industry and commerce. Such systems will be needed in the coming decades as a central element of our information-based society.

  17. Data Grids: a new computational infrastructure for data-intensive science.

    PubMed

    Avery, Paul

    2002-06-15

    Twenty-first-century scientific and engineering enterprises are increasingly characterized by their geographic dispersion and their reliance on large data archives. These characteristics bring with them unique challenges. First, the increasing size and complexity of modern data collections require significant investments in information technologies to store, retrieve and analyse them. Second, the increased distribution of people and resources in these projects has made resource sharing and collaboration across significant geographic and organizational boundaries critical to their success. In this paper I explore how computing infrastructures based on Data Grids offer data-intensive enterprises a comprehensive, scalable framework for collaboration and resource sharing. A detailed example of a Data Grid framework is presented for a Large Hadron Collider experiment, where a hierarchical set of laboratory and university resources comprising petaflops of processing power and a multi-petabyte data archive must be efficiently used by a global collaboration. The experience gained with these new information systems, providing transparent managed access to massive distributed data collections, will be applicable to large-scale, data-intensive problems in a wide spectrum of scientific and engineering disciplines, and eventually in industry and commerce. Such systems will be needed in the coming decades as a central element of our information-based society. PMID:12804274

  18. VASA: Interactive Computational Steering of Large Asynchronous Simulation Pipelines for Societal Infrastructure.

    PubMed

    Ko, Sungahn; Zhao, Jieqiong; Xia, Jing; Afzal, Shehzad; Wang, Xiaoyu; Abram, Greg; Elmqvist, Niklas; Kne, Len; Van Riper, David; Gaither, Kelly; Kennedy, Shaun; Tolone, William; Ribarsky, William; Ebert, David S

    2014-12-01

    We present VASA, a visual analytics platform consisting of a desktop application, a component model, and a suite of distributed simulation components for modeling the impact of societal threats such as weather, food contamination, and traffic on critical infrastructure such as supply chains, road networks, and power grids. Each component encapsulates a high-fidelity simulation model that together form an asynchronous simulation pipeline: a system of systems of individual simulations with a common data and parameter exchange format. At the heart of VASA is the Workbench, a visual analytics application providing three distinct features: (1) low-fidelity approximations of the distributed simulation components using local simulation proxies to enable analysts to interactively configure a simulation run; (2) computational steering mechanisms to manage the execution of individual simulation components; and (3) spatiotemporal and interactive methods to explore the combined results of a simulation run. We showcase the utility of the platform using examples involving supply chains during a hurricane as well as food contamination in a fast food restaurant chain.

  19. Advancing crime scene computer forensics techniques

    NASA Astrophysics Data System (ADS)

    Hosmer, Chet; Feldman, John; Giordano, Joe

    1999-02-01

    Computers and network technology have become inexpensive and powerful tools that can be applied to a wide range of criminal activity. Computers have changed the world's view of evidence because computers are used more and more as tools in committing 'traditional crimes' such as embezzlement, theft, extortion and murder. This paper will focus on reviewing the current state-of-the-art of the data recovery and evidence construction tools used in both the field and laboratory for prosecution purposes.

  20. Estimation of future outflows and infrastructure needed to recycle personal computer systems in California.

    PubMed

    Kang, Hai-Yong; Schoenung, Julie M

    2006-09-21

    The objectives of the present study are to estimate future quantities of electronic waste (e-waste) for which appropriate infrastructure needs to be established, and to estimate the total cost of e-waste recycling in California. Estimation of the future amounts of e-waste as a function of time is critical to effective e-waste management. To generate estimates, we use a time-series materials flow analysis model (MFAM). The model estimates future e-waste quantities by modeling the stages of production, usage, and disposal. We consider four scenarios for the estimation of future e-waste generation in order to consider the effects of exportation outside the State of California and of user preferences to store or to recycle the e-waste. These effects were further investigated through the use of sensitivity analysis. The results of the present research indicate that the outflow (recycling) amount of central processing units (CPUs) will increase and will reach approximately 8.5 million units per year in 2013, but the outflow (recycling) of cathode ray tube (CRT) monitors will decrease from 2004 in California because of the replacement of CRT monitors by liquid crystal display (LCD) monitors. In 2013, the cost for CPU recycling will be 1.7 times higher than that in 2005, but for CRT monitors, the cost for recycling in 2013 will be negligible. After the State of California enacted the ban on landfill disposal of e-waste, recycling became the most common end-of-life (EOL) option in California. Also, after 2005, the State of California will need more than 60 average-capacity materials recovery facilities (MRFs) to recycle the number of personal computer systems generated, which represents a capital investment of over 16 million dollars.
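
    The core of a time-series materials flow model can be sketched as a sales series convolved with a product-lifespan distribution: units sold in year t re-emerge as waste after a delay. Sales figures and lifespan shares below are invented; the paper's MFAM is considerably more detailed.

        sales = {2000: 1.0, 2001: 1.2, 2002: 1.5, 2003: 1.8}  # million units/yr (made up)
        lifespan_pmf = {3: 0.2, 4: 0.3, 5: 0.3, 6: 0.2}       # P(retired after l years)

        def outflow(year):
            # Waste in `year` = sum over past sales weighted by lifespan odds.
            return sum(sales.get(year - l, 0.0) * p
                       for l, p in lifespan_pmf.items())

        for y in range(2004, 2009):
            print(y, round(outflow(y), 2), "million units retired")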

  1. Software Attribution for Geoscience Applications in the Computational Infrastructure for Geodynamics

    NASA Astrophysics Data System (ADS)

    Hwang, L.; Dumit, J.; Fish, A.; Soito, L.; Kellogg, L. H.; Smith, M.

    2015-12-01

    Scientific software is largely developed by individual scientists and represents a significant intellectual contribution to the field. As the scientific culture and funding agencies move towards an expectation that software be open-source, there is a corresponding need for mechanisms to cite software, both to provide credit and recognition to developers, and to aid in discoverability of software and scientific reproducibility. We assess the geodynamic modeling community's current citation practices by examining more than 300 predominantly self-reported publications from the past 5 years that utilize scientific software available through the Computational Infrastructure for Geodynamics (CIG). Preliminary results indicate that authors cite and attribute software by citing (in rank order) peer-reviewed scientific publications, a user's manual, and/or a paper describing the software code. Attributions may be found directly in the text, in acknowledgements, in figure captions, or in footnotes. What is considered citable varies widely. Citations predominantly lack software version numbers or persistent identifiers with which to find the software package. Versioning may be implied through reference to a versioned user manual. Authors sometimes report code features used and whether they have modified the code. As an open-source community, CIG requests that researchers contribute their modifications to the repository. However, such modifications may not be contributed back to a repository code branch, decreasing the chances of discoverability and reproducibility. Survey results from CIG's Software Attribution for Geoscience Applications (SAGA) project suggest that lack of knowledge, tools, and workflows to cite codes are barriers to effectively implementing the emerging citation norms. Generated on-demand attributions on software landing pages and a prototype extensible plug-in to automatically generate attributions in codes are the first steps towards reproducibility.
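
    An "on-demand attribution" generator of the kind the abstract mentions could be as simple as formatting minimal software metadata, including the version number and persistent identifier that citations currently lack. The fields and values below are placeholders.

        def attribution(meta):
            return ("{authors} ({year}). {name} v{version} [software]. "
                    "{doi}").format(**meta)

        print(attribution({
            "authors": "Developer, A. and Developer, B.",   # placeholder authors
            "year": 2015,
            "name": "ExampleGeodynamicsCode",               # hypothetical package
            "version": "2.1.0",
            "doi": "doi:10.0000/example",                   # placeholder identifier
        }))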

  2. Aerodynamic optimization studies on advanced architecture computers

    NASA Technical Reports Server (NTRS)

    Chawla, Kalpana

    1995-01-01

    The approach to carrying out multi-discipline aerospace design studies in the future, especially in massively parallel computing environments, comprises choosing (1) suitable solvers to compute solutions to the equations characterizing a discipline, and (2) efficient optimization methods. In addition, for aerodynamic optimization problems, (3) smart methodologies must be selected to modify the surface shape. In this research effort, a 'direct' optimization method is implemented on the Cray C-90 to improve aerodynamic design. It is coupled with an existing implicit Navier-Stokes solver, OVERFLOW, to compute flow solutions. The optimization method is chosen such that it can accommodate multi-discipline optimization in future computations. In this work, however, only single-discipline aerodynamic optimization is included.
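
    The 'direct' coupling pattern, an optimizer repeatedly calling a flow solver as a black box, can be sketched with scipy; the quadratic "drag" function below is a stand-in for an OVERFLOW evaluation, and the shape parameters are invented.

        from scipy.optimize import minimize

        def flow_solver(shape):
            # Placeholder objective: pretend drag is minimized at (0.3, -0.1).
            return (shape[0] - 0.3) ** 2 + (shape[1] + 0.1) ** 2 + 0.02

        result = minimize(flow_solver, x0=[0.0, 0.0], method="Nelder-Mead")
        print("optimal shape parameters:", result.x, "drag:", result.fun)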

  3. A social-ecological database to advance research on infrastructure development impacts in the Brazilian Amazon.

    PubMed

    Tucker Lima, Joanna M; Valle, Denis; Moretto, Evandro Mateus; Pulice, Sergio Mantovani Paiva; Zuca, Nadia Lucia; Roquetti, Daniel Rondinelli; Beduschi, Liviam Elizabeth Cordeiro; Praia, Amanda Salles; Okamoto, Claudia Parucce Franco; da Silva Carvalhaes, Vinicius Leite; Branco, Evandro Albiach; Barbezani, Bruna; Labandera, Emily; Timpe, Kelsie; Kaplan, David

    2016-01-01

    Recognized as one of the world's most vital natural and cultural resources, the Amazon faces a wide variety of threats from natural resource and infrastructure development. Within this context, rigorous scientific study of the region's complex social-ecological system is critical to inform and direct decision-making toward more sustainable environmental and social outcomes. Given the Amazon's tightly linked social and ecological components and the scope of potential development impacts, effective study of this system requires an easily accessible resource that provides a broad and reliable data baseline. This paper brings together multiple datasets from diverse disciplines (including human health, socio-economics, environment, hydrology, and energy) to provide investigators with a variety of baseline data to explore the multiple long-term effects of infrastructure development in the Brazilian Amazon. PMID:27575915

  4. A social-ecological database to advance research on infrastructure development impacts in the Brazilian Amazon.

    PubMed

    Tucker Lima, Joanna M; Valle, Denis; Moretto, Evandro Mateus; Pulice, Sergio Mantovani Paiva; Zuca, Nadia Lucia; Roquetti, Daniel Rondinelli; Beduschi, Liviam Elizabeth Cordeiro; Praia, Amanda Salles; Okamoto, Claudia Parucce Franco; da Silva Carvalhaes, Vinicius Leite; Branco, Evandro Albiach; Barbezani, Bruna; Labandera, Emily; Timpe, Kelsie; Kaplan, David

    2016-08-30

    Recognized as one of the world's most vital natural and cultural resources, the Amazon faces a wide variety of threats from natural resource and infrastructure development. Within this context, rigorous scientific study of the region's complex social-ecological system is critical to inform and direct decision-making toward more sustainable environmental and social outcomes. Given the Amazon's tightly linked social and ecological components and the scope of potential development impacts, effective study of this system requires an easily accessible resource that provides a broad and reliable data baseline. This paper brings together multiple datasets from diverse disciplines (including human health, socio-economics, environment, hydrology, and energy) to provide investigators with a variety of baseline data to explore the multiple long-term effects of infrastructure development in the Brazilian Amazon.

  5. A social-ecological database to advance research on infrastructure development impacts in the Brazilian Amazon

    PubMed Central

    Tucker Lima, Joanna M.; Valle, Denis; Moretto, Evandro Mateus; Pulice, Sergio Mantovani Paiva; Zuca, Nadia Lucia; Roquetti, Daniel Rondinelli; Beduschi, Liviam Elizabeth Cordeiro; Praia, Amanda Salles; Okamoto, Claudia Parucce Franco; da Silva Carvalhaes, Vinicius Leite; Branco, Evandro Albiach; Barbezani, Bruna; Labandera, Emily; Timpe, Kelsie; Kaplan, David

    2016-01-01

    Recognized as one of the world’s most vital natural and cultural resources, the Amazon faces a wide variety of threats from natural resource and infrastructure development. Within this context, rigorous scientific study of the region’s complex social-ecological system is critical to inform and direct decision-making toward more sustainable environmental and social outcomes. Given the Amazon’s tightly linked social and ecological components and the scope of potential development impacts, effective study of this system requires an easily accessible resource that provides a broad and reliable data baseline. This paper brings together multiple datasets from diverse disciplines (including human health, socio-economics, environment, hydrology, and energy) to provide investigators with a variety of baseline data to explore the multiple long-term effects of infrastructure development in the Brazilian Amazon. PMID:27575915

  6. Advances in Monte Carlo computer simulation

    NASA Astrophysics Data System (ADS)

    Swendsen, Robert H.

    2011-03-01

    Since the invention of the Metropolis method in 1953, Monte Carlo methods have been shown to provide an efficient, practical approach to the calculation of physical properties in a wide variety of systems. In this talk, I will discuss some of the advances in the MC simulation of thermodynamic systems, with an emphasis on optimization to obtain a maximum of useful information.
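
    For readers unfamiliar with the 1953 algorithm, a minimal Metropolis sweep for the 2D Ising model looks like this: propose a single spin flip and accept it with probability min(1, exp(-beta * dE)). Lattice size, temperature, and step count are arbitrary illustrative choices.

        import math, random

        L, beta, steps = 16, 0.4, 50_000
        spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

        def delta_e(i, j):
            # Energy change of flipping spin (i, j) on a periodic lattice.
            s = spins[i][j]
            nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            return 2 * s * nn

        for _ in range(steps):
            i, j = random.randrange(L), random.randrange(L)
            dE = delta_e(i, j)
            if dE <= 0 or random.random() < math.exp(-beta * dE):
                spins[i][j] *= -1               # Metropolis acceptance

        m = abs(sum(map(sum, spins))) / L**2
        print("magnetization per spin:", round(m, 3))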

  7. Implementation issues for mobile-wireless infrastructure and mobile health care computing devices for a hospital ward setting.

    PubMed

    Heslop, Liza; Weeding, Stephen; Dawson, Linda; Fisher, Julie; Howard, Andrew

    2010-08-01

    mWard is a project whose purpose is to enhance existing clinical and administrative decision support and to consider mobile computers, connected via wireless network, for bringing clinical information to the point of care. The mWard project allowed a limited number of users to test and evaluate a selected range of mobile-wireless infrastructure and mobile health care computing devices at the neuroscience ward at Southern Health's Monash Medical Centre, Victoria, Australia. Before the project commenced, the ward had two PCs which were used as terminals by all ward-based staff and numerous multi-disciplinary staff who visited the ward each day. The first stage of the research, outlined in this paper, evaluates a selected range of mobile-wireless infrastructure. PMID:20703905

  8. Implementation issues for mobile-wireless infrastructure and mobile health care computing devices for a hospital ward setting.

    PubMed

    Heslop, Liza; Weeding, Stephen; Dawson, Linda; Fisher, Julie; Howard, Andrew

    2010-08-01

    mWard is a project whose purpose is to enhance existing clinical and administrative decision support and to consider mobile computers, connected via wireless network, for bringing clinical information to the point of care. The mWard project allowed a limited number of users to test and evaluate a selected range of mobile-wireless infrastructure and mobile health care computing devices at the neuroscience ward at Southern Health's Monash Medical Centre, Victoria, Australia. Before the project commenced, the ward had two PCs which were used as terminals by all ward-based staff and numerous multi-disciplinary staff who visited the ward each day. The first stage of the research, outlined in this paper, evaluates a selected range of mobile-wireless infrastructure.

  9. Evaluation of Advanced Computing Techniques and Technologies: Reconfigurable Computing

    NASA Technical Reports Server (NTRS)

    Wells, B. Earl

    2003-01-01

    The focus of this project was to survey the technology of reconfigurable computing and determine its level of maturity and suitability for NASA applications; to better understand and assess the effectiveness of the reconfigurable design paradigm utilized within the HAL-15 reconfigurable computer system, which was made available to NASA MSFC for this purpose by Star Bridge Systems, Inc.; and to implement at least one application that would benefit from the performance levels possible with reconfigurable hardware. It was originally proposed that experiments in fault tolerance and dynamic reconfigurability would be performed, but time constraints mandated that these be pursued as future research.

  10. Space data systems: Advanced flight computers

    NASA Technical Reports Server (NTRS)

    Benz, Harry F.

    1991-01-01

    The technical objectives are to develop high-performance, space-qualifiable, onboard computing, storage, and networking technologies. The topics are presented in viewgraph form and include the following: technology challenges; state-of-the-art assessment; program description; relationship to external programs; and cooperation and coordination effort.

  11. Advances in Computer-Supported Learning

    ERIC Educational Resources Information Center

    Neto, Francisco; Brasileiro, Francisco

    2007-01-01

    The Internet and growth of computer networks have eliminated geographic barriers, creating an environment where education can be brought to a student no matter where that student may be. The success of distance learning programs and the availability of many Web-supported applications and multimedia resources have increased the effectiveness of…

  12. Advanced Computing Tools and Models for Accelerator Physics

    SciTech Connect

    Ryne, Robert; Ryne, Robert D.

    2008-06-11

    This paper is based on a transcript of my EPAC'08 presentation on advanced computing tools for accelerator physics. Following an introduction I present several examples, provide a history of the development of beam dynamics capabilities, and conclude with thoughts on the future of large scale computing in accelerator physics.

  13. The multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) high performance computing infrastructure: applications in neuroscience and neuroinformatics research

    PubMed Central

    Goscinski, Wojtek J.; McIntosh, Paul; Felzmann, Ulrich; Maksimenko, Anton; Hall, Christopher J.; Gureyev, Timur; Thompson, Darren; Janke, Andrew; Galloway, Graham; Killeen, Neil E. B.; Raniga, Parnesh; Kaluza, Owen; Ng, Amanda; Poudel, Govinda; Barnes, David G.; Nguyen, Toan; Bonnington, Paul; Egan, Gary F.

    2014-01-01

    The Multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) is a national imaging and visualization facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific Industrial Research Organization (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software, and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computer tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research. PMID:24734019

  14. The multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) high performance computing infrastructure: applications in neuroscience and neuroinformatics research.

    PubMed

    Goscinski, Wojtek J; McIntosh, Paul; Felzmann, Ulrich; Maksimenko, Anton; Hall, Christopher J; Gureyev, Timur; Thompson, Darren; Janke, Andrew; Galloway, Graham; Killeen, Neil E B; Raniga, Parnesh; Kaluza, Owen; Ng, Amanda; Poudel, Govinda; Barnes, David G; Nguyen, Toan; Bonnington, Paul; Egan, Gary F

    2014-01-01

    The Multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) is a national imaging and visualization facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific Industrial Research Organization (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software, and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computer tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research.

  15. Advances in Computationally Modeling Human Oral Bioavailability

    PubMed Central

    Wang, Junmei; Hou, Tingjun

    2015-01-01

    Although significant progress has been made in experimental high throughput screening (HTS) of ADME (absorption, distribution, metabolism, excretion) and pharmacokinetic properties, in silico modeling of ADME and Toxicity (ADME-Tox) remains indispensable in drug discovery, as it can guide the wise selection of drug candidates prior to expensive ADME screenings and clinical trials. Compared to other ADME-Tox properties, human oral bioavailability (HOBA) is particularly important but extremely difficult to predict. In this paper, advances in human oral bioavailability modeling are reviewed. Moreover, we also discuss our insights into how to construct more accurate and reliable HOBA QSAR and classification models. PMID:25582307
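
    A QSAR-style classification model of the kind discussed can be sketched with scikit-learn. The descriptors and labels below are synthetic random stand-ins; real HOBA models are built on curated molecular descriptors and experimental bioavailability data.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 8))       # 8 synthetic "descriptors"
        y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200)) > 0

        model = RandomForestClassifier(n_estimators=200, random_state=0)
        print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())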

  16. Education as ehealth infrastructure: considerations in advancing a national agenda for ehealth.

    PubMed

    Hilberts, Sonya; Gray, Kathleen

    2014-03-01

    This paper explores the role of education as infrastructure in large-scale ehealth strategies: in theory, in international practice and in one national case study. Education is often invisible in the documentation of ehealth infrastructure. Nevertheless a review of international practice shows that there is significant educational investment made in implementing national ehealth agendas. Disparate views about the role of education are implicit in the ehealth strategy literature, while there is a shortage of evidence-based accounts of ehealth education. In the case of Australia, the benefits and challenges of a broadly collaborative approach to ehealth education are highlighted through activities of various types occurring on many levels to support the implementation of a national ehealth system. This paper concludes that although evidence-based practice is a given in other areas of healthcare, and although there are many published evaluations of ehealth usability and acceptance in the health informatics literature, there is surprisingly little evidence about what works and what doesn't work with regard to ehealth education.

  17. An Open Computing Infrastructure that Facilitates Integrated Product and Process Development from a Decision-Based Perspective

    NASA Technical Reports Server (NTRS)

    Hale, Mark A.

    1996-01-01

    Computer applications for design have evolved rapidly over the past several decades, and significant payoffs are being achieved by organizations through reductions in design cycle times. These applications are overwhelmed by the requirements imposed during complex, open engineering systems design. Organizations are faced with a number of different methodologies, numerous legacy disciplinary tools, and a very large amount of data. Yet they are also faced with few interdisciplinary tools for design collaboration or methods for achieving the revolutionary product designs required to maintain a competitive advantage in the future. These organizations are looking for a software infrastructure that integrates current corporate design practices with newer simulation and solution techniques. Such an infrastructure must be robust to changes in both corporate needs and enabling technologies. In addition, this infrastructure must be user-friendly, modular and scalable. This need is the motivation for the research described in this dissertation. The research is focused on the development of an open computing infrastructure that facilitates product and process design. In addition, this research explicitly deals with human interactions during design through a model that focuses on the role of a designer as that of decision-maker. The research perspective here is taken from that of design as a discipline with a focus on Decision-Based Design, Theory of Languages, Information Science, and Integration Technology. Given this background, a Model of IPPD is developed and implemented along the lines of a traditional experimental procedure: with the steps of establishing context, formalizing a theory, building an apparatus, conducting an experiment, reviewing results, and providing recommendations. Based on this Model, Design Processes and Specification can be explored in a structured and implementable architecture. An architecture for exploring design called DREAMS (Developing Robust

  18. GDS-3 initiative: advanced design-through-chip infrastructure for subwavelength technology

    NASA Astrophysics Data System (ADS)

    Pack, Robert C.; Heins, Mitchell D.; Chatila, Ahmad R.; Boksha, Victor V.; Cottrell, D.; Berglund, C. Neil; Hogan, J.; James, F.; Vucurevich, T.; Bales, M.; Shimasaki, K.

    2002-07-01

    In this paper we review current design-to-silicon manufacturing challenges and complexities confronting the IC design and manufacturing worlds as the industry prepares for sub-100nm technology node IC production, and discuss a simplifying infrastructure and various principles for reducing and managing these complexities. Rapidly increasing overall complexity spanning all elements of the design-through-silicon 'ecosystem', and entanglement of the intricacies of traditionally separable design and manufacturing process technical disciplines, is increasingly evident in long-loop design-mask-FAB iterations, portending a widening of the design-productivity gap and an impact on the cost-effectiveness and productivity of the IC industry. Using the concept of 'technology overshoot' we conclude that the IC industry must broaden its development efforts and diversify investments to include those of building a robust and inherently simplifying interface infrastructure between design and manufacturing, to enable the efficiencies required of a maturing industry. We also explore the concept of modularity and how other mature industries have employed it to optimize efficiencies and investments, and conclude that while the design and manufacturing worlds have practiced a number of fundamental concepts of modularity - the overall desegregation of the industry as a whole as a case in point - a consistent, well-planned architecture for managing the interface between the two worlds has not yet been employed, hindering the development and migration of much-needed productivity and cost-effectiveness enhancements. We then discuss the impact of these factors on the industry in light of sub-wavelength-era lithography resolution enhancement technologies and related manufacturing process and device physics issues, which increasingly impact the design flow. Recognizing that significant improvement to the design-silicon manufacturing interface is required, lastly we discuss a solution in the form of a

  19. Advances in computational studies of energy materials.

    PubMed

    Catlow, C R A; Guo, Z X; Miskufova, M; Shevlin, S A; Smith, A G H; Sokol, A A; Walsh, A; Wilson, D J; Woodley, S M

    2010-07-28

    We review recent developments and applications of computational modelling techniques in the field of materials for energy technologies including hydrogen production and storage, energy storage and conversion, and light absorption and emission. In addition, we present new work on an Sn2TiO4 photocatalyst containing an Sn(II) lone pair, new interatomic potential models for SrTiO3 and GaN, an exploration of defects in the kesterite/stannite-structured solar cell absorber Cu2ZnSnS4, and report details of the incorporation of hydrogen into Ag2O and Cu2O. Special attention is paid to the modelling of nanostructured systems, including ceria (CeO2, mixed Ce(x)O(y) and Ce2O3) and group 13 sesquioxides. We consider applications based on both interatomic potential and electronic structure methodologies; and we illustrate the increasingly quantitative and predictive nature of modelling in this field. PMID:20566517
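
    As a flavour of the interatomic-potential side of such studies, the sketch below evaluates a Buckingham pair potential plus a Coulomb term, V(r) = A*exp(-r/rho) - C/r**6 + q1*q2/(4*pi*eps0*r). The parameters are placeholders, not the SrTiO3 or GaN fits reported in the paper.

        import math

        def buckingham(r, A=1800.0, rho=0.32, C=30.0, q1=2.0, q2=-2.0):
            # 14.3996 eV*angstrom is e^2/(4*pi*eps0) in these units.
            coulomb = 14.3996 * q1 * q2 / r
            return A * math.exp(-r / rho) - C / r**6 + coulomb

        for r in (1.8, 2.0, 2.4, 3.0):
            print(f"r = {r:.1f} A -> V = {buckingham(r):.2f} eV")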

  20. Computational Approaches to Enhance Nanosafety and Advance Nanomedicine

    NASA Astrophysics Data System (ADS)

    Mendoza, Eduardo R.

    With the increasing use of nanoparticles in food processing, filtration/purification and consumer products, as well as the huge potential of their use in nanomedicine, a quantitative understanding of the effects of nanoparticle uptake and transport is needed. We provide examples of novel methods for modeling complex bio-nano interactions which are based on stochastic process algebras. Since model construction presumes sufficient availability of experimental data, recent developments in "nanoinformatics", an emerging discipline analogous to bioinformatics, in building an accessible information infrastructure are subsequently discussed. Both computational areas offer opportunities for Filipinos to engage in collaborative, cutting edge research in this impactful field.
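
    The stochastic-kinetics flavour of such models can be illustrated with a toy Gillespie simulation of nanoparticle uptake (free -> bound -> internalized). The species and rate constants are invented for illustration; they are not drawn from the process-algebra models the abstract describes.

        import math, random

        state = {"free": 500, "bound": 0, "inside": 0}
        k_bind, k_intern = 0.002, 0.05          # made-up rate constants
        t, t_end = 0.0, 100.0

        while t < t_end:
            rates = {"bind": k_bind * state["free"],
                     "intern": k_intern * state["bound"]}
            total = sum(rates.values())
            if total == 0:
                break
            # Exponential waiting time; 1 - random() avoids log(0).
            t += -math.log(1.0 - random.random()) / total
            if random.random() * total < rates["bind"]:
                state["free"] -= 1; state["bound"] += 1
            else:
                state["bound"] -= 1; state["inside"] += 1

        print(state)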

  1. Advancing vector biology research: a community survey for future directions, research applications and infrastructure requirements.

    PubMed

    Kohl, Alain; Pondeville, Emilie; Schnettler, Esther; Crisanti, Andrea; Supparo, Clelia; Christophides, George K; Kersey, Paul J; Maslen, Gareth L; Takken, Willem; Koenraadt, Constantianus J M; Oliva, Clelia F; Busquets, Núria; Abad, F Xavier; Failloux, Anna-Bella; Levashina, Elena A; Wilson, Anthony J; Veronesi, Eva; Pichard, Maëlle; Arnaud Marsh, Sarah; Simard, Frédéric; Vernick, Kenneth D

    2016-01-01

    Vector-borne pathogens impact public health, animal production, and animal welfare. Research on arthropod vectors such as mosquitoes, ticks, sandflies, and midges which transmit pathogens to humans and economically important animals is crucial for development of new control measures that target transmission by the vector. While insecticides are an important part of this arsenal, appearance of resistance mechanisms is increasingly common. Novel tools for genetic manipulation of vectors, use of Wolbachia endosymbiotic bacteria, and other biological control mechanisms to prevent pathogen transmission have led to promising new intervention strategies, adding to strong interest in vector biology and genetics as well as vector-pathogen interactions. Vector research is therefore at a crucial juncture, and strategic decisions on future research directions and research infrastructure investment should be informed by the research community. A survey initiated by the European Horizon 2020 INFRAVEC-2 consortium set out to canvass priorities in the vector biology research community and to determine key activities that are needed for researchers to efficiently study vectors, vector-pathogen interactions, as well as access the structures and services that allow such activities to be carried out. We summarize the most important findings of the survey which in particular reflect the priorities of researchers in European countries, and which will be of use to stakeholders that include researchers, government, and research organizations. PMID:27677378

  2. Use of advanced computers for aerodynamic flow simulation

    NASA Technical Reports Server (NTRS)

    Bailey, F. R.; Ballhaus, W. F.

    1980-01-01

    The current and projected use of advanced computers for large-scale aerodynamic flow simulation applied to engineering design and research is discussed. The design use of mature codes run on conventional, serial computers is compared with the fluid research use of new codes run on parallel and vector computers. The role of flow simulations in design is illustrated by the application of a three dimensional, inviscid, transonic code to the Sabreliner 60 wing redesign. Research computations that include a more complete description of the fluid physics by use of Reynolds averaged Navier-Stokes and large-eddy simulation formulations are also presented. Results of studies for a numerical aerodynamic simulation facility are used to project the feasibility of design applications employing these more advanced three dimensional viscous flow simulations.

  3. Advanced Simulation and Computing FY07-08 Implementation Plan Volume 2

    SciTech Connect

    Kusnezov, D; Hale, A; McCoy, M; Hopson, J

    2006-06-22

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future nonnuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program will require the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from

  4. Advanced Simulation & Computing FY09-FY10 Implementation Plan Volume 2, Rev. 0

    SciTech Connect

    Meisner, R; Perry, J; McCoy, M; Hopson, J

    2008-04-30

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future nonnuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC)1 is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one

  5. Advanced Simulation and Computing FY10-11 Implementation Plan Volume 2, Rev. 0

    SciTech Connect

    Carnes, B

    2009-06-08

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one that

  6. Advanced Simulation and Computing FY08-09 Implementation Plan Volume 2 Revision 0

    SciTech Connect

    McCoy, M; Kusnezov, D; Bickel, T; Hopson, J

    2007-04-25

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future nonnuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one

  7. Advanced Simulation and Computing FY09-FY10 Implementation Plan Volume 2, Rev. 1

    SciTech Connect

    Kissel, L

    2009-04-01

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one that

  8. Advanced Simulation and Computing FY09-FY10 Implementation Plan, Volume 2, Revision 0.5

    SciTech Connect

    Meisner, R; Hopson, J; Peery, J; McCoy, M

    2008-10-07

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one

  9. Advanced Simulation and Computing FY10-FY11 Implementation Plan Volume 2, Rev. 0.5

    SciTech Connect

    Meisner, R; Peery, J; McCoy, M; Hopson, J

    2009-09-08

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering (D&E) programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model

  10. Advanced Simulation and Computing Fiscal Year 2011-2012 Implementation Plan, Revision 0

    SciTech Connect

    McCoy, M; Phillips, J; Hopson, J; Meisner, R

    2010-04-22

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering (D&E) programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model

  11. Advanced Simulation and Computing FY08-09 Implementation Plan, Volume 2, Revision 0.5

    SciTech Connect

    Kusnezov, D; Bickel, T; McCoy, M; Hopson, J

    2007-09-13

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from

  12. A Revolution in the Making: Advances in Materials That May Transform Future Exploration Infrastructures and Missions

    NASA Technical Reports Server (NTRS)

    Harris, Charles E.; Dicus, Dennis L.; Shuart, Mark J.

    2001-01-01

    The NASA Strategic Plan identifies the long-term goal to provide safe and affordable space access, orbital transfer, and interplanetary transportation capabilities to enable research, human exploration, and the commercial development of space; and to conduct human and robotic missions to planets and other bodies in our solar system. Numerous scientific and engineering breakthroughs will be required to develop the technology necessary to achieve this goal. Critical technologies include advanced vehicle primary and secondary structure, radiation protection, propulsion and power systems, fuel storage, electronics and devices, sensors and science instruments, and medical diagnostics and treatment. Advanced materials with revolutionary new capabilities are an essential element of each of these technologies. This paper discusses those materials best suited for aerospace vehicle structure and highlights the enormous potential of one revolutionary new material, carbon nanotubes.

  13. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    NASA Astrophysics Data System (ADS)

    Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.

    2012-12-01

    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.
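
    The automated-restart behaviour described above can be sketched in a few lines. The watchdog below is a minimal illustration, not the INFN-Napoli scripts: the hypervisor URIs and guest names are hypothetical, and it assumes libvirt/KVM hosts whose guest images live on shared storage so a guest can be started on any surviving hypervisor.

        # Minimal sketch of an automated VM restart loop, in the spirit of the
        # management scripts described above. Hypervisor URIs and guest names
        # are hypothetical; the real INFN-Napoli tooling is not reproduced here.
        import subprocess, time

        HYPERVISORS = ["qemu+ssh://hv01/system", "qemu+ssh://hv02/system"]  # assumed hosts
        GUESTS = ["vm-web01", "vm-batch01"]                                 # assumed guests

        def running_guests(uri):
            # 'virsh list --name' prints one running domain name per line
            out = subprocess.run(["virsh", "-c", uri, "list", "--name"],
                                 capture_output=True, text=True, check=True).stdout
            return {name for name in out.splitlines() if name}

        while True:
            active, alive = set(), []
            for uri in HYPERVISORS:
                try:
                    active |= running_guests(uri)
                    alive.append(uri)
                except subprocess.CalledProcessError:
                    pass  # hypervisor unreachable; its guests get restarted elsewhere
            for i, guest in enumerate(sorted(set(GUESTS) - active)):
                if alive:
                    # shared GlusterFS-style storage is what makes restarting a
                    # guest on a different hypervisor possible in this setup
                    subprocess.run(["virsh", "-c", alive[i % len(alive)], "start", guest])
            time.sleep(30)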

  14. Computer-Assisted Foreign Language Teaching and Learning: Technological Advances

    ERIC Educational Resources Information Center

    Zou, Bin; Xing, Minjie; Wang, Yuping; Sun, Mingyu; Xiang, Catherine H.

    2013-01-01

    Computer-Assisted Foreign Language Teaching and Learning: Technological Advances highlights new research and an original framework that brings together foreign language teaching, experiments and testing practices that utilize the most recent and widely used e-learning resources. This comprehensive collection of research will offer linguistic…

  15. Advanced Placement Computer Science with Pascal. Volume 2. Experimental Edition.

    ERIC Educational Resources Information Center

    New York City Board of Education, Brooklyn, NY.

    This curriculum guide presents 100 lessons for an advanced placement course on programming in Pascal. Some of the topics covered include arrays, sorting, strings, sets, records, computers in society, files, stacks, queues, linked lists, binary trees, searching, hashing, and chaining. Performance objectives, vocabulary, motivation, aim,…

  16. Advanced computational tools for 3-D seismic analysis

    SciTech Connect

    Barhen, J.; Glover, C.W.; Protopopescu, V.A.

    1996-06-01

    The global objective of this effort is to develop advanced computational tools for 3-D seismic analysis, and test the products using a model dataset developed under the joint aegis of the United States' Society of Exploration Geophysicists (SEG) and the European Association of Exploration Geophysicists (EAEG). The goal is to enhance the value to the oil industry of the SEG/EAEG modeling project, carried out with US Department of Energy (DOE) funding in FY 1993-95. The primary objective of the ORNL Center for Engineering Systems Advanced Research (CESAR) is to spearhead the computational innovations and techniques that would enable a revolutionary advance in 3-D seismic analysis. The CESAR effort is carried out in collaboration with world-class domain experts from leading universities, and in close coordination with other national laboratories and oil industry partners.

  17. Activities of the Research Institute for Advanced Computer Science

    NASA Technical Reports Server (NTRS)

    Gross, Anthony R. (Technical Monitor); Leiner, Barry M.

    2001-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center, Moffett Field, California. RIACS research focuses on the three cornerstones of IT research necessary to meet the future challenges of NASA missions: 1. Automated Reasoning for Autonomous Systems: techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth. 2. Human-Centered Computing: many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities. 3. High Performance Computing and Networking: advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to analysis of large scientific datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply IT research to a variety of NASA application domains. RIACS also engages in other activities, such as workshops, seminars, visiting scientist programs and student summer programs, designed to encourage and facilitate collaboration between the university and NASA IT research communities.

  18. Downscaling seasonal to centennial simulations on distributed computing infrastructures using WRF model. The WRF4G project

    NASA Astrophysics Data System (ADS)

    Cofino, A. S.; Fernández Quiruelas, V.; Blanco Real, J. C.; García Díez, M.; Fernández, J.

    2013-12-01

    Nowadays Grid Computing is a powerful computational tool ready to be used by the scientific community in different areas (such as biomedicine, astrophysics, climate, etc.). However, the use of these distributed computing infrastructures (DCI) is not yet common practice in climate research, and only a few teams and applications in this area take advantage of them. Thus, the WRF4G project objective is to popularize the use of this technology in the atmospheric sciences area. In order to achieve this objective, one of the most widely used applications has been taken (WRF; a limited-area model, successor of the MM5 model), which has a user community of more than 8000 researchers worldwide. This community develops its research activity in different areas and could benefit from the advantages of Grid resources (case study simulations, regional hind-cast/forecast, sensitivity studies, etc.). The WRF model is used by many groups in the climate research community to carry out downscaling simulations, so this community will also benefit. However, Grid infrastructures have some drawbacks for the execution of applications that make intensive use of CPU and memory over long periods of time, which makes it necessary to develop a specific framework (middleware). This middleware encapsulates the application and provides appropriate services for the monitoring and management of the simulations and the data. Thus, another objective of the WRF4G project is the development of a generic adaptation of WRF to DCIs, which should simplify access to the DCIs for researchers and free them from the technical and computational aspects of using these DCIs. Finally, in order to demonstrate the ability of WRF4G to solve actual scientific challenges of interest and relevance to climate science (implying a high computational cost), we show results from different kinds of downscaling experiments, like ERA-Interim re-analysis, CMIP5 models
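
    The chunk-and-restart mechanism such middleware provides can be illustrated with a short wrapper. This is a minimal sketch under stated assumptions: the restart-file pattern and executable name follow WRF conventions, while the real WRF4G framework also handles namelist rewriting, Grid job submission and monitoring, all omitted here.

        # Sketch of the "encapsulation" idea behind WRF4G-style middleware: run a
        # long simulation as a chain of short chunks so it fits Grid queue limits,
        # resuming from the latest restart file after any interruption. The file
        # pattern and executable name follow WRF conventions but are assumptions.
        import glob, subprocess

        def latest_restart():
            files = sorted(glob.glob("wrfrst_d01_*"))  # WRF-style restart files (assumed)
            return files[-1] if files else None

        def run_chunk():
            # A real framework would also edit namelist.input to set the chunk's
            # start date from the restart file before launching the model.
            rst = latest_restart()
            print("resuming from", rst if rst else "cold start")
            return subprocess.run(["./wrf.exe"]).returncode

        MAX_CHUNKS = 10
        for chunk in range(MAX_CHUNKS):
            if run_chunk() != 0:
                print("chunk", chunk, "failed; the scheduler would retry it")
                break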

  19. Advances in Numerical Boundary Conditions for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.

    1997-01-01

    Advances in Computational Aeroacoustics (CAA) depend critically on the availability of accurate, nondispersive, least dissipative computation algorithms as well as high quality numerical boundary treatments. This paper focuses on recent developments in numerical boundary conditions. In a typical CAA problem, one often encounters two types of boundaries. Because a finite computation domain is used, there are external boundaries. On the external boundaries, boundary conditions simulating the solution outside the computation domain are to be imposed. Inside the computation domain, there may be internal boundaries. On these internal boundaries, boundary conditions simulating the presence of an object or surface with specific acoustic characteristics are to be applied. Numerical boundary conditions, both external and internal, developed for simple model problems are reviewed and examined. Numerical boundary conditions for real aeroacoustic problems are also discussed through specific examples. The paper concludes with a description of some much needed research in numerical boundary conditions for CAA.
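
    A one-dimensional toy problem shows why external numerical boundary conditions matter. The sketch below advances linear advection with a standard Lax-Wendroff interior scheme and closes the outflow boundary with a one-sided upwind update so the pulse can leave the domain; this is a generic textbook closure, not one of the boundary treatments reviewed in the paper.

        # Toy illustration of an external (outflow) numerical boundary condition:
        # advect a pulse with Lax-Wendroff in the interior and close the right
        # boundary with a first-order one-sided update so the wave exits cleanly.
        import numpy as np

        n, cfl = 200, 0.8
        x = np.linspace(0.0, 1.0, n)
        u = np.exp(-300.0 * (x - 0.3) ** 2)  # initial Gaussian pulse

        for _ in range(300):
            un = u.copy()
            # interior: second-order Lax-Wendroff
            u[1:-1] = (un[1:-1]
                       - 0.5 * cfl * (un[2:] - un[:-2])
                       + 0.5 * cfl**2 * (un[2:] - 2.0 * un[1:-1] + un[:-2]))
            u[0] = 0.0                                # inflow: prescribed value
            u[-1] = un[-1] - cfl * (un[-1] - un[-2])  # outflow: one-sided upwind closure

        print("max amplitude after exit:", u.max())  # small if the pulse left cleanly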

  20. An expanded framework for the advanced computational testing and simulation toolkit

    SciTech Connect

    Marques, Osni A.; Drummond, Leroy A.

    2003-11-09

    The Advanced Computational Testing and Simulation (ACTS) Toolkit is a set of computational tools developed primarily at DOE laboratories and is aimed at simplifying the solution of common and important computational problems. The use of the tools reduces the development time for new codes and the tools provide functionality that might not otherwise be available. This document outlines an agenda for expanding the scope of the ACTS Project based on lessons learned from current activities. Highlights of this agenda include peer-reviewed certification of new tools; finding tools to solve problems that are not currently addressed by the Toolkit; working in collaboration with other software initiatives and DOE computer facilities; expanding outreach efforts; promoting interoperability, further development of the tools; and improving functionality of the ACTS Information Center, among other tasks. The ultimate goal is to make the ACTS tools more widely used and more effective in solving DOE's and the nation's scientific problems through the creation of a reliable software infrastructure for scientific computing.

  1. Technology Advancements in the Next Generation of Domain Agnostic Spatial Data Infrastructures

    NASA Astrophysics Data System (ADS)

    Golodoniuc, Pavel; Rankine, Terry; Box, Paul; Atkinson, Rob; Kostanski, Laura

    2013-04-01

    Spatial Data Infrastructures (SDI) are typically composed of a suite of products focused on improving spatial information discovery and access. Proliferation of SDI initiatives has caused the "Yet Another Portal" (YAP) syndrome to emerge, with each initiative providing a new mechanism for cataloguing and enabling users to search for spatial information resources. The often coarse-grained and incomplete metadata available via these SDIs renders them analogous to an antiquated library catalogue. We posit that the successful use of SDI resources requires attention to be focused on various semantic aspects of the information contained within - particularly the information models and vocabularies. Currently it is common for understanding of these models and vocabularies to be built into portals. This does not enhance interoperability between SDIs, nor does it provide a means for referencing or searching for a specific feature (e.g., the City of Sydney) without first knowing the location of the information source for the feature and the form in which it is represented. SDI interfaces, such as OGC WFS, provide data from a spatial representation perspective, but do not provide identifiers that can easily be cited or used across system boundaries. The lack of mechanisms to provide stable identifiers for a feature leaves it permanently scoped to a particular dataset. Three other important aspects are commonly lacking in SDIs: feature-level metadata is often insufficient for more than the most basic data discovery; features delivered through an SDI are not well integrated with information systems that deliver statistical information about those features; and there are inadequate mechanisms to reconcile and associate multiple identities and representations of the same real-world feature. In this paper we present an extended view of an SDI architecture with integrated support for information

  2. An evolving infrastructure for scientific computing and the integration of new graphics technology

    SciTech Connect

    Fong, K.W.

    1993-02-01

    The National Energy Research Supercomputer Center (NERSC) at the Lawrence Livermore National Laboratory is currently pursuing several projects to implement and integrate new hardware and software technologies. While each of these projects ought to be and is in fact individually justifiable, there is an appealing metaphor for viewing them collectively which provides a simple and memorable way to understand the future direction not only of supercomputing services but of computer centers in general. Once this general direction is understood, it becomes clearer what future computer graphics technologies would be possible and desirable, at least within the context of large scale scientific computing.

  3. ATOS-1: Designing the infrastructure for an advanced spacecraft operations system

    NASA Astrophysics Data System (ADS)

    Poulter, K. J.; Smith, H. N.

    1993-03-01

    The space industry has identified the need to use artificial intelligence and knowledge based system techniques as integrated, central, symbolic processing components of future mission design, support and operations systems. Various practical and commercial constraints require that off-the-shelf applications, and their knowledge bases, are reused where appropriate and that different mission contractors, potentially using different KBS technologies, can provide application and knowledge sub-modules of an overall integrated system. In order to achieve this integration, which we call knowledge sharing and distributed reasoning, there needs to be agreement on knowledge representations, knowledge interchange formats, knowledge level communications protocols, and ontology. Research indicates that the latter is most important, providing the applications with a common conceptualization of the domain, in our case spacecraft operations, mission design, and planning. Agreement on ontology permits applications that employ different knowledge representations to interwork through mediators which we refer to as knowledge agents. This creates the illusion of a shared model without the constraints, both technical and commercial, that occur in centralized or uniform architectures. This paper explains how these matters are being addressed within the ATOS program at ESOC, using techniques which draw upon ideas and standards emerging from the DARPA Knowledge Sharing Effort. In particular, we explain how the project is developing an electronic Ontology of Spacecraft Operations and how this can be used as an enabling component within space support systems that employ advanced software engineering. We indicate our hope and expectation that the core ontology developed in ATOS will permit the full development of standards for such systems throughout the space industry.
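
    The mediation idea can be reduced to a toy example: two applications keep their own vocabularies, and a knowledge agent translates between them through a shared ontology. All term names below are invented for illustration; the actual ATOS Ontology of Spacecraft Operations is far richer.

        # Sketch of ontology-mediated knowledge sharing: two applications use
        # their own local terms, and a "knowledge agent" maps both onto a shared
        # ontology so they can interoperate. All names here are hypothetical.
        SHARED = {"AOS": "acquisition_of_signal", "LOS": "loss_of_signal"}

        PLANNER_TERMS = {"ground_contact_start": "AOS", "ground_contact_end": "LOS"}
        SCHEDULER_TERMS = {"pass_begin": "AOS", "pass_finish": "LOS"}

        def mediate(event, source_terms, target_terms):
            """Translate an event name between applications via the shared ontology."""
            concept = source_terms[event]                      # local term -> concept
            inverse = {v: k for k, v in target_terms.items()}  # concept -> local term
            return inverse[concept]

        print(mediate("ground_contact_start", PLANNER_TERMS, SCHEDULER_TERMS))  # pass_begin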

  4. ATOS-1: Designing the infrastructure for an advanced spacecraft operations system

    NASA Technical Reports Server (NTRS)

    Poulter, K. J.; Smith, H. N.

    1993-01-01

    The space industry has identified the need to use artificial intelligence and knowledge based system techniques as integrated, central, symbolic processing components of future mission design, support and operations systems. Various practical and commercial constraints require that off-the-shelf applications, and their knowledge bases, are reused where appropriate and that different mission contractors, potentially using different KBS technologies, can provide application and knowledge sub-modules of an overall integrated system. In order to achieve this integration, which we call knowledge sharing and distributed reasoning, there needs to be agreement on knowledge representations, knowledge interchange formats, knowledge level communications protocols, and ontology. Research indicates that the latter is most important, providing the applications with a common conceptualization of the domain, in our case spacecraft operations, mission design, and planning. Agreement on ontology permits applications that employ different knowledge representations to interwork through mediators which we refer to as knowledge agents. This creates the illusion of a shared model without the constraints, both technical and commercial, that occur in centralized or uniform architectures. This paper explains how these matters are being addressed within the ATOS program at ESOC, using techniques which draw upon ideas and standards emerging from the DARPA Knowledge Sharing Effort. In particular, we explain how the project is developing an electronic Ontology of Spacecraft Operations and how this can be used as an enabling component within space support systems that employ advanced software engineering. We indicate our hope and expectation that the core ontology developed in ATOS will permit the full development of standards for such systems throughout the space industry.

  5. The Design and Implementation of NASA's Advanced Flight Computing Module

    NASA Technical Reports Server (NTRS)

    Alkakaj, Leon; Straedy, Richard; Jarvis, Bruce

    1995-01-01

    This paper describes a working flight computer Multichip Module (MCM) developed jointly by JPL and TRW under their respective research programs in a collaborative fashion. The MCM is fabricated by nCHIP and is packaged within a 2 by 4 inch Al package from Coors. This flight computer module is one of three modules under development by NASA's Advanced Flight Computer (AFC) program. Further development of the Mass Memory and the programmable I/O MCM modules will follow. The three building block modules will then be stacked into a 3D MCM configuration. The mass and volume of the flight computer MCM, at 89 grams and 1.5 cubic inches respectively, represent a major enabling technology for future deep space as well as commercial remote sensing applications.

  6. The Integrated Safety-Critical Advanced Avionics Communication and Control (ISAACC) System Concept: Infrastructure for ISHM

    NASA Technical Reports Server (NTRS)

    Gwaltney, David A.; Briscoe, Jeri M.

    2005-01-01

    Integrated System Health Management (ISHM) architectures for spacecraft will include hard real-time, critical subsystems and soft real-time monitoring subsystems. Interaction between these subsystems will be necessary and an architecture supporting multiple criticality levels will be required. Demonstration hardware for the Integrated Safety-Critical Advanced Avionics Communication & Control (ISAACC) system has been developed at NASA Marshall Space Flight Center. It is a modular system using a commercially available time-triggered protocol, TTP/C, that supports hard real-time distributed control systems independent of the data transmission medium. The protocol is implemented in hardware and provides guaranteed low-latency messaging with inherent fault-tolerance and fault-containment. Interoperability between modules and systems of modules using the TTP/C is guaranteed through definition of messages and the precise message schedule implemented by the master-less Time Division Multiple Access (TDMA) communications protocol. "Plug-and-play" capability for sensors and actuators provides automatically configurable modules supporting sensor recalibration and control algorithm re-tuning without software modification. Modular components of controlled physical system(s) critical to control algorithm tuning, such as pumps or valve components in an engine, can be replaced or upgraded as "plug and play" components without modification to the ISAACC module hardware or software. ISAACC modules can communicate with other vehicle subsystems through time-triggered protocols or other communications protocols implemented over Ethernet, MIL-STD-1553 and RS-485/422. Other communication bus physical layers and protocols can be included as required. In this way, the ISAACC modules can be part of a system-of-systems in a vehicle with multi-tier subsystems of varying criticality. The goal of the ISAACC architecture development is control and monitoring of safety critical systems of a
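
    The scheduling guarantee described above can be made concrete with a toy TDMA round: every node owns a fixed transmit slot known to all participants in advance, which is the property that removes the need for a bus master. The node names and slot sizes below are illustrative, not drawn from ISAACC or TTP/C configuration data.

        # Sketch of a master-less TDMA round like the one TTP/C enforces: each
        # node owns a fixed transmit slot, so the schedule is known to all nodes
        # in advance and no arbitration traffic is needed. Values are illustrative.
        NODES = ["engine_ctrl", "pump_monitor", "valve_ctrl", "health_mgr"]
        ROUND_US = 2000                   # TDMA round length in microseconds (assumed)
        SLOT_US = ROUND_US // len(NODES)  # equal slots for simplicity

        def transmit_window(node, t_us):
            """Return the (start, end) of node's slot in the round containing t_us."""
            base = (t_us // ROUND_US) * ROUND_US
            idx = NODES.index(node)
            start = base + idx * SLOT_US
            return start, start + SLOT_US

        for node in NODES:
            print(node, transmit_window(node, 0))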

  7. Activities of the Research Institute for Advanced Computer Science

    NASA Technical Reports Server (NTRS)

    Oliger, Joseph

    1994-01-01

    The Research Institute for Advanced Computer Science (RIACS) was established by the Universities Space Research Association (USRA) at the NASA Ames Research Center (ARC) on June 6, 1983. RIACS is privately operated by USRA, a consortium of universities with research programs in the aerospace sciences, under contract with NASA. The primary mission of RIACS is to provide research and expertise in computer science and scientific computing to support the scientific missions of NASA ARC. The research carried out at RIACS must change its emphasis from year to year in response to NASA ARC's changing needs and technological opportunities. Research at RIACS is currently being done in the following areas: (1) parallel computing; (2) advanced methods for scientific computing; (3) high performance networks; and (4) learning systems. RIACS technical reports are usually preprints of manuscripts that have been submitted to research journals or conference proceedings. A list of these reports for the period January 1, 1994 through December 31, 1994 is in the Reports and Abstracts section of this report.

  8. Advanced computer architecture specification for automated weld systems

    NASA Technical Reports Server (NTRS)

    Katsinis, Constantine

    1994-01-01

    This report describes the requirements for an advanced automated weld system and the associated computer architecture, and defines the overall system specification from a broad perspective. According to the requirements of welding procedures as they relate to an integrated multiaxis motion control and sensor architecture, the computer system requirements are developed based on a proven multiple-processor architecture with an expandable, distributed-memory, single global bus architecture, containing individual processors which are assigned to specific tasks that support sensor or control processes. The specified architecture is sufficiently flexible to integrate previously developed equipment, be upgradable and allow on-site modifications.

  9. Advanced computer graphic techniques for laser range finder (LRF) simulation

    NASA Astrophysics Data System (ADS)

    Bedkowski, Janusz; Jankowski, Stanislaw

    2008-11-01

    This paper shows advanced computer graphic techniques for laser range finder (LRF) simulation. The LRF is a common sensor for unmanned ground vehicles, autonomous mobile robots and security applications. The cost of the measurement system is extremely high, therefore a simulation tool was designed. The simulation gives an opportunity to execute algorithms such as obstacle avoidance [1], SLAM for robot localization [2], detection of vegetation and water obstacles in the surroundings of the robot chassis [3], and LRF measurement in a crowd of people [1]. The Axis Aligned Bounding Box (AABB) technique and an alternative technique based on CUDA (NVIDIA Compute Unified Device Architecture) are presented.
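
    The AABB technique reduces each simulated LRF beam to a ray/box intersection test. The following is the standard slab method, given here as a generic sketch rather than code from the paper.

        # Standard "slab" ray/AABB intersection test, the core operation behind
        # AABB-based LRF simulation. Generic implementation, not the paper's code.
        def ray_aabb(origin, direction, box_min, box_max, eps=1e-12):
            """Return the entry distance t along the ray, or None if the box is missed."""
            tmin, tmax = 0.0, float("inf")
            for o, d, lo, hi in zip(origin, direction, box_min, box_max):
                if abs(d) < eps:                 # ray parallel to this slab
                    if o < lo or o > hi:
                        return None
                else:
                    t1, t2 = (lo - o) / d, (hi - o) / d
                    if t1 > t2:
                        t1, t2 = t2, t1
                    tmin, tmax = max(tmin, t1), min(tmax, t2)
                    if tmin > tmax:
                        return None
            return tmin

        # A beam fired along +x from the origin hits a unit box two metres away:
        print(ray_aabb((0, 0, 0), (1, 0, 0), (2, -0.5, -0.5), (3, 0.5, 0.5)))  # 2.0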

  10. Advanced computer modeling techniques expand belt conveyor technology

    SciTech Connect

    Alspaugh, M.

    1998-07-01

    Increased mining production is continuing to challenge engineers and manufacturers to keep up. The pressure to produce larger and more versatile equipment is increasing. This paper will show some recent major projects in the belt conveyor industry that have pushed the limits of design and engineering technology. Also, it will discuss the systems engineering discipline and advanced computer modeling tools that have helped make these achievements possible. Several examples of technologically advanced designs will be reviewed. However, new technology can sometimes produce increased problems with equipment availability and reliability if not carefully developed. Computer modeling techniques that help one design larger equipment can also compound operational headaches if engineering processes and algorithms are not carefully analyzed every step of the way.

  11. Soft computing in design and manufacturing of advanced materials

    NASA Technical Reports Server (NTRS)

    Cios, Krzysztof J.; Baaklini, George Y.; Vary, Alex

    1993-01-01

    The potential of fuzzy sets and neural networks, often referred to as soft computing, for aiding in all aspects of manufacturing of advanced materials like ceramics is addressed. In design and manufacturing of advanced materials, it is desirable to find which of the many processing variables contribute most to the desired properties of the material. There is also interest in real time quality control of parameters that govern material properties during processing stages. The concepts of fuzzy sets and neural networks are briefly introduced and it is shown how they can be used in the design and manufacturing processes. These two computational methods are alternatives to other methods such as the Taguchi method. The two methods are demonstrated by using data collected at NASA Lewis Research Center. Future research directions are also discussed.
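
    The fuzzy-set side of such soft computing methods can be illustrated in a few lines: a processing variable is graded against overlapping linguistic sets, and those grades then drive rule-based decisions. The variable and set boundaries below are invented for illustration and are not the NASA Lewis data.

        # Minimal illustration of the fuzzy-set idea: grade a sintering
        # temperature's membership in overlapping "low"/"optimal"/"high" sets.
        # Set boundaries are hypothetical, for illustration only.
        def tri(x, a, b, c):
            """Triangular membership function peaking at b, zero outside [a, c]."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def grade(temp_c):
            return {
                "low":     tri(temp_c, 1200, 1300, 1450),
                "optimal": tri(temp_c, 1350, 1500, 1650),
                "high":    tri(temp_c, 1550, 1700, 1800),
            }

        print(grade(1420))  # partially "low" and partially "optimal"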

  12. MOBBED: a computational data infrastructure for handling large collections of event-rich time series datasets in MATLAB.

    PubMed

    Cockfield, Jeremy; Su, Kyungmin; Robbins, Kay A

    2013-01-01

    Experiments to monitor human brain activity during active behavior record a variety of modalities (e.g., EEG, eye tracking, motion capture, respiration monitoring) and capture a complex environmental context leading to large, event-rich time series datasets. The considerable variability of responses within and among subjects in more realistic behavioral scenarios requires experiments to assess many more subjects over longer periods of time. This explosion of data requires better computational infrastructure to more systematically explore and process these collections. MOBBED is a lightweight, easy-to-use, extensible toolkit that allows users to incorporate a computational database into their normal MATLAB workflow. Although capable of storing quite general types of annotated data, MOBBED is particularly oriented to multichannel time series such as EEG that have event streams overlaid with sensor data. MOBBED directly supports access to individual events, data frames, and time-stamped feature vectors, allowing users to ask questions such as what types of events or features co-occur under various experimental conditions. A database provides several advantages not available to users who process one dataset at a time from the local file system. In addition to archiving primary data in a central place to save space and avoid inconsistencies, such a database allows users to manage, search, and retrieve events across multiple datasets without reading the entire dataset. The database also provides infrastructure for handling more complex event patterns that include environmental and contextual conditions. The database can also be used as a cache for expensive intermediate results that are reused in such activities as cross-validation of machine learning algorithms. MOBBED is implemented over PostgreSQL, a widely used open source database, and is freely available under the GNU general public license at http://visual.cs.utsa.edu/mobbed. Source and issue reports for MOBBED
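
    A sketch of the kind of cross-dataset event query this enables is shown below. MOBBED itself is driven from MATLAB; this Python/psycopg2 version is only illustrative, and the table and column names are assumptions rather than MOBBED's actual schema.

        # Sketch of a cross-dataset event query over a PostgreSQL event store.
        # Table/column names are assumptions for illustration, not MOBBED's schema.
        import psycopg2

        conn = psycopg2.connect(dbname="mobbed", user="analyst")  # connection details assumed
        with conn, conn.cursor() as cur:
            # Find every dataset in which a "button_press" event occurs within
            # 500 ms of a "stimulus" event.
            cur.execute("""
                SELECT DISTINCT s.dataset_id
                FROM events s
                JOIN events b ON b.dataset_id = s.dataset_id
                WHERE s.event_type = %s
                  AND b.event_type = %s
                  AND b.onset_s BETWEEN s.onset_s AND s.onset_s + 0.5
            """, ("stimulus", "button_press"))
            for (dataset_id,) in cur.fetchall():
                print(dataset_id)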

  13. MOBBED: a computational data infrastructure for handling large collections of event-rich time series datasets in MATLAB

    PubMed Central

    Cockfield, Jeremy; Su, Kyungmin; Robbins, Kay A.

    2013-01-01

    Experiments to monitor human brain activity during active behavior record a variety of modalities (e.g., EEG, eye tracking, motion capture, respiration monitoring) and capture a complex environmental context leading to large, event-rich time series datasets. The considerable variability of responses within and among subjects in more realistic behavioral scenarios requires experiments to assess many more subjects over longer periods of time. This explosion of data requires better computational infrastructure to more systematically explore and process these collections. MOBBED is a lightweight, easy-to-use, extensible toolkit that allows users to incorporate a computational database into their normal MATLAB workflow. Although capable of storing quite general types of annotated data, MOBBED is particularly oriented to multichannel time series such as EEG that have event streams overlaid with sensor data. MOBBED directly supports access to individual events, data frames, and time-stamped feature vectors, allowing users to ask questions such as what types of events or features co-occur under various experimental conditions. A database provides several advantages not available to users who process one dataset at a time from the local file system. In addition to archiving primary data in a central place to save space and avoid inconsistencies, such a database allows users to manage, search, and retrieve events across multiple datasets without reading the entire dataset. The database also provides infrastructure for handling more complex event patterns that include environmental and contextual conditions. The database can also be used as a cache for expensive intermediate results that are reused in such activities as cross-validation of machine learning algorithms. MOBBED is implemented over PostgreSQL, a widely used open source database, and is freely available under the GNU general public license at http://visual.cs.utsa.edu/mobbed. Source and issue reports for MOBBED

  14. The advanced computational testing and simulation toolkit (ACTS)

    SciTech Connect

    Drummond, L.A.; Marques, O.

    2002-05-21

    During the past decades there has been a continuous growth in the number of physical and societal problems that have been successfully studied and solved by means of computational modeling and simulation. Distinctively, a number of these are important scientific problems ranging in scale from the atomic to the cosmic. For example, ionization is a phenomenon as ubiquitous in modern society as the glow of fluorescent lights and the etching on silicon computer chips; but it was not until 1999 that researchers finally achieved a complete numerical solution to the simplest example of ionization, the collision of a hydrogen atom with an electron. On the opposite scale, cosmologists have long wondered whether the expansion of the Universe, which began with the Big Bang, would ever reverse itself, ending the Universe in a Big Crunch. In 2000, analysis of new measurements of the cosmic microwave background radiation showed that the geometry of the Universe is flat, and thus the Universe will continue expanding forever. Both of these discoveries depended on high performance computer simulations that utilized computational tools included in the Advanced Computational Testing and Simulation (ACTS) Toolkit. The ACTS Toolkit is an umbrella project that brought together a number of general purpose computational tool development projects funded and supported by the U.S. Department of Energy (DOE). These tools, which have been developed independently, mainly at DOE laboratories, make it easier for scientific code developers to write high performance applications for parallel computers. They tackle a number of computational issues that are common to a large number of scientific applications, mainly implementation of numerical algorithms, and support for code development, execution and optimization. The ACTS Toolkit Project enables the use of these tools by a much wider community of computational scientists, and promotes code portability, reusability, reduction of duplicate efforts
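
    The workflow the ACTS tools support (assemble your problem, then hand it to a tuned library solver) can be illustrated at small scale. The sketch below uses SciPy's conjugate-gradient routine on a 1-D Poisson system as a stand-in; the ACTS tools themselves are parallel C/Fortran libraries, so this is an analogy of the workflow, not a use of the Toolkit.

        # The point of toolkits like ACTS is that application codes call tuned,
        # reusable solvers instead of re-implementing them. SciPy plays an
        # analogous role here at small scale: assemble a 1-D Poisson system and
        # hand it to a library conjugate-gradient solver.
        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import cg

        n = 1000
        A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
        b = np.ones(n)

        x, info = cg(A, b)  # info == 0 signals convergence
        print("converged:", info == 0, "residual:", np.linalg.norm(A @ x - b))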

  15. An infrastructure with a unified control plane to integrate IP into optical metro networks to provide flexible and intelligent bandwidth on demand for cloud computing

    NASA Astrophysics Data System (ADS)

    Yang, Wei; Hall, Trevor

    2012-12-01

    The Internet is entering an era of cloud computing to provide more cost effective, eco-friendly and reliable services to consumer and business users, and the nature of Internet traffic will undergo a fundamental transformation. Consequently, the current Internet will no longer suffice for serving cloud traffic in metro areas. This work proposes an infrastructure with a unified control plane that integrates simple packet aggregation technology with optical express through the interoperation between IP routers and electrical traffic controllers in optical metro networks. The proposed infrastructure provides flexible, intelligent, and eco-friendly bandwidth on demand for cloud computing in metro areas.

  16. Experimental and computing strategies in advanced material characterization problems

    SciTech Connect

    Bolzon, G.

    2015-10-28

    The mechanical characterization of materials relies more and more often on sophisticated experimental methods that permit the acquisition of a large amount of data while, at the same time, reducing the invasiveness of the tests. This evolution accompanies the growing demand for non-destructive diagnostic tools that assess the safety level of components in use in structures and infrastructures, for instance in the strategic energy sector. Advanced material systems and properties that are not amenable to traditional techniques, for instance thin layered structures and their adhesion to the relevant substrates, can also be characterized by means of combined experimental-numerical tools elaborating data acquired by full-field measurement techniques. In this context, parameter identification procedures involve the repeated simulation of laboratory or in situ tests by sophisticated and usually expensive non-linear analyses while, in some situations, reliable and accurate results would be required in real time. The effectiveness and the filtering capabilities of reduced models based on decomposition and interpolation techniques can be profitably used to meet these conflicting requirements. This communication intends to summarize some results recently achieved in this field by the author and her co-workers. The aim is to foster further interaction between the engineering and mathematical communities.
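
    The decomposition-and-interpolation approach mentioned above can be sketched with a proper orthogonal decomposition (POD). The snapshot data below is synthetic and the model is a stand-in; in the applications described, the snapshots would come from expensive non-linear analyses or full-field measurements.

        # Sketch of a POD reduced model: build a basis from precomputed snapshot
        # responses, then evaluate new responses in the reduced space almost
        # instantly. Snapshot data here is synthetic, for illustration only.
        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(0.0, 1.0, 400)
        # "Snapshots": responses of a hypothetical model for several parameter values
        params = rng.uniform(1.0, 3.0, size=30)
        snapshots = np.column_stack([np.sin(p * np.pi * x) * p for p in params])

        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.9999)) + 1
        basis = U[:, :k]  # reduced basis retaining ~99.99% of snapshot energy

        new_field = np.sin(2.2 * np.pi * x) * 2.2  # a response not in the snapshot set
        approx = basis @ (basis.T @ new_field)     # projection onto the reduced basis
        print("modes:", k, "relative error:",
              np.linalg.norm(approx - new_field) / np.linalg.norm(new_field))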

  17. Experimental and computing strategies in advanced material characterization problems

    NASA Astrophysics Data System (ADS)

    Bolzon, G.

    2015-10-01

    The mechanical characterization of materials relies more and more often on sophisticated experimental methods that permit the acquisition of a large amount of data while, at the same time, reducing the invasiveness of the tests. This evolution accompanies the growing demand for non-destructive diagnostic tools that assess the safety level of components in use in structures and infrastructures, for instance in the strategic energy sector. Advanced material systems and properties that are not amenable to traditional techniques, for instance thin layered structures and their adhesion to the relevant substrates, can also be characterized by means of combined experimental-numerical tools elaborating data acquired by full-field measurement techniques. In this context, parameter identification procedures involve the repeated simulation of laboratory or in situ tests by sophisticated and usually expensive non-linear analyses while, in some situations, reliable and accurate results would be required in real time. The effectiveness and the filtering capabilities of reduced models based on decomposition and interpolation techniques can be profitably used to meet these conflicting requirements. This communication intends to summarize some results recently achieved in this field by the author and her co-workers. The aim is to foster further interaction between the engineering and mathematical communities.

  18. CSDMS2.0: Computational Infrastructure for Community Surface Dynamics Modeling

    NASA Astrophysics Data System (ADS)

    Syvitski, J. P.; Hutton, E.; Peckham, S. D.; Overeem, I.; Kettner, A.

    2012-12-01

    The Community Surface Dynamics Modeling System (CSDMS) is an NSF-supported, international and community-driven program that seeks to transform the science and practice of earth-surface dynamics modeling. CSDMS integrates a diverse community of more than 850 geoscientists representing 360 international institutions (academic, government, industry) from 60 countries and is supported by a CSDMS Interagency Committee (22 Federal agencies), and a CSDMS Industrial Consortia (18 companies). CSDMS presently distributes more than 200 Open Source models and modeling tools, provides access to high performance computing clusters in support of developing and running models, and offers a suite of products for education and knowledge transfer. CSDMS software architecture employs frameworks and services that convert stand-alone models into flexible "plug-and-play" components to be assembled into larger applications. CSDMS2.0 will support model applications within a web browser, on a wider variety of computational platforms, and on other high performance computing clusters to ensure robustness and sustainability of the framework. Conversion of stand-alone models into "plug-and-play" components will employ automated wrapping tools. Methods for quantifying model uncertainty are being adapted as part of the modeling framework. Benchmarking data is being incorporated into the CSDMS modeling framework to support model inter-comparison. Finally, a robust mechanism for ingesting and utilizing semantic mediation databases is being developed within the Modeling Framework. Six new community initiatives are being pursued: 1) an earth - ecosystem modeling initiative to capture ecosystem dynamics and ensuing interactions with landscapes, 2) a geodynamics initiative to investigate the interplay among climate, geomorphology, and tectonic processes, 3) an Anthropocene modeling initiative, to incorporate mechanistic models of human influences, 4) a coastal vulnerability modeling initiative, with emphasis on deltas and
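
    The plug-and-play conversion rests on wrapping each model behind a small, uniform interface (CSDMS's Basic Model Interface). The component below follows that style of interface; the toy diffusion model inside it is illustrative and not a CSDMS model.

        # Sketch of the "plug-and-play" component idea: wrap a model behind a
        # small, uniform interface so a framework can drive any model the same
        # way. Method names follow the style of CSDMS's Basic Model Interface;
        # the 1-D diffusion model inside is a toy, not a CSDMS model.
        import numpy as np

        class DiffusionComponent:
            def initialize(self, n=100, kappa=0.1):
                self.dt = 0.4          # stable for this explicit scheme (kappa*dt <= 0.5)
                self.kappa = kappa
                self.z = np.zeros(n)
                self.z[n // 2] = 1.0   # initial bump of sediment

            def update(self):
                z = self.z
                z[1:-1] += self.kappa * self.dt * (z[2:] - 2 * z[1:-1] + z[:-2])

            def get_value(self, name):
                assert name == "surface_elevation"
                return self.z.copy()

            def finalize(self):
                self.z = None

        # A framework can now drive the component without knowing its internals:
        comp = DiffusionComponent()
        comp.initialize()
        for _ in range(500):
            comp.update()
        print(comp.get_value("surface_elevation").max())
        comp.finalize()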

  19. Whole-genome CNV analysis: advances in computational approaches

    PubMed Central

    Pirooznia, Mehdi; Goes, Fernando S.; Zandi, Peter P.

    2015-01-01

    Accumulating evidence indicates that DNA copy number variation (CNV) is likely to make a significant contribution to human diversity and also play an important role in disease susceptibility. Recent advances in genome sequencing technologies have enabled the characterization of a variety of genomic features, including CNVs. This has led to the development of several bioinformatics approaches to detect CNVs from next-generation sequencing data. Here, we review recent advances in CNV detection from whole genome sequencing. We discuss the informatics approaches and current computational tools that have been developed as well as their strengths and limitations. This review will assist researchers and analysts in choosing the most suitable tools for CNV analysis as well as provide suggestions for new directions in future development. PMID:25918519
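
    A minimal read-depth sketch indicates what many of the reviewed callers build on: normalize per-window coverage, then flag windows whose log2 ratio crosses gain/loss thresholds. Everything below is synthetic, and the +/-0.5 thresholds are common illustrative heuristics; production tools add GC correction, segmentation, and statistical testing.

        # Minimal read-depth CNV sketch: normalize per-window coverage against
        # the sample median and flag windows crossing log2-ratio thresholds.
        # Synthetic data and illustrative thresholds; not a validated pipeline.
        import numpy as np

        rng = np.random.default_rng(1)
        depth = rng.poisson(200, size=1000).astype(float)  # simulated per-window coverage
        depth[400:420] *= 1.5                              # a duplicated region
        depth[700:715] *= 0.5                              # a deleted region

        log2_ratio = np.log2(depth / np.median(depth))
        gains = np.flatnonzero(log2_ratio > 0.5)
        losses = np.flatnonzero(log2_ratio < -0.5)
        # Single-window thresholding is noisy; real callers segment adjacent windows.
        print("gain windows:", gains)
        print("loss windows:", losses)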

  20. A Geometry Based Infra-Structure for Computational Analysis and Design

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1998-01-01

    The computational steps traditionally taken for most engineering analysis suites (computational fluid dynamics (CFD), structural analysis, heat transfer, etc.) are: (1) Surface Generation -- usually by employing a Computer Assisted Design (CAD) system; (2) Grid Generation -- preparing the volume for the simulation; (3) Flow Solver -- producing the results at the specified operational point; (4) Post-processing Visualization -- interactively attempting to understand the results. For structural analysis, integrated systems can be obtained from a number of commercial vendors. These vendors couple directly to a number of CAD systems and are executed from within the CAD Graphical User Interface (GUI). It should be noted that the structural analysis problem is more tractable than CFD; there are fewer mesh topologies used and the grids are not as fine (this problem space does not have the length-scaling issues of fluids). For CFD, these steps have worked well in the past for simple steady-state simulations, at the expense of much user interaction. The data were transmitted between phases via files. In most cases, the output from a CAD system could go to Initial Graphics Exchange Specification (IGES) or Standard Exchange Program (STEP) files. The outputs from Grid Generators and Solvers do not really have standards, though there are a couple of file formats that can be used for a subset of the gridding (i.e. PLOT3D data formats). The user would have to patch up the data or translate from one format to another to move to the next step. Sometimes this could take days. Specifically, the problems with this procedure are: (1) File based -- Information flows from one step to the next via data files with formats specified for that procedure. File standards, when they exist, are wholly inadequate. For example, geometry from CAD systems (transmitted via IGES files) is defined as disjoint surfaces and curves (as well as masses of other information of no interest for the Grid Generator

  1. Software for the ACP (Advanced Computer Program) multiprocessor system

    SciTech Connect

    Biel, J.; Areti, H.; Atac, R.; Cook, A.; Fischler, M.; Gaines, I.; Kaliher, C.; Hance, R.; Husby, D.; Nash, T.

    1987-02-02

    Software has been developed for use with the Fermilab Advanced Computer Program (ACP) multiprocessor system. The software was designed to make a system of a hundred independent node processors as easy to use as a single, powerful CPU. Subroutines have been developed by which a user's host program can send data to and get results from the program running in each of his ACP node processors. Utility programs make it easy to compile and link host and node programs, to debug a node program on an ACP development system, and to submit a debugged program to an ACP production system.

  2. Advances in Computational Stability Analysis of Composite Aerospace Structures

    SciTech Connect

    Degenhardt, R.; Araujo, F. C. de

    2010-09-30

    The European aircraft industry demands reduced development and operating costs. Structural weight reduction by exploitation of structural reserves in composite aerospace structures contributes to this aim; however, it requires accurate and experimentally validated stability analysis of real structures under realistic loading conditions. This paper presents different advances from the area of computational stability analysis of composite aerospace structures which contribute to that field. For stringer-stiffened panels, main results of the completed EU project COCOMAT are given. It investigated the exploitation of reserves in primary fibre composite fuselage structures through an accurate and reliable simulation of postbuckling and collapse. For unstiffened cylindrical composite shells, a proposal for a new design method is presented.

  3. Computer modeling for advanced life support system analysis.

    PubMed

    Drysdale, A

    1997-01-01

    This article discusses the equivalent mass approach to advanced life support system analysis, describes a computer model developed to use this approach, and presents early results from modeling the NASA JSC BioPlex. The model is built using an object-oriented approach and G2, a commercially available modeling package. Cost factor equivalencies are given for the Volosin scenarios. Plant data from NASA KSC and Utah State University (USU) are used, together with configuration data from the BioPlex design effort. Initial results focus on the importance of obtaining high plant productivity with a flight-like configuration. PMID:11540448

  4. National Computational Infrastructure for Lattice Gauge Theory SciDAC-2 Closeout Report

    SciTech Connect

    Sun, Xian-He

    2013-08-01

    As part of this project work, researchers from Vanderbilt University, Fermi National Laboratory and Illinois Institute of Technology developed a real-time, fault-tolerant cluster monitoring framework. This framework is open source and is available for download upon request. This work has also been used at Fermi Laboratory, Vanderbilt University and Mississippi State University across projects other than LQCD. The goal for the scientific workflow project is to investigate and develop domain-specific workflow tools for LQCD to help effectively orchestrate, in parallel, computational campaigns consisting of many loosely-coupled batch processing jobs. Major requirements for an LQCD workflow system include: a system to manage input metadata, e.g. physics parameters such as masses; a system to manage and permit the reuse of templates describing workflows; a system to capture data provenance information; a system to manage produced data; a means of monitoring workflow progress and status; a means of resuming or extending a stopped workflow; and fault tolerance features to enhance the reliability of running workflows. Requirements for an LQCD workflow system are available in the project documentation.
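
    As a concrete illustration of the orchestration and resume requirements listed above, here is a minimal sketch of a resumable campaign runner; the job names, status file, and submission command are hypothetical stand-ins, not part of the actual LQCD workflow tools.

        import json
        import pathlib
        import subprocess

        STATUS = pathlib.Path("campaign_status.json")            # hypothetical bookkeeping file
        JOBS = [f"measure_corr_{cfg:04d}" for cfg in range(10)]  # loosely-coupled batch jobs

        # Load prior progress so a stopped campaign can be resumed or extended.
        done = set(json.loads(STATUS.read_text())) if STATUS.exists() else set()

        for job in JOBS:
            if job in done:
                continue                                         # already finished: skip on resume
            # Stand-in for a batch submission (e.g. a qsub/sbatch wrapper).
            result = subprocess.run(["echo", "submitting", job], capture_output=True)
            if result.returncode == 0:
                done.add(job)
                STATUS.write_text(json.dumps(sorted(done)))      # persist progress per job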

  5. A Geometry Based Infra-structure for Computational Analysis and Design

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1997-01-01

    The computational steps traditionally taken for most engineering analysis (CFD, structural analysis, etc.) are: Surface Generation - usually by employing a CAD system; Grid Generation - preparing the volume for the simulation; Flow Solver - producing the results at the specified operational point; and Post-processing Visualization - interactively attempting to understand the results. For structural analysis, integrated systems can be obtained from a number of commercial vendors. For CFD, these steps have worked well in the past for simple steady-state simulations, at the expense of much user interaction. The data were transmitted between phases via files. Specifically, the problems with this procedure are: (1) File based. Information flows from one step to the next via data files with formats specified for that procedure. (2) 'Good' Geometry. A bottleneck in getting results from a solver is the construction of proper geometry to be fed to the grid generator. With 'good' geometry a grid can be constructed in tens of minutes (even with a complex configuration) using unstructured techniques. (3) One-Way communication. All information travels on from one phase to the next. Until this process can be automated, more complex problems, such as multi-disciplinary analysis or using the above procedure for design, become prohibitive.

  6. Advances in parallel computer technology for desktop atmospheric dispersion models

    SciTech Connect

    Bian, X.; Ionescu-Niscov, S.; Fast, J.D.; Allwine, K.J.

    1996-12-31

    Desktop models are those used by analysts with varied backgrounds to perform, for example, air quality assessment and emergency response activities. These models must be robust, well documented, have minimal and well controlled user inputs, and have clear outputs. Existing coarse-grained parallel computers can provide significant increases in computation speed in desktop atmospheric dispersion modeling without considerable increases in hardware cost. This increased speed will allow significant improvements to be made in the scientific foundations of these applied models, in the form of more advanced diffusion schemes and better representation of the wind and turbulence fields. This is especially attractive for emergency response applications, where speed and accuracy are of utmost importance. This paper describes one particular application of coarse-grained parallel computer technology to a desktop complex-terrain atmospheric dispersion modeling system. By comparing performance characteristics of the coarse-grained parallel version of the model with the single-processor version, we demonstrate that applying coarse-grained parallel computer technology to desktop atmospheric dispersion modeling systems will allow us to address critical issues facing future requirements of this class of dispersion models.
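
    Coarse-grained parallelism of this kind maps naturally onto independent work units such as particle or puff trajectories. The sketch below shows the pattern with Python's multiprocessing and a toy random-walk advection step (the wind field and trajectory model are invented for illustration, not the paper's model).

        import multiprocessing as mp
        import numpy as np

        def advect_particle(seed: int) -> tuple[float, float]:
            """Toy random-walk trajectory standing in for one dispersion calculation."""
            rng = np.random.default_rng(seed)
            x = y = 0.0
            for _ in range(1_000):
                x += 1.0 + 0.5 * rng.standard_normal()  # mean wind + turbulence
                y += 0.5 * rng.standard_normal()
            return x, y

        if __name__ == "__main__":
            # Each trajectory is independent, so the work distributes across
            # processors with essentially no communication (coarse-grained).
            with mp.Pool() as pool:
                endpoints = pool.map(advect_particle, range(10_000))
            print(np.mean(endpoints, axis=0))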

  7. PRACE - The European HPC Infrastructure

    NASA Astrophysics Data System (ADS)

    Stadelmeyer, Peter

    2014-05-01

    The mission of PRACE (Partnership for Advanced Computing in Europe) is to enable high-impact scientific discovery and engineering research and development across all disciplines to enhance European competitiveness for the benefit of society. PRACE seeks to realize this mission by offering world-class computing and data management resources and services through a peer review process. This talk gives a general overview of PRACE and the PRACE research infrastructure (RI). PRACE is established as an international not-for-profit association, and the PRACE RI is a pan-European supercomputing infrastructure which offers access to computing and data management resources at partner sites distributed throughout Europe. Besides a short summary of the organization, history, and activities of PRACE, it is explained how scientists and researchers from academia and industry from around the world can access PRACE systems and which education and training activities are offered by PRACE. The overview also contains a selection of PRACE contributions to societal challenges and ongoing activities. Examples of the latter include, among others, petascaling, an application benchmark suite, best practice guides for efficient use of key architectures, application enabling/scaling, new programming models, and industrial applications. The Partnership for Advanced Computing in Europe (PRACE) is an international non-profit association with its seat in Brussels. The PRACE Research Infrastructure provides a persistent world-class high performance computing service for scientists and researchers from academia and industry in Europe. The computer systems and their operations accessible through PRACE are provided by 4 PRACE members (BSC representing Spain, CINECA representing Italy, GCS representing Germany and GENCI representing France). The Implementation Phase of PRACE receives funding from the EU's Seventh Framework Programme (FP7/2007-2013) under grant agreements RI-261557, RI-283493 and RI

  18. National Computational Infrastructure for Lattice Gauge Theory SciDAC-2 Closeout Report

    SciTech Connect

    Bapty, Theodore; Dubey, Abhishek

    2013-07-18

    As part of the reliability project work, researchers from Vanderbilt University, Fermi National Laboratory and Illinois Institute of Technology developed a real-time, fault-tolerant cluster monitoring framework. The goal for the scientific workflow project is to investigate and develop domain-specific workflow tools for LQCD to help effectively orchestrate, in parallel, computational campaigns consisting of many loosely-coupled batch processing jobs. Major requirements for an LQCD workflow system include: a system to manage input metadata, e.g. physics parameters such as masses; a system to manage and permit the reuse of templates describing workflows; a system to capture data provenance information; a system to manage produced data; a means of monitoring workflow progress and status; a means of resuming or extending a stopped workflow; and fault tolerance features to enhance the reliability of running workflows. In summary, these achievements are reported: • Implemented a software system to manage parameters. This includes a parameter set language based on a superset of the JSON data-interchange format, parsers in multiple languages (C++, Python, Ruby), and a web-based interface tool. It also includes a templating system that can produce input text for LQCD applications like MILC. • Implemented a monitoring sensor framework in software that is in production on the Fermilab USQCD facility. This includes equipment health, process accounting, MPI/QMP process tracking, and batch system (Torque) job monitoring. All sensor data are available from databases, and various query tools can be used to extract common data patterns and perform ad hoc searches. Common batch system queries such as job status are available in command line tools and are used in actual workflow-based production by a subset of Fermilab users. • Developed a formal state machine model for scientific workflow and reliability systems. This includes the use of Vanderbilt's Generic Modeling
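
    To make the parameter-management idea concrete, here is a minimal sketch of storing physics parameters as JSON and expanding them into an application input file. The parameter names and template are invented for illustration; the project's actual parameter-set language is a superset of JSON, not plain JSON as used here.

        import json
        from string import Template

        # Hypothetical parameter set; the real system uses a JSON superset
        # with parsers in C++, Python, and Ruby.
        params_json = '{"beta": 6.0, "mass_light": 0.01, "mass_strange": 0.05, "lattice": [24, 24, 24, 64]}'
        params = json.loads(params_json)

        # Hypothetical input template for a MILC-like lattice QCD application.
        template = Template(
            "beta $beta\n"
            "mass_l $mass_light\n"
            "mass_s $mass_strange\n"
            "dims $nx $ny $nz $nt\n"
        )
        nx, ny, nz, nt = params["lattice"]
        print(template.substitute({**params, "nx": nx, "ny": ny, "nz": nz, "nt": nt}))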

  9. DOE Advanced Scientific Computing Advisory Committee (ASCAC) Report: Exascale Computing Initiative Review

    SciTech Connect

    Reed, Daniel; Berzins, Martin; Pennington, Robert; Sarkar, Vivek; Taylor, Valerie

    2015-08-01

    On November 19, 2014, the Advanced Scientific Computing Advisory Committee (ASCAC) was charged with reviewing the Department of Energy’s conceptual design for the Exascale Computing Initiative (ECI). In particular, this included assessing whether there are significant gaps in the ECI plan or areas that need to be given priority or extra management attention. Given the breadth and depth of previous reviews of the technical challenges inherent in exascale system design and deployment, the subcommittee focused its assessment on organizational and management issues, considering technical issues only as they informed organizational or management priorities and structures. This report presents the observations and recommendations of the subcommittee.

  10. Strategies for casualty mitigation programs by using advanced tsunami computation

    NASA Astrophysics Data System (ADS)

    IMAI, K.; Imamura, F.

    2012-12-01

    1. Purpose of the study: In this study, based on scenarios of great earthquakes along the Nankai trough, we aim to estimate, with high accuracy, the run-up and inundation process of tsunamis in coastal areas, including rivers. Using a practical analytical tsunami model that takes into account detailed topography, land use, and climate change in a realistic present and expected future environment, we examined the run-up and tsunami inundation process. Using these results, we estimated the damage due to the tsunami and obtained information for the mitigation of human casualties. Considering the time series from the occurrence of the earthquake and the risk of tsunami damage, we provide disaster risk information displayed in a tsunami hazard and risk map in order to mitigate casualties. 2. Creating a tsunami hazard and risk map: From the practical analytical tsunami model (a long-wave approximation model) and high-resolution topography (5 m) including detailed data on shorelines, rivers, buildings and houses, we present an advanced analysis of tsunami inundation that considers land use. Based on the results of the tsunami inundation analysis, it is possible to draw a tsunami hazard and risk map with information on human casualties, building damage estimates, drifting vehicles, etc. 3. Contents of disaster prevention information: To improve the distribution of hazard, risk and evacuation information, it is necessary to follow three steps. (1) Provide basic information such as tsunami attack information, areas and routes for evacuation, and locations of tsunami evacuation facilities. (2) Provide, as additional information, the time when inundation starts, actual inundation results, locations of facilities with hazardous materials, and the presence or absence of public facilities and underground areas that require evacuation. (3) Provide information to support disaster response, such as infrastructure and traffic network damage prediction
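
    The long-wave approximation referred to above reduces tsunami propagation to the shallow-water equations. The sketch below integrates their linearized 1D form on a staggered grid with a standard leapfrog-style scheme, using made-up flat bathymetry rather than the study's 5 m topography; it is a textbook discretization, not the authors' code.

        import numpy as np

        g = 9.81
        nx, dx, dt = 400, 1_000.0, 2.0          # grid cells, spacing [m], time step [s]
        depth = np.full(nx, 4_000.0)            # made-up flat bathymetry [m]
        eta = np.exp(-((np.arange(nx) - 200.0) / 10.0) ** 2)  # initial surface hump
        flux = np.zeros(nx + 1)                 # volume flux on staggered faces

        # CFL condition for the linear long-wave speed sqrt(g*h).
        assert dt * np.sqrt(g * depth.max()) / dx < 1.0

        for _ in range(500):
            # Momentum: d(flux)/dt = -g*h * d(eta)/dx  (linearized, no friction)
            flux[1:-1] -= dt * g * 0.5 * (depth[:-1] + depth[1:]) * np.diff(eta) / dx
            # Continuity: d(eta)/dt = -d(flux)/dx
            eta -= dt * np.diff(flux) / dx

        print(f"max surface elevation after 1000 s: {eta.max():.3f} m")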

  11. ABrIL - Advanced Brain Imaging Lab : a cloud based computation environment for cooperative neuroimaging projects.

    PubMed

    Neves Tafula, Sérgio M; Moreira da Silva, Nádia; Rozanski, Verena E; Silva Cunha, João Paulo

    2014-01-01

    Neuroscience is an increasingly multidisciplinary and highly cooperative field in which neuroimaging plays an important role. The rapid evolution of neuroimaging demands a growing number of computing resources and skills that need to be put in place at every lab. Typically, each group tries to set up its own servers and workstations to support its neuroimaging needs, having to learn everything from operating system management to the details of specific neuroscience software tools before any results can be obtained from each setup. This setup and learning process is replicated in every lab, even when a strong collaboration among several groups is going on. In this paper we present a new cloud service model - Brain Imaging Application as a Service (BiAaaS) - and one of its implementations - the Advanced Brain Imaging Lab (ABrIL) - in the form of a ubiquitous virtual desktop remote infrastructure that offers a set of neuroimaging computational services in an interactive, neuroscientist-friendly graphical user interface (GUI). This remote desktop has been used for several multi-institution cooperative projects with different neuroscience objectives that have already achieved important results, such as the contribution to a high-impact paper published in the January issue of the Neuroimage journal. The ABrIL system has shown its applicability in several neuroscience projects at a relatively low cost, promoting truly collaborative actions and speeding up project results and their clinical applicability.

  12. Recent Advances in Computational Mechanics of the Human Knee Joint

    PubMed Central

    Kazemi, M.; Dabiri, Y.; Li, L. P.

    2013-01-01

    Computational mechanics has been advanced in every area of orthopedic biomechanics. The objective of this paper is to provide a general review of the computational models used in the analysis of the mechanical function of the knee joint in different loading and pathological conditions. Major review articles published in related areas are summarized first. The constitutive models for soft tissues of the knee are briefly discussed to facilitate understanding the joint modeling. A detailed review of the tibiofemoral joint models is presented thereafter. The geometry reconstruction procedures as well as some critical issues in finite element modeling are also discussed. Computational modeling can be a reliable and effective method for the study of mechanical behavior of the knee joint, if the model is constructed correctly. Single-phase material models have been used to predict the instantaneous load response for the healthy knees and repaired joints, such as total and partial meniscectomies, ACL and PCL reconstructions, and joint replacements. Recently, poromechanical models accounting for fluid pressurization in soft tissues have been proposed to study the viscoelastic response of the healthy and impaired knee joints. While the constitutive modeling has been considerably advanced at the tissue level, many challenges still exist in applying a good material model to three-dimensional joint simulations. A complete model validation at the joint level seems impossible presently, because only simple data can be obtained experimentally. Therefore, model validation may be concentrated on the constitutive laws using multiple mechanical tests of the tissues. Extensive model verifications at the joint level are still crucial for the accuracy of the modeling. PMID:23509602

  13. NASA Trapezoidal Wing Computations Including Transition and Advanced Turbulence Modeling

    NASA Technical Reports Server (NTRS)

    Rumsey, C. L.; Lee-Rausch, E. M.

    2012-01-01

    Flow about the NASA Trapezoidal Wing is computed with several turbulence models by using grids from the first High Lift Prediction Workshop in an effort to advance understanding of computational fluid dynamics modeling for this type of flowfield. Transition is accounted for in many of the computations. In particular, a recently-developed 4-equation transition model is utilized and works well overall. Accounting for transition tends to increase lift and decrease moment, which improves the agreement with experiment. Upper surface flap separation is reduced, and agreement with experimental surface pressures and velocity profiles is improved. The predicted shape of wakes from upstream elements is strongly influenced by grid resolution in regions above the main and flap elements. Turbulence model enhancements to account for rotation and curvature have the general effect of increasing lift and improving the resolution of the wing tip vortex as it convects downstream. However, none of the models improve the prediction of surface pressures near the wing tip, where more grid resolution is needed.

  14. SciDAC Advances and Applications in Computational Beam Dynamics

    SciTech Connect

    Ryne, R.; Abell, D.; Adelmann, A.; Amundson, J.; Bohn, C.; Cary, J.; Colella, P.; Dechow, D.; Decyk, V.; Dragt, A.; Gerber, R.; Habib, S.; Higdon, D.; Katsouleas, T.; Ma, K.-L.; McCorquodale, P.; Mihalcea, D.; Mitchell, C.; Mori, W.; Mottershead, C.T.; Neri, F.; Pogorelov, I.; Qiang, J.; Samulyak, R.; Serafini, D.; Shalf, J.; Siegerist, C.; Spentzouris, P.; Stoltz, P.; Terzic, B.; Venturini, M.; Walstrom, P.

    2005-06-26

    SciDAC has had a major impact on computational beam dynamics and the design of particle accelerators. Particle accelerators--which account for half of the facilities in the DOE Office of Science Facilities for the Future of Science 20 Year Outlook--are crucial for US scientific, industrial, and economic competitiveness. Thanks to SciDAC, accelerator design calculations that were once thought impossible are now carried out routinely, and new challenging and important calculations are within reach. SciDAC accelerator modeling codes are being used to get the most science out of existing facilities, to produce optimal designs for future facilities, and to explore advanced accelerator concepts that may hold the key to qualitatively new ways of accelerating charged particle beams. In this poster we present highlights from the SciDAC Accelerator Science and Technology (AST) project Beam Dynamics focus area in regard to algorithm development, software development, and applications.

  15. Advanced information processing system: Inter-computer communication services

    NASA Technical Reports Server (NTRS)

    Burkhardt, Laura; Masotto, Tom; Sims, J. Terry; Whittredge, Roy; Alger, Linda S.

    1991-01-01

    The purpose is to document the functional requirements and detailed specifications for the Inter-Computer Communications Services (ICCS) of the Advanced Information Processing System (AIPS). An introductory section is provided to outline the overall architecture and functional requirements of the AIPS and to present an overview of the ICCS. An overview of the AIPS architecture as well as a brief description of the AIPS software is given. The guarantees of the ICCS are provided, and the ICCS is described as a seven-layered International Standards Organization (ISO) Model. The ICCS functional requirements, functional design, and detailed specifications as well as each layer of the ICCS are also described. A summary of results and suggestions for future work are presented.

  16. The virtual machine (VM) scaler: an infrastructure manager supporting environmental modeling on IaaS clouds

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Infrastructure-as-a-service (IaaS) clouds provide a new medium for deployment of environmental modeling applications. Harnessing advancements in virtualization, IaaS clouds can provide dynamic scalable infrastructure to better support scientific modeling computational demands. Providing scientific m...

  17. 5 CFR 550.404 - Computation of advance payments and evacuation payments; time periods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Computation of advance payments and... Computation of advance payments and evacuation payments; time periods. (a) Payments shall be based on the rate... others, when applicable, shall be made before advance payments or evacuation payments are made....

  18. 5 CFR 550.404 - Computation of advance payments and evacuation payments; time periods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 5 Administrative Personnel 1 2013-01-01 2013-01-01 false Computation of advance payments and... Computation of advance payments and evacuation payments; time periods. (a) Payments shall be based on the rate... others, when applicable, shall be made before advance payments or evacuation payments are made....

  19. Reliability of an interactive computer program for advance care planning.

    PubMed

    Schubart, Jane R; Levi, Benjamin H; Camacho, Fabian; Whitehead, Megan; Farace, Elana; Green, Michael J

    2012-06-01

    Despite widespread efforts to promote advance directives (ADs), completion rates remain low. Making Your Wishes Known: Planning Your Medical Future (MYWK) is an interactive computer program that guides individuals through the process of advance care planning, explaining health conditions and interventions that commonly involve life or death decisions, helps them articulate their values/goals, and translates users' preferences into a detailed AD document. The purpose of this study was to demonstrate that (in the absence of major life changes) the AD generated by MYWK reliably reflects an individual's values/preferences. English speakers ≥30 years old completed MYWK twice, 4 to 6 weeks apart. Reliability indices were assessed for three AD components: General Wishes; Specific Wishes for treatment; and Quality-of-Life values (QoL). Twenty-four participants completed the study. Both the Specific Wishes and QoL scales had high internal consistency in both time periods (Kuder-Richardson formula 20 [KR-20]=0.83-0.95, and 0.86-0.89). Test-retest reliability was perfect for General Wishes (κ=1), high for QoL (Pearson's correlation coefficient=0.83), but lower for Specific Wishes (Pearson's correlation coefficient=0.57). MYWK generates an AD where General Wishes and QoL (but not Specific Wishes) statements remain consistent over time. PMID:22512830
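
    For readers unfamiliar with the indices reported here, the sketch below computes KR-20 internal consistency and the Pearson test-retest correlation on simulated binary questionnaire responses; the data are made up for illustration and are not the study's.

        import numpy as np

        rng = np.random.default_rng(1)
        n_subjects, n_items = 24, 20

        # Simulated binary (yes/no) advance-directive item responses at two visits.
        ability = rng.normal(size=n_subjects)
        t1 = (ability[:, None] + rng.normal(0, 0.8, (n_subjects, n_items)) > 0).astype(float)
        t2 = (ability[:, None] + rng.normal(0, 0.8, (n_subjects, n_items)) > 0).astype(float)

        def kr20(items: np.ndarray) -> float:
            """Kuder-Richardson formula 20 for dichotomous items."""
            k = items.shape[1]
            p = items.mean(axis=0)               # proportion answering 'yes' per item
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - (p * (1 - p)).sum() / total_var)

        # Internal consistency at each visit, then test-retest correlation of totals.
        print(f"KR-20 visit 1: {kr20(t1):.2f}, visit 2: {kr20(t2):.2f}")
        r = np.corrcoef(t1.sum(axis=1), t2.sum(axis=1))[0, 1]
        print(f"test-retest Pearson r: {r:.2f}")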

  20. Reliability of an Interactive Computer Program for Advance Care Planning

    PubMed Central

    Levi, Benjamin H.; Camacho, Fabian; Whitehead, Megan; Farace, Elana; Green, Michael J

    2012-01-01

    Despite widespread efforts to promote advance directives (ADs), completion rates remain low. Making Your Wishes Known: Planning Your Medical Future (MYWK) is an interactive computer program that guides individuals through the process of advance care planning, explaining health conditions and interventions that commonly involve life or death decisions, helps them articulate their values/goals, and translates users' preferences into a detailed AD document. The purpose of this study was to demonstrate that (in the absence of major life changes) the AD generated by MYWK reliably reflects an individual's values/preferences. English speakers ≥30 years old completed MYWK twice, 4 to 6 weeks apart. Reliability indices were assessed for three AD components: General Wishes; Specific Wishes for treatment; and Quality-of-Life values (QoL). Twenty-four participants completed the study. Both the Specific Wishes and QoL scales had high internal consistency in both time periods (Kuder-Richardson formula 20 [KR-20]=0.83–0.95, and 0.86–0.89). Test-retest reliability was perfect for General Wishes (κ=1), high for QoL (Pearson's correlation coefficient=0.83), but lower for Specific Wishes (Pearson's correlation coefficient=0.57). MYWK generates an AD where General Wishes and QoL (but not Specific Wishes) statements remain consistent over time. PMID:22512830

  1. Optical design and characterization of an advanced computational imaging system

    NASA Astrophysics Data System (ADS)

    Shepard, R. Hamilton; Fernandez-Cull, Christy; Raskar, Ramesh; Shi, Boxin; Barsi, Christopher; Zhao, Hang

    2014-09-01

    We describe an advanced computational imaging system with an optical architecture that enables simultaneous and dynamic pupil-plane and image-plane coding accommodating several task-specific applications. We assess the optical requirement trades associated with custom and commercial-off-the-shelf (COTS) optics and converge on the development of two low-cost and robust COTS testbeds. The first is a coded-aperture programmable pixel imager employing a digital micromirror device (DMD) for image plane per-pixel oversampling and spatial super-resolution experiments. The second is a simultaneous pupil-encoded and time-encoded imager employing a DMD for pupil apodization or a deformable mirror for wavefront coding experiments. These two testbeds are built to leverage two MIT Lincoln Laboratory focal plane arrays - an orthogonal transfer CCD with non-uniform pixel sampling and on-chip dithering and a digital readout integrated circuit (DROIC) with advanced on-chip per-pixel processing capabilities. This paper discusses the derivation of optical component requirements, optical design metrics, and performance analyses for the two testbeds built.

  2. Recent advances in computational structural reliability analysis methods

    NASA Technical Reports Server (NTRS)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-01-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known, and it usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single-mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
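
    As a toy illustration of the probabilistic alternative to safety factors, the sketch below estimates a failure probability by Monte Carlo sampling of a simple limit state g = R - S (resistance minus load effect); the distributions are invented for the example.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 1_000_000

        # Invented distributions: lognormal resistance R, normal load effect S.
        R = rng.lognormal(mean=np.log(300.0), sigma=0.10, size=n)  # capacity
        S = rng.normal(loc=200.0, scale=30.0, size=n)              # demand

        # Limit state g = R - S; failure when g < 0.
        pf = np.mean(R - S < 0.0)
        se = np.sqrt(pf * (1.0 - pf) / n)
        print(f"estimated failure probability: {pf:.2e} (MC standard error {se:.1e})")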

  3. Advances in computer technology: impact on the practice of medicine.

    PubMed

    Groth-Vasselli, B; Singh, K; Farnsworth, P N

    1995-01-01

    Advances in computer technology provide a wide range of applications which are revolutionizing the practice of medicine. The development of new software for the office creates a web of communication among physicians, staff members, health care facilities and associated agencies. This provides the physician with the prospect of a paperless office. At the other end of the spectrum, the development of 3D workstations and software based on computational chemistry permits visualization of protein molecules involved in disease. Computer-assisted molecular modeling has been used to construct working 3D models of lens alpha-crystallin. The 3D structure of alpha-crystallin is basic to our understanding of the molecular mechanisms involved in lens fiber cell maturation, stabilization of the inner nuclear region, the maintenance of lens transparency and cataractogenesis. The major component of the high molecular weight aggregates that occur during cataractogenesis is alpha-crystallin subunits. Subunits of alpha-crystallin occur in other tissues of the body. In the central nervous system, accumulation of these subunits in the form of dense inclusion bodies occurs in pathological conditions such as Alzheimer's disease, Huntington's disease, multiple sclerosis and toxoplasmosis (Iwaki, Wisniewski et al., 1992), as well as in neoplasms of astrocyte origin (Iwaki, Iwaki, et al., 1991). Cardiac ischemia is also associated with increased alpha B synthesis (Chiesi, Longoni et al., 1990). On a more global level, the molecular structure of alpha-crystallin may provide information pertaining to the function of small heat shock proteins (hsp) in maintaining cell stability under the stress of disease.

  4. OpenCMISS: a multi-physics & multi-scale computational infrastructure for the VPH/Physiome project.

    PubMed

    Bradley, Chris; Bowery, Andy; Britten, Randall; Budelmann, Vincent; Camara, Oscar; Christie, Richard; Cookson, Andrew; Frangi, Alejandro F; Gamage, Thiranja Babarenda; Heidlauf, Thomas; Krittian, Sebastian; Ladd, David; Little, Caton; Mithraratne, Kumar; Nash, Martyn; Nickerson, David; Nielsen, Poul; Nordbø, Oyvind; Omholt, Stig; Pashaei, Ali; Paterson, David; Rajagopal, Vijayaraghavan; Reeve, Adam; Röhrle, Oliver; Safaei, Soroush; Sebastián, Rafael; Steghöfer, Martin; Wu, Tim; Yu, Ting; Zhang, Heye; Hunter, Peter

    2011-10-01

    The VPH/Physiome Project is developing the model encoding standards CellML (cellml.org) and FieldML (fieldml.org) as well as web-accessible model repositories based on these standards (models.physiome.org). Freely available open source computational modelling software is also being developed to solve the partial differential equations described by the models and to visualise results. The OpenCMISS code (opencmiss.org), described here, has been developed by the authors over the last six years to replace the CMISS code that has supported a number of organ system Physiome projects. OpenCMISS is designed to encompass multiple sets of physical equations and to link subcellular and tissue-level biophysical processes into organ-level processes. In the Heart Physiome project, for example, the large deformation mechanics of the myocardial wall need to be coupled to both ventricular flow and embedded coronary flow, and the reaction-diffusion equations that govern the propagation of electrical waves through myocardial tissue need to be coupled with equations that describe the ion channel currents that flow through the cardiac cell membranes. In this paper we discuss the design principles and distributed memory architecture behind the OpenCMISS code. We also discuss the design of the interfaces that link the sets of physical equations across common boundaries (such as fluid-structure coupling), or between spatial fields over the same domain (such as coupled electromechanics), and the concepts behind CellML and FieldML that are embodied in the OpenCMISS data structures. We show how all of these provide a flexible infrastructure for combining models developed across the VPH/Physiome community. PMID:21762717
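
    To illustrate the kind of multi-physics coupling described here (tissue-level reaction-diffusion driven by subcellular kinetics), below is a minimal operator-splitting sketch of a 1D monodomain equation with FitzHugh-Nagumo kinetics. It is a generic textbook scheme with invented parameters, not OpenCMISS code or its API.

        import numpy as np

        nx, dx, dt = 200, 0.1, 0.01       # 1D fiber discretization
        D = 0.1                           # effective diffusion (tissue level)
        v = np.full(nx, -1.2)             # membrane potential (dimensionless FHN)
        w = np.full(nx, -0.6)             # recovery variable (subcellular state)
        v[:10] = 1.0                      # stimulate one end of the fiber

        for _ in range(2000):
            # Step 1: subcellular reaction (FitzHugh-Nagumo ODEs), pointwise.
            dv = v - v**3 / 3 - w
            dw = 0.08 * (v + 0.7 - 0.8 * w)
            v += dt * dv
            w += dt * dw
            # Step 2: tissue-level diffusion, explicit finite differences
            # with zero-flux (Neumann) boundaries.
            lap = np.empty_like(v)
            lap[1:-1] = (v[:-2] - 2 * v[1:-1] + v[2:]) / dx**2
            lap[0] = 2 * (v[1] - v[0]) / dx**2
            lap[-1] = 2 * (v[-2] - v[-1]) / dx**2
            v += dt * D * lap

        # Index of the leading edge of the propagating activation front (if any).
        print(f"activation front location: {np.argmax(v > 0.0)}")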

  5. Computational methods for predicting the response of critical as-built infrastructure to dynamic loads (architectural surety)

    SciTech Connect

    Preece, D.S.; Weatherby, J.R.; Attaway, S.W.; Swegle, J.W.; Matalucci, R.V.

    1998-06-01

    Coupled blast-structural computational simulations using supercomputer capabilities will significantly advance the understanding of how complex structures respond under dynamic loads caused by explosives and earthquakes, an understanding with application to the surety of both federal and nonfederal buildings. Simulation of the effects of explosives on structures is a challenge because the explosive response can best be simulated using Eulerian computational techniques while structural behavior is best modeled using Lagrangian methods. Due to the different methodologies of the two computational techniques and code architecture requirements, they are usually implemented in different computer programs. Modeling the explosive and the structure in two different codes makes it difficult or next to impossible to do coupled explosive/structure interaction simulations. Sandia National Laboratories has developed two techniques for solving this problem. The first is called Smoothed Particle Hydrodynamics (SPH), a relatively new gridless method comparable to Eulerian, that is especially suited for treating liquids and gases such as those produced by an explosive. The SPH capability has been fully implemented into the transient dynamics finite element (Lagrangian) codes PRONTO-2D and -3D. A PRONTO-3D/SPH simulation of the effect of a blast on a protective-wall barrier is presented in this paper. The second technique employed at Sandia National Laboratories uses a relatively new code called ALEGRA, which is an ALE (Arbitrary Lagrangian-Eulerian) wave code with specific emphasis on large deformation and shock propagation. ALEGRA is capable of solving many shock-wave physics problems, but it is especially suited for modeling problems involving the interaction of decoupled explosives with structures.
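
    SPH represents the material as particles whose field values are kernel-weighted sums over neighbors. The sketch below computes the classic SPH density summation in 2D with a cubic spline kernel on a small made-up particle cloud; it illustrates the method generically and has no connection to PRONTO or ALEGRA.

        import numpy as np

        def cubic_spline_kernel(r: np.ndarray, h: float) -> np.ndarray:
            """Standard 2D cubic spline smoothing kernel W(r, h)."""
            sigma = 10.0 / (7.0 * np.pi * h**2)
            q = r / h
            w = np.where(q < 1.0, 1 - 1.5 * q**2 + 0.75 * q**3,
                np.where(q < 2.0, 0.25 * (2 - q)**3, 0.0))
            return sigma * w

        # Made-up particle cloud with equal masses filling a unit box.
        rng = np.random.default_rng(3)
        pos = rng.uniform(0.0, 1.0, size=(500, 2))
        mass, h = 1.0 / 500, 0.06

        # Density summation: rho_i = sum_j m_j * W(|x_i - x_j|, h).
        r = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
        rho = (mass * cubic_spline_kernel(r, h)).sum(axis=1)
        print(f"mean SPH density: {rho.mean():.3f} (target 1.0 for a unit box)")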

  6. Recovery Act: Advanced Direct Methanol Fuel Cell for Mobile Computing

    SciTech Connect

    Fletcher, James H.; Cox, Philip; Harrington, William J; Campbell, Joseph L

    2013-09-03

    Project Title: Recovery Act: Advanced Direct Methanol Fuel Cell for Mobile Computing. PROJECT OBJECTIVE: The objective of the project was to advance portable fuel cell system technology towards the commercial targets of power density, energy density and lifetime. These targets were laid out in the DOE's R&D roadmap to develop an advanced direct methanol fuel cell power supply that meets commercial entry requirements. Such a power supply will enable mobile computers to operate non-stop, unplugged from the wall power outlet, by using the high energy density of methanol fuel contained in a replaceable fuel cartridge. Specifically, this project focused on balance-of-plant component integration and miniaturization, as well as extensive component, subassembly and integrated system durability and validation testing. This work has resulted in a pre-production power supply design and a prototype that meet the rigorous demands of consumer electronic applications. PROJECT TASKS: The proposed work plan was designed to meet the project objectives, which corresponded directly with the objectives outlined in the Funding Opportunity Announcement: to engineer the fuel cell balance-of-plant and packaging to meet the needs of consumer electronic systems, specifically at power levels required for mobile computing. UNF used existing balance-of-plant component technologies developed under its current US Army CERDEC project, as well as a previous DOE project completed by PolyFuel, refining them to both miniaturize and integrate their functionality and so increase the system power density and energy density. Benefits of UNF's novel passive water recycling MEA (membrane electrode assembly) and the simplified system architecture it enabled formed the foundation of the design approach. The package design was hardened to address orientation independence, shock, vibration, and environmental requirements. Fuel cartridge and fuel subsystems were improved to ensure effective fuel

  7. FabSim: Facilitating computational research through automation on large-scale and distributed e-infrastructures

    NASA Astrophysics Data System (ADS)

    Groen, Derek; Bhati, Agastya P.; Suter, James; Hetherington, James; Zasada, Stefan J.; Coveney, Peter V.

    2016-10-01

    We present FabSim, a toolkit developed to simplify a range of computational tasks for researchers in diverse disciplines. FabSim is flexible, adaptable, and allows users to perform a wide range of tasks with ease. It also provides a systematic way to automate the use of resources, including HPC and distributed machines, and to make tasks easier to repeat by recording contextual information. To demonstrate this, we present three use cases where FabSim has enhanced our research productivity. These include simulating cerebrovascular bloodflow, modelling clay-polymer nanocomposites across multiple scales, and calculating ligand-protein binding affinities. Catalogue identifier: AFAO_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AFAO_v1_0.html Program obtainable from: CPC Programme Library, Queen's University, Belfast, N. Ireland Licensing provisions: BSD 3-Clause No. of lines in distributed program, including test data, etc.: 268282 No. of bytes in distributed program, including test data, etc.: 2791792 Distribution format: tar.gz Programming language: Python. Computer: PC or Mac. Operating system: Unix, OSX. RAM: 1 Gbytes Classification: 3, 4, 6.5. External routines: NumPy, SciPy, Fabric (1.5 or newer), PyYaml Nature of problem: Running advanced computations using remote resources is an activity that requires considerable time and human attention. These activities, such as organizing data, configuring software and setting up individual runs, often vary slightly each time they are performed. To lighten this burden, we required an approach that introduces little burden of its own to set up and adapt, beyond which very substantial productivity ensues. Solution method: We present a toolkit which helps to simplify and automate the activities which surround computational science research. FabSim is aimed squarely at the experienced computational scientist, who can use the command line interface and a system of modifiable content to quickly automate sets of
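
    FabSim builds on the Python Fabric library (listed above as an external routine). The sketch below conveys the general flavor of a Fabric 1.x task that stages a set of jobs remotely and records contextual information for repeatability; the host, paths, and task body are hypothetical and are not FabSim's actual command set.

        # fabfile.py -- run with: fab run_ensemble   (Fabric 1.x style)
        import json
        import time

        from fabric.api import env, run, task

        env.hosts = ["user@hpc.example.org"]  # hypothetical remote resource

        @task
        def run_ensemble(config="cerebrovascular", replicas=3):
            """Stage and launch a set of jobs, recording context for repeatability."""
            stamp = time.strftime("%Y%m%d-%H%M%S")
            remote_dir = "~/runs/{0}-{1}".format(config, stamp)
            run("mkdir -p {0}".format(remote_dir))
            for i in range(int(replicas)):
                # Stand-in for a scheduler submission wrapped by the toolkit.
                run("cd {0} && echo 'submit replica {1}' >> submitted.log".format(remote_dir, i))
            # Record contextual information so the run can be repeated later.
            context = {"config": config, "replicas": int(replicas), "stamp": stamp}
            run("echo '{0}' > {1}/context.json".format(json.dumps(context), remote_dir))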

  8. Quantitative Computed Tomography and Image Analysis for Advanced Muscle Assessment

    PubMed Central

    Edmunds, Kyle Joseph; Gíslason, Magnus K.; Arnadottir, Iris D.; Marcante, Andrea; Piccione, Francesco; Gargiulo, Paolo

    2016-01-01

    Medical imaging is of particular interest in the field of translational myology, as extant literature describes the utilization of a wide variety of techniques to non-invasively recapitulate and quantify various internal and external tissue morphologies. In the clinical context, medical imaging remains a vital tool for diagnostics and investigative assessment. This review outlines the results from several investigations on the use of computed tomography (CT) and image analysis techniques to assess muscle conditions and degenerative processes due to aging or pathological conditions. Herein, we detail the acquisition of spiral CT images and the use of advanced image analysis tools to characterize muscles in 2D and 3D. Results from these studies recapitulate changes in tissue composition within muscles, as visualized by the association of tissue types to specified Hounsfield Unit (HU) values for fat, loose connective tissue or atrophic muscle, and normal muscle, including fascia and tendon. We show how results from these analyses can be presented as both average HU values and compositions with respect to total muscle volumes, demonstrating the reliability of these tools to monitor, assess and characterize muscle degeneration. PMID:27478562
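
    The HU-based tissue association described here amounts to thresholding the CT volume into intensity bands. The sketch below classifies voxels of a made-up volume using illustrative HU ranges; the bands are assumptions for the example, not the study's calibrated values.

        import numpy as np

        # Made-up CT muscle cross-section in Hounsfield Units.
        rng = np.random.default_rng(7)
        volume = rng.normal(loc=40.0, scale=60.0, size=(64, 64, 32))

        # Illustrative HU bands (assumed for this example, not calibrated values):
        # fat, loose connective / atrophic muscle, normal muscle.
        bands = {
            "fat": (-200.0, -10.0),
            "connective_or_atrophic": (-9.0, 30.0),
            "normal_muscle": (31.0, 150.0),
        }

        total = volume.size
        for tissue, (lo, hi) in bands.items():
            mask = (volume >= lo) & (volume <= hi)
            mean_hu = volume[mask].mean() if mask.any() else float("nan")
            print(f"{tissue}: {100 * mask.sum() / total:.1f}% of volume, mean {mean_hu:.0f} HU")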

  9. Securing Infrastructure from High Explosive Threats

    SciTech Connect

    Glascoe, L; Noble, C; Reynolds, J; Kuhl, A; Morris, J

    2009-03-20

    Lawrence Livermore National Laboratory (LLNL) is working with the Department of Homeland Security's Science and Technology Directorate, the Transportation Security Administration, and several infrastructure partners to characterize and help mitigate principal structural vulnerabilities to explosive threats. Given the importance of infrastructure to the nation's security and economy, there is a clear need for applied research and analyses (1) to improve understanding of the vulnerabilities of these systems to explosive threats and (2) to provide decision makers with time-critical technical assistance concerning countermeasure and mitigation options. Fully-coupled high performance calculations of structural response to ideal and non-ideal explosives help bound and quantify specific critical vulnerabilities, and help identify possible corrective schemes. Experimental validation of modeling approaches and methodologies builds confidence in the prediction, while advanced stochastic techniques allow for optimal use of scarce computational resources to efficiently provide infrastructure owners and decision makers with timely analyses.

  10. Recent advances in data assimilation in computational geodynamic models

    NASA Astrophysics Data System (ADS)

    Ismail-Zadeh, Alik

    2010-05-01

    The QRV method was most recently introduced in geodynamic modelling (Ismail-Zadeh et al., 2007, 2008; Tantsyrev, 2008; Glisovic et al., 2009). The advances in computational geodynamics and in data assimilation attract the interest of the community dealing with lithosphere, mantle and core dynamics.

  11. Important advances in technology and unique applications to cardiovascular computed tomography.

    PubMed

    Chaikriangkrai, Kongkiat; Choi, Su Yeon; Nabi, Faisal; Chang, Su Min

    2014-01-01

    For the past decade, multidetector cardiac computed tomography and its main application, coronary computed tomography angiography, have been established as a noninvasive technique for anatomical assessment of coronary arteries. This new era of coronary artery evaluation by coronary computed tomography angiography has arisen from the rapid advancement in computed tomography technology, which has led to massive diagnostic and prognostic clinical studies in various patient populations. This article gives a brief overview of current multidetector cardiac computed tomography systems, developing cardiac computed tomography technologies in both hardware and software fields, innovative radiation exposure reduction measures, multidetector cardiac computed tomography functional studies, and their newer clinical applications beyond coronary computed tomography angiography. PMID:25574342

  12. Some Hail 'Computational Science' as Biggest Advance Since Newton, Galileo.

    ERIC Educational Resources Information Center

    Turner, Judith Axler

    1987-01-01

    Computational science is defined as science done on a computer. A computer can serve as a laboratory for researchers who cannot experiment with their subjects, and as a calculator for those who otherwise might need centuries to solve some problems mathematically. The National Science Foundation's support of supercomputers is discussed. (MLW)

  13. First 3 years of operation of RIACS (Research Institute for Advanced Computer Science) (1983-1985)

    NASA Technical Reports Server (NTRS)

    Denning, P. J.

    1986-01-01

    The focus of the Research Institute for Advanced Computer Science (RIACS) is to explore matches between advanced computing architectures and the processes of scientific research. An architecture evaluation of the MIT static dataflow machine, specification of a graphical language for expressing distributed computations, and specification of an expert system for aiding in grid generation for two-dimensional flow problems were initiated. Research projects for 1984 and 1985 are summarized.

  14. Distributed telemedicine for the National Information Infrastructure

    SciTech Connect

    Forslund, D.W.; Lee, Seong H.; Reverbel, F.C.

    1997-08-01

    TeleMed is an advanced system that provides a distributed multimedia electronic medical record available over a wide area network. It uses object-based computing, distributed data repositories, advanced graphical user interfaces, and visualization tools, along with innovative concept extraction of image information for storing and accessing medical records, developed in a separate project from 1994-5. In 1996, we began the transition to Java, extended the infrastructure, and worked to begin deploying TeleMed-like technologies throughout the nation. Other applications are mentioned.

  15. Critical Infrastructure Protection II, The International Federation for Information Processing, Volume 290.

    NASA Astrophysics Data System (ADS)

    Papa, Mauricio; Shenoi, Sujeet

    The information infrastructure -- comprising computers, embedded devices, networks and software systems -- is vital to day-to-day operations in every sector: information and telecommunications, banking and finance, energy, chemicals and hazardous materials, agriculture, food, water, public health, emergency services, transportation, postal and shipping, government and defense. Global business and industry, governments, indeed society itself, cannot function effectively if major components of the critical information infrastructure are degraded, disabled or destroyed. Critical Infrastructure Protection II describes original research results and innovative applications in the interdisciplinary field of critical infrastructure protection. Also, it highlights the importance of weaving science, technology and policy in crafting sophisticated, yet practical, solutions that will help secure information, computer and network assets in the various critical infrastructure sectors. Areas of coverage include: Themes and Issues; Infrastructure Security; Control Systems Security; Security Strategies; Infrastructure Interdependencies; and Infrastructure Modeling and Simulation. This book is the second volume in the annual series produced by the International Federation for Information Processing (IFIP) Working Group 11.10 on Critical Infrastructure Protection, an international community of scientists, engineers, practitioners and policy makers dedicated to advancing research, development and implementation efforts focused on infrastructure protection. The book contains a selection of twenty edited papers from the Second Annual IFIP WG 11.10 International Conference on Critical Infrastructure Protection held at George Mason University, Arlington, Virginia, USA in the spring of 2008.

  16. Generalized Advanced Propeller Analysis System (GAPAS). Volume 2: Computer program user manual

    NASA Technical Reports Server (NTRS)

    Glatt, L.; Crawford, D. R.; Kosmatka, J. B.; Swigart, R. J.; Wong, E. W.

    1986-01-01

    The Generalized Advanced Propeller Analysis System (GAPAS) computer code is described. GAPAS was developed to analyze advanced technology multi-bladed propellers which operate on aircraft with speeds up to Mach 0.8 and altitudes up to 40,000 feet. GAPAS includes technology for analyzing aerodynamic, structural, and acoustic performance of propellers. The computer code was developed for the CDC 7600 computer and is currently available for industrial use on the NASA Langley computer. A description of all the analytical models incorporated in GAPAS is included. Sample calculations are also described as well as users requirements for modifying the analysis system. Computer system core requirements and running times are also discussed.

  17. Computer architectures for computational physics work done by Computational Research and Technology Branch and Advanced Computational Concepts Group

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Slides are reproduced that describe the importance of having high performance number crunching and graphics capability. They also indicate the types of research and development underway at Ames Research Center to ensure that, in the near term, Ames is a smart buyer and user, and in the long-term that Ames knows the best possible solutions for number crunching and graphics needs. The drivers for this research are real computational physics applications of interest to Ames and NASA. They are concerned with how to map the applications, and how to maximize the physics learned from the results of the calculations. The computer graphics activities are aimed at getting maximum information from the three-dimensional calculations by using the real time manipulation of three-dimensional data on the Silicon Graphics workstation. Work is underway on new algorithms that will permit the display of experimental results that are sparse and random, the same way that the dense and regular computed results are displayed.

  18. Advanced Scientific Computing Environment Team new scientific database management task

    SciTech Connect

    Church, J.P.; Roberts, J.C.; Sims, R.N.; Smetana, A.O.; Westmoreland, B.W.

    1991-06-01

    The mission of the ASCENT Team is to continually keep pace with, evaluate, and select emerging computing technologies to define and implement prototypic scientific environments that maximize the ability of scientists and engineers to manage scientific data. These environments are to be implemented in a manner consistent with the site computing architecture and standards and NRTSC/SCS strategic plans for scientific computing. The major trends in computing hardware and software technology clearly indicate that the "future computer" will be a network environment that comprises supercomputers, graphics boxes, mainframes, clusters, workstations, terminals, and microcomputers. This "network computer" will have an architecturally transparent operating system allowing the applications code to run on any box supplying the required computing resources. The environment will include a distributed database and database managing system(s) that permits use of relational, hierarchical, object-oriented, GIS, and other databases. To reach this goal requires a stepwise progression from the present assemblage of monolithic applications codes running on disparate hardware platforms and operating systems. The first steps include converting from the existing JOSHUA system to a new J80 system that complies with modern language standards; development of a new J90 prototype to provide JOSHUA capabilities on Unix platforms; development of portable graphics tools to greatly facilitate preparation of input and interpretation of output; and extension of "Jvv" concepts and capabilities to distributed and/or parallel computing environments.

  19. The Advance of Computing from the Ground to the Cloud

    ERIC Educational Resources Information Center

    Breeding, Marshall

    2009-01-01

    A trend toward the abstraction of computing platforms that has been developing in the broader IT arena over the last few years is just beginning to make inroads into the library technology scene. Cloud computing offers for libraries many interesting possibilities that may help reduce technology costs and increase capacity, reliability, and…

  20. Computational Enzyme Design: Advances, hurdles and possible ways forward

    PubMed Central

    Linder, Mats

    2012-01-01

    This mini review addresses recent developments in computational enzyme design. Successful protocols as well as known issues and limitations are discussed from an energetic perspective. It will be argued that improved results can be obtained by including a dynamic treatment in the design protocol. Finally, a molecular dynamics-based approach for evaluating and refining computational designs is presented. PMID:24688650

  1. Infrastructure Standardization.

    ERIC Educational Resources Information Center

    Yow, Donna

    2002-01-01

    Describes the development of technological design standards for a 35-school construction/renovation effort by Guilford County Schools in North Carolina. The standards encompassed the physical infrastructure, telephone systems, and paging systems. (EV)

  2. Green Infrastructure

    EPA Science Inventory

    Large paved surfaces keep rain from infiltrating the soil and recharging groundwater supplies. Alternatively, Green infrastructure uses natural processes to reduce and treat stormwater in place by soaking up and storing water. These systems provide many environmental, social, an...

  3. RecceMan: an interactive recognition assistance for image-based reconnaissance: synergistic effects of human perception and computational methods for object recognition, identification, and infrastructure analysis

    NASA Astrophysics Data System (ADS)

    El Bekri, Nadia; Angele, Susanne; Ruckhäberle, Martin; Peinsipp-Byma, Elisabeth; Haelke, Bruno

    2015-10-01

    This paper introduces an interactive recognition assistance system for imaging reconnaissance. This system supports aerial image analysts on missions during two main tasks: object recognition and infrastructure analysis. Object recognition concentrates on the classification of one single object. Infrastructure analysis deals with the description of the components of an infrastructure and the recognition of the infrastructure type (e.g. military airfield). Based on satellite or aerial images, aerial image analysts are able to extract single object features and thereby recognize different object types. It is one of the most challenging tasks in imaging reconnaissance. Currently, there are no high-potential ATR (automatic target recognition) applications available; as a consequence, the human observer cannot be replaced entirely. State-of-the-art ATR applications cannot match human perception and interpretation in equal measure. Why is this still such a critical issue? First, cluttered and noisy images make it difficult to automatically extract, classify and identify object types. Second, due to changed warfare and the rise of asymmetric threats it is nearly impossible to create an underlying data set containing all features, objects or infrastructure types. Many other factors, such as environmental parameters or aspect angles, further complicate the application of ATR. Due to the lack of suitable ATR procedures, the human factor is still important and so far irreplaceable. In order to use the potential benefits of human perception and computational methods in a synergistic way, both are unified in an interactive assistance system. RecceMan® (Reconnaissance Manual) offers two different modes for aerial image analysts on missions: the object recognition mode and the infrastructure analysis mode. The aim of the object recognition mode is to recognize a certain object type based on the object features that originated from the image signatures. The

  4. Advances in computational design and analysis of airbreathing propulsion systems

    NASA Technical Reports Server (NTRS)

    Klineberg, John M.

    1989-01-01

    The development of commercial and military aircraft depends, to a large extent, on engine manufacturers being able to achieve significant increases in propulsion capability through improved component aerodynamics, materials, and structures. The recent history of propulsion has been marked by efforts to develop computational techniques that can speed up the propulsion design process and produce superior designs. The availability of powerful supercomputers, such as the NASA Numerical Aerodynamic Simulator, and the potential for even higher performance offered by parallel computer architectures, have opened the door to the use of multi-dimensional simulations to study complex physical phenomena in propulsion systems that have previously defied analysis or experimental observation. An overview of several NASA Lewis research efforts is provided that are contributing toward the long-range goal of a numerical test-cell for the integrated, multidisciplinary design, analysis, and optimization of propulsion systems. Specific examples in Internal Computational Fluid Mechanics, Computational Structural Mechanics, Computational Materials Science, and High Performance Computing are cited and described in terms of current capabilities, technical challenges, and future research directions.

  5. OPENING REMARKS: SciDAC: Scientific Discovery through Advanced Computing

    NASA Astrophysics Data System (ADS)

    Strayer, Michael

    2005-01-01

    Good morning. Welcome to SciDAC 2005 and San Francisco. SciDAC is all about computational science and scientific discovery. In a large sense, computational science characterizes SciDAC and its intent is change. It transforms both our approach and our understanding of science. It opens new doors and crosses traditional boundaries while seeking discovery. In terms of twentieth century methodologies, computational science may be said to be transformational. There are a number of examples to this point. First are the sciences that encompass climate modeling. The application of computational science has in essence created the field of climate modeling. This community is now international in scope and has provided precision results that are challenging our understanding of our environment. A second example is that of lattice quantum chromodynamics. Lattice QCD, while adding precision and insight to our fundamental understanding of strong interaction dynamics, has transformed our approach to particle and nuclear science. The individual investigator approach has evolved to teams of scientists from different disciplines working side-by-side towards a common goal. SciDAC is also undergoing a transformation. This meeting is a prime example. Last year it was a small programmatic meeting tracking progress in SciDAC. This year, we have a major computational science meeting with a variety of disciplines and enabling technologies represented. SciDAC 2005 should position itself as a new cornerstone for Computational Science and its impact on science. As we look to the immediate future, FY2006 will bring a new cycle to SciDAC. Most of the program elements of SciDAC will be re-competed in FY2006. The re-competition will involve new instruments for computational science, new approaches for collaboration, as well as new disciplines. There will be new opportunities for virtual experiments in carbon sequestration, fusion, and nuclear power and nuclear waste, as well as collaborations

  6. The Impact of Advance Organizers upon Students' Achievement in Computer-Assisted Video Instruction.

    ERIC Educational Resources Information Center

    Saidi, Houshmand

    1994-01-01

    Describes a study of undergraduates that was conducted to determine the impact of advance organizers on students' achievement in computer-assisted video instruction (CAVI). Treatments of the experimental and control groups are explained, and results indicate that advance organizers do not facilitate near-transfer of rule-learning in CAVI.…

  7. Some recent advances in computational aerodynamics for helicopter applications

    NASA Technical Reports Server (NTRS)

    Mccroskey, W. J.; Baeder, J. D.

    1985-01-01

    The growing application of computational aerodynamics to nonlinear helicopter problems is outlined, with particular emphasis on several recent quasi-two-dimensional examples that used the thin-layer Navier-Stokes equations and an eddy-viscosity model to approximate turbulence. Rotor blade section characteristics can now be calculated accurately over a wide range of transonic flow conditions. However, a finite-difference simulation of the complete flow field about a helicopter in forward flight is not currently feasible, despite the impressive progress that is being made in both two and three dimensions. The principal limitations are today's computer speeds and memories, algorithm and solution methods, grid generation, vortex modeling, structural and aerodynamic coupling, and a shortage of engineers who are skilled in both computational fluid dynamics and helicopter aerodynamics and dynamics.

  8. A Computationally Based Approach to Homogenizing Advanced Alloys

    SciTech Connect

    Jablonski, P D; Cowen, C J

    2011-02-27

    We have developed a computationally based approach to optimizing the homogenization heat treatment of complex alloys. The Scheil module within the Thermo-Calc software is used to predict the as-cast segregation present within alloys, and DICTRA (Diffusion Controlled TRAnsformations) is used to model the homogenization kinetics as a function of time, temperature and microstructural scale. We will discuss this approach as it is applied to both Ni-based superalloys as well as the more complex (computationally) case of alloys that solidify with more than one matrix phase as a result of segregation. Such is the case typically observed in martensitic steels. With these alloys it is doubly important to homogenize them correctly, especially at the laboratory scale, since they are austenitic at high temperature and thus constituent elements will diffuse slowly. The computationally designed heat treatment and the subsequent verification on real castings are presented.
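
    As a rough illustration of the homogenization kinetics such tools predict (a sketch with assumed values, not the authors' Thermo-Calc/DICTRA workflow), the residual amplitude of a sinusoidal interdendritic segregation profile of wavelength equal to the dendrite arm spacing λ decays as exp(-4π²Dt/λ²), with the diffusivity D following an Arrhenius law:

        import math

        # Illustrative values only; a DICTRA-style calculation would take the
        # mobility data from assessed databases rather than these assumptions.
        D0 = 1.0e-4      # pre-exponential diffusivity, m^2/s (assumed)
        Q = 270.0e3      # activation energy, J/mol (assumed)
        R = 8.314        # gas constant, J/(mol K)
        lam = 100.0e-6   # secondary dendrite arm spacing, m (assumed)

        def residual_segregation(T_kelvin, t_seconds):
            """Fraction of the initial segregation amplitude remaining after
            homogenizing for t_seconds at temperature T_kelvin."""
            D = D0 * math.exp(-Q / (R * T_kelvin))
            return math.exp(-4.0 * math.pi**2 * D * t_seconds / lam**2)

        print(residual_segregation(1473.0, 24 * 3600.0))  # e.g. 24 h at 1200 C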

  9. Connecting Performance Analysis and Visualization to Advance Extreme Scale Computing

    SciTech Connect

    Bremer, Peer-Timo; Mohr, Bernd; Schulz, Martin; Pascucci, Valerio; Gamblin, Todd; Brunst, Holger

    2015-07-29

    The characterization, modeling, analysis, and tuning of software performance has been a central topic in High Performance Computing (HPC) since its early beginnings. The overall goal is to make HPC software run faster on particular hardware, either through better scheduling, on-node resource utilization, or more efficient distributed communication.

  10. Advanced Simulation and Computing Co-Design Strategy

    SciTech Connect

    Ang, James A.; Hoang, Thuc T.; Kelly, Suzanne M.; McPherson, Allen; Neely, Rob

    2015-11-01

    This ASC Co-design Strategy lays out the full continuum and components of the co-design process, based on what we have experienced thus far and what we wish to do more in the future to meet the program’s mission of providing high performance computing (HPC) and simulation capabilities for NNSA to carry out its stockpile stewardship responsibility.

  11. Advanced Computer Image Generation Techniques Exploiting Perceptual Characteristics. Final Report.

    ERIC Educational Resources Information Center

    Stenger, Anthony J.; And Others

    This study suggests and identifies computer image generation (CIG) algorithms for visual simulation that improve the training effectiveness of CIG simulators and identifies areas of basic research in visual perception that are significant for improving CIG technology. The first phase of the project entailed observing three existing CIG simulators.…

  12. MAX - An advanced parallel computer for space applications

    NASA Technical Reports Server (NTRS)

    Lewis, Blair F.; Bunker, Robert L.

    1991-01-01

    MAX is a fault-tolerant multicomputer hardware and software architecture designed to meet the needs of NASA spacecraft systems. It consists of conventional computing modules (computers) connected via a dual network topology. One network is used to transfer data among the computers and between computers and I/O devices. This network's topology is arbitrary. The second network operates as a broadcast medium for operating system synchronization messages and supports the operating system's Byzantine resilience. A fully distributed operating system supports multitasking in an asynchronous, event- and data-driven environment. A large-grain dataflow paradigm is used to coordinate the multitasking and provide easy control of concurrency. It is the basis of the system's fault tolerance and allows both static and dynamic location of tasks. Redundant execution of tasks with software voting of results may be specified for critical tasks. The dataflow paradigm also supports simplified software design, test, and maintenance. A unique feature is a method for reliably patching code in an executing dataflow application.
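
    The redundant-execution-with-voting idea can be sketched in a few lines (a toy illustration; MAX's actual Byzantine-resilient runtime distributes replicas across separate computers, which this single-process sketch only mimics):

        from collections import Counter

        def vote(results):
            """Majority-vote the outputs of redundantly executed task replicas;
            fail if no strict majority exists."""
            value, count = Counter(results).most_common(1)[0]
            if count <= len(results) // 2:
                raise RuntimeError("no majority among replicas")
            return value

        def run_redundant(task, args, replicas=3):
            # A real system would dispatch each replica to a different node.
            return vote([task(*args) for _ in range(replicas)])

        print(run_redundant(lambda x, y: x + y, (2, 3)))  # -> 5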

  13. The use of regional advance mitigation planning (RAMP) to integrate transportation infrastructure impacts with sustainability; a perspective from the USA

    NASA Astrophysics Data System (ADS)

    Thorne, James H.; Huber, Patrick R.; O'Donoghue, Elizabeth; Santos, Maria J.

    2014-05-01

    Globally, urban areas are expanding, and their regional, spatially cumulative, environmental impacts from transportation projects are not typically assessed. However, incorporation of a Regional Advance Mitigation Planning (RAMP) framework can promote more effective, ecologically sound, and less expensive environmental mitigation. As a demonstration of the first phase of the RAMP framework, we assessed environmental impacts from 181 planned transportation projects in the 19 368 km² San Francisco Bay Area. We found that 107 road and railroad projects will impact 2411-3490 ha of habitat supporting 30-43 threatened or endangered species. In addition, 1175 ha of impacts to agriculture and native vegetation are expected, as well as 125 crossings of waterways supporting anadromous fish species. The extent of these spatially cumulative impacts shows the need for a regional approach to associated environmental offsets. Many of the impacts comprised numerous small projects, where project-by-project mitigation would result in increased transaction costs, land costs, and lost project time. Ecological gains can be made if a regional approach is taken through the avoidance of small-sized reserves and the ability to target parcels for acquisition that fit within conservation planning designs. The methods are straightforward, and can be used in other metropolitan areas.

  14. The cloud services innovation platform- enabling service-based environmental modelling using infrastructure-as-a-service cloud computing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Service oriented architectures allow modelling engines to be hosted over the Internet abstracting physical hardware configuration and software deployments from model users. Many existing environmental models are deployed as desktop applications running on user's personal computers (PCs). Migration ...

  15. Teaching Advanced Concepts in Computer Networks: VNUML-UM Virtualization Tool

    ERIC Educational Resources Information Center

    Ruiz-Martinez, A.; Pereniguez-Garcia, F.; Marin-Lopez, R.; Ruiz-Martinez, P. M.; Skarmeta-Gomez, A. F.

    2013-01-01

    In the teaching of computer networks the main problem that arises is the high price and limited number of network devices the students can work with in the laboratories. Nowadays, with virtualization we can overcome this limitation. In this paper, we present a methodology that allows students to learn advanced computer network concepts through…

  16. UNEDF: Advanced Scientific Computing Transforms the Low-Energy Nuclear Many-Body Problem

    SciTech Connect

    Stoitsov, Mario; Nam, Hai Ah; Nazarewicz, Witold; Bulgac, Aurel; Hagen, Gaute; Kortelainen, E. M.; Pei, Junchen; Roche, K. J.; Schunck, N.; Thompson, I.; Vary, J. P.; Wild, S.

    2011-01-01

    The UNEDF SciDAC collaboration of nuclear theorists, applied mathematicians, and computer scientists is developing a comprehensive description of nuclei and their reactions that delivers maximum predictive power with quantified uncertainties. This paper illustrates significant milestones accomplished by UNEDF through integration of the theoretical approaches, advanced numerical algorithms, and leadership class computational resources.

  17. Project T.E.A.M. (Technical Education Advancement Modules). Introduction to Computers.

    ERIC Educational Resources Information Center

    Ellis, Brenda

    This instructional guide, one of a series developed by the Technical Education Advancement Modules (TEAM) project, is a 3-hour introduction to computers. The purpose is to develop the following competencies: (1) orientation to data processing; (2) use of data entry devices; (3) use of computer menus; and (4) entry of data with accuracy and…

  18. Advanced Computational Thermal Studies and their Assessment for Supercritical-Pressure Reactors (SCRs)

    SciTech Connect

    D. M. McEligot; J. Y. Yoo; J. S. Lee; S. T. Ro; E. Laurien; S. O. Park; R. H. Pletcher; B. L. Smith; P. Vukoslavcevic; J. M. Wallace

    2009-04-01

    The goal of this laboratory/university collaboration of coupled computational and experimental studies is the improvement of predictive methods for supercritical-pressure reactors. The general objective is to develop the supporting knowledge needed for advanced computational techniques for the technology development of the concepts and their safety systems.

  19. UNEDF: Advanced Scientific Computing Collaboration Transforms the Low-Energy Nuclear Many-Body Problem

    NASA Astrophysics Data System (ADS)

    Nam, H.; Stoitsov, M.; Nazarewicz, W.; Bulgac, A.; Hagen, G.; Kortelainen, M.; Maris, P.; Pei, J. C.; Roche, K. J.; Schunck, N.; Thompson, I.; Vary, J. P.; Wild, S. M.

    2012-12-01

    The demands of cutting-edge science are driving the need for larger and faster computing resources. With the rapidly growing scale of computing systems and the prospect of technologically disruptive architectures to meet these needs, scientists face the challenge of effectively using complex computational resources to advance scientific discovery. Multi-disciplinary collaborating networks of researchers with diverse scientific backgrounds are needed to address these complex challenges. The UNEDF SciDAC collaboration of nuclear theorists, applied mathematicians, and computer scientists is developing a comprehensive description of nuclei and their reactions that delivers maximum predictive power with quantified uncertainties. This paper describes UNEDF and identifies attributes that classify it as a successful computational collaboration. We illustrate significant milestones accomplished by UNEDF through integrative solutions using the most reliable theoretical approaches, most advanced algorithms, and leadership-class computational resources.

  20. UNEDF: Advanced Scientific Computing Collaboration Transforms the Low-Energy Nuclear Many-Body Problem

    SciTech Connect

    Nam, H.; Stoitsov, M.; Nazarewicz, W.; Bulgac, A.; Hagen, G.; Kortelainen, M.; Maris, P.; Pei, J. C.; Roche, K. J.; Schunck, N.; Thompson, I.; Vary, J. P.; Wild, S. M.

    2012-12-20

    The demands of cutting-edge science are driving the need for larger and faster computing resources. With the rapidly growing scale of computing systems and the prospect of technologically disruptive architectures to meet these needs, scientists face the challenge of effectively using complex computational resources to advance scientific discovery. Multi-disciplinary collaborating networks of researchers with diverse scientific backgrounds are needed to address these complex challenges. The UNEDF SciDAC collaboration of nuclear theorists, applied mathematicians, and computer scientists is developing a comprehensive description of nuclei and their reactions that delivers maximum predictive power with quantified uncertainties. This paper describes UNEDF and identifies attributes that classify it as a successful computational collaboration. Finally, we illustrate significant milestones accomplished by UNEDF through integrative solutions using the most reliable theoretical approaches, most advanced algorithms, and leadership-class computational resources.

  1. The ergonomics of computer aided design within advanced manufacturing technology.

    PubMed

    John, P A

    1988-03-01

    Many manufacturing companies have now awakened to the significance of computer aided design (CAD), although the majority of them have only been able to purchase computerised draughting systems of which only a subset produce direct manufacturing data. Such companies are moving steadily towards the concept of computer integrated manufacture (CIM), and this demands CAD to address more than draughting. CAD architects are thus having to rethink the basic specification of such systems, although they typically suffer from an insufficient understanding of the design task and have consequently been working with inadequate specifications. It is at this fundamental level that ergonomics has much to offer, making its contribution by encouraging user-centred design. The discussion considers the relationships between CAD and: the design task; the organisation and people; creativity; and artificial intelligence. It finishes with a summary of the contribution of ergonomics.

  2. Cogeneration computer model assessment: Advanced cogeneration research study

    NASA Technical Reports Server (NTRS)

    Rosenberg, L.

    1983-01-01

    Cogeneration computer simulation models were assessed to recommend the most desirable models or their components for use by the Southern California Edison Company (SCE) in evaluating potential cogeneration projects. Existing cogeneration modeling capabilities are described, preferred models are identified, and an approach to the development of a code which will best satisfy SCE requirements is recommended. Five models (CELCAP, COGEN 2, CPA, DEUS, and OASIS) are recommended for further consideration.

  3. National facility for advanced computational science: A sustainable path to scientific discovery

    SciTech Connect

    Simon, Horst; Kramer, William; Saphir, William; Shalf, John; Bailey, David; Oliker, Leonid; Banda, Michael; McCurdy, C. William; Hules, John; Canning, Andrew; Day, Marc; Colella, Philip; Serafini, David; Wehner, Michael; Nugent, Peter

    2004-04-02

    Lawrence Berkeley National Laboratory (Berkeley Lab) proposes to create a National Facility for Advanced Computational Science (NFACS) and to establish a new partnership between the American computer industry and a national consortium of laboratories, universities, and computing facilities. NFACS will provide leadership-class scientific computing capability to scientists and engineers nationwide, independent of their institutional affiliation or source of funding. This partnership will bring into existence a new class of computational capability in the United States that is optimal for science and will create a sustainable path towards petaflops performance.

  4. Computer vision-based technologies and commercial best practices for the advancement of the motion imagery tradecraft

    NASA Astrophysics Data System (ADS)

    Phipps, Marja; Capel, David; Srinivasan, James

    2014-06-01

    Motion imagery capabilities within the Department of Defense/Intelligence Community (DoD/IC) have advanced significantly over the last decade, attempting to meet continuously growing data collection, video processing and analytical demands in operationally challenging environments. The motion imagery tradecraft has evolved accordingly, enabling teams of analysts to effectively exploit data and generate intelligence reports across multiple phases in structured Full Motion Video (FMV) Processing Exploitation and Dissemination (PED) cells. Yet now the operational requirements are drastically changing. The exponential growth in motion imagery data continues, but to this the community adds multi-INT data, interoperability with existing and emerging systems, expanded data access, nontraditional users, collaboration, automation, and support for ad hoc configurations beyond the current FMV PED cells. To break from the legacy system lifecycle, we look towards a technology application and commercial adoption model that will meet these future Intelligence, Surveillance and Reconnaissance (ISR) challenges. In this paper, we explore the application of cutting-edge computer vision technology to meet existing FMV PED shortfalls and address future capability gaps. For example, real-time georegistration services developed from computer-vision-based feature tracking, multiple-view geometry, and statistical methods allow the fusion of motion imagery with other georeferenced information sources - providing unparalleled situational awareness. We then describe how these motion imagery capabilities may be readily deployed in a dynamically integrated analytical environment, employing an extensible framework, leveraging scalable enterprise-wide infrastructure and following commercial best practices.
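
    As a sketch of the kind of computer-vision building block behind such georegistration services (a standard ORB-plus-RANSAC homography recipe using OpenCV, assumed here purely for illustration; the production pipeline the paper describes is more elaborate and robust):

        import cv2
        import numpy as np

        def georegister(frame, reference):
            """Estimate the homography mapping a video frame onto a
            georeferenced reference image (grayscale uint8 arrays)."""
            orb = cv2.ORB_create(2000)
            k1, d1 = orb.detectAndCompute(frame, None)
            k2, d2 = orb.detectAndCompute(reference, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
            src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            return H  # None if estimation fails (e.g., too few matches)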

  5. Parallel computing in genomic research: advances and applications.

    PubMed

    Ocaña, Kary; de Oliveira, Daniel

    2015-01-01

    Today's genomic experiments have to process the so-called "biological big data" that is now reaching the size of Terabytes and Petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analyses of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article presents a systematic review of the literature, surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that can be considered by scientists when running their genomic experiments to benefit from parallelism techniques and HPC capabilities.
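
    A minimal illustration of the embarrassingly parallel pattern the review surveys (generic per-read processing with Python's multiprocessing; the function and data are placeholders, not taken from any pipeline in the article):

        from multiprocessing import Pool

        def gc_content(seq):
            """Per-read GC fraction -- a stand-in for any per-record analysis."""
            return (seq.count("G") + seq.count("C")) / max(len(seq), 1)

        if __name__ == "__main__":
            reads = ["ACGTGGCA", "TTTTACGA", "GGGCCCAT"] * 1000  # toy data
            with Pool(processes=4) as pool:                      # one worker per core
                gc = pool.map(gc_content, reads, chunksize=256)
            print(sum(gc) / len(gc))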

  6. Modeling emergency department operations using advanced computer simulation systems.

    PubMed

    Saunders, C E; Makens, P K; Leblanc, L J

    1989-02-01

    We developed a computer simulation model of emergency department operations using simulation software. This model uses multiple levels of preemptive patient priority; assigns each patient to an individual nurse and physician; incorporates all standard tests, procedures, and consultations; and allows patient service processes to proceed simultaneously, sequentially, repetitively, or a combination of these. Selected input data, including the number of physicians, nurses, and treatment beds, and the blood test turnaround time, then were varied systematically to determine their simulated effect on patient throughput time, selected queue sizes, and rates of resource utilization. Patient throughput time varied directly with laboratory service times and inversely with the number of physician or nurse servers. Resource utilization rates varied inversely with resource availability, and patient waiting time and patient throughput time varied indirectly with the level of patient acuity. The simulation can be animated on a computer monitor, showing simulated patients, specimens, and staff members moving throughout the ED. Computer simulation is a potentially useful tool that can help predict the results of changes in the ED system without actually altering it and may have implications for planning, optimizing resources, and improving the efficiency and quality of care.
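
    A minimal discrete-event sketch in the same spirit (using the SimPy library; the arrival rate, service time, and staffing level are assumptions for illustration, not parameters from the authors' model):

        import random
        import simpy

        def patient(env, doctors, throughput):
            arrival = env.now
            with doctors.request() as req:                     # queue for a free physician
                yield req
                yield env.timeout(random.expovariate(1/20.0))  # ~20 min service (assumed)
            throughput.append(env.now - arrival)

        def arrivals(env, doctors, throughput):
            while True:
                yield env.timeout(random.expovariate(1/10.0))  # ~1 arrival/10 min (assumed)
                env.process(patient(env, doctors, throughput))

        random.seed(1)
        env = simpy.Environment()
        doctors = simpy.Resource(env, capacity=2)   # vary to study staffing effects
        throughput = []
        env.process(arrivals(env, doctors, throughput))
        env.run(until=8 * 60)                       # one 8-hour shift, in minutes
        print(sum(throughput) / len(throughput))    # mean throughput time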

  7. Parallel computing in genomic research: advances and applications.

    PubMed

    Ocaña, Kary; de Oliveira, Daniel

    2015-01-01

    Today's genomic experiments have to process the so-called "biological big data" that is now reaching the size of Terabytes and Petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analyses of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article presents a systematic review of the literature, surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that can be considered by scientists when running their genomic experiments to benefit from parallelism techniques and HPC capabilities. PMID:26604801

  8. Parallel computing in genomic research: advances and applications

    PubMed Central

    Ocaña, Kary; de Oliveira, Daniel

    2015-01-01

    Today’s genomic experiments have to process the so-called “biological big data” that is now reaching the size of Terabytes and Petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analyses of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article presents a systematic review of the literature, surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that can be considered by scientists when running their genomic experiments to benefit from parallelism techniques and HPC capabilities. PMID:26604801

  9. Advanced Computational Methods for Security Constrained Financial Transmission Rights

    SciTech Connect

    Kalsi, Karanjit; Elbert, Stephen T.; Vlachopoulou, Maria; Zhou, Ning; Huang, Zhenyu

    2012-07-26

    Financial Transmission Rights (FTRs) are financial insurance tools to help power market participants reduce price risks associated with transmission congestion. FTRs are issued based on a process of solving a constrained optimization problem with the objective of maximizing the FTR social welfare under power flow security constraints. Security constraints for different FTR categories (monthly, seasonal or annual) are usually coupled and the number of constraints increases exponentially with the number of categories. Commercial software for FTR calculation can only provide limited categories of FTRs due to the inherent computational challenges mentioned above. In this paper, first an innovative mathematical reformulation of the FTR problem is presented which dramatically improves the computational efficiency of the optimization problem. After having re-formulated the problem, a novel non-linear dynamic system (NDS) approach is proposed to solve the optimization problem. The new formulation and performance of the NDS solver is benchmarked against widely used linear programming (LP) solvers like CPLEX™ and tested on both standard IEEE test systems and large-scale systems using data from the Western Electricity Coordinating Council (WECC). The performance of the NDS is demonstrated to be comparable to, and in some cases to outperform, the widely used CPLEX algorithms. The proposed formulation and NDS-based solver are also easily parallelizable, enabling further computational improvement.
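
    A toy version of the underlying allocation problem (a plain linear program with made-up numbers, solved with scipy; this is not the paper's reformulation or the NDS solver, just the textbook form: maximize bid-weighted awards subject to line flow limits):

        import numpy as np
        from scipy.optimize import linprog

        # Toy system: 3 FTR bids, 2 monitored lines (all numbers assumed).
        bids = np.array([30.0, 25.0, 18.0])      # bid prices, $/MW
        ptdf = np.array([[0.6, 0.3, 0.1],        # flow impact of each FTR MW
                         [0.2, 0.5, 0.4]])       # on each monitored line
        limits = np.array([100.0, 80.0])         # line flow limits, MW
        caps = [(0, 60), (0, 50), (0, 40)]       # MW requested per bid

        # linprog minimizes, so negate the social-welfare objective.
        res = linprog(c=-bids, A_ub=ptdf, b_ub=limits, bounds=caps, method="highs")
        print(res.x, -res.fun)                   # awarded MW per bid, total welfare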

  10. Vision 20/20: Automation and advanced computing in clinical radiation oncology

    SciTech Connect

    Moore, Kevin L. Moiseenko, Vitali; Kagadis, George C.; McNutt, Todd R.; Mutic, Sasa

    2014-01-15

    This Vision 20/20 paper considers what computational advances are likely to be implemented in clinical radiation oncology in the coming years and how the adoption of these changes might alter the practice of radiotherapy. Four main areas of likely advancement are explored: cloud computing, aggregate data analyses, parallel computation, and automation. As these developments promise both new opportunities and new risks to clinicians and patients alike, the potential benefits are weighed against the hazards associated with each advance, with special considerations regarding patient safety under new computational platforms and methodologies. While the concerns of patient safety are legitimate, the authors contend that progress toward next-generation clinical informatics systems will bring about extremely valuable developments in quality improvement initiatives, clinical efficiency, outcomes analyses, data sharing, and adaptive radiotherapy.

  11. An integrated computer system for preliminary design of advanced aircraft.

    NASA Technical Reports Server (NTRS)

    Fulton, R. E.; Sobieszczanski, J.; Landrum, E. J.

    1972-01-01

    A progress report is given on the first phase of a research project to develop a system of Integrated Programs for Aerospace-Vehicle Design (IPAD) which is intended to automate to the largest extent possible the preliminary and detailed design of advanced aircraft. The approach used is to build a pilot system and simultaneously to carry out two major contractual studies to define a practical IPAD system preparatory to programing. The paper summarizes the specifications and goals of the IPAD system, the progress to date, and any conclusion reached regarding its feasibility and scope. Sample calculations obtained with the pilot system are given for aircraft preliminary designs optimized with respect to discipline parameters, such as weight or L/D, and these results are compared with designs optimized with respect to overall performance parameters, such as range or payload.

  12. Inference on arthropod demographic parameters: computational advances using R.

    PubMed

    Maia, Aline De Holanda Nunes; Pazianotto, Ricardo Antonio De Almeida; Luiz, Alfredo José Barreto; Marinho-Prado, Jeanne Scardini; Pervez, Ahmad

    2014-02-01

    We developed a computer program for life table analysis using the open source, free software programming environment R. It is useful to quantify chronic nonlethal effects of treatments on arthropod populations by summarizing information on their survival and fertility in key population parameters referred to as fertility life table parameters. Statistical inference on fertility life table parameters is not trivial because it requires the use of computationally intensive methods for variance estimation. Our codes present some advantages with respect to a previous program developed in the Statistical Analysis System (SAS). Additional multiple comparison tests were incorporated for the analysis of qualitative factors; a module for regression analysis was implemented, thus allowing analysis of quantitative factors such as temperature or agrochemical doses; and availability is guaranteed for users, since it was developed in an open source, free software programming environment. To illustrate the descriptive and inferential analysis implemented in lifetable.R, we present and discuss two examples: 1) a study quantifying the influence of the proteinase inhibitor berenil on the eucalyptus defoliator Thyrinteina arnobia (Stoll) and 2) a study investigating the influence of temperature on demographic parameters of a predaceous ladybird, Hippodamia variegata (Goeze). PMID:24665730
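
    The point-estimate side of such an analysis is compact (a plain-Python sketch of the standard definitions with toy schedules; the article's R code adds the computationally intensive variance estimation and comparison tests, which are omitted here): net reproductive rate R0 = Σ lx·mx, generation time T = Σ x·lx·mx / R0, and intrinsic rate of increase r solving the Euler-Lotka equation Σ e^(−rx)·lx·mx = 1.

        import math

        # Toy schedules by age x (days): lx = survivorship, mx = daughters/female.
        x  = [10, 12, 14, 16, 18]                      # (all values assumed)
        lx = [1.0, 0.9, 0.8, 0.6, 0.3]
        mx = [0.0, 2.0, 4.0, 3.0, 1.0]

        R0 = sum(l * m for l, m in zip(lx, mx))                 # net reproductive rate
        T = sum(a * l * m for a, l, m in zip(x, lx, mx)) / R0   # mean generation time

        def euler_lotka(r):
            return sum(math.exp(-r * a) * l * m
                       for a, l, m in zip(x, lx, mx)) - 1.0

        lo, hi = 0.0, 2.0                # the sum decreases in r, so bisect
        for _ in range(60):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if euler_lotka(mid) > 0 else (lo, mid)
        print(R0, T, (lo + hi) / 2)      # R0, T, intrinsic rate r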

  13. Recent advances in computer camera methods for machine vision

    NASA Astrophysics Data System (ADS)

    Olson, Gaylord G.; Walker, Jo N.

    1998-10-01

    During the past year, several new computer camera methods (hardware and software) have been developed which have applications in machine vision. These are described below, along with some test results. The improvements are generally in the direction of higher speed and greater parallelism. A PCI interface card has been designed which is adaptable to multiple CCD types, both color and monochrome. A newly designed A/D converter allows for a choice of 8 or 10-bit conversion resolution and a choice of two different analog inputs. Thus, by using four of these converters feeding the 32-bit PCI data bus, up to 8 camera heads can be used with a single PCI card, and four camera heads can be operated in parallel. The card has been designed so that any of 8 different CCD types can be used with it (6 monochrome and 2 color CCDs) ranging in resolution from 192 by 165 pixels up to 1134 by 972 pixels. In the area of software, a method has been developed to better utilize the decision-making capability of the computer along with the sub-array scan capabilities of many CCDs. Specifically, it is shown below how to achieve a dual scan mode camera system wherein one scan mode is a low density, high speed scan of a complete image area, and a higher density sub-array scan is used in those areas where changes have been observed. The name given to this technique is adaptive sub-array scanning.
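
    The decision logic of the dual scan mode can be sketched simply (illustrative only; the real hardware commands the CCD's sub-array readout, which array slicing merely mimics here): compare successive fast full-frame scans, then rescan at high density only the tiles that changed.

        import numpy as np

        BLOCK = 32     # sub-array tile size in pixels (assumed)
        THRESH = 12.0  # mean-absolute-difference change threshold (assumed)

        def changed_blocks(prev_scan, cur_scan):
            """List the tiles whose content changed between two fast scans."""
            hits = []
            for i in range(0, cur_scan.shape[0], BLOCK):
                for j in range(0, cur_scan.shape[1], BLOCK):
                    a = prev_scan[i:i+BLOCK, j:j+BLOCK].astype(float)
                    b = cur_scan[i:i+BLOCK, j:j+BLOCK].astype(float)
                    if np.abs(a - b).mean() > THRESH:
                        hits.append((i, j))
            return hits

        prev = np.zeros((128, 128)); cur = prev.copy()
        cur[32:64, 64:96] += 50.0            # simulated scene change
        print(changed_blocks(prev, cur))     # -> [(32, 64)]: tile to rescan densely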

  14. ADVANCED COMPUTATIONAL MODEL FOR THREE-PHASE SLURRY REACTORS

    SciTech Connect

    Goodarz Ahmadi

    2001-10-01

    In the second year of the project, the Eulerian-Lagrangian formulation for analyzing three-phase slurry flows in a bubble column is further developed. The approach uses an Eulerian analysis of liquid flows in the bubble column, and makes use of the Lagrangian trajectory analysis for the bubbles and particle motions. An experimental setup for studying a two-dimensional bubble column is also developed. The operation of the bubble column is being tested and diagnostic methodology for quantitative measurements is being developed. An Eulerian computational model for the flow condition in the two-dimensional bubble column is also being developed. The liquid and bubble motions are being analyzed and the results are being compared with the experimental setup. Solid-fluid mixture flows in ducts and passages at different angles of orientation were analyzed. The model predictions were compared with the experimental data and good agreement was found. Gravity chute flows of solid-liquid mixtures are also being studied. Further progress was also made in developing a thermodynamically consistent model for multiphase slurry flows with and without chemical reaction in a state of turbulent motion. The balance laws are obtained and the constitutive laws are being developed. Progress was also made in measuring concentration and velocity of particles of different sizes near a wall in a duct flow. The technique of Phase-Doppler anemometry was used in these studies. The general objective of this project is to provide the needed fundamental understanding of three-phase slurry reactors in Fischer-Tropsch (F-T) liquid fuel synthesis. The other main goal is to develop a computational capability for predicting the transport and processing of three-phase coal slurries. The specific objectives are: (1) To develop a thermodynamically consistent rate-dependent anisotropic model for multiphase slurry flows with and without chemical reaction for application to coal liquefaction. Also establish the

  15. Development Model for Research Infrastructures

    NASA Astrophysics Data System (ADS)

    Wächter, Joachim; Hammitzsch, Martin; Kerschke, Dorit; Lauterjung, Jörn

    2015-04-01

    Research infrastructures (RIs) are platforms integrating facilities, resources and services used by the research communities to conduct research and foster innovation. RIs include scientific equipment, e.g., sensor platforms, satellites or other instruments, but also scientific data, sample repositories or archives. E-infrastructures on the other hand provide the technological substratum and middleware to interlink distributed RI components with computing systems and communication networks. The resulting platforms provide the foundation for the design and implementation of RIs and play an increasing role in the advancement and exploitation of knowledge and technology. RIs are regarded as essential to achieve and maintain excellence in research and innovation crucial for the European Research Area (ERA). The implementation of RIs has to be considered as a long-term, complex development process often over a period of 10 or more years. The ongoing construction of Spatial Data Infrastructures (SDIs) provides a good example for the general complexity of infrastructure development processes especially in system-of-systems environments. A set of directives issued by the European Commission provided a framework of guidelines for the implementation processes addressing the relevant content and the encoding of data as well as the standards for service interfaces and the integration of these services into networks. Additionally, a time schedule for the overall construction process has been specified. As a result this process advances with a strong participation of member states and responsible organisations. Today, SDIs provide the operational basis for new digital business processes in both national and local authorities. Currently, the development of integrated RIs in Earth and Environmental Sciences is characterised by the following properties: • A high number of parallel activities on European and national levels with numerous institutes and organisations participating

  16. Activities and operations of the Advanced Computing Research Facility, July-October 1986

    SciTech Connect

    Pieper, G.W.

    1986-01-01

    Research activities and operations of the Advanced Computing Research Facility (ACRF) at Argonne National Laboratory are discussed for the period from July 1986 through October 1986. The facility is currently supported by the Department of Energy, and is operated by the Mathematics and Computer Science Division at Argonne. Over the past four-month period, a new commercial multiprocessor, the Intel iPSC-VX/d4 hypercube, was installed. In addition, four other commercial multiprocessors continue to be available for research - an Encore Multimax, a Sequent Balance 21000, an Alliant FX/8, and an Intel iPSC/d5 - as well as a locally designed multiprocessor, the Lemur. These machines are being actively used by scientists at Argonne and throughout the nation in a wide variety of projects concerning computer systems with parallel and vector architectures. A variety of classes, workshops, and seminars have been sponsored to train researchers on computing techniques for the advanced computer systems at the facility. For example, courses were offered on writing programs for parallel computer systems, and the facility hosted the first annual Alliant users group meeting. A Sequent users group meeting and a two-day workshop on performance evaluation of parallel computers and programs are being organized.

  17. National Energy Research Scientific Computing Center (NERSC): Advancing the frontiers of computational science and technology

    SciTech Connect

    Hules, J.

    1996-11-01

    National Energy Research Scientific Computing Center (NERSC) provides researchers with high-performance computing tools to tackle science's biggest and most challenging problems. Founded in 1974 by DOE/ER, the Controlled Thermonuclear Research Computer Center was the first unclassified supercomputer center and was the model for those that followed. Over the years the center's name was changed to the National Magnetic Fusion Energy Computer Center and then to NERSC; it was relocated to LBNL. NERSC, one of the largest unclassified scientific computing resources in the world, is the principal provider of general-purpose computing services to DOE/ER programs: Magnetic Fusion Energy, High Energy and Nuclear Physics, Basic Energy Sciences, Health and Environmental Research, and the Office of Computational and Technology Research. NERSC users are a diverse community located throughout the US and in several foreign countries. This brochure describes: the NERSC advantage, its computational resources and services, future technologies, scientific resources, and computational science of scale (interdisciplinary research over a decade or longer; examples: combustion in engines, waste management chemistry, global climate change modeling).

  18. Multi-Hazard Sustainability: Towards Infrastructure Resilience

    NASA Astrophysics Data System (ADS)

    Lin, T.

    2015-12-01

    Natural and anthropogenic hazards pose significant challenges to civil infrastructure. This presents opportunities in investigating site-specific hazards in structural engineering to aid mitigation and adaptation efforts. This presentation will highlight: (a) recent advances in hazard-consistent ground motion selection methodology for nonlinear dynamic analyses, (b) ongoing efforts in validation of earthquake simulations and their effects on tall buildings, and (c) a pilot study on probabilistic sea-level rise hazard analysis incorporating aleatory and epistemic uncertainties. High performance computing and visualization further facilitate research and outreach to improve resilience under multiple hazards in the face of climate change.

  19. CART V: recent advancements in computer-aided camouflage assessment

    NASA Astrophysics Data System (ADS)

    Müller, Thomas; Müller, Markus

    2011-05-01

    In order to facilitate systematic, computer aided improvements of camouflage and concealment assessment methods, the software system CART (Camouflage Assessment in Real-Time) was developed for the camouflage assessment of objects in multispectral image sequences (see contributions to SPIE 2007-2010 [1], [2], [3], [4]). It comprises a semi-automatic marking of target objects (ground truth generation) including their propagation over the image sequence and the evaluation via user-defined feature extractors as well as methods to assess the object's movement conspicuity. In this fifth part of an annual series at the SPIE conference in Orlando, this paper presents the enhancements of the past year and addresses the camouflage assessment of static and moving objects in multispectral image data that can show noise or image artefacts. The presented methods explore the correlations between image processing and camouflage assessment. A novel algorithm is presented based on template matching to assess the structural inconspicuity of an object objectively and quantitatively. The results can easily be combined with an MTI (moving target indication) based movement conspicuity assessment function in order to explore the influence of object movement on a camouflage effect in different environments. As the results show, the presented methods provide a significant benefit in the field of camouflage assessment.
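
    A generic sketch of the template-matching ingredient (zero-normalized cross-correlation in plain numpy; how the resulting score feeds the CART assessment functions is not specified in this record, so the interpretation below is an assumption):

        import numpy as np

        def zncc(a, b):
            """Zero-normalized cross-correlation of two equal-size patches."""
            a = a.astype(float) - a.mean()
            b = b.astype(float) - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            return float((a * b).sum() / denom) if denom > 0 else 0.0

        def background_similarity(obj_chip, scene, stride=4):
            """Peak correlation of the object chip against background patches;
            a high peak suggests the object's structure blends in (an assumed
            reading of 'structural inconspicuity')."""
            th, tw = obj_chip.shape
            best = -1.0
            for i in range(0, scene.shape[0] - th + 1, stride):
                for j in range(0, scene.shape[1] - tw + 1, stride):
                    best = max(best, zncc(obj_chip, scene[i:i+th, j:j+tw]))
            return best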

  20. Infrastructure web: distributed monitoring and managing critical infrastructures

    NASA Astrophysics Data System (ADS)

    Jiang, Guofei; Cybenko, George; McGrath, Dennis

    2001-02-01

    National-scale critical infrastructure protection depends on many processes: intelligence gathering, analysis, interdiction, detection, response and recovery, to name a few. These processes are typically carried out by different individuals, agencies and industry sectors. Many new threats to national infrastructure are arising from the complex couplings that exist between advanced information technologies (telecommunications and internet), physical components (utilities), human services (health, law enforcement, emergency management) and commerce (financial services, logistics). Those threats arise and evolve at a rate governed by human intelligence and innovation, on 'internet time' so to speak. The processes for infrastructure protection must operate on the same time scale to be effective. To achieve this, a new approach to integrating, coordinating and managing infrastructure protection must be deployed. To this end, we have designed an underlying web-like architecture that will serve as a platform for the decentralized monitoring and management of national critical infrastructures.

  1. ADVANCED METHODS FOR THE COMPUTATION OF PARTICLE BEAM TRANSPORT AND THE COMPUTATION OF ELECTROMAGNETIC FIELDS AND MULTIPARTICLE PHENOMENA

    SciTech Connect

    Alex J. Dragt

    2012-08-31

    Since 1980, under grant DE-FG02-96ER40949, the Department of Energy has supported the educational and research work of the University of Maryland Dynamical Systems and Accelerator Theory (DSAT) Group. The primary focus of this educational/research group has been on the computation and analysis of charged-particle beam transport using Lie algebraic methods, and on advanced methods for the computation of electromagnetic fields and multiparticle phenomena. This Final Report summarizes the accomplishments of the DSAT Group from its inception in 1980 through its end in 2011.

  2. ADVANCED COMPUTATIONAL MODEL FOR THREE-PHASE SLURRY REACTORS

    SciTech Connect

    Goodarz Ahmadi

    2004-10-01

    In this project, an Eulerian-Lagrangian formulation for analyzing three-phase slurry flows in a bubble column was developed. The approach used an Eulerian analysis of liquid flows in the bubble column, and made use of the Lagrangian trajectory analysis for the bubbles and particle motions. The bubble-bubble and particle-particle collisions are included in the model. The model predictions are compared with the experimental data and good agreement was found. An experimental setup for studying two-dimensional bubble columns was developed. The multiphase flow conditions in the bubble column were measured using optical image processing and Particle Image Velocimetry techniques (PIV). A simple shear flow device for bubble motion in a constant shear flow field was also developed. The flow conditions in the simple shear flow device were studied using the PIV method. Concentration and velocity of particles of different sizes near a wall in a duct flow were also measured. The technique of Phase-Doppler anemometry was used in these studies. An Eulerian volume of fluid (VOF) computational model for the flow condition in the two-dimensional bubble column was also developed. The liquid and bubble motions were analyzed and the results were compared with observed flow patterns in the experimental setup. Solid-fluid mixture flows in ducts and passages at different angles of orientation were also analyzed. The model predictions were compared with the experimental data and good agreement was found. Gravity chute flows of solid-liquid mixtures were also studied. The simulation results were compared with the experimental data and discussed. A thermodynamically consistent model for multiphase slurry flows with and without chemical reaction in a state of turbulent motion was developed. The balance laws were obtained and the constitutive laws established.

  3. WAATS: A computer program for Weights Analysis of Advanced Transportation Systems

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.

    1974-01-01

    A historical weight estimating technique for advanced transportation systems is presented. The classical approach to weight estimation is discussed and sufficient data is presented to estimate weights for a large spectrum of flight vehicles including horizontal and vertical takeoff aircraft, boosters and reentry vehicles. A computer program, WAATS (Weights Analysis for Advanced Transportation Systems) embracing the techniques discussed has been written and user instructions are presented. The program was developed for use in the ODIN (Optimal Design Integration System) system.

  4. Advanced entry guidance algorithm with landing footprint computation

    NASA Astrophysics Data System (ADS)

    Leavitt, James Aaron

    -determined angle of attack profile. The method is also capable of producing orbital footprints using an automatically-generated set of angle of attack profiles of varying range, with the lowest profile designed for near-maximum range in the absence of an active heat load constraint. The accuracy of the footprint method is demonstrated by direct comparison with footprints computed independently by an optimization program.

  5. The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report

    SciTech Connect

    Diachin, L F; Garaizar, F X; Henson, V E; Pope, G

    2009-10-12

    In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand their role as a NEAMS user facility.

  6. A Cloud-Based Infrastructure for Feedback-Driven Training and Image Recognition.

    PubMed

    Abedini, Mani; von Cavallar, Stefan; Chakravorty, Rajib; Davis, Matthew; Garnavi, Rahil

    2015-01-01

    Advanced techniques in machine learning combined with scalable "cloud" computing infrastructure are driving the creation of new and innovative health diagnostic applications. We describe a service and application for performing image training and recognition, tailored to dermatology and melanoma identification. The system implements new machine learning approaches to provide a feedback-driven training loop. This training sequence enhances classification performance by incrementally retraining the classifier model from expert responses. To easily provide this application and associated web service to clinical practices, we also describe a scalable cloud infrastructure, deployable in public cloud infrastructure and private, on-premise systems.
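
    A minimal sketch of such a feedback-driven training loop (using scikit-learn's incremental-learning API on made-up feature vectors; the service's actual models, features, and labels are not described in this record):

        import numpy as np
        from sklearn.linear_model import SGDClassifier

        rng = np.random.default_rng(0)
        clf = SGDClassifier()                    # linear model trained incrementally
        classes = np.array([0, 1])               # toy labels, e.g. benign vs. lesion of interest

        # Initial training batch (stand-ins for extracted image features).
        X0, y0 = rng.normal(size=(200, 16)), rng.integers(0, 2, 200)
        clf.partial_fit(X0, y0, classes=classes)

        def feedback_round(X_new, y_expert):
            """One loop iteration: retrain incrementally from expert-corrected
            labels instead of rebuilding the classifier from scratch."""
            clf.partial_fit(X_new, y_expert)

        X1, y1 = rng.normal(size=(20, 16)), rng.integers(0, 2, 20)
        feedback_round(X1, y1)                   # expert responses fold back into the model
        print(clf.predict(X1[:5]))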

  7. Development Model for Research Infrastructures

    NASA Astrophysics Data System (ADS)

    Wächter, Joachim; Hammitzsch, Martin; Kerschke, Dorit; Lauterjung, Jörn

    2015-04-01

    Research infrastructures (RIs) are platforms integrating facilities, resources and services used by the research communities to conduct research and foster innovation. RIs include scientific equipment, e.g., sensor platforms, satellites or other instruments, but also scientific data, sample repositories or archives. E-infrastructures on the other hand provide the technological substratum and middleware to interlink distributed RI components with computing systems and communication networks. The resulting platforms provide the foundation for the design and implementation of RIs and play an increasing role in the advancement and exploitation of knowledge and technology. RIs are regarded as essential to achieve and maintain excellence in research and innovation crucial for the European Research Area (ERA). The implementation of RIs has to be considered as a long-term, complex development process often over a period of 10 or more years. The ongoing construction of Spatial Data Infrastructures (SDIs) provides a good example for the general complexity of infrastructure development processes especially in system-of-systems environments. A set of directives issued by the European Commission provided a framework of guidelines for the implementation processes addressing the relevant content and the encoding of data as well as the standards for service interfaces and the integration of these services into networks. Additionally, a time schedule for the overall construction process has been specified. As a result this process advances with a strong participation of member states and responsible organisations. Today, SDIs provide the operational basis for new digital business processes in both national and local authorities. Currently, the development of integrated RIs in Earth and Environmental Sciences is characterised by the following properties: • A high number of parallel activities on European and national levels with numerous institutes and organisations participating

  8. Projected role of advanced computational aerodynamic methods at the Lockheed-Georgia company

    NASA Technical Reports Server (NTRS)

    Lores, M. E.

    1978-01-01

    Experience with advanced computational methods being used at the Lockheed-Georgia Company to aid in the evaluation and design of new and modified aircraft indicates that large and specialized computers will be needed to make advanced three-dimensional viscous aerodynamic computations practical. The Numerical Aerodynamic Simulation Facility should be used to provide a tool for designing better aerospace vehicles while at the same time reducing development costs by performing computations using Navier-Stokes equations solution algorithms and permitting less sophisticated but nevertheless complex calculations to be made efficiently. Configuration definition procedures and data output formats can probably best be defined in cooperation with industry; therefore, the computer should handle many remote terminals efficiently. The capability of transferring data to and from other computers needs to be provided. Because of the significant amount of input and output associated with 3-D viscous flow calculations and because of the exceedingly fast computation speed envisioned for the computer, special attention should be paid to providing rapid, diversified, and efficient input and output.

  9. Research in Computational Aeroscience Applications Implemented on Advanced Parallel Computing Systems

    NASA Technical Reports Server (NTRS)

    Wigton, Larry

    1996-01-01

    Improving the numerical linear algebra routines for use in new Navier-Stokes codes, specifically Tim Barth's unstructured grid code, with spin-offs to TRANAIR, is reported. A fast distance calculation routine for Navier-Stokes codes using the new one-equation turbulence models was written. The primary focus of this work was devoted to improving matrix-iterative methods. New algorithms have been developed which exploit the full potential of classical Cray-class computers as well as distributed-memory parallel computers.
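
    As a flavor of the matrix-iterative methods this work focused on, the sketch below solves a sparse test system with a preconditioned Krylov (GMRES) iteration. The matrix, preconditioner, and problem size are illustrative stand-ins, not the routines actually developed for these codes.

    ```python
    # Illustrative only: a preconditioned GMRES solve of a sparse system,
    # the general class of matrix-iterative problem discussed above.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 1000
    # Tridiagonal Poisson-like test matrix, a stand-in for a CFD Jacobian
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    # Incomplete-LU preconditioner, a common choice for implicit flow solvers
    ilu = spla.spilu(A.tocsc())
    M = spla.LinearOperator((n, n), matvec=ilu.solve)

    x, info = spla.gmres(A, b, M=M)
    print("info:", info, "residual:", np.linalg.norm(b - A @ x))
    ```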

  10. Impact of computer advances on future finite elements computations. [for aircraft and spacecraft design

    NASA Technical Reports Server (NTRS)

    Fulton, Robert E.

    1985-01-01

    Research performed over the past 10 years in engineering data base management and parallel computing is discussed, and certain opportunities for research toward the next generation of structural analysis capability are proposed. Particular attention is given to data base management associated with the IPAD project and to parallel processing associated with the Finite Element Machine project, both sponsored by NASA, as well as to a near-term strategy for a future structural analysis system: a distributed structural analysis capability based on relational data base management software and parallel computers.

  11. Uncertainty Analyses of Advanced Fuel Cycles

    SciTech Connect

    Laurence F. Miller; J. Preston; G. Sweder; T. Anderson; S. Janson; M. Humberstone; J. MConn; J. Clark

    2008-12-12

    The Department of Energy is developing technology, experimental protocols, computational methods, systems analysis software, and many other capabilities to advance the nuclear power infrastructure through the Advanced Fuel Cycle Initiative (AFCI). Our project is intended to facilitate well-informed decision making in the selection of fuel cycle options and facilities for development.

  12. 5 CFR 550.404 - Computation of advance payments and evacuation payments; time periods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... evacuation payments; time periods. 550.404 Section 550.404 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION (GENERAL) Payments During Evacuation § 550.404 Computation of advance payments and evacuation payments; time periods. (a) Payments shall be based on the...

  13. 5 CFR 550.404 - Computation of advance payments and evacuation payments; time periods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... evacuation payments; time periods. 550.404 Section 550.404 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION (GENERAL) Payments During Evacuation § 550.404 Computation of advance payments and evacuation payments; time periods. (a) Payments shall be based on the...

  14. Computers-for-edu: An Advanced Business Application Programming (ABAP) Teaching Case

    ERIC Educational Resources Information Center

    Boyle, Todd A.

    2007-01-01

    The "Computers-for-edu" case is designed to provide students with hands-on exposure to creating Advanced Business Application Programming (ABAP) reports and dialogue programs, as well as navigating various mySAP Enterprise Resource Planning (ERP) transactions needed by ABAP developers. The case requires students to apply a wide variety of ABAP…

  15. COMPUTATIONAL TOXICOLOGY ADVANCES: EMERGING CAPABILITIES FOR DATA EXPLORATION AND SAR MODEL DEVELOPMENT

    EPA Science Inventory

    Computational Toxicology Advances: Emerging capabilities for data exploration and SAR model development
    Ann M. Richard and ClarLynda R. Williams, National Health & Environmental Effects Research Laboratory, US EPA, Research Triangle Park, NC, USA; email: richard.ann@epa.gov

  16. Advanced Telecommunications and Computer Technologies in Georgia Public Elementary School Library Media Centers.

    ERIC Educational Resources Information Center

    Rogers, Jackie L.

    The purpose of this study was to determine what recent progress had been made in Georgia public elementary school library media centers regarding access to advanced telecommunications and computer technologies as a result of special funding. A questionnaire addressed the following areas: automation and networking of the school library media center…

  17. PARTNERING WITH DOE TO APPLY ADVANCED BIOLOGICAL, ENVIRONMENTAL, AND COMPUTATIONAL SCIENCE TO ENVIRONMENTAL ISSUES

    EPA Science Inventory

    On February 18, 2004, the U.S. Environmental Protection Agency and Department of Energy signed a Memorandum of Understanding to expand the research collaboration of both agencies to advance biological, environmental, and computational sciences for protecting human health and the ...

  18. Scientific Discovery through Advanced Computing (SciDAC-3) Partnership Project Annual Report

    SciTech Connect

    Hoffman, Forest M.; Bochev, Pavel B.; Cameron-Smith, Philip J..; Easter, Richard C; Elliott, Scott M.; Ghan, Steven J.; Liu, Xiaohong; Lowrie, Robert B.; Lucas, Donald D.; Ma, Po-lun; Sacks, William J.; Shrivastava, Manish; Singh, Balwinder; Tautges, Timothy J.; Taylor, Mark A.; Vertenstein, Mariana; Worley, Patrick H.

    2014-01-15

    The Applying Computationally Efficient Schemes for BioGeochemical Cycles (ACES4BGC) project is advancing the predictive capabilities of Earth System Models (ESMs) by reducing two of the largest sources of uncertainty, aerosols and biospheric feedbacks, with a highly efficient computational approach. In particular, this project is implementing and optimizing new computationally efficient tracer advection algorithms for large numbers of tracer species; adding important biogeochemical interactions between the atmosphere, land, and ocean models; and applying uncertainty quantification (UQ) techniques to constrain process parameters and evaluate uncertainties in feedbacks between biogeochemical cycles and the climate system.
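
    To make the tracer-advection workload concrete, here is a minimal first-order upwind step for a 1D periodic domain, vectorized over many tracer species at once; it is a generic textbook scheme, not the algorithm ACES4BGC implemented.

    ```python
    # Minimal 1D upwind tracer advection, vectorized over many species.
    # Illustrative only; not the ACES4BGC advection scheme.
    import numpy as np

    def upwind_step(q, u, dx, dt):
        """Advance tracers q (n_tracers, n_cells) one step; u > 0, periodic domain."""
        c = u * dt / dx                       # Courant number; needs c <= 1 for stability
        assert c <= 1.0
        return q - c * (q - np.roll(q, 1, axis=1))

    n_tracers, n_cells = 100, 256
    x = np.linspace(0.0, 1.0, n_cells, endpoint=False)
    q = np.exp(-200.0 * (x - 0.3) ** 2) * np.ones((n_tracers, 1))  # same pulse per tracer

    for _ in range(200):
        q = upwind_step(q, u=1.0, dx=1.0 / n_cells, dt=0.5 / n_cells)
    print("peak after advection:", q.max())
    ```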

  19. A first attempt to bring computational biology into advanced high school biology classrooms.

    PubMed

    Gallagher, Suzanne Renick; Coon, William; Donley, Kristin; Scott, Abby; Goldberg, Debra S

    2011-10-01

    Computer science has become ubiquitous in many areas of biological research, yet most high school and even college students are unaware of this. As a result, many college biology majors graduate without adequate computational skills for contemporary fields of biology. The absence of a computational element in secondary school biology classrooms is of growing concern to the computational biology community and to biology teachers who would like to acquaint their students with updated approaches in the discipline. We present a first attempt to correct this absence by introducing a computational biology element, teaching genetic evolution, into advanced biology classes in two local high schools. Our primary goal was to show students how computation is used in biology and why a basic understanding of computation is necessary for research in many fields of biology. This curriculum is intended to be taught by a computational biologist who has worked with a high school advanced biology teacher to adapt the unit for his/her classroom, but a motivated high school teacher comfortable with mathematics and computing may be able to teach this alone. In this paper, we present our curriculum, which takes into consideration the constraints of the required curriculum, and discuss our experiences teaching it. We describe the successes and challenges we encountered while bringing this unit to high school students, discuss how we addressed these challenges, and make suggestions for future versions of this curriculum. We believe that our curriculum can be a valuable seed for further development of computational activities aimed at high school biology students. Further, our experiences may be of value to others teaching computational biology at this level. Our curriculum can be obtained at http://ecsite.cs.colorado.edu/?page_id=149#biology or by contacting the authors.
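
    For a sense of the kind of exercise such a unit can build on (the authors' actual materials are at the URL above), a minimal Wright-Fisher simulation of genetic drift might look like this; the population size and starting frequency are arbitrary.

    ```python
    # A Wright-Fisher genetic-drift simulation, the flavor of exercise a
    # computational evolution unit might use. Parameters are arbitrary.
    import random

    def wright_fisher(pop_size, p0, generations):
        """Track the frequency of one allele under pure drift."""
        p = p0
        history = [p]
        for _ in range(generations):
            # Each of the pop_size gene copies in the next generation is
            # drawn from the current allele frequency.
            copies = sum(1 for _ in range(pop_size) if random.random() < p)
            p = copies / pop_size
            history.append(p)
        return history

    freqs = wright_fisher(pop_size=100, p0=0.5, generations=200)
    print("final allele frequency:", freqs[-1])
    ```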

  20. U.S. Department of Energy Vehicle Technologies Program -- Advanced Vehicle Testing Activity -- Plug-in Hybrid Electric Vehicle Charging Infrastructure Review

    SciTech Connect

    Kevin Morrow; Donald Darner; James Francfort

    2008-11-01

    Plug-in hybrid electric vehicles (PHEVs) are under evaluation by various stakeholders to better understand their capability and potential benefits. PHEVs could allow users to significantly improve fuel economy over a standard HEV and, in some cases, depending on daily driving requirements and vehicle design, to eliminate fuel consumption entirely for daily vehicle trips. The cost associated with providing charging infrastructure for PHEVs, along with the additional costs for the on-board power electronics and added battery requirements associated with PHEV technology, will be a key factor in the success of PHEVs. This report analyzes the infrastructure requirements for PHEVs in single-family residential, multi-family residential and commercial situations. Costs associated with this infrastructure are tabulated, providing an estimate of the infrastructure costs associated with PHEV deployment.
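
    A back-of-envelope version of the kind of amortized infrastructure cost estimate the report tabulates might look like the sketch below; every number in it is an invented placeholder, not a figure from the report.

    ```python
    # Back-of-envelope amortized charging-infrastructure cost. All numbers
    # are illustrative assumptions, not values from the report.
    hardware_cost = 1500.0     # assumed residential EVSE + installation, USD
    lifetime_years = 10
    daily_charge_kwh = 8.0     # assumed PHEV battery top-up per day

    annualized_cost = hardware_cost / lifetime_years
    kwh_per_year = daily_charge_kwh * 365
    print(f"infrastructure adder: ${annualized_cost / kwh_per_year:.3f} per kWh")
    ```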

  1. Building an advanced climate model: Program plan for the CHAMMP (Computer Hardware, Advanced Mathematics, and Model Physics) Climate Modeling Program

    SciTech Connect

    Not Available

    1990-12-01

    The issue of global warming and related climatic changes from increasing concentrations of greenhouse gases in the atmosphere has received prominent attention during the past few years. The Computer Hardware, Advanced Mathematics, and Model Physics (CHAMMP) Climate Modeling Program is designed to contribute directly to the rapid improvement of climate models. The goal of the CHAMMP Climate Modeling Program is to develop, verify, and apply a new generation of climate models within a coordinated framework that incorporates the best available scientific and numerical approaches to represent physical, biogeochemical, and ecological processes; that fully utilizes the hardware and software capabilities of new computer architectures; that probes the limits of climate predictability; and finally that can be used to address the challenging problem of understanding the greenhouse climate issue through the ability of the models to simulate time-dependent climatic changes over extended times and with regional resolution.

  2. Making green infrastructure healthier infrastructure.

    PubMed

    Lõhmus, Mare; Balbus, John

    2015-01-01

    Increasing urban green and blue structure is often pointed out to be critical for sustainable development and climate change adaptation, which has led to the rapid expansion of greening activities in cities throughout the world. This process is likely to have a direct impact on the citizens' quality of life and public health. However, alongside numerous benefits, green and blue infrastructure also has the potential to create unexpected, undesirable side-effects for health. This paper considers several potential harmful public health effects that might result from increased urban biodiversity, urban bodies of water, and urban tree cover projects. It does so with the intent of improving awareness and motivating preventive measures when designing and initiating such projects. Although biodiversity has been found to be associated with physiological benefits for humans in several studies, efforts to increase the biodiversity of urban environments may also promote the introduction and survival of vector or host organisms for infectious pathogens, with resulting spread of a variety of diseases. In addition, more green connectivity in urban areas may potentiate the role of rats and ticks in the spread of infectious diseases. Bodies of water and wetlands play a crucial role in the urban climate adaptation and mitigation process. However, they also provide habitats for mosquitoes and toxic algal blooms. Finally, increasing urban green space may also adversely affect citizens allergic to pollen. Increased awareness of the potential hazards of urban green and blue infrastructure should not be a reason to stop or scale back projects. Instead, incorporating public health awareness and interventions into urban planning at the earliest stages can help ensure that green and blue infrastructure achieves its full potential for health promotion. PMID:26615823

  3. Making green infrastructure healthier infrastructure

    PubMed Central

    Lõhmus, Mare; Balbus, John

    2015-01-01

    Increasing urban green and blue structure is often pointed out to be critical for sustainable development and climate change adaptation, which has led to the rapid expansion of greening activities in cities throughout the world. This process is likely to have a direct impact on the citizens’ quality of life and public health. However, alongside numerous benefits, green and blue infrastructure also has the potential to create unexpected, undesirable side-effects for health. This paper considers several potential harmful public health effects that might result from increased urban biodiversity, urban bodies of water, and urban tree cover projects. It does so with the intent of improving awareness and motivating preventive measures when designing and initiating such projects. Although biodiversity has been found to be associated with physiological benefits for humans in several studies, efforts to increase the biodiversity of urban environments may also promote the introduction and survival of vector or host organisms for infectious pathogens, with resulting spread of a variety of diseases. In addition, more green connectivity in urban areas may potentiate the role of rats and ticks in the spread of infectious diseases. Bodies of water and wetlands play a crucial role in the urban climate adaptation and mitigation process. However, they also provide habitats for mosquitoes and toxic algal blooms. Finally, increasing urban green space may also adversely affect citizens allergic to pollen. Increased awareness of the potential hazards of urban green and blue infrastructure should not be a reason to stop or scale back projects. Instead, incorporating public health awareness and interventions into urban planning at the earliest stages can help ensure that green and blue infrastructure achieves its full potential for health promotion. PMID:26615823

  5. FY05-FY06 Advanced Simulation and Computing Implementation Plan, Volume 2

    SciTech Connect

    Baron, A L

    2004-07-19

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program will require the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapon design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile life extension programs and the resolution of significant finding investigations (SFIs). This requires a balanced system of technical staff, hardware, simulation software, and computer science solutions.

  6. Modeling, Simulation and Analysis of Public Key Infrastructure

    NASA Technical Reports Server (NTRS)

    Liu, Yuan-Kwei; Tuey, Richard; Ma, Paul (Technical Monitor)

    1998-01-01

    Security is an essential part of network communication. Advances in cryptography have provided solutions to many network security requirements. Public Key Infrastructure (PKI) is the foundation of cryptography applications. The main objective of this research is to design a model to simulate a reliable, scalable, manageable, and high-performance public key infrastructure. We built a model to simulate the NASA public key infrastructure using SimProcess and MATLAB software. The simulation spans from the top level all the way down to the computations needed for encryption, decryption, digital signatures, and a secure web server. The secure web server application could be utilized in wireless communications. The results of the simulation are analyzed and confirmed using queueing theory.
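
    As an illustration of the queueing-theory check mentioned here, the sketch below evaluates a secure web server as an M/M/1 queue; the arrival and service rates are assumed values, not parameters from the NASA model.

    ```python
    # Minimal M/M/1 queueing check of the kind used to confirm such a
    # simulation: a secure web server performing signature verifications.
    # Arrival and service rates are assumed for illustration.
    lam = 40.0   # requests/second arriving (assumed)
    mu = 50.0    # signature verifications/second completed (assumed)

    rho = lam / mu                 # server utilization; must be < 1 for stability
    W = 1.0 / (mu - lam)           # mean time in system (queueing + service)
    Lq = rho**2 / (1.0 - rho)      # mean number of requests waiting in queue

    print(f"utilization={rho:.0%}, mean response={W*1000:.1f} ms, queue length={Lq:.2f}")
    ```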

  7. AFDX Based Data Infrastructures in Space

    NASA Astrophysics Data System (ADS)

    Vogel, Torsten

    2010-08-01

    An assessment of the requirements for the avionics infrastructure of future spacecraft like the Advanced Re-Entry Vehicle (ARV) has identified a distinct lack of performance in traditional MIL-STD-1553B installations and thus initiated a quest for an equally reliable but much more powerful successor. Ever-increasing processing performance available for onboard computers opens opportunities for advanced applications like real-time image processing for close-range navigation. In a distributed data handling system these functions rely heavily on an equally advanced and powerful network architecture. This paper identifies the requirements for modern avionics data buses and evaluates the suitability of possible candidates. Subsequently, a dedicated focus is given to an Avionics Full DupleX Switched Ethernet (AFDX) infrastructure as the backbone for future spacecraft control applications. A detailed section focuses on the central aspects that make AFDX an interesting choice for space network installations, covering specifically the enhancements of the underlying IEEE 802.3 Ethernet technology.
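
    A key AFDX mechanism behind that choice is per-virtual-link bandwidth policing via a Bandwidth Allocation Gap (BAG) and a maximum frame size; the sketch below budgets an invented table of virtual links against a 100 Mbit/s link.

    ```python
    # Sketch of AFDX virtual-link bandwidth budgeting: each VL is bounded by
    # a BAG (ms, a power of two from 1 to 128) and a max frame size (bytes),
    # so its worst-case rate is s_max * 8 / BAG. The VL table is invented.
    virtual_links = [
        # (name, bag_ms, s_max_bytes) -- illustrative values only
        ("gnc-commands",   2,  200),
        ("camera-stream",  8, 1518),
        ("housekeeping", 128,  300),
    ]

    link_capacity_bps = 100e6  # 100 Mbit/s AFDX link

    total = 0.0
    for name, bag_ms, s_max in virtual_links:
        rate = s_max * 8 / (bag_ms / 1000.0)   # worst-case bits/second for this VL
        total += rate
        print(f"{name}: {rate / 1e6:.3f} Mbit/s reserved")

    print(f"total reservation: {total / 1e6:.2f} Mbit/s "
          f"({total / link_capacity_bps:.1%} of link)")
    ```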

  8. Advances in computer-aided design and computer-aided manufacture technology.

    PubMed

    Calamia, J R

    1996-01-01

    Although the development of computer-aided design (CAD) and computer-aided manufacture (CAM) technology and the benefits of increased productivity became obvious in the automobile and aerospace industries in the 1970s, investigations of this technology's application in the field of dentistry did not begin until the 1980s. Only now are we beginning to see the fruits of this work with the commercial availability of some systems; the potential for this technology seems boundless. This article reviews the recent literature with emphasis on the period from June 1992 to May 1993. This review should familiarize the reader with some of the latest developments in this technology, including a brief description of some systems currently available and the clinical and economic rationale for their acceptance into the dental mainstream. This article concentrates on a particular system, the Cerec (Siemens/Pelton and Crane, Charlotte, NC) system, for three reasons: first, this system has been available since 1985 and, as a result, has a track record of almost 7 years of data. Most of the data have just recently been released and consequently, much of this year's literature on CAD-CAM is monopolized by studies using this system. Second, this system was developed as a mobile, affordable, direct chairside CAD-CAM restorative method. As such, it is of special interest to the patient, providing a one-visit restoration. Third, the author is currently engaged in research using this particular system and has a working knowledge of this system's capabilities.

  9. Advances in computer-aided design and computer-aided manufacture technology.

    PubMed

    Calamia, J R

    1994-01-01

    Although the development of computer-aided design (CAD) and computer-aided manufacture (CAM) technology and the benefits of increased productivity became obvious in the automobile and aerospace industries in the 1970s, investigations of this technology's application in the field of dentistry did not begin until the 1980s. Only now are we beginning to see the fruits of this work with the commercial availability of some systems; the potential for this technology seems boundless. This article reviews the recent literature with emphasis on the period from June 1992 to May 1993. This review should familiarize the reader with some of the latest developments in this technology, including a brief description of some systems currently available and the clinical and economic rationale for their acceptance into the dental mainstream. This article concentrates on a particular system, the Cerec (Siemens/Pelton and Crane, Charlotte, NC) system, for three reasons: first, this system has been available since 1985 and, as a result, has a track record of almost 7 years of data. Most of the data have just recently been released and consequently, much of this year's literature on CAD-CAM is monopolized by studies using this system. Second, this system was developed as a mobile, affordable, direct chairside CAD-CAM restorative method. As such, it is of special interest to the dentist who will offer this new technology directly to the patient, providing a one-visit restoration. Third, the author is currently engaged in research using this particular system and has a working knowledge of this system's capabilities.

  10. Computation of Loads on the McDonnell Douglas Advanced Bearingless Rotor

    NASA Technical Reports Server (NTRS)

    Nguyen, Khanh; Lauzon, Dan; Anand, Vaidyanathan

    1994-01-01

    Computed results from UMARC and DART analyses are compared with the blade bending moments and vibratory hub loads data obtained from a full-scale wind tunnel test of the McDonnell Douglas five-bladed advanced bearingless rotor. The 5-per-rev vibratory hub loads data are corrected using results from a dynamic calibration of the rotor balance. Agreement between UMARC-computed blade bending moments and measurements at different flight conditions is poor to fair, while DART results are fair to good. Using its free wake module, UMARC adequately computes the 5P vibratory hub loads for this rotor, capturing both their magnitude and their variation with forward speed. DART employs a uniform-inflow wake model and does not adequately compute the 5P vibratory hub loads for this rotor.

  11. Advanced Simulation & Computing FY15 Implementation Plan Volume 2, Rev. 0.5

    SciTech Connect

    McCoy, Michel; Archer, Bill; Matzen, M. Keith

    2014-09-16

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. As the program approaches the end of its second decade, ASC is intently focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), quantify critical margins and uncertainties, and resolve increasingly difficult analyses needed for the SSP. Where possible, the program also enables the use of high-performance simulation and computing tools to address broader national security needs, such as foreign nuclear weapon assessments and counternuclear terrorism.

  12. PREFACE: 16th International workshop on Advanced Computing and Analysis Techniques in physics research (ACAT2014)

    NASA Astrophysics Data System (ADS)

    Fiala, L.; Lokajicek, M.; Tumova, N.

    2015-05-01

    This volume of the IOP Conference Series is dedicated to scientific contributions presented at the 16th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2014); this year's motto was "bridging disciplines". The conference took place on September 1-5, 2014, at the Faculty of Civil Engineering, Czech Technical University in Prague, Czech Republic. The 16th edition of ACAT explored the boundaries of computing system architectures, data analysis algorithmics, automatic calculations, and theoretical calculation technologies. It provided a forum for confronting and exchanging ideas among these fields, where new approaches in computing technologies for scientific research were explored and promoted. This year's edition of the workshop brought together over 140 participants from all over the world. The workshop's 16 invited speakers presented key topics on advanced computing and analysis techniques in physics. During the workshop, 60 talks and 40 posters were presented in three tracks: Computing Technology for Physics Research, Data Analysis - Algorithms and Tools, and Computations in Theoretical Physics: Techniques and Methods. The round table enabled discussions on expanding software, knowledge sharing and scientific collaboration in the respective areas. ACAT 2014 was generously sponsored by Western Digital, Brookhaven National Laboratory, Hewlett Packard, DataDirect Networks, M Computers, Bright Computing, Huawei and PDV-Systemhaus. Special appreciation goes to the track liaisons Lorenzo Moneta, Axel Naumann and Grigory Rubtsov for their work on the scientific program and the publication preparation. ACAT's IACC would also like to express its gratitude to all referees for their work on making sure the contributions are published in the proceedings. Our thanks extend to the conference liaisons Andrei Kataev and Jerome Lauret who worked with the local contacts and made this conference possible as well as to the program

  13. Recovery Act: Advanced Interaction, Computation, and Visualization Tools for Sustainable Building Design

    SciTech Connect

    Greenberg, Donald P.; Hencey, Brandon M.

    2013-08-20

    Current building energy simulation technology requires excessive labor, time and expertise to create building energy models, excessive computational time for accurate simulations and difficulties with the interpretation of the results. These deficiencies can be ameliorated using modern graphical user interfaces and algorithms which take advantage of modern computer architectures and display capabilities. To prove this hypothesis, we developed an experimental test bed for building energy simulation. This novel test bed environment offers an easy-to-use interactive graphical interface, provides access to innovative simulation modules that run at accelerated computational speeds, and presents new graphics visualization methods to interpret simulation results. Our system offers the promise of dramatic ease of use in comparison with currently available building energy simulation tools. Its modular structure makes it suitable for early stage building design, as a research platform for the investigation of new simulation methods, and as a tool for teaching concepts of sustainable design. Improvements in the accuracy and execution speed of many of the simulation modules are based on the modification of advanced computer graphics rendering algorithms. Significant performance improvements are demonstrated in several computationally expensive energy simulation modules. The incorporation of these modern graphical techniques should advance the state of the art in the domain of whole building energy analysis and building performance simulation, particularly at the conceptual design stage when decisions have the greatest impact. More importantly, these better simulation tools will enable the transition from prescriptive to performative energy codes, resulting in better, more efficient designs for our future built environment.

  14. DOE Advanced Scientific Computing Advisory Subcommittee (ASCAC) Report: Top Ten Exascale Research Challenges

    SciTech Connect

    Lucas, Robert; Ang, James; Bergman, Keren; Borkar, Shekhar; Carlson, William; Carrington, Laura; Chiu, George; Colwell, Robert; Dally, William; Dongarra, Jack; Geist, Al; Haring, Rud; Hittinger, Jeffrey; Hoisie, Adolfy; Klein, Dean Micron; Kogge, Peter; Lethin, Richard; Sarkar, Vivek; Schreiber, Robert; Shalf, John; Sterling, Thomas; Stevens, Rick; Bashor, Jon; Brightwell, Ron; Coteus, Paul; Debenedictus, Erik; Hiller, Jon; Kim, K. H.; Langston, Harper; Murphy, Richard Micron; Webster, Clayton; Wild, Stefan; Grider, Gary; Ross, Rob; Leyffer, Sven; Laros III, James

    2014-02-10

    Exascale computing systems are essential for the scientific fields that will transform the 21st century global economy, including energy, biotechnology, nanotechnology, and materials science. Progress in these fields is predicated on the ability to perform advanced scientific and engineering simulations, and analyze the deluge of data. On July 29, 2013, ASCAC was charged by Patricia Dehmer, the Acting Director of the Office of Science, to assemble a subcommittee to provide advice on exascale computing. This subcommittee was directed to return a list of no more than ten technical approaches (hardware and software) that will enable the development of a system that achieves the Department's goals for exascale computing. Numerous reports over the past few years have documented the technical challenges and the non-viability of simply scaling existing computer designs to reach exascale. The technical challenges revolve around energy consumption, memory performance, resilience, extreme concurrency, and big data. Drawing from these reports and more recent experience, this ASCAC subcommittee has identified the top ten computing technology advancements that are critical to making a capable, economically viable, exascale system.

  15. A comparison of computer architectures for the NASA demonstration advanced avionics system

    NASA Technical Reports Server (NTRS)

    Seacord, C. L.; Bailey, D. G.; Larson, J. C.

    1979-01-01

    The paper compares computer architectures for the NASA demonstration advanced avionics system. Two computer architectures are described with an unusual approach to fault tolerance: a single spare processor can correct for faults in any of the distributed processors by taking on the role of a failed module. It was shown that the system must be considered from a functional point of view to properly apply redundancy and achieve fault tolerance and ultra-reliability. Data are presented on complexity and mission failure probability which show that the revised version offers equivalent mission reliability at lower cost, as measured by hardware and software complexity.
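
    A minimal version of the mission-failure computation for such a one-spare architecture, assuming independent processor failures (a simplification of the paper's analysis), is sketched below.

    ```python
    # Minimal reliability sketch for the one-spare architecture: the mission
    # survives if at most one of the n processors fails, since the spare can
    # take over the first failed module. Independence and the per-processor
    # failure probability are assumptions for illustration.
    def mission_failure_prob(n, p_fail):
        """n active processors, one spare, each fails independently with p_fail."""
        p_ok = 1.0 - p_fail
        survive = p_ok**n + n * p_fail * p_ok**(n - 1)   # zero or one failure
        return 1.0 - survive

    # e.g. 8 distributed processors, 1% chance each fails during the mission
    print(f"{mission_failure_prob(8, 0.01):.2e}")   # ~2.7e-3
    ```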

  16. Robotics, Stem Cells and Brain Computer Interfaces in Rehabilitation and Recovery from Stroke; Updates and Advances

    PubMed Central

    Boninger, Michael L; Wechsler, Lawrence R.; Stein, Joel

    2014-01-01

    Objective: To describe the current state and latest advances in robotics, stem cells, and brain computer interfaces in rehabilitation and recovery for stroke. Design: The authors of this summary recently reviewed this work as part of a national presentation. The paper represents the information included in each area. Results: Each area has seen great advances and challenges as products move to market and experiments are ongoing. Conclusion: Robotics, stem cells, and brain computer interfaces all have tremendous potential to reduce disability and lead to better outcomes for patients with stroke. Continued research and investment will be needed as the field moves forward. With this investment, the potential for recovery of function is likely substantial. PMID:25313662

  17. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    SciTech Connect

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. Our authors present Synergia's design principles and its performance on HPC platforms.

  18. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    DOE PAGES

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. Our authors present Synergia's design principles and its performance on HPC platforms.

  19. Computational methods in the prediction of advanced subsonic and supersonic propeller induced noise: ASSPIN users' manual

    NASA Technical Reports Server (NTRS)

    Dunn, M. H.; Tarkenton, G. M.

    1992-01-01

    This document describes the computational aspects of propeller noise prediction in the time domain and the use of high speed propeller noise prediction program ASSPIN (Advanced Subsonic and Supersonic Propeller Induced Noise). These formulations are valid in both the near and far fields. Two formulations are utilized by ASSPIN: (1) one is used for subsonic portions of the propeller blade; and (2) the second is used for transonic and supersonic regions on the blade. Switching between the two formulations is done automatically. ASSPIN incorporates advanced blade geometry and surface pressure modelling, adaptive observer time grid strategies, and contains enhanced numerical algorithms that result in reduced computational time. In addition, the ability to treat the nonaxial inflow case has been included.

  20. Advances in Single-Photon Emission Computed Tomography Hardware and Software.

    PubMed

    Piccinelli, Marina; Garcia, Ernest V

    2016-02-01

    Nuclear imaging techniques remain today's most reliable modality for the assessment and quantification of myocardial perfusion. In recent years, the field has experienced tremendous progress both in terms of dedicated cameras for cardiac applications and software techniques for image reconstruction. The most recent advances in single-photon emission computed tomography hardware and software are reviewed, focusing on how these improvements have resulted in an even more powerful diagnostic tool with reduced injected radiation dose and acquisition time.

  1. Recent Advances in Cardiac Computed Tomography: Dual Energy, Spectral and Molecular CT Imaging

    PubMed Central

    Danad, Ibrahim; Fayad, Zahi A.; Willemink, Martin J.; Min, James K.

    2015-01-01

    Computed tomography (CT) evolved into a powerful diagnostic tool and it is impossible to imagine current clinical practice without CT imaging. Due to its widespread availability, ease of clinical application, superb sensitivity for detection of CAD, and non-invasive nature, CT has become a valuable tool within the armamentarium of the cardiologist. In the last few years, numerous technological advances in CT have occurred—including dual energy CT (DECT), spectral CT and CT-based molecular imaging. By harnessing the advances in technology, cardiac CT has advanced beyond the mere evaluation of coronary stenosis to an imaging modality tool that permits accurate plaque characterization, assessment of myocardial perfusion and even probing of molecular processes that are involved in coronary atherosclerosis. Novel innovations in CT contrast agents and pre-clinical spectral CT devices have paved the way for CT-based molecular imaging. PMID:26068288

  2. Condition monitoring through advanced sensor and computational technology : final report (January 2002 to May 2005).

    SciTech Connect

    Kim, Jung-Taek; Luk, Vincent K.

    2005-05-01

    The overall goal of this joint research project was to develop and demonstrate advanced sensors and computational technology for continuous monitoring of the condition of components, structures, and systems in advanced and next-generation nuclear power plants (NPPs). This project included investigating and adapting several advanced sensor technologies from Korean and US national laboratory research communities, some of which were developed and applied in non-nuclear industries. The project team investigated and developed sophisticated signal processing, noise reduction, and pattern recognition techniques and algorithms. The researchers installed sensors and conducted condition monitoring tests on two test loops, a check valve (an active component) and a piping elbow (a passive component), to demonstrate the feasibility of using advanced sensors and computational technology to achieve the project goal. Acoustic emission (AE) devices, optical fiber sensors, accelerometers, and ultrasonic transducers (UTs) were used to detect the mechanical vibratory response of the check valve and piping elbow in normal and degraded configurations. Chemical sensors were also installed to monitor the water chemistry in the piping elbow test loop. Analysis results of processed sensor data indicate that it is feasible to differentiate between the normal and degraded (with selected degradation mechanisms) configurations of these two components from the acquired sensor signals, but it is questionable whether these methods can reliably identify the level and type of degradation. Additional research and development efforts are needed to refine the differentiation techniques and to reduce the level of uncertainties.
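
    As a toy example of the signal-processing side of such monitoring, the sketch below computes a band-energy feature from a simulated accelerometer trace; the sampling rate, frequencies, and band are invented, and this is not the project's actual processing chain.

    ```python
    # Sketch of a simple spectral feature for vibration-based condition
    # monitoring: the fraction of signal energy inside a frequency band.
    # All rates and frequencies here are invented for illustration.
    import numpy as np

    def band_energy(signal, fs, f_lo, f_hi):
        """Fraction of signal energy between f_lo and f_hi (Hz)."""
        spec = np.abs(np.fft.rfft(signal)) ** 2
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        band = (freqs >= f_lo) & (freqs < f_hi)
        return spec[band].sum() / spec.sum()

    fs = 10_000                                  # 10 kHz sampling (assumed)
    t = np.arange(fs) / fs
    healthy = np.sin(2 * np.pi * 60 * t) + 0.05 * np.random.randn(fs)
    degraded = healthy + 0.4 * np.sin(2 * np.pi * 3_000 * t)  # high-freq wear tone

    for name, sig in [("healthy", healthy), ("degraded", degraded)]:
        print(name, f"{band_energy(sig, fs, 2_500, 3_500):.3f}")
    ```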

  3. Recent advances in 3D computed tomography techniques for simulation and navigation in hepatobiliary pancreatic surgery.

    PubMed

    Uchida, Masafumi

    2014-04-01

    A few years ago it could take several hours to complete a 3D image using a 3D workstation. Thanks to advances in computer science, obtaining results of interest now requires only a few minutes. Many recent 3D workstations or multimedia computers are equipped with onboard 3D virtual patient modeling software, which enables patient-specific preoperative assessment and virtual planning, navigation, and tool positioning. Although medical 3D imaging can now be conducted using various modalities, including computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasonography (US) among others, the highest quality images are obtained using CT data, and CT images are now the most commonly used source of data for 3D simulation and navigation imaging. If the 2D source image is bad, no amount of 3D image manipulation in software will provide a quality 3D image. In this exhibition, recent advances in CT imaging techniques and in 3D visualization of hepatobiliary and pancreatic abnormalities are featured, including scan and image reconstruction techniques, contrast-enhanced techniques, new applications of advanced CT scan techniques, and new virtual reality simulation and navigation imaging.

  4. Research Institute for Advanced Computer Science: Annual Report October 1998 through September 1999

    NASA Technical Reports Server (NTRS)

    Leiner, Barry M.; Gross, Anthony R. (Technical Monitor)

    1999-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center (ARC). It currently operates under a multiple year grant/cooperative agreement that began on October 1, 1997 and is up for renewal in the year 2002. ARC has been designated NASA's Center of Excellence in Information Technology. In this capacity, ARC is charged with the responsibility to build an Information Technology Research Program that is preeminent within NASA. RIACS serves as a bridge between NASA ARC and the academic community, and RIACS scientists and visitors work in close collaboration with NASA scientists. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. RIACS research focuses on the three cornerstones of information technology research necessary to meet the future challenges of NASA missions: (1) Automated Reasoning for Autonomous Systems. Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth. (2) Human-Centered Computing. Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities. (3) High Performance Computing and Networking. Advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to data analysis of large datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply information technology research to

  5. Advanced methods for the computation of particle beam transport and the computation of electromagnetic fields and beam-cavity interactions

    SciTech Connect

    Dragt, A.J.; Gluckstern, R.L.

    1990-11-01

    The University of Maryland Dynamical Systems and Accelerator Theory Group carries out research in two broad areas: the computation of charged particle beam transport using Lie algebraic methods, and advanced methods for the computation of electromagnetic fields and beam-cavity interactions. Important improvements in the state of the art are believed to be possible in both of these areas. In addition, applications of these methods are made to problems of current interest in accelerator physics, including the theoretical performance of present and proposed high energy machines. The Lie algebraic method of computing and analyzing beam transport handles both linear and nonlinear beam elements. Tests show this method to be superior to the earlier matrix or numerical integration methods. It has wide application to many areas including accelerator physics, intense particle beams, ion microprobes, high resolution electron microscopy, and light optics. With regard to the area of electromagnetic fields and beam-cavity interactions, work is carried out on the theory of beam breakup in single pulses. Work is also done on the analysis of the high-frequency behavior of longitudinal and transverse coupling impedances, including the examination of methods which may be used to measure these impedances. Finally, work is performed on the electromagnetic analysis of coupled cavities and on the coupling of cavities to waveguides.
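
    The Lie algebraic machinery generalizes the familiar linear transfer-matrix picture; as a reference point, the purely linear special case for a drift/quadrupole cell is sketched below, with illustrative element parameters.

    ```python
    # The linear special case of beam transport that Lie algebraic methods
    # generalize: propagate (x, x') through a drift and a thin-lens
    # quadrupole by matrix multiplication. Lengths/strengths are illustrative.
    import numpy as np

    def drift(L):
        return np.array([[1.0, L], [0.0, 1.0]])

    def thin_quad(f):
        return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

    # one cell: 1 m drift, focusing quad (f = 2 m), 1 m drift
    # (rightmost matrix is traversed first)
    M = drift(1.0) @ thin_quad(2.0) @ drift(1.0)

    x0 = np.array([1e-3, 0.0])       # 1 mm offset, zero slope
    print("cell matrix:\n", M)
    print("after one cell:", M @ x0)
    ```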

  6. Agile Infrastructure Monitoring

    NASA Astrophysics Data System (ADS)

    Andrade, P.; Ascenso, J.; Fedorko, I.; Fiorini, B.; Paladin, M.; Pigueiras, L.; Santos, M.

    2014-06-01

    At the present time, data centres are facing a massive rise in virtualisation and cloud computing. The Agile Infrastructure (AI) project is working to deliver new solutions to ease the management of CERN data centres. Part of the solution consists in a new "shared monitoring architecture" which collects and manages monitoring data from all data centre resources. In this article, we present the building blocks of this new monitoring architecture, the different open source technologies selected for each architecture layer, and how we are building a community around this common effort.

  7. A computational study of advanced exhaust system transition ducts with experimental validation

    NASA Technical Reports Server (NTRS)

    Wu, C.; Farokhi, S.; Taghavi, R.

    1992-01-01

    The current study is an application of CFD to a 'real' design and analysis environment. A subsonic, three-dimensional parabolized Navier-Stokes (PNS) code is used to construct stall margin design charts for optimum-length advanced exhaust systems' circular-to-rectangular transition ducts. Computer code validation has been conducted to examine the accuracy of wall static pressure predictions. The comparison of measured and computed wall static pressures indicates a reasonable accuracy of the PNS computer code results. Computations have also been conducted on 15 transition ducts, spanning three area ratios and five aspect ratios. The three area ratios investigated are a constant area ratio of unity, a moderately contracting area ratio of 0.8, and a highly contracting area ratio of 0.5. The degree of mean flow acceleration is identified as a dominant parameter in establishing the minimum duct length requirement. The effect of increasing aspect ratio in the minimum length transition duct is to increase the length requirement, as well as to increase the mass-averaged total pressure losses. The design guidelines constructed from this investigation may aid in the design and manufacture of advanced exhaust systems for modern fighter aircraft.

  8. ADVANCING THE FUNDAMENTAL UNDERSTANDING AND SCALE-UP OF TRISO FUEL COATERS VIA ADVANCED MEASUREMENT AND COMPUTATIONAL TECHNIQUES

    SciTech Connect

    Biswas, Pratim; Al-Dahhan, Muthanna

    2012-11-01

    The objectives of this project are to advance the fundamental understanding of the hydrodynamics by systematically investigating the effect of design and operating variables, to evaluate the reported dimensionless groups as scaling factors, and to establish a reliable scale-up methodology for the TRISO fuel particle spouted bed coaters based on hydrodynamic similarity via advanced measurement and computational techniques. An additional objective is to develop an on-line non-invasive measurement technique based on gamma ray densitometry (i.e. Nuclear Gauge Densitometry) that can be installed and used for coater process monitoring to ensure proper performance and operation and to facilitate the developed scale-up methodology. To achieve the objectives set for the project, the work will use optical probes and gamma ray computed tomography (CT) (for the measurements of solids/voidage holdup cross-sectional distribution and radial profiles along the bed height, spouted diameter, and fountain height) and radioactive particle tracking (RPT) (for the measurements of the 3D solids flow field, velocity, turbulent parameters, circulation time, solids Lagrangian trajectories, and many other spouted-bed-related hydrodynamic parameters). In addition, gas dynamic measurement techniques and pressure transducers will be utilized to complement the obtained information. The measurements obtained by these techniques will be used as benchmark data to evaluate and validate the computational fluid dynamic (CFD) models (two fluid model or discrete particle model) and their closures. The validated CFD models and closures will be used to facilitate the developed methodology for scale-up, design and hydrodynamic similarity. Successful execution of this work and the proposed tasks will advance the fundamental understanding of the coater flow field and quantify it for proper and safe design, scale-up, and performance. Such achievements will overcome the barriers to AGR applications and will help assure that the US maintains

  9. PREFACE: 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT2013)

    NASA Astrophysics Data System (ADS)

    Wang, Jianxiong

    2014-06-01

    This volume of Journal of Physics: Conference Series is dedicated to scientific contributions presented at the 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2013), which took place on 16-21 May 2013 at the Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, China. The workshop series brings together computer science researchers and practitioners, and researchers from particle physics and related fields, to explore and confront the boundaries of computing and of automatic data analysis and theoretical calculation techniques. This year's edition of the workshop brought together over 120 participants from all over the world. 18 invited speakers presented key topics on the universe in the computer, computing in Earth sciences, multivariate data analysis, automated computation in Quantum Field Theory, as well as computing and data analysis challenges in many fields. Over 70 other talks and posters presented state-of-the-art developments in the areas of the workshop's three tracks: Computing Technologies, Data Analysis Algorithms and Tools, and Computational Techniques in Theoretical Physics. The round table discussions on open-source, knowledge sharing and scientific collaboration stimulated reflection on these issues in the respective areas. ACAT 2013 was generously sponsored by the Chinese Academy of Sciences (CAS), the National Natural Science Foundation of China (NSFC), Brookhaven National Laboratory in the USA (BNL), Peking University (PKU), the Theoretical Physics Center for Science Facilities of CAS (TPCSF-CAS) and Sugon. We would like to thank all the participants for their scientific contributions and for their enthusiastic participation in all of the workshop's activities. Further information on ACAT 2013 can be found at http://acat2013.ihep.ac.cn. Professor Jianxiong Wang, Institute of High Energy Physics, Chinese Academy of Sciences. Details of committees and sponsors are available in the PDF

  10. CRADA ORNL 91-0046B final report: Assessment of IBM advanced computing architectures

    SciTech Connect

    Geist, G.A.

    1996-02-01

    This was a Cooperative Research and Development Agreement (CRADA) with IBM to assess their advanced computer architectures. Over the course of this project three different architectures were evaluated: the POWER/4 RIOS1-based shared memory multiprocessor, the POWER/2 RIOS2-based high performance workstation, and the J30 PowerPC-based shared memory multiprocessor. In addition to this hardware, several software packages were beta tested for IBM, including the ESSL scientific computing library, the nv video-conferencing package, the Ultimedia multimedia display environment, FORTRAN 90 and C++ compilers, and the AIX 4.1 operating system. Both IBM and ORNL benefited from the research performed in this project, and even though access to the POWER/4 computer was delayed several months, all milestones were met.

  11. Intelligent systems technology infrastructure for integrated systems

    NASA Technical Reports Server (NTRS)

    Lum, Henry

    1991-01-01

    A system infrastructure must be properly designed and integrated from the conceptual development phase to accommodate evolutionary intelligent technologies. Several technology development activities were identified that may have application to rendezvous and capture systems. Optical correlators in conjunction with fuzzy logic control might be used for the identification, tracking, and capture of either cooperative or non-cooperative targets without the intensive computational requirements associated with vision processing. A hybrid digital/analog system was developed and tested with a robotic arm. An aircraft refueling application demonstration is planned within two years; initially this demonstration will be ground based, with a follow-on air based demonstration. System dependability measurement and modeling techniques are being developed for fault management applications. This involves the use of incremental solution/evaluation techniques and modularized systems to facilitate reuse and to take advantage of natural partitions in system models. Though not yet commercially available and currently subject to accuracy limitations, technology is being developed to perform optical matrix operations to enhance computational speed. Optical terrain recognition using camera image sequencing processed with optical correlators is being developed to determine position and velocity in support of lander guidance. The system is planned for testing in conjunction with the Dryden Flight Research Facility. Advanced architecture technology is defining open architecture design constraints, test bed concepts (processors, multiple hardware/software and multi-dimensional user support, knowledge/tool sharing infrastructure), and software engineering interface issues.

  12. California Hydrogen Infrastructure Project

    SciTech Connect

    Heydorn, Edward C

    2013-03-12

    Air Products and Chemicals, Inc. has completed a comprehensive, multiyear project to demonstrate a hydrogen infrastructure in California. The primary objective of the project was to demonstrate a model of a real-world retail hydrogen infrastructure and acquire sufficient data within the project to assess the feasibility of achieving the nation's hydrogen infrastructure goals. The project helped to advance hydrogen station technology, including the vehicle-to-station fueling interface, through consumer experiences and feedback. By encompassing a variety of fuel cell vehicles, customer profiles and fueling experiences, this project was able to obtain a complete portrait of real market needs. The project also opened its stations to other qualified vehicle providers at the appropriate time to promote widespread use and gain even broader public understanding of a hydrogen infrastructure. The project engaged major energy companies to provide a fueling experience similar to traditional gasoline station sites to foster public acceptance of hydrogen. Work over the course of the project was focused in multiple areas. With respect to the equipment needed, technical design specifications (including both safety and operational considerations) were written, reviewed, and finalized. After finalizing individual equipment designs, complete station designs were started, including process flow diagrams and systems safety reviews. Material quotes were obtained, and in some cases, depending on the project status and the lead time, equipment was placed on order and fabrication began. Consideration was given to expected vehicle usage and station capacity, standard features needed, and the ability to upgrade the station at a later date. In parallel with work on the equipment, discussions were started with various vehicle manufacturers to identify vehicle demand (short- and long-term needs). Discussions included identifying potential areas most suited for hydrogen fueling stations

  13. Further advancements for large area-detector based computed tomography system

    SciTech Connect

    Davis, A. W.; Keating, S. C.; Claytor, T. N.

    2001-01-01

    We present advancements made to a large area-detector based system for industrial x-ray computed tomography. Past performance improvements in data acquisition speeds were made by use of high-resolution large area, flat-panel amorphous-silicon (a-Si) detectors. The detectors have proven, over several years, to be a robust alternative to CCD-optics and image intensifier CT systems. These detectors also provide the advantage of area detection as compared with the single slice geometry of linear array systems. New advancements in this system include parallel processing of sinogram reconstructions, improved visualization software and migration to frame-rate a-Si detectors. Parallel processing provides significant speed improvements for data reconstruction, and is implemented for parallel-beam, fan-beam and Feldkamp cone-beam reconstruction algorithms. Reconstruction times are reduced by an order of magnitude by use of a cluster of ten or more equal-speed computers. Advancements in data visualization are made through interactive software, which allows interrogation of the full three-dimensional dataset. Inspection examples presented in this paper include an electromechanical device, a nonliving biological specimen and a press-cast plastic specimen. We also present a commonplace item for the benefit of the layperson.
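
    As a hedged illustration of the parallel reconstruction idea described above, the sketch below distributes per-slice filtered back-projection over a pool of worker processes. It is a minimal parallel-beam example in Python/NumPy, not the authors' code: the detector geometry, the placeholder sinograms, and the use of local processes (rather than the cluster of ten or more machines in the paper, or the fan-beam and Feldkamp variants) are assumptions made only for illustration.

      import numpy as np
      from multiprocessing import Pool

      def fbp_slice(args):
          """Reconstruct one 2-D slice from its parallel-beam sinogram
          (rows = detector bins, columns = views) via ramp-filtered
          back-projection with nearest-neighbor interpolation."""
          sinogram, angles_deg = args
          n_bins, n_views = sinogram.shape
          ramp = np.abs(np.fft.fftfreq(n_bins))[:, None]   # ramp filter weights
          filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=0) * ramp, axis=0))
          centre = n_bins // 2
          y, x = np.mgrid[:n_bins, :n_bins] - centre
          recon = np.zeros((n_bins, n_bins))
          for k, theta in enumerate(np.deg2rad(angles_deg)):
              # detector coordinate sampled by each pixel under view k
              t = np.clip((x*np.cos(theta) + y*np.sin(theta) + centre)
                          .round().astype(int), 0, n_bins - 1)
              recon += filtered[t, k]
          return recon * np.pi / (2 * n_views)

      if __name__ == "__main__":
          angles = np.linspace(0.0, 180.0, 180, endpoint=False)
          stack = [(np.random.rand(128, 180), angles) for _ in range(8)]  # placeholder data
          with Pool() as pool:                      # one worker per CPU core
              volume = pool.map(fbp_slice, stack)   # reconstruct slices in parallel
          print(len(volume), volume[0].shape)

    Because each slice is independent, the same mapping pattern scales from local processes to the kind of multi-machine cluster described in the abstract.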

  14. Recent advances in computer-aided drug design as applied to anti-influenza drug discovery.

    PubMed

    Mallipeddi, Prema L; Kumar, Gyanendra; White, Stephen W; Webb, Thomas R

    2014-01-01

    Influenza is a seasonal and serious health threat, and the recent outbreak of H7N9 following the pandemic spread of H1N1 in 2009 has served to emphasize the importance of anti-influenza drug discovery. Zanamivir (Relenza™) and oseltamivir (Tamiflu®) are two antiviral drugs currently recommended by the CDC for treating influenza. Both are examples of the successful application of structure-based drug design strategies. These strategies have combined computer-based approaches, such as docking- and pharmacophore-based virtual screening, with X-ray crystallographic structural analyses. Docking is a routinely used computational method to identify potential hits from large compound libraries. This method has evolved from simple rigid docking approaches to flexible docking methods to handle receptor flexibility and to enhance hit rates in virtual screening. Virtual screening approaches can employ both ligand-based and structure-based pharmacophore models depending on the available information. The exponential growth in computing power has increasingly facilitated the application of computer-aided methods in drug discovery, and they now play significant roles in the search for novel therapeutics. An overview of these computational tools is presented in this review, and recent advances and challenges will be discussed. The focus of the review will be anti-influenza drug discovery and how advances in our understanding of viral biology have led to the discovery of novel influenza protein targets. Also discussed will be strategies to circumvent the problem of resistance emerging from rapid mutations that has seriously compromised the efficacy of current anti-influenza therapies.

  15. High Resolution Traction Force Microscopy Based on Experimental and Computational Advances

    PubMed Central

    Sabass, Benedikt; Gardel, Margaret L.; Waterman, Clare M.; Schwarz, Ulrich S.

    2008-01-01

    Cell adhesion and migration crucially depend on the transmission of actomyosin-generated forces through sites of focal adhesion to the extracellular matrix. Here we report experimental and computational advances in improving the resolution and reliability of traction force microscopy. First, we introduce the use of two differently colored nanobeads as fiducial markers in polyacrylamide gels and explain how the displacement field can be computationally extracted from the fluorescence data. Second, we present different improvements regarding standard methods for force reconstruction from the displacement field, which are the boundary element method, Fourier-transform traction cytometry, and traction reconstruction with point forces. Using extensive data simulation, we show that the spatial resolution of the boundary element method can be improved considerably by splitting the elastic field into near, intermediate, and far field. Fourier-transform traction cytometry requires considerably less computer time, but can achieve a comparable resolution only when combined with Wiener filtering or appropriate regularization schemes. Both methods tend to underestimate forces, especially at small adhesion sites. Traction reconstruction with point forces does not suffer from this limitation, but is only applicable with stationary and well-developed adhesion sites. Third, we combine these advances and for the first time reconstruct fibroblast traction with a spatial resolution of ∼1 μm. PMID:17827246
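
    To make the Fourier-transform traction cytometry step concrete, here is a minimal regularized FTTC sketch in Python/NumPy along the lines discussed in the paper: the displacement field is transformed to Fourier space, the Boussinesq Green's tensor for an elastic half-space is inverted per wave vector with zeroth-order Tikhonov regularization, and the tractions are transformed back. The array conventions, the incompressible-gel Poisson ratio, and the regularization parameter are illustrative assumptions, not the authors' published values.

      import numpy as np

      def fttc(ux, uy, pixel_size, E, nu=0.5, lam=1e-9):
          """Regularized Fourier-transform traction cytometry (sketch).
          ux, uy : 2-D displacement fields; pixel_size in the same units;
          E, nu  : gel Young's modulus and Poisson ratio;
          lam    : Tikhonov regularization parameter."""
          n, m = ux.shape
          kx = 2*np.pi*np.fft.fftfreq(m, d=pixel_size)[None, :]
          ky = 2*np.pi*np.fft.fftfreq(n, d=pixel_size)[:, None]
          k = np.hypot(kx, ky)
          k[0, 0] = 1.0                          # avoid division by zero at DC
          c = 2*(1 + nu)/(E*k**3)                # Boussinesq Green's tensor in k-space
          Gxx = c*((1 - nu)*k**2 + nu*ky**2)
          Gyy = c*((1 - nu)*k**2 + nu*kx**2)
          Gxy = -c*nu*kx*ky
          Ux, Uy = np.fft.fft2(ux), np.fft.fft2(uy)
          # Solve (G^T G + lam*I) t = G^T u as a 2x2 system per wave vector.
          A11 = Gxx**2 + Gxy**2 + lam
          A12 = Gxy*(Gxx + Gyy)
          A22 = Gyy**2 + Gxy**2 + lam
          b1 = Gxx*Ux + Gxy*Uy
          b2 = Gxy*Ux + Gyy*Uy
          det = A11*A22 - A12**2
          Tx = (A22*b1 - A12*b2)/det
          Ty = (A11*b2 - A12*b1)/det
          Tx[0, 0] = Ty[0, 0] = 0.0              # enforce zero mean traction
          return np.real(np.fft.ifft2(Tx)), np.real(np.fft.ifft2(Ty))

      # synthetic usage: random displacements on a 64x64 grid, hypothetical gel
      rng = np.random.default_rng(0)
      ux = rng.normal(0.0, 1e-7, (64, 64)); uy = rng.normal(0.0, 1e-7, (64, 64))
      tx, ty = fttc(ux, uy, pixel_size=1e-7, E=1e4)
      print(tx.shape, ty.shape)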

  16. Computational mechanics - Advances and trends; Proceedings of the Session - Future directions of Computational Mechanics of the ASME Winter Annual Meeting, Anaheim, CA, Dec. 7-12, 1986

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Editor)

    1986-01-01

    The papers contained in this volume provide an overview of the advances made in a number of aspects of computational mechanics, identify some of the anticipated industry needs in this area, discuss the opportunities provided by new hardware and parallel algorithms, and outline some of the current government programs in computational mechanics. Papers are included on advances and trends in parallel algorithms, supercomputers for engineering analysis, material modeling in nonlinear finite-element analysis, the Navier-Stokes computer, and future finite-element software systems.

  17. Modeling, Simulation and Analysis of Complex Networked Systems: A Program Plan for DOE Office of Advanced Scientific Computing Research

    SciTech Connect

    Brown, D L

    2009-05-01

    Many complex systems of importance to the U.S. Department of Energy consist of networks of discrete components. Examples are cyber networks, such as the internet and local area networks over which nearly all DOE scientific, technical and administrative data must travel, the electric power grid, social networks whose behavior can drive energy demand, and biological networks such as genetic regulatory networks and metabolic networks. In spite of the importance of these complex networked systems to all aspects of DOE's operations, the scientific basis for understanding these systems lags seriously behind the strong foundations that exist for the 'physically-based' systems usually associated with DOE research programs that focus on such areas as climate modeling, fusion energy, high-energy and nuclear physics, nano-science, combustion, and astrophysics. DOE has a clear opportunity to develop a similarly strong scientific basis for understanding the structure and dynamics of networked systems by supporting a strong basic research program in this area. Such knowledge will provide a broad basis for, e.g., understanding and quantifying the efficacy of new security approaches for computer networks, improving the design of computer or communication networks to be more robust against failures or attacks, detecting potential catastrophic failure on the power grid and preventing or mitigating its effects, understanding how populations will respond to the availability of new energy sources or changes in energy policy, and detecting subtle vulnerabilities in large software systems to intentional attack. This white paper outlines plans for an aggressive new research program designed to accelerate the advancement of the scientific basis for complex networked systems of importance to the DOE. It will focus principally on four research areas: (1) understanding network structure, (2) understanding network dynamics, (3) predictive modeling and simulation for complex networked systems
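
    One of the questions raised above, how robust a network's connectivity is to random failures versus targeted attacks, can be probed with a few lines of code. The following sketch uses the networkx library with a scale-free test graph standing in for a real cyber or power network; the graph model, sizes, and removal schedule are all hypothetical choices for illustration.

      import random
      import networkx as nx

      def robustness_curve(G, targeted=False, steps=20):
          """Fraction of nodes remaining in the giant component as nodes
          are removed, either at random or highest-degree first."""
          H = G.copy()
          n0 = H.number_of_nodes()
          batch = max(1, n0 // steps)
          curve = []
          while H.number_of_nodes() > batch:
              if targeted:
                  victims = [v for v, _ in sorted(H.degree, key=lambda kv: -kv[1])[:batch]]
              else:
                  victims = random.sample(list(H.nodes), batch)
              H.remove_nodes_from(victims)
              curve.append(len(max(nx.connected_components(H), key=len)) / n0)
          return curve

      G = nx.barabasi_albert_graph(2000, 3)          # hypothetical scale-free topology
      print(robustness_curve(G)[:5])                 # degrades slowly under random failure
      print(robustness_curve(G, targeted=True)[:5])  # collapses quickly under attack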

  18. Advanced computer techniques for inverse modeling of electric current in cardiac tissue

    SciTech Connect

    Hutchinson, S.A.; Romero, L.A.; Diegert, C.F.

    1996-08-01

    For many years, ECGs and vector cardiograms have been the tools of choice for non-invasive diagnosis of cardiac conduction problems, such as those found in reentrant tachycardia or Wolff-Parkinson-White (WPW) syndrome. Through skillful analysis of these skin-surface measurements of cardiac-generated electric currents, a physician can deduce the general location of heart conduction irregularities. Using a combination of high-fidelity geometry modeling, advanced mathematical algorithms and massively parallel computing, Sandia's approach would provide much more accurate information and thus allow the physician to pinpoint the source of an arrhythmia or abnormal conduction pathway.

  19. Unenhanced CT in the evaluation of urinary calculi: application of advanced computer methods.

    PubMed

    Olcott, E W; Sommer, F G

    1999-04-01

    Recent advances in computer hardware and software technology enable radiologists to examine tissues and structures using three-dimensional figures constructed from the multiple planar images acquired during a spiral CT examination. Three-dimensional CT techniques permit the linear dimensions of renal calculi to be determined along all three coordinate axes with a high degree of accuracy and enable direct volumetric analysis of calculi, yielding information that is not available from any other diagnostic modality. Additionally, three-dimensional techniques can help to identify and localize calculi in patients with suspected urinary colic.
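
    The direct volumetric analysis described here reduces, in its simplest form, to thresholding the reconstructed voxel grid and counting voxels. The sketch below shows that idea in Python/NumPy; the attenuation threshold and voxel spacing are hypothetical values, and a clinical tool would add proper segmentation and partial-volume handling.

      import numpy as np

      def calculus_metrics(ct_hu, voxel_mm, threshold_hu=200.0):
          """Volume and axis-aligned linear dimensions of a calculus,
          estimated from a CT volume in Hounsfield units."""
          mask = ct_hu >= threshold_hu               # crude stone segmentation
          volume_mm3 = mask.sum() * np.prod(voxel_mm)
          idx = np.argwhere(mask)
          extents_mm = (idx.max(axis=0) - idx.min(axis=0) + 1) * np.asarray(voxel_mm)
          return volume_mm3, extents_mm

      # synthetic example: a small high-attenuation "stone" in a test volume
      vol = np.zeros((32, 32, 32)); vol[10:12, 10:12, 10:12] = 500.0
      print(calculus_metrics(vol, voxel_mm=(0.75, 0.75, 1.0)))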

  20. DOE Advanced Scientific Computing Advisory Committee (ASCAC) Subcommittee Report on Scientific and Technical Information

    SciTech Connect

    Hey, Tony; Agarwal, Deborah; Borgman, Christine; Cartaro, Concetta; Crivelli, Silvia; Van Dam, Kerstin Kleese; Luce, Richard; Arjun, Shankar; Trefethen, Anne; Wade, Alex; Williams, Dean

    2015-09-04

    The Advanced Scientific Computing Advisory Committee (ASCAC) was charged to form a standing subcommittee to review the Department of Energy’s Office of Scientific and Technical Information (OSTI) and to begin by assessing the quality and effectiveness of OSTI’s recent and current products and services and to comment on its mission and future directions in the rapidly changing environment for scientific publication and data. The Committee met with OSTI staff and reviewed available products, services and other materials. This report summarizes their initial findings and recommendations.

  1. Advanced Computing Technologies for Rocket Engine Propulsion Systems: Object-Oriented Design with C++

    NASA Technical Reports Server (NTRS)

    Bekele, Gete

    2002-01-01

    This document explores the use of advanced computer technologies, with an emphasis on object-oriented design, as applied to the development of software for a rocket engine to improve vehicle safety and reliability. The primary focus is on phase one of this project, the smart start sequence module. The objectives are: 1) to use current, sound software engineering practices, namely object orientation; 2) to improve software development time, maintenance, execution, and management; 3) to provide an alternate design choice for control, implementation, and performance.

  2. Computational Models of Exercise on the Advanced Resistance Exercise Device (ARED)

    NASA Technical Reports Server (NTRS)

    Newby, Nate; Caldwell, Erin; Scott-Pandorf, Melissa; Peters,Brian; Fincke, Renita; DeWitt, John; Poutz-Snyder, Lori

    2011-01-01

    Muscle and bone loss remain a concern for crew returning from space flight. The advanced resistance exercise device (ARED) is used for on-orbit resistance exercise to help mitigate these losses. However, characterization of how the ARED loads the body in microgravity has yet to be determined. Computational models allow us to analyze ARED exercise in both 1G and 0G environments. To this end, biomechanical models of the squat, single-leg squat, and deadlift exercise on the ARED have been developed to further investigate bone and muscle forces resulting from the exercises.

  3. IrLaW an OGC compliant infrared thermography measurement system developed on mini PC with real time computing capabilities for long term monitoring of transport infrastructures

    NASA Astrophysics Data System (ADS)

    Dumoulin, J.; Averty, R.

    2012-04-01

    One of the objectives of the ISTIMES project is to evaluate the potential offered by the integration of different electromagnetic techniques able to perform non-invasive diagnostics for surveillance and monitoring of transport infrastructures. Among the EM methods investigated, the uncooled infrared camera is a promising technique owing to its dissemination potential and its relatively low cost on the market. Infrared thermography, when used in quantitative mode outside laboratory conditions rather than in qualitative mode (vision applied to survey), requires real-time thermal radiative corrections of the raw acquired data to account for the evolution of the natural environment over time. The camera sensor must therefore be smart enough to apply, in real time, the calibration law and radiometric corrections in a varying atmosphere. A complete measurement system was accordingly studied and developed around low-cost infrared cameras available on the market. In the system developed, the infrared camera is coupled with other sensors that feed simplified radiative models running, in real time, on the GPU of a small PC. The system uses a fast Ethernet camera FLIR A320 [1] coupled with a VAISALA WXT520 weather station [2] and a light GPS unit [3] for positioning and dating. It can be used with other Ethernet cameras (including visible-light ones) but requires access to the measured data at raw level; in the present study, this was made possible by a specific agreement signed with FLIR. The prototype system is implemented on a low-cost small computer that integrates a GPU card to allow real-time parallel computation [4] of a simplified radiometric heat balance [5] using information measured by the weather station. An HMI was developed under Linux using open-source components and complementary pieces of software developed at IFSTTAR. This new HMI, called "IrLaW", has various functionalities that make it suitable for use in
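
    As a sketch of what such a simplified real-time radiative correction might look like, the snippet below inverts the classical single-band thermographic measurement equation using the reflected-ambient temperature, atmospheric temperature, and atmospheric transmittance (the latter two being quantities a coupled weather station can help estimate). The T^4 band-radiance stand-in and all parameter values are simplifying assumptions for illustration; the actual IrLaW models and calibration laws are not reproduced here.

      import numpy as np

      SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

      def band_radiance(T):
          """Crude total-radiance stand-in for the camera's band response."""
          return SIGMA * np.asarray(T, dtype=float)**4

      def object_temperature(W_meas, emissivity, T_reflected, T_atm, tau):
          """Invert W = tau*eps*W(Tobj) + tau*(1-eps)*W(Trefl) + (1-tau)*W(Tatm)."""
          W_obj = (W_meas
                   - tau*(1.0 - emissivity)*band_radiance(T_reflected)
                   - (1.0 - tau)*band_radiance(T_atm)) / (tau*emissivity)
          return (W_obj / SIGMA)**0.25

      # hypothetical pixel: synthesize what the camera reports for a 300 K scene,
      # then recover the object temperature from the measurement
      W = (0.95*0.9*band_radiance(300.0) + 0.95*0.1*band_radiance(290.0)
           + 0.05*band_radiance(285.0))
      print(object_temperature(W, emissivity=0.9, T_reflected=290.0,
                               T_atm=285.0, tau=0.95))   # ~300.0 K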

  4. Advanced Simulation and Computing Fiscal Year 2016 Implementation Plan, Version 0

    SciTech Connect

    McCoy, M.; Archer, B.; Hendrickson, B.

    2015-08-27

    The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The purpose of this IP is to outline key work requirements to be performed and to control individual work activities within the scope of work. Contractors may not deviate from this plan without a revised WA or subsequent IP.

  5. Computational methods to extract meaning from text and advance theories of human cognition.

    PubMed

    McNamara, Danielle S

    2011-01-01

    Over the past two decades, researchers have made great advances in the area of computational methods for extracting meaning from text. This research has to a large extent been spurred by the development of latent semantic analysis (LSA), a method for extracting and representing the meaning of words using statistical computations applied to large corpora of text. Since the advent of LSA, researchers have developed and tested alternative statistical methods designed to detect and analyze meaning in text corpora. This research exemplifies how statistical models of semantics play an important role in our understanding of cognition and contribute to the field of cognitive science. Importantly, these models afford large-scale representations of human knowledge and allow researchers to explore various questions regarding knowledge, discourse processing, text comprehension, and language. This topic includes the latest progress by the leading researchers in the endeavor to go beyond LSA. PMID:25164173
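
    For readers unfamiliar with the mechanics, the core of LSA is a truncated singular value decomposition of a term-document matrix. A minimal sketch using scikit-learn follows; the toy corpus and the number of latent dimensions are chosen only for illustration.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.decomposition import TruncatedSVD
      from sklearn.metrics.pairwise import cosine_similarity

      docs = ["the cat sat on the mat",
              "dogs and cats are common pets",
              "stock markets fell sharply today",
              "investors sold shares amid market fears"]

      X = TfidfVectorizer().fit_transform(docs)           # term-document matrix
      lsa = TruncatedSVD(n_components=2, random_state=0)  # keep 2 latent dimensions
      doc_vecs = lsa.fit_transform(X)                     # documents in "meaning space"
      print(cosine_similarity(doc_vecs).round(2))         # pet docs and finance docs pair up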

  7. Integrated Computational Materials Engineering (ICME) for Third Generation Advanced High-Strength Steel Development

    SciTech Connect

    Savic, Vesna; Hector, Louis G.; Ezzat, Hesham; Sachdev, Anil K.; Quinn, James; Krupitzer, Ronald; Sun, Xin

    2015-06-01

    This paper presents an overview of a four-year project focused on development of an integrated computational materials engineering (ICME) toolset for third generation advanced high-strength steels (3GAHSS). Following a brief look at ICME as an emerging discipline within the Materials Genome Initiative, technical tasks in the ICME project will be discussed. Specific aims of the individual tasks are multi-scale, microstructure-based material model development using state-of-the-art computational and experimental techniques, forming, toolset assembly, design optimization, integration and technical cost modeling. The integrated approach is initially illustrated using a 980 grade transformation induced plasticity (TRIP) steel, subject to a two-step quenching and partitioning (Q&P) heat treatment, as an example.

  8. Advancing global marine biogeography research with open-source GIS software and cloud-computing

    USGS Publications Warehouse

    Fujioka, Ei; Vanden Berghe, Edward; Donnelly, Ben; Castillo, Julio; Cleary, Jesse; Holmes, Chris; McKnight, Sean; Halpin, patrick

    2012-01-01

    Across many scientific domains, the ability to aggregate disparate datasets enables more meaningful global analyses. Within marine biology, the Census of Marine Life served as the catalyst for such a global data aggregation effort. Under the Census framework, the Ocean Biogeographic Information System (OBIS) was established to coordinate an unprecedented aggregation of global marine biogeography data. The OBIS data system now contains 31.3 million observations, freely accessible through a geospatial portal. The challenges of storing, querying, disseminating, and mapping a global data collection of this complexity and magnitude are significant. In the face of declining performance and expanding feature requests, a redevelopment of the OBIS data system was undertaken. Following an Open Source philosophy, the OBIS technology stack was rebuilt using PostgreSQL, PostGIS, GeoServer and OpenLayers. This approach has markedly improved the performance and online user experience while maintaining a standards-compliant and interoperable framework. Due to the distributed nature of the project and increasing needs for storage, scalability and deployment flexibility, the entire hardware and software stack was built on a Cloud Computing environment. The flexibility of the platform, combined with the power of the application stack, enabled rapid re-development of the OBIS infrastructure, and ensured complete standards-compliance.

  9. Nuclear hybrid energy infrastructure

    SciTech Connect

    Agarwal, Vivek; Tawfik, Magdy S.

    2015-02-01

    The nuclear hybrid energy concept is becoming a reality for the US energy infrastructure, where combinations of various potential energy sources (nuclear, wind, solar, biomass, and so on) are integrated in a hybrid energy system. This paper focuses on challenges facing a hybrid system with a Small Modular Reactor at its core. The core of the paper discusses the efforts required to develop a supervisory control center that collects data, supports decision-making, and serves as an information hub for the hybrid system. Such a center will also be a model for integrating future technologies and controls. In addition, advanced operations research, thermal cycle analysis, energy conversion analysis, control engineering, and human factors engineering will be part of the supervisory control center. A nuclear hybrid energy infrastructure would allow operators to optimize the cost of energy production by providing appropriate means of integrating different energy sources. The data need to be stored, processed, analyzed, trended, and projected at the right time to the right operator to integrate the different energy sources.

  10. A fission matrix based validation protocol for computed power distributions in the advanced test reactor

    SciTech Connect

    Nielsen, J. W.; Nigg, D. W.; LaPorta, A. W.

    2013-07-01

    The Idaho National Laboratory (INL) has been engaged in a significant multi-year effort to modernize the computational reactor physics tools and validation procedures used to support operations of the Advanced Test Reactor (ATR) and its companion critical facility (ATRC). Several new protocols for validation of computed neutron flux distributions and spectra as well as for validation of computed fission power distributions, based on new experiments and well-recognized least-squares statistical analysis techniques, have been under development. In the case of power distributions, estimates of the a priori ATR-specific fuel element-to-element fission power correlation and covariance matrices are required for validation analysis. A practical method for generating these matrices using the element-to-element fission matrix is presented, along with a high-order scheme for estimating the underlying fission matrix itself. The proposed methodology is illustrated using the MCNP5 neutron transport code for the required neutronics calculations. The general approach is readily adaptable for implementation using any multidimensional stochastic or deterministic transport code that offers the required level of spatial, angular, and energy resolution in the computed solution for the neutron flux and fission source. (authors)
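
    The fission-matrix formalism mentioned here lends itself to a compact illustration: once the element-to-element matrix F has been estimated, the fundamental-mode fission source and eigenvalue follow from a power iteration. The sketch below is a generic textbook illustration, not the INL protocol; the three-element matrix in the example is entirely made up.

      import numpy as np

      def fundamental_mode(F, tol=1e-12, max_iter=1000):
          """Power iteration on a fission matrix F, where F[i, j] is the
          expected number of next-generation fissions in element i per
          fission in element j. Returns (k_eff, source distribution)."""
          s = np.full(F.shape[0], 1.0 / F.shape[0])
          k = 0.0
          for _ in range(max_iter):
              s_next = F @ s
              k_next = s_next.sum()      # eigenvalue estimate (s sums to 1)
              s_next /= k_next
              if abs(k_next - k) < tol:
                  break
              s, k = s_next, k_next
          return k_next, s_next

      F = np.array([[0.40, 0.20, 0.05],  # hypothetical 3-element fission matrix
                    [0.20, 0.50, 0.20],
                    [0.05, 0.20, 0.40]])
      k_eff, source = fundamental_mode(F)
      print(k_eff, source)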

  11. High Performance Computing: Advanced Research Projects Agency Should Do More To Foster Program Goals. Report to the Chairman, Committee on Armed Services, House of Representatives.

    ERIC Educational Resources Information Center

    General Accounting Office, Washington, DC. Information Management and Technology Div.

    High-performance computing refers to the use of advanced computing technologies to solve highly complex problems in the shortest possible time. The federal High Performance Computing and Communications Initiative of the Advanced Research Projects Agency (ARPA) attempts to accelerate the availability and use of high performance computers and networks.…

  12. Advanced computational sensors technology: testing and evaluation in visible, SWIR, and LWIR imaging

    NASA Astrophysics Data System (ADS)

    Rizk, Charbel G.; Wilson, John P.; Pouliquen, Philippe

    2015-05-01

    The Advanced Computational Sensors Team at the Johns Hopkins University Applied Physics Laboratory and the Johns Hopkins University Department of Electrical and Computer Engineering has been developing advanced readout integrated circuit (ROIC) technology for more than 10 years with a particular focus on the key challenges of dynamic range, sampling rate, system interface and bandwidth, and detector materials or band dependencies. Because the pixel array offers parallel sampling by default, the team successfully demonstrated that adding smarts in the pixel and the chip can increase performance significantly. Each pixel becomes a smart sensor and can operate independently in collecting, processing, and sharing data. In addition, building on the digital circuit revolution, the effective well size can be increased by orders of magnitude within the same pixel pitch over analog designs. This research has yielded an innovative class of a system-on-chip concept: the Flexible Readout and Integration Sensor (FRIS) architecture. All key parameters are programmable and/or can be adjusted dynamically, and this architecture can potentially be sensor and application agnostic. This paper reports on the testing and evaluation of one prototype that can support either detector polarity and includes sample results with visible, short-wavelength infrared (SWIR), and long-wavelength infrared (LWIR) imaging.

  13. Proceedings of the workshop on advanced computer technologies and biological sequencing

    SciTech Connect

    Not Available

    1988-11-01

    The participants in the workshop agree that advanced computer technologies will play a significant role in biological sequencing. They suggest a strategy based on the following four recommendations: define a set of model projects, and develop a complete set of data management and analysis tools for these model projects; seek to consolidate appropriate databases, while allowing for the flexible development and design of tools that will permit further consolidation and, in the longer term, develop a coordinated effort that will allow networking of all relevant databases; encourage the development, collection, and distribution of analysis tools; and address user interface issues and encourage the development of graphics and visualization tools. Section 3 of this report elaborates on each of these recommendations. Section 2 contains the tutorials presented at the workshop and a summary of the comments made in the discussion period following the tutorials. These tutorials were an integral part of the workshop: they provided a forum for discussion of the needs of biologists in managing and analyzing biological sequencing data, and of the capabilities of advanced computer technologies in meeting those needs. Also included in Section 2 is an informal paper on fifth-generation technologies, prepared by two of the participants. Appendix A contains the documents (edited for grammar) prepared by the participants and groups at the workshop. Appendix B contains the workshop program.

  14. Development of a lunar infrastructure

    NASA Astrophysics Data System (ADS)

    Burke, J. D.

    If humans are to reside continuously and productively on the Moon, they must be surrounded and supported there by an infrastructure having some attributes of the support systems that have made advanced civilization possible on Earth. Building this lunar infrastructure will, in a sense, be an investment. Creating it will require large resources from Earth, but once it exists it can do much to limit the further demands of a lunar base for Earthside support. What is needed for a viable lunar infrastructure? This question can be approached from two directions. The first is to examine history, which is essentially a record of growing information structures among humans on Earth (tribes, agriculture, specialization of work, education, ethics, arts and sciences, cities and states, technology). The second approach is much less secure but may provide useful insights: it is to examine the minimal needs of a small human community - not just for physical survival but for a stable existence with a net product output. This paper presents a summary, based on present knowledge of the Moon and of the likely functions of a human community there, of some of these infrastructure requirements, and also discusses possible ways to proceed toward meeting early infrastructure needs.

  15. Advanced Simulation and Computing: A Summary Report to the Director's Review

    SciTech Connect

    McCoy, M G; Peck, T

    2003-06-01

    It has now been three years since the Advanced Simulation and Computing Program (ASCI), as managed by the Defense and Nuclear Technologies (DNT) Directorate, has been reviewed by this Director's Review Committee (DRC). Since that time, there has been considerable progress for all components of the ASCI Program, and these developments will be highlighted in this document and in the presentations planned for June 9 and 10, 2003. There have also been some name changes. Today, the Program is called "Advanced Simulation and Computing." Although it retains the familiar acronym ASCI, the initiative nature of the effort has given way to sustained services as an integral part of the Stockpile Stewardship Program (SSP). All computing efforts at LLNL and the other two Defense Program (DP) laboratories are funded and managed under ASCI. This includes the so-called legacy codes, which remain essential tools in stockpile stewardship. The contract between the Department of Energy (DOE) and the University of California (UC) specifies an independent appraisal of Directorate technical work and programmatic management; that appraisal is the work of this DNT Review Committee. Beginning this year, the Laboratory is implementing a new review system. This process was negotiated between UC, the National Nuclear Security Administration (NNSA), and the Laboratory Directors. Central to this approach are eight performance objectives that focus on key programmatic and administrative goals. Associated with each of these objectives are a number of performance measures to more clearly characterize the attainment of the objectives. Each performance measure has a lead directorate and one or more contributing directorates. Each measure has an evaluation plan and has identified expected documentation to be included in the "Assessment File".

  16. Recent advances in computational methodology for simulation of mechanical circulatory assist devices

    PubMed Central

    Marsden, Alison L.; Bazilevs, Yuri; Long, Christopher C.; Behr, Marek

    2014-01-01

    Ventricular assist devices (VADs) provide mechanical circulatory support to offload the work of one or both ventricles during heart failure. They are used in the clinical setting as destination therapy, as bridge to transplant, or more recently as bridge to recovery to allow for myocardial remodeling. Recent developments in computational simulation allow for detailed assessment of VAD hemodynamics for device design and optimization for both children and adults. Here, we provide a focused review of the recent literature on finite element methods and optimization for VAD simulations. As VAD designs typically fall into two categories, pulsatile and continuous flow devices, we separately address computational challenges of both types of designs, and the interaction with the circulatory system with three representative case studies. In particular, we focus on recent advancements in finite element methodology that has increased the fidelity of VAD simulations. We outline key challenges, which extend to the incorporation of biological response such as thrombosis and hemolysis, as well as shape optimization methods and challenges in computational methodology. PMID:24449607

  17. Intelligent systems technology infrastructure for integrated systems

    NASA Technical Reports Server (NTRS)

    Lum, Henry, Jr.

    1991-01-01

    Significant advances have occurred during the last decade in intelligent systems technologies (a.k.a. knowledge-based systems, KBS) including research, feasibility demonstrations, and technology implementations in operational environments. Evaluation and simulation data obtained to date in real-time operational environments suggest that cost-effective utilization of intelligent systems technologies can be realized for Automated Rendezvous and Capture applications. The successful implementation of these technologies involves a complex system infrastructure integrating the requirements of transportation, vehicle checkout and health management, and communication systems without compromise to systems reliability and performance. The resources that must be invoked to accomplish these tasks include remote ground operations and control, built-in system fault management and control, and intelligent robotics. To ensure long-term evolution and integration of new validated technologies over the lifetime of the vehicle, system interfaces must also be addressed and integrated into the overall system interface requirements. An approach for defining and evaluating the system infrastructures including the testbed currently being used to support the on-going evaluations for the evolutionary Space Station Freedom Data Management System is presented and discussed. Intelligent system technologies discussed include artificial intelligence (real-time replanning and scheduling), high performance computational elements (parallel processors, photonic processors, and neural networks), real-time fault management and control, and system software development tools for rapid prototyping capabilities.

  18. Aging Water Infrastructure

    EPA Science Inventory

    The Aging Water Infrastructure (AWI) research program is part of EPA’s larger effort called the Sustainable Water Infrastructure (SI) initiative. The SI initiative brings together drinking water and wastewater utility managers; trade associations; local watershed protection organ...

  19. Advances in text analytics for drug discovery.

    PubMed

    Roberts, Phoebe M; Hayes, William S

    2005-05-01

    The automated extraction of biological and chemical information has improved over the past year, with advances in access to content, entity extraction of genes, chemicals, kinetic data and relationships, and algorithms for generating and testing hypotheses. As the systems for reading and understanding scientific literature grow more powerful, so must the infrastructure in which to assemble information. Advances in infrastructure systems are discussed in this review. Research efforts have flourished as a result of text analytics competitions that attract participants from various disciplines, from computer science to bioinformatics.

  20. Advanced Computational Modeling of Vapor Deposition in a High-Pressure Reactor

    NASA Technical Reports Server (NTRS)

    Cardelino, Beatriz H.; Moore, Craig E.; McCall, Sonya D.; Cardelino, Carlos A.; Dietz, Nikolaus; Bachmann, Klaus

    2004-01-01

    In search of novel approaches to produce new materials for electro-optic technologies, advances have been achieved in the development of computer models for vapor deposition reactors in space. Numerical simulations are invaluable tools for costly and difficult processes, such as those experiments designed for high pressures and microgravity conditions. Indium nitride is a candidate compound for high-speed laser diodes and photodiodes for optical communication systems, as well as for semiconductor lasers operating into the blue and ultraviolet regions. But InN and other nitride compounds exhibit large thermal decomposition at their optimum growth temperatures. In addition, epitaxy at lower temperatures and subatmospheric pressures incorporates indium droplets into the InN films. However, surface stabilization data indicate that InN could be grown at 900 K in high nitrogen pressures, and microgravity could provide laminar flow conditions. Numerical models for chemical vapor deposition have been developed, coupling complex chemical kinetics with fluid dynamic properties.

  2. Advanced computational tools for optimization and uncertainty quantification of carbon capture processes

    SciTech Connect

    Miller, David C.; Ng, Brenda; Eslick, John

    2014-01-01

    Advanced multi-scale modeling and simulation has the potential to dramatically reduce development time, resulting in considerable cost savings. The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry and universities that is developing, demonstrating, and deploying a suite of multi-scale modeling and simulation tools. One significant computational tool is FOQUS, a Framework for Optimization and Quantification of Uncertainty and Sensitivity, which enables basic data submodels, including thermodynamics and kinetics, to be used within detailed process models to rapidly synthesize and optimize a process and determine the level of uncertainty associated with the resulting process. The overall approach of CCSI is described with a more detailed discussion of FOQUS and its application to carbon capture systems.
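
    As a sketch of the kind of uncertainty propagation such a framework automates, the snippet below does plain Monte Carlo: sample the uncertain submodel parameters, push them through a process model, and summarize the output spread. The capture-fraction surrogate and its parameter distributions are entirely hypothetical; this is a generic illustration, not the FOQUS API.

      import numpy as np

      def propagate(model, dists, n=100_000, seed=0):
          """Monte Carlo forward propagation of parametric uncertainty."""
          rng = np.random.default_rng(seed)
          samples = {name: draw(rng, n) for name, draw in dists.items()}
          out = model(**samples)
          return out.mean(), out.std(), np.percentile(out, [2.5, 97.5])

      # hypothetical surrogate: CO2 capture fraction vs. rate constant and area
      model = lambda k_rxn, area: 1.0 - np.exp(-k_rxn * area)
      dists = {"k_rxn": lambda rng, n: rng.normal(0.02, 0.003, n),
               "area":  lambda rng, n: rng.uniform(40.0, 60.0, n)}
      print(propagate(model, dists))   # mean, std, 95% interval of capture fraction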

  3. Development of an Advanced Computational Model for OMCVD of Indium Nitride

    NASA Technical Reports Server (NTRS)

    Cardelino, Carlos A.; Moore, Craig E.; Cardelino, Beatriz H.; Zhou, Ning; Lowry, Sam; Krishnan, Anantha; Frazier, Donald O.; Bachmann, Klaus J.

    1999-01-01

    An advanced computational model is being developed to predict the formation of indium nitride (InN) film from the reaction of trimethylindium (In(CH3)3) with ammonia (NH3). The components are introduced into the reactor in the gas phase within a background of molecular nitrogen (N2). Organometallic chemical vapor deposition occurs on a heated sapphire surface. The model simulates heat and mass transport with gas and surface chemistry under steady state and pulsed conditions. The development and validation of an accurate model for the interactions between the diffusion of gas phase species and surface kinetics is essential to enable the regulation of the process in order to produce a low defect material. The validation of the model will be performed in concert with a NASA-North Carolina State University project.

  4. Design and experimental analysis of an advanced static VAR compensator with computer aided control.

    PubMed

    Irmak, Erdal; Bayındır, Ramazan; Köse, Ali

    2016-09-01

    This study presents the integration of a real-time energy monitoring and control system with an advanced reactive power compensation unit based on a fixed capacitor-thyristor controlled reactor (FC-TCR). Firing angles of the thyristors located in the FC-TCR are controlled by a microcontroller in order to keep the power factor within limits. Electrical parameters of the system are measured by specially designed circuits and simultaneously transferred to the computer via a data acquisition board, so real-time data from the system can be observed through a visual user interface. The data obtained are not only analyzed for the control process, but also regularly saved into a database. The system has been tested in laboratory conditions under different load characteristics, and the experimental results verified that the system successfully and accurately performs the compensation process under all operational conditions.
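
    The control task described above hinges on the standard textbook relation between TCR firing angle and fundamental-frequency susceptance. The sketch below solves that relation numerically for the angle that yields a desired susceptance; the 50 Hz frequency and inductance value are assumptions for illustration, and the paper's actual microcontroller logic is not reproduced.

      import numpy as np
      from scipy.optimize import brentq

      def tcr_susceptance(alpha, L, omega=2*np.pi*50):
          """Fundamental susceptance of a thyristor-controlled reactor
          for firing angle alpha in [pi/2, pi] (radians)."""
          return (2*(np.pi - alpha) + np.sin(2*alpha)) / (np.pi * omega * L)

      def firing_angle(B_target, L):
          """Invert B_TCR(alpha) = B_target for B_target in (0, 1/(omega*L))."""
          return brentq(lambda a: tcr_susceptance(a, L) - B_target,
                        np.pi/2, np.pi - 1e-9)

      L = 0.05                                # hypothetical reactor inductance, H
      B_max = tcr_susceptance(np.pi/2, L)     # thyristors fully conducting
      alpha = firing_angle(0.5 * B_max, L)
      print(np.degrees(alpha))                # angle giving half the maximum susceptance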

  5. A Computational Methodology for Simulating Thermal Loss Testing of the Advanced Stirling Convertor

    NASA Technical Reports Server (NTRS)

    Reid, Terry V.; Wilson, Scott D.; Schifer, Nicholas A.; Briggs, Maxwell H.

    2012-01-01

    The U.S. Department of Energy (DOE) and Lockheed Martin Space Systems Company (LMSSC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. This generator would use two high-efficiency Advanced Stirling Convertors (ASCs), developed by Sunpower Inc. and NASA Glenn Research Center (GRC). The ASCs convert thermal energy from a radioisotope heat source into electricity. As part of ground testing of these ASCs, different operating conditions are used to simulate expected mission conditions. These conditions require achieving a particular operating frequency, hot end and cold end temperatures, and specified electrical power output for a given net heat input. In an effort to improve net heat input predictions, numerous tasks have been performed which provided a more accurate value for net heat input into the ASCs, including the use of multidimensional numerical models. Validation test hardware has also been used to provide a direct comparison of numerical results and validate the multidimensional numerical models used to predict convertor net heat input and efficiency. These validation tests were designed to simulate the temperature profile of an operating Stirling convertor and resulted in a measured net heat input of 244.4 W. The methodology was applied to the multidimensional numerical model which resulted in a net heat input of 240.3 W. The computational methodology resulted in a value of net heat input that was 1.7 percent less than that measured during laboratory testing. The resulting computational methodology and results are discussed.

  6. Current Advances in the Computational Simulation of the Formation of Low-Mass Stars

    SciTech Connect

    Klein, R I; Inutsuka, S; Padoan, P; Tomisaka, K

    2005-10-24

    Developing a theory of low-mass star formation (≈ 0.1 to 3 M⊙) remains one of the most elusive and important goals of theoretical astrophysics. The star-formation process is the outcome of the complex dynamics of interstellar gas involving non-linear interactions of turbulence, gravity, magnetic field and radiation. The evolution of protostellar condensations, from the moment they are assembled by turbulent flows to the time they reach stellar densities, spans an enormous range of scales, resulting in a major computational challenge for simulations. Since the previous Protostars and Planets conference, dramatic advances in the development of new numerical algorithmic techniques have been successfully implemented on large scale parallel supercomputers. Among such techniques, Adaptive Mesh Refinement and Smooth Particle Hydrodynamics have provided frameworks to simulate the process of low-mass star formation with a very large dynamic range. It is now feasible to explore the turbulent fragmentation of molecular clouds and the gravitational collapse of cores into stars self-consistently within the same calculation. The increased sophistication of these powerful methods comes with substantial caveats associated with the use of the techniques and the interpretation of the numerical results. In this review, we examine what has been accomplished in the field and present a critique of both numerical methods and scientific results. We stress that computational simulations should obey the available observational constraints and demonstrate numerical convergence. Failing this, results of large scale simulations do not advance our understanding of low-mass star formation.

  7. Identifying human disease genes: advances in molecular genetics and computational approaches.

    PubMed

    Bakhtiar, S M; Ali, A; Baig, S M; Barh, D; Miyoshi, A; Azevedo, V

    2014-07-04

    The human genome project is one of the significant achievements that have provided detailed insight into our genetic legacy. During the last two decades, biomedical investigations have gathered a considerable body of evidence by detecting more than 2000 disease genes. Despite the imperative advances in the genetic understanding of various diseases, the pathogenesis of many others remains obscure. With recent advances, the laborious methodologies used to identify DNA variations have been replaced by direct sequencing of genomic DNA to detect genetic changes. The ability to perform such studies depends equally on the development of high-throughput and economical genotyping methods. Currently, for essentially every disease whose origin is still unknown, genetic approaches are available, either pedigree-dependent or pedigree-independent, with the capacity to elucidate fundamental disease mechanisms. Computer algorithms and programs for linkage analysis have formed the foundation for many disease gene detection projects; similarly, databases of clinical findings have been widely used to support diagnostic decisions in dysmorphology and general human disease. For every disease type, genome sequence variations, particularly single nucleotide polymorphisms, are mapped by comparing the genetic makeup of case and control groups. Methods that predict the effects of polymorphisms on protein stability are useful for the identification of possible disease associations, whereas structural effects can be assessed using methods to predict stability changes in proteins using sequence and/or structural information.

  8. SCIENCE BRIEF: ADVANCED CONCEPTS

    EPA Science Inventory

    Research on advanced concepts will evaluate and demonstrate the application of innovative infrastructure designs, management procedures and operational approaches. Advanced concepts go beyond simple asset management. The infusion of these advanced concepts into established wastew...

  9. Computational fluid dynamics in the design and analysis of thermal processes: a review of recent advances.

    PubMed

    Norton, Tomás; Tiwari, Brijesh; Sun, Da Wen

    2013-01-01

    The design of thermal processes in the food industry has undergone great developments in the last two decades due to the availability of cheap computer power alongside advanced modelling techniques such as computational fluid dynamics (CFD). CFD uses numerical algorithms to solve the non-linear partial differential equations of fluid mechanics and heat transfer so that the complex mechanisms that govern many food-processing systems can be resolved. In thermal processing applications, CFD can be used to build three-dimensional models that are both spatially and temporally representative of a physical system to produce solutions with high levels of physical realism without the heavy costs associated with experimental analyses. Therefore, CFD is playing an ever growing role in the optimization of conventional thermal processes as well as in the development of new ones in the food industry. This paper discusses the fundamental aspects involved in developing CFD solutions and forms a state-of-the-art review on various CFD applications in conventional as well as novel thermal processes. The challenges facing CFD modellers of thermal processes are also discussed. From this review it is evident that present-day CFD software, with its rich tapestries of mathematical physics, numerical methods and visualization techniques, is currently recognized as a formidable and pervasive technology which can permit comprehensive analyses of thermal processing.
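
    While a full CFD model couples the Navier-Stokes and energy equations in three dimensions, the underlying numerical idea can be shown on a deliberately tiny problem: explicit finite differences on one-dimensional transient conduction through a slab heated at its surfaces. All property values and grid parameters below are hypothetical; the point is only the discretize-and-march structure that CFD codes scale up.

      import numpy as np

      def heat_slab(T0, alpha, dx, dt, steps, T_surface):
          """Explicit finite differences for dT/dt = alpha * d2T/dx2,
          with both surfaces held at the heating-medium temperature."""
          r = alpha * dt / dx**2
          assert r <= 0.5, "explicit scheme stability limit violated"
          T = T0.copy()
          for _ in range(steps):
              T[1:-1] += r * (T[2:] - 2.0*T[1:-1] + T[:-2])
              T[0] = T[-1] = T_surface          # Dirichlet boundary condition
          return T

      slab = np.full(51, 20.0)                  # 5 cm slab at 20 °C on a 1 mm grid
      T = heat_slab(slab, alpha=1.4e-7, dx=1e-3, dt=2.0, steps=1800, T_surface=90.0)
      print(T[25])                              # centre temperature after one hour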

  10. Computational and experimental advances in drug repositioning for accelerated therapeutic stratification.

    PubMed

    Shameer, Khader; Readhead, Ben; Dudley, Joel T

    2015-01-01

    Drug repositioning is an important component of therapeutic stratification in the precision medicine paradigm. Molecular profiling and more sophisticated analysis of longitudinal clinical data are refining definitions of human diseases, creating needs and opportunities to re-target or reposition approved drugs for alternative indications. Drug repositioning studies have demonstrated success in complex diseases requiring improved therapeutic interventions as well as orphan diseases without any known treatments. An increasing collection of available computational and experimental methods that leverage molecular and clinical data enable diverse drug repositioning strategies. Integration of translational bioinformatics resources, statistical methods, chemoinformatics tools and experimental techniques (including medicinal chemistry techniques) can enable the rapid application of drug repositioning on an increasingly broad scale. Efficient tools are now available for systematic drug-repositioning methods using large repositories of compounds with biological activities. Medicinal chemists along with other translational researchers can play a key role in various aspects of drug repositioning. In this review article, we briefly summarize the history of drug repositioning, explain concepts behind drug repositioning methods, discuss recent computational and experimental advances and highlight available open access resources for effective drug repositioning investigations. We also discuss recent approaches in utilizing electronic health record for outcome assessment of drug repositioning and future avenues of drug repositioning in the light of targeting disease comorbidities, underserved patient communities, individualized medicine and socioeconomic impact.

  11. Advanced display object selection methods for enhancing user-computer productivity

    NASA Technical Reports Server (NTRS)

    Osga, Glenn A.

    1993-01-01

    The User-Interface Technology Branch at NCCOSC RDT&E Division has been conducting a series of studies to address the suitability of commercial off-the-shelf (COTS) graphic user-interface (GUI) methods for efficiency and performance in critical naval combat systems. This paper presents an advanced selection algorithm and method developed to increase user performance when making selections on tactical displays. The method has also been applied with considerable success to a variety of cursor and pointing tasks. Typical GUIs allow user selection by: (1) moving a cursor with a pointing device such as a mouse, trackball, joystick, or touchscreen; and (2) placing the cursor on the object. Examples of GUI objects are the buttons, icons, folders, scroll bars, etc. used in many personal computer and workstation applications. This paper presents an improved method of selection and the theoretical basis for the significant performance gains achieved with various input devices tested. The method is applicable to all GUI styles and display sizes, and is particularly useful for selections on small screens such as notebook computers. Considering the amount of work-hours spent pointing and clicking across all styles of available graphic user-interfaces, the cost/benefit of applying this method to graphic user-interfaces is substantial, with the potential for increasing productivity across thousands of users and applications.
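
    The paper's specific algorithm is not reproduced in this abstract, but the family of techniques it belongs to can be illustrated: instead of requiring the cursor to land exactly inside an object's bounds, select the nearest object within a capture radius. The sketch below is a generic, hypothetical example of that idea, not the NCCOSC implementation; the object names and radius are invented for illustration.

      from math import hypot

      def select_target(cursor, objects, max_radius=80.0):
          """Pick the selectable object whose center is nearest the cursor,
          within a capture radius, rather than requiring a direct hit."""
          best, best_d = None, float("inf")
          for obj in objects:
              d = hypot(obj["x"] - cursor[0], obj["y"] - cursor[1])
              if d < best_d:
                  best, best_d = obj, d
          return best if best_d <= max_radius else None

      icons = [{"id": "radar", "x": 120, "y": 45},
               {"id": "track", "x": 300, "y": 210}]
      print(select_target((110, 60), icons))   # selects the "radar" icon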

  12. Advances in the biomedical applications of the EELA Project.

    PubMed

    Hernández, Vicente; Blanquer, Ignacio; Aparicio, Gabriel; Isea, Raúl; Chaves, Juan Luis; Hernández, Alvaro; Mora, Henry Ricardo; Fernández, Manuel; Acero, Alicia; Montes, Esther; Mayo, Rafael

    2007-01-01

    In recent years, an increasing demand for Grid Infrastructures has resulted in several international collaborations. This is the case for the EELA Project, which has brought together collaborating groups from Latin America and Europe. One year ago we presented this e-infrastructure, used, among others, by biomedical groups for studies of oncological analysis, neglected diseases, sequence alignments and computational phylogenetics. After this period, the advances achieved are summarised in this paper.

  13. Current advances in molecular, biochemical, and computational modeling analysis of microalgal triacylglycerol biosynthesis.

    PubMed

    Lenka, Sangram K; Carbonaro, Nicole; Park, Rudolph; Miller, Stephen M; Thorpe, Ian; Li, Yantao

    2016-01-01

    Triacylglycerols (TAGs) are highly reduced energy storage molecules ideal for biodiesel production. Microalgal TAG biosynthesis has been studied extensively in recent years, both at the molecular level and systems level through experimental studies and computational modeling. However, discussions of the strategies and products of the experimental and modeling approaches are rarely integrated and summarized together in a way that promotes collaboration among modelers and biologists in this field. In this review, we outline advances toward understanding the cellular and molecular factors regulating TAG biosynthesis in unicellular microalgae with an emphasis on recent studies on rate-limiting steps in fatty acid and TAG synthesis, while also highlighting new insights obtained from the integration of multi-omics datasets with mathematical models. Computational methodologies such as kinetic modeling, metabolic flux analysis, and new variants of flux balance analysis are explained in detail. We discuss how these methods have been used to simulate algae growth and lipid metabolism in response to changing culture conditions and how they have been used in conjunction with experimental validations. Since emerging evidence indicates that TAG synthesis in microalgae operates through coordinated crosstalk between multiple pathways in diverse subcellular destinations including the endoplasmic reticulum and plastids, we discuss new experimental studies and models that incorporate these findings for discovering key regulatory checkpoints. Finally, we describe tools for genetic manipulation of microalgae and their potential for future rational algal strain design. This comprehensive review explores the potential synergistic impact of pathway analysis, computational approaches, and molecular genetic manipulation strategies on improving TAG production in microalgae. PMID:27321475

  15. Advanced Decentralized Water/Energy Network Design for Sustainable Infrastructure presentation at the 2012 National Association of Home Builders (NAHB) International Builders' Show

    EPA Science Inventory

    A university/industry panel will report on the progress and findings of a five-year project funded by the US Environmental Protection Agency. The project's end product will be a Web-based, 3D computer-simulated residential environment with a decision support system to assist...

  16. Parallel digital forensics infrastructure.

    SciTech Connect

    Liebrock, Lorie M.; Duggan, David Patrick

    2009-10-01

    This report documents the architecture and implementation of a Parallel Digital Forensics infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets expected to keep increasing significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics.
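
    The report's tools are not reproduced here, but a typical building block of parallel forensics pipelines is distributing per-file work, such as cryptographic hashing, across cores. The following is an assumed minimal sketch of that pattern, not project code:

      import hashlib
      from multiprocessing import Pool
      from pathlib import Path

      def sha256_of(path):
          """Hash one file in 1 MiB chunks so large evidence files stream."""
          h = hashlib.sha256()
          with open(path, "rb") as f:
              for chunk in iter(lambda: f.read(1 << 20), b""):
                  h.update(chunk)
          return str(path), h.hexdigest()

      if __name__ == "__main__":
          # "evidence" is a placeholder directory name.
          files = [p for p in Path("evidence").rglob("*") if p.is_file()]
          with Pool() as pool:                  # one worker per core
              for name, digest in pool.imap_unordered(sha256_of, files):
                  print(digest, name)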

  17. JINR cloud infrastructure evolution

    NASA Astrophysics Data System (ADS)

    Baranov, A. V.; Balashov, N. A.; Kutovskiy, N. A.; Semenov, R. N.

    2016-09-01

    To fulfil JINR's commitments in various national and international projects involving modern information technologies such as cloud and grid computing, and to provide JINR users with a modern tool for their scientific research, a cloud infrastructure was deployed at the Laboratory of Information Technologies of the Joint Institute for Nuclear Research. OpenNebula was chosen as the cloud platform. Initially it was set up in a simple configuration with a single front-end host and a few cloud nodes. Some custom development was done to tune the JINR cloud installation to local needs: a web form in the cloud web interface for resource requests, a menu item with cloud utilization statistics, user authentication via Kerberos, and a custom driver for OpenVZ containers. Because of high demand for the cloud service and over-utilization of its resources, it was re-designed to cover increasing user needs in capacity, availability, and reliability. Recently a new cloud instance has been deployed in a high-availability configuration with a distributed network file system and additional computing power.

  18. Energy Transmission and Infrastructure

    SciTech Connect

    Mathison, Jane

    2012-12-31

    The objective of Energy Transmission and Infrastructure Northern Ohio (OH) was to lay the conceptual and analytical foundation for an energy economy in northern Ohio that will:
    • improve the efficiency with which energy is used in the residential, commercial, industrial, agricultural, and transportation sectors for Oberlin, Ohio, as a district-wide model for Congressional District OH-09;
    • identify the potential to deploy wind and solar technologies and the most effective configuration for the regional energy system (i.e., the ratio of distributed to centralized power generation);
    • analyze the potential within the district to utilize farm wastes to produce biofuels;
    • enhance long-term energy security by identifying ways to deploy local resources and build Ohio-based enterprises;
    • identify the policy, regulatory, and financial barriers impeding development of a new energy system; and
    • improve energy infrastructure within Congressional District OH-09.
    This objective of laying the foundation for a renewable energy system in Ohio was achieved through four primary areas of activity: (1) district-wide energy infrastructure assessments and alternative-energy transmission studies; (2) energy infrastructure improvement projects undertaken by American Municipal Power (AMP) affiliates in the northern Ohio communities of Elmore, Oak Harbor, and Wellington; (3) Oberlin, OH-area energy assessment initiatives; and (4) a district-wide conference held in September 2011 to disseminate year-one findings. The grant supported 17 research studies by leading energy, policy, and financial specialists, including studies on: current energy use in the district and the Oberlin area; regional potential for energy generation from renewable sources such as solar power, wind, and farm waste; and energy and transportation strategies for transitioning the City of Oberlin entirely to renewable resources, considering pedestrians, bicyclists, and public transportation as well as drivers.

  19. Exploratory Mixed-Method Study of End-User Computing within an Information Technology Infrastructure Library U.S. Army Service Delivery Environment

    ERIC Educational Resources Information Center

    Manzano, Sancho J., Jr.

    2012-01-01

    Empirical studies of what is known as end-user computing have been conducted from as early as the 1980s through to present-day IT employees. Many studies have used quantitative instruments, such as those by Cotterman and Kumar (1989) and Rockart and Flannery (1983). Qualitative studies of end-user computing classifications have been conducted by…

  20. Guest Editorial Introduction to the Special Issue on 'Advanced Signal Processing Techniques and Telecommunications Network Infrastructures for Smart Grid Analysis, Monitoring, and Management'

    SciTech Connect

    Bracale, Antonio; Barros, Julio; Cacciapuoti, Angela Sara; Chang, Gary; Dall'Anese, Emiliano

    2015-06-10

    Electrical power systems are undergoing a radical change in structure, components, and operational paradigms, and are progressively approaching the new concept of smart grids (SGs). Future power distribution systems will be characterized by the simultaneous presence of various distributed resources, such as renewable energy systems (e.g., photovoltaic plants and wind farms), storage systems, and controllable/non-controllable loads. Control and optimization architectures will enable network-wide coordination of these grid components in order to improve system efficiency and reliability and to limit greenhouse gas emissions. In this context, energy flows will be bidirectional, from large power plants to end users and vice versa; producers and consumers will continuously interact at different voltage levels to determine load requests in advance and to adapt the production of and demand for electricity flexibly and efficiently, also taking into account the presence of storage systems.

  2. An infrastructure for the creation of high end scientific and engineering software tools and applications

    SciTech Connect

    Drummond, L.A.; Marques, O.A.; Wilson, G.V.

    2003-04-01

    This document has been prepared as a response to the High End Computing Revitalization Task Force (HECRTF) call for white papers. Our main goal is to identify the mechanisms necessary for the design and implementation of an infrastructure to support development of high-end scientific and engineering software tools and applications. This infrastructure will provide a plethora of software services to facilitate efficient deployment of future HEC technology as well as collaborations among researchers and engineers across disciplines and institutions. In particular, we address here the following points: key software technologies that must be advanced to strengthen the foundation for developing new generations of HEC systems, and a software infrastructure for minimizing "time to solution" by users of HEC systems.

  3. Temporality Matters: Advancing a Method for Analyzing Problem-Solving Processes in a Computer-Supported Collaborative Environment

    ERIC Educational Resources Information Center

    Kapur, Manu

    2011-01-01

    This paper argues for a need to develop methods for examining temporal patterns in computer-supported collaborative learning (CSCL) groups. It advances one such quantitative method--Lag-sequential Analysis (LsA)--and instantiates it in a study of problem-solving interactions of collaborative groups in an online, synchronous environment. LsA…

  4. Using an Advanced Computational Laboratory Experiment to Extend and Deepen Physical Chemistry Students' Understanding of Atomic Structure

    ERIC Educational Resources Information Center

    Hoffman, Gary G.

    2015-01-01

    A computational laboratory experiment is described, which involves the advanced study of an atomic system. The students use concepts and techniques typically covered in a physical chemistry course but extend those concepts and techniques to more complex situations. The students get a chance to explore the study of atomic states and perform…

  5. Computer experiments on periodic systems identification using rotor blade transient flapping-torsion responses at high advance ratio

    NASA Technical Reports Server (NTRS)

    Hohenemser, K. H.; Prelewicz, D. A.

    1974-01-01

    Systems identification methods have recently been applied to rotorcraft to estimate stability derivatives from transient flight-control response data. While these applications assumed a linear, constant-coefficient representation of the rotorcraft, the computer experiments described in this paper used transient responses in flap-bending and torsion of a rotor blade at high advance ratio, which constitutes a rapidly time-varying periodic system.

  6. ISAAC: An Introduction to IBM's Information System for Advanced Academic Computing at the University of Washington-Seattle.

    ERIC Educational Resources Information Center

    Hernandez, Nicolas, Jr.

    1988-01-01

    Traces the origin of ISAAC (Information System for Advanced Academic Computing) and the development of a languages and linguistics "room" at the University of Washington-Seattle. ISAAC, a free, valuable resource, consists of two databases and an electronic bulletin board spanning broad areas of pedagogical and research fields. (Author/CB)

  7. Cloud Infrastructure & Applications - CloudIA

    NASA Astrophysics Data System (ADS)

    Sulistio, Anthony; Reich, Christoph; Doelitzscher, Frank

    The idea behind Cloud Computing is to deliver Infrastructure-as-a-Service and Software-as-a-Service over the Internet on an easy pay-per-use business model. To harness the potential of Cloud Computing for e-Learning and research purposes, and for small- and medium-sized enterprises, the Hochschule Furtwangen University established a new project called Cloud Infrastructure & Applications (CloudIA). The CloudIA project is a market-oriented cloud infrastructure that leverages different virtualization technologies by supporting Service-Level Agreements for various service offerings. This paper describes the CloudIA project in detail and reports our early experiences in building a private cloud using an existing infrastructure.

  8. Improved computational neutronics methods and validation protocols for the advanced test reactor

    SciTech Connect

    Nigg, D. W.; Nielsen, J. W.; Chase, B. M.; Murray, R. K.; Steuhm, K. A.; Unruh, T.

    2012-07-01

    The Idaho National Laboratory (INL) is in the process of updating the various reactor physics modeling and simulation tools used to support operation and safety assurance of the Advanced Test Reactor (ATR). Key accomplishments so far have encompassed both computational and experimental work. A new suite of stochastic and deterministic transport theory based reactor physics codes and their supporting nuclear data libraries (HELIOS, KENO6/SCALE, NEWT/SCALE, ATTILA, and an extended implementation of MCNP5) has been installed at the INL. Corresponding models of the ATR and ATRC are now operational with all five codes, demonstrating the basic feasibility of the new code packages for their intended purposes. On the experimental side of the project, new hardware was fabricated, measurement protocols were finalized, and the first four of six planned physics code validation experiments based on neutron activation spectrometry have been conducted at the ATRC facility. Data analysis for the first three experiments, focused on characterization of the neutron spectrum in one of the ATR flux traps, has been completed. The six experiments will ultimately form the basis for flexible and repeatable ATR physics code validation protocols that are consistent with applicable national standards.
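
    For context (not from the report), neutron activation spectrometry infers the spectrum from measured reaction rates, each a group-wise product of cross section and flux; the three-group numbers below are illustrative assumptions:

      import numpy as np

      # Three-group toy spectrum (n/cm^2/s) and activation cross sections (barns).
      flux = np.array([1.0e13, 5.0e12, 2.0e12])    # thermal, epithermal, fast
      sigma_barns = np.array([98.65, 17.0, 0.5])   # assumed Au-197(n,g)-like values
      sigma_cm2 = sigma_barns * 1.0e-24

      # Per-atom reaction rate: R = sum_g sigma_g * phi_g  (reactions/s/atom)
      R = float(np.dot(sigma_cm2, flux))
      print(f"reaction rate per atom: {R:.3e} 1/s")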

  9. Study of flutter related computational procedures for minimum weight structural sizing of advanced aircraft

    NASA Technical Reports Server (NTRS)

    Oconnell, R. F.; Hassig, H. J.; Radovcich, N. A.

    1976-01-01

    Results of a study of the development of flutter modules applicable to automated structural design of advanced aircraft configurations, such as a supersonic transport, are presented. Automated structural design is restricted to automated sizing of the elements of a given structural model. It includes a flutter optimization procedure; i.e., a procedure for arriving at a structure with minimum mass for satisfying flutter constraints. Methods of solving the flutter equation and computing the generalized aerodynamic force coefficients in the repetitive analysis environment of a flutter optimization procedure are studied, and recommended approaches are presented. Five approaches to flutter optimization are explained in detail and compared. An approach to flutter optimization incorporating some of the methods discussed is presented. Problems related to flutter optimization in a realistic design environment are discussed and an integrated approach to the entire flutter task is presented. Recommendations for further investigations are made. Results of numerical evaluations, applying the five methods of flutter optimization to the same design task, are presented.
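
    As an aside not drawn from the report, the flutter mechanism underlying such optimization can be seen in a two-mode toy system whose effective stiffness acquires an antisymmetric aerodynamic coupling that grows with dynamic pressure, driving frequency coalescence; all matrices and values below are assumed:

      import numpy as np

      K = np.diag([1.0, 4.0])                   # modal stiffness (mass-normalized)

      def eigs_at(q):
          """Eigenvalues of K + q*A, with A an antisymmetric aero coupling."""
          A = np.array([[0.0, -1.0], [1.0, 0.0]])
          return np.linalg.eigvals(K + q * A)

      for q in np.linspace(0.0, 2.0, 9):        # sweep dynamic-pressure parameter
          lam = eigs_at(q)
          status = "flutter" if np.iscomplex(lam).any() else "stable"
          print(f"q = {q:4.2f}  omega^2 = {np.round(lam, 3)}  {status}")
      # The two frequencies coalesce and turn complex just past q = 1.5,
      # the flutter onset for this toy model.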

  10. IMPROVED COMPUTATIONAL NEUTRONICS METHODS AND VALIDATION PROTOCOLS FOR THE ADVANCED TEST REACTOR

    SciTech Connect

    David W. Nigg; Joseph W. Nielsen; Benjamin M. Chase; Ronnie K. Murray; Kevin A. Steuhm

    2012-04-01

    The Idaho National Laboratory (INL) is in the process of modernizing the various reactor physics modeling and simulation tools used to support operation and safety assurance of the Advanced Test Reactor (ATR). Key accomplishments so far have encompassed both computational as well as experimental work. A new suite of stochastic and deterministic transport theory based reactor physics codes and their supporting nuclear data libraries (HELIOS, KENO6/SCALE, NEWT/SCALE, ATTILA, and an extended implementation of MCNP5) has been installed at the INL. Corresponding models of the ATR and ATRC are now operational with all five codes, demonstrating the basic feasibility of the new code packages for their intended purpose. Of particular importance, a set of as-run core depletion HELIOS calculations for all ATR cycles since August 2009 was successfully completed during 2011. This demonstration supported a decision late in the year to proceed with the phased incorporation of the HELIOS methodology into the ATR fuel cycle management process beginning in 2012. On the experimental side of the project, new hardware was fabricated, measurement protocols were finalized, and the first four of six planned physics code validation experiments based on neutron activation spectrometry were conducted at the ATRC facility. Data analysis for the first three experiments, focused on characterization of the neutron spectrum in one of the ATR flux traps, has been completed. The six experiments will ultimately form the basis for a flexible, easily-repeatable ATR physics code validation protocol that is consistent with applicable ASTM standards.

  12. Evaluation of Computed Tomography of Mock Uranium Fuel Rods at the Advanced Photon Source

    SciTech Connect

    Hunter, James F.; Brown, Donald William; Okuniewski, Maria

    2015-06-01

    This study discusses a multi-year effort to evaluate the utility of computed tomography at the Advanced Photon Source (APS) as a tool for non-destructive evaluation of uranium-based fuel rods. The majority of the data presented is on mock material made with depleted uranium, which mimics the x-ray attenuation characteristics of fuel rods while allowing for simpler handling. A range of data is presented, including full-thickness (5 mm diameter) fuel rodlets, reduced-thickness (1.8 mm) sintering test samples, and pre/post-irradiation samples (<1 mm thick). These data were taken on both a white-beam (bending magnet) beamline and a high-energy, monochromatic beamline. The data show the utility of a synchrotron-type source in the evaluation of manufacturing defects (pre-irradiation) and lay out the case for in situ CT of fuel pellet sintering. Finally, data are also shown from small post-irradiation samples and a case is made for post-irradiation CT of larger samples.

  13. Observations on computational methodologies for use in large-scale, gradient-based, multidisciplinary design incorporating advanced CFD codes

    NASA Technical Reports Server (NTRS)

    Newman, P. A.; Hou, G. J.-W.; Jones, H. E.; Taylor, A. C., III; Korivi, V. M.

    1992-01-01

    This paper briefly outlines how a combination of various computational methodologies could reduce the enormous computational costs envisioned in using advanced CFD codes in gradient-based multidisciplinary design (MdD) optimization procedures. The implications of these MdD requirements for advanced CFD codes are somewhat different from those imposed by single-discipline design. A means for satisfying these MdD requirements for gradient information is presented which appears to permit: (1) some leeway in the CFD solution algorithms which can be used; (2) an extension to 3-D problems; and (3) straightforward use of other computational methodologies. Many of these observations have previously been discussed as possibilities for doing parts of the problem more efficiently; the contribution here is observing how they fit together in a mutually beneficial way.
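
    For illustration (not the paper's method), gradient information for design codes is often obtained by finite differences, adjoint solvers, or the complex-step method; the sketch below demonstrates the complex-step trick, which avoids subtractive cancellation, on an assumed stand-in function rather than a CFD code:

      import numpy as np

      def f(x):
          """Stand-in for an expensive analysis output (drag, flutter speed, ...)."""
          return x * np.exp(x) / (np.cos(x) ** 3 + np.sin(x) ** 3)

      x0, h = 1.5, 1e-30
      dfdx_cs = np.imag(f(x0 + 1j * h)) / h          # complex step: no cancellation
      dfdx_fd = (f(x0 + 1e-6) - f(x0)) / 1e-6        # forward difference, for contrast
      print(f"complex step : {dfdx_cs:.12f}")
      print(f"finite diff  : {dfdx_fd:.12f}")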

  14. Distributed Data Integration Infrastructure

    SciTech Connect

    Critchlow, T; Ludaescher, B; Vouk, M; Pu, C

    2003-02-24

    The Internet is becoming the preferred method for disseminating scientific data from a variety of disciplines. This can result in information overload on the part of the scientists, who are unable to query all of the relevant sources, even if they knew where to find them, what they contained, how to interact with them, and how to interpret the results. A related issue is that keeping up with current trends in information technology often taxes the end-user's expertise and time. Thus, instead of benefiting from this information-rich environment, scientists become experts on a small number of sources and technologies, use them almost exclusively, and develop a resistance to innovations that could enhance their productivity. Enabling information-based scientific advances, in domains such as functional genomics, requires fully utilizing all available information and the latest technologies. In order to address this problem we are developing an end-user-centric, domain-sensitive, workflow-based infrastructure, shown in Figure 1, that will allow scientists to design complex scientific workflows that reflect the data manipulation required to perform their research without an undue burden. We are taking a three-tiered approach to designing this infrastructure, utilizing (1) abstract workflow definition, construction, and automatic deployment, (2) complex agent-based workflow execution, and (3) automatic wrapper generation. In order to construct a workflow, the scientist defines an abstract workflow (AWF) in terminology (semantics and context) that is familiar to him/her. This AWF includes all of the data transformations, selections, and analyses required by the scientist, but does not necessarily specify particular data sources. This abstract workflow is then compiled into an executable workflow (EWF, in our case XPDL) that is then evaluated and executed by the workflow engine. This EWF contains references to specific data sources and interfaces capable of performing the desired
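
    The AWF-to-EWF compilation described above must, at minimum, order transformations by their data dependencies. The sketch below illustrates only that dependency-ordering step on an assumed toy task graph; XPDL generation and agent-based execution are omitted:

      from graphlib import TopologicalSorter

      # Abstract workflow: each task maps to the tasks it depends on.
      awf = {
          "fetch_sequences": set(),
          "filter_quality":  {"fetch_sequences"},
          "align":           {"filter_quality"},
          "annotate":        {"align", "fetch_sequences"},
      }

      # "Compile" to an executable order the engine could run step by step.
      ewf = list(TopologicalSorter(awf).static_order())
      print(ewf)  # one valid order:
                  # ['fetch_sequences', 'filter_quality', 'align', 'annotate']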

  15. Overview of NASA communications infrastructure

    NASA Technical Reports Server (NTRS)

    Arnold, Ray J.; Fuechsel, Charles

    1991-01-01

    The infrastructure of NASA communications systems for effecting coordination across NASA offices and with the national and international research and technological communities is discussed. The offices and networks of the communication system include the Office of Space Science and Applications (OSSA), which manages all NASA missions, and the Office of Space Operations, which furnishes communication support through the NASCOM, the mission critical communications support network, and the Program Support Communications network. The NASA Science Internet was established by OSSA to centrally manage, develop, and operate an integrated computer network service dedicated to NASA's space science and application research. Planned for the future is the National Research and Education Network, which will provide communications infrastructure to enhance science resources at a national level.

  16. A Scalable Tools Communication Infrastructure

    SciTech Connect

    Buntinas, Darius; Bosilca, George; Graham, Richard L; Vallee, Geoffroy R; Watson, Gregory R.

    2008-01-01

    The Scalable Tools Communication Infrastructure (STCI) is an open source collaborative effort intended to provide high-performance, scalable, resilient, and portable communications and process control services for a wide variety of user and system tools. STCI is aimed specifically at tools for ultrascale computing and uses a component architecture to simplify tailoring the infrastructure to a wide range of scenarios. This paper describes STCI's design philosophy, the various components that will be used to provide an STCI implementation for a range of ultrascale platforms, and a range of tool types. These include tools supporting parallel run-time environments, such as MPI, parallel application correctness tools and performance analysis tools, as well as system monitoring and management tools.

  17. Smart Valley Infrastructure.

    ERIC Educational Resources Information Center

    Maule, R. William

    1994-01-01

    Discusses prototype information infrastructure projects in northern California's Silicon Valley. The strategies of the public and private telecommunications carriers vying for backbone services and industries developing end-user infrastructure technologies via office networks, set-top box networks, Internet multimedia, and "smart homes" are…

  18. Green Infrastructure 101

    EPA Science Inventory

    Green Infrastructure 101
    • What is it? What does it do? What doesn’t it do?
    • Green Infrastructure as a stormwater and combined sewer control
    • GI controls and Best Management Practices that make sense for Yonkers (include operations and maintenance requirements for each)

  19. Infrastructure Survey 2011

    ERIC Educational Resources Information Center

    Group of Eight (NJ1), 2012

    2012-01-01

    In 2011, the Group of Eight (Go8) conducted a survey on the state of its buildings and infrastructure. The survey is the third Go8 Infrastructure survey, with previous surveys being conducted in 2007 and 2009. The current survey updated some of the information collected in the previous surveys. It also collated data related to aspects of the…

  20. Recent Advances and Issues in Computers. Oryx Frontiers of Science Series.

    ERIC Educational Resources Information Center

    Gay, Martin K.

    Discussing recent issues in computer science, this book contains 11 chapters covering: (1) developments that have the potential for changing the way computers operate, including microprocessors, mass storage systems, and computing environments; (2) the national computational grid for high-bandwidth, high-speed collaboration among scientists, and…

  1. New directions in scientific computing: impact of advances in microprocessor architecture and system design.

    PubMed

    Malyj, W; Smith, R E; Horowitz, J M

    1984-01-01

    The new generation of microcomputers has brought computing power previously restricted to mainframe and supermini computers within the reach of individual scientific laboratories. Microcomputers can now provide computing speeds rivaling mainframes and computational accuracies exceeding those available in most computer centers. Inexpensive memory makes possible the transfer to microcomputers of software packages developed for mainframes and tested by years of experience. Combinations of high level languages and assembler subroutines permit the efficient design of specialized applications programs. Microprocessor architecture is approaching that of superminis, with coprocessors providing major contributions to computing power. The combined result of these developments is a major and perhaps revolutionary increase in the computing power now available to scientists.

  2. ENHANCING THE ATOMIC-LEVEL UNDERSTANDING OF CO2 MINERAL SEQUESTRATION MECHANISMS VIA ADVANCED COMPUTATIONAL MODELING

    SciTech Connect

    A.V.G. Chizmeshya

    2003-12-19

    /NETL managed National Mineral Sequestration Working Group we have already significantly improved our understanding of mineral carbonation. Group members at the Albany Research Center have recently shown that carbonation of olivine and serpentine, which naturally occurs over geological time (i.e., 100,000s of years), can be accelerated to near completion in hours. Further process refinement will require a synergetic science/engineering approach that emphasizes simultaneous investigation of both thermodynamic processes and the detailed microscopic, atomic-level mechanisms that govern carbonation kinetics. Our previously funded Phase I Innovative Concepts project demonstrated the value of advanced quantum-mechanical modeling as a complementary tool in bridging important gaps in our understanding of the atomic/molecular structure and reaction mechanisms that govern CO{sub 2} mineral sequestration reaction processes for the model Mg-rich lamellar hydroxide feedstock material Mg(OH){sub 2}. In the present simulation project, improved techniques and more efficient computational schemes have allowed us to expand and augment these capabilities and explore more complex Mg-rich, lamellar hydroxide-based feedstock materials, including the serpentine-based minerals. These feedstock materials are being actively investigated due to their wide availability, and low-cost CO{sub 2} mineral sequestration potential. Cutting-edge first principles quantum chemical, computational solid-state and materials simulation methodology studies proposed herein, have been strategically integrated with our new DOE supported (ASU-Argonne National Laboratory) project to investigate the mechanisms that govern mineral feedstock heat-treatment and aqueous/fluid-phase serpentine mineral carbonation in situ. This unified, synergetic theoretical and experimental approach will provide a deeper understanding of the key reaction mechanisms than either individual approach can alone. Ab initio techniques will also

  3. ENHANCING THE ATOMIC-LEVEL UNDERSTANDING OF CO2 MINERAL SEQUESTRATION MECHANISMS VIA ADVANCED COMPUTATIONAL MODELING

    SciTech Connect

    A.V.G. Chizmeshya; M.J. McKelvy; G.H. Wolf; R.W. Carpenter; D.A. Gormley; J.R. Diefenbacher; R. Marzke

    2006-03-01

    significantly improved our understanding of mineral carbonation. Group members at the Albany Research Center have recently shown that carbonation of olivine and serpentine, which naturally occurs over geological time (i.e., 100,000s of years), can be accelerated to near completion in hours. Further process refinement will require a synergetic science/engineering approach that emphasizes simultaneous investigation of both thermodynamic processes and the detailed microscopic, atomic-level mechanisms that govern carbonation kinetics. Our previously funded Phase I Innovative Concepts project demonstrated the value of advanced quantum-mechanical modeling as a complementary tool in bridging important gaps in our understanding of the atomic/molecular structure and reaction mechanisms that govern CO2 mineral sequestration reaction processes for the model Mg-rich lamellar hydroxide feedstock material Mg(OH)2. In the present simulation project, improved techniques and more efficient computational schemes have allowed us to expand and augment these capabilities and explore more complex Mg-rich, lamellar hydroxide-based feedstock materials, including the serpentine-based minerals. These feedstock materials are being actively investigated due to their wide availability, and low-cost CO2 mineral sequestration potential. Cutting-edge first principles quantum chemical, computational solid-state and materials simulation methodology studies proposed herein, have been strategically integrated with our new DOE supported (ASU-Argonne National Laboratory) project to investigate the mechanisms that govern mineral feedstock heat-treatment and aqueous/fluid-phase serpentine mineral carbonation in situ. This unified, synergetic theoretical and experimental approach has provided a deeper understanding of the key reaction mechanisms than either individual approach can alone. We used ab initio techniques to significantly advance our understanding of atomic-level processes at the solid/solution interface by

  5. In-Service Design & Performance Prediction of Advanced Fusion Material Systems by Computational Modeling and Simulation

    SciTech Connect

    G. R. Odette; G. E. Lucas

    2005-11-15

    This final report on "In-Service Design & Performance Prediction of Advanced Fusion Material Systems by Computational Modeling and Simulation" (DE-FG03-01ER54632) consists of a series of summaries of work that has been published, presented at meetings, or both. It briefly describes results on the following topics: 1) A Transport and Fate Model for Helium and Helium Management; 2) Atomistic Studies of Point Defect Energetics, Dynamics and Interactions; 3) Multiscale Modeling of Fracture, consisting of: 3a) A Micromechanical Model of the Master Curve (MC) Universal Fracture Toughness-Temperature Curve Relation, KJc(T - To), 3b) An Embrittlement ΔTo Prediction Model for the Irradiation Hardening Dominated Regime, 3c) Non-hardening Irradiation Assisted Thermal and Helium Embrittlement of 8Cr Tempered Martensitic Steels: Compilation and Analysis of Existing Data, 3d) A Model for the KJc(T) of a High Strength NFA MA957, 3e) Cracked Body Size and Geometry Effects of Measured and Effective Fracture Toughness-Model Based MC and To Evaluations of F82H and Eurofer 97, 3f) Size and Geometry Effects on the Effective Toughness of Cracked Fusion Structures; 4) Modeling the Multiscale Mechanics of Flow Localization-Ductility Loss in Irradiation Damaged BCC Alloys; and 5) A Universal Relation Between Indentation Hardness and True Stress-Strain Constitutive Behavior. Further details can be found in the cited references or presentations, which generally can be accessed on the internet or provided upon request to the authors. Finally, it is noted that this effort was integrated with our base program in fusion materials, also funded by the DOE OFES.
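
    For context on topic 3a (as a general illustration, not the report's model), the ASTM E1921 master curve expresses median fracture toughness as a universal function of T - To; the steel's To below is an assumed value:

      import numpy as np

      def kjc_median(T, T0):
          """Median master-curve toughness, MPa*sqrt(m); T, T0 in deg C (ASTM E1921)."""
          return 30.0 + 70.0 * np.exp(0.019 * (T - T0))

      T0 = -100.0   # assumed reference temperature for an RPV-type steel
      for T in (-150.0, -100.0, -50.0):
          print(f"T = {T:6.1f} C  ->  KJc(med) = {kjc_median(T, T0):6.1f} MPa*sqrt(m)")
      # At T = T0 the median toughness is 30 + 70 = 100 MPa*sqrt(m) by construction.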

  6. Integrated Graphics Operations and Analysis Lab Development of Advanced Computer Graphics Algorithms

    NASA Technical Reports Server (NTRS)

    Wheaton, Ira M.

    2011-01-01

    The focus of this project was to aid the IGOAL in researching and implementing algorithms for advanced computer graphics. First, the project focused on porting the current International Space Station (ISS) Xbox experience to the web. Previously, the ISS interior fly-around education and outreach experience ran only on an Xbox 360. One of the desires was to turn this experience into something that could be put on NASA's educational site for anyone to access. The current code works in the Unity game engine, which has cross-platform capability but is not 100% compatible. The tasks for an intern to complete this portion consisted of gaining familiarity with Unity and the current ISS Xbox code, porting the Xbox code to the web as is, and modifying the code to work well as a web application. In addition, a procedurally generated cloud algorithm was to be developed. Currently, the clouds used in AGEA animations and the Xbox experiences are a texture map. The desire is to create a procedurally generated cloud algorithm to provide dynamically generated clouds for both AGEA animations and the Xbox experiences. This task consists of gaining familiarity with AGEA and the plug-in interface, developing the algorithm, creating an AGEA plug-in to implement the algorithm inside AGEA, and creating a Unity script to implement the algorithm for the Xbox. This portion of the project could not be completed in the time frame of the internship; however, the IGOAL will continue to work on it in the future.
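
    Procedural cloud generation of the kind described is commonly built from multi-octave (fractal) value noise; the following is an assumed minimal sketch of that standard technique, not the IGOAL plug-in code:

      import numpy as np

      def value_noise(rng, size, cells):
          """Bilinearly interpolate a coarse random grid up to size x size."""
          g = rng.random((cells + 1, cells + 1))
          xs = np.linspace(0, cells, size, endpoint=False)
          i = np.floor(xs).astype(int)
          t = xs - i
          t = t * t * (3 - 2 * t)                      # smoothstep fade
          tl = g[np.ix_(i, i)];     tr = g[np.ix_(i, i + 1)]
          bl = g[np.ix_(i + 1, i)]; br = g[np.ix_(i + 1, i + 1)]
          top = tl * (1 - t)[None, :] + tr * t[None, :]
          bot = bl * (1 - t)[None, :] + br * t[None, :]
          return top * (1 - t)[:, None] + bot * t[:, None]

      def cloud_field(size=256, octaves=4, persistence=0.5, seed=7):
          """Sum octaves of value noise, then threshold into a cloud density."""
          rng = np.random.default_rng(seed)
          field = np.zeros((size, size))
          amp, cells, total = 1.0, 4, 0.0
          for _ in range(octaves):
              field += amp * value_noise(rng, size, cells)
              total += amp
              amp *= persistence
              cells *= 2
          field /= total
          return np.clip((field - 0.45) * 3.0, 0.0, 1.0)

      density = cloud_field()
      print(density.shape, float(density.min()), float(density.max()))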

  7. Development and Validation of a Fast, Accurate and Cost-Effective Aeroservoelastic Method on Advanced Parallel Computing Systems

    NASA Technical Reports Server (NTRS)

    Goodwin, Sabine A.; Raj, P.

    1999-01-01

    Progress to date towards the development and validation of a fast, accurate and cost-effective aeroelastic method for advanced parallel computing platforms such as the IBM SP2 and the SGI Origin 2000 is presented in this paper. The ENSAERO code, developed at the NASA-Ames Research Center has been selected for this effort. The code allows for the computation of aeroelastic responses by simultaneously integrating the Euler or Navier-Stokes equations and the modal structural equations of motion. To assess the computational performance and accuracy of the ENSAERO code, this paper reports the results of the Navier-Stokes simulations of the transonic flow over a flexible aeroelastic wing body configuration. In addition, a forced harmonic oscillation analysis in the frequency domain and an analysis in the time domain are done on a wing undergoing a rigid pitch and plunge motion. Finally, to demonstrate the ENSAERO flutter-analysis capability, aeroelastic Euler and Navier-Stokes computations on an L-1011 wind tunnel model including pylon, nacelle and empennage are underway. All computational solutions are compared with experimental data to assess the level of accuracy of ENSAERO. As the computations described above are performed, a meticulous log of computational performance in terms of wall clock time, execution speed, memory and disk storage is kept. Code scalability is also demonstrated by studying the impact of varying the number of processors on computational performance on the IBM SP2 and the Origin 2000 systems.
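
    Scalability logs like those described are conventionally summarized as speedup and parallel efficiency versus processor count; the helper below computes both from wall-clock timings (the timing numbers are invented placeholders):

      def scalability(timings):
          """timings: {processor_count: wall_clock_seconds} for a fixed-size run."""
          base_p = min(timings)
          base_t = timings[base_p]
          for p in sorted(timings):
              speedup = base_t / timings[p]
              efficiency = speedup * base_p / p
              print(f"p = {p:4d}  speedup = {speedup:6.2f}  efficiency = {efficiency:.0%}")

      # Hypothetical wall-clock times for an aeroelastic response computation:
      scalability({1: 3600.0, 8: 520.0, 32: 160.0, 128: 55.0})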

  8. Large-scale virtual high-throughput screening for the identification of new battery electrolyte solvents: computing infrastructure and collective properties.

    PubMed

    Husch, Tamara; Yilmazer, Nusret Duygu; Balducci, Andrea; Korth, Martin

    2015-02-01

    A volunteer computing approach is presented for the purpose of screening a large number of molecular structures with respect to their suitability as new battery electrolyte solvents. Collective properties like melting, boiling and flash points are evaluated using COSMOtherm and quantitative structure-property relationship (QSPR) based methods, while electronic structure theory methods are used for the computation of electrochemical stability window estimators. Two application examples are presented: first, the results of a previous large-scale screening test (PCCP, 2014, 16, 7919) are re-evaluated with respect to the mentioned collective properties. As a second application example, all reasonable nitrile solvents up to 12 heavy atoms are generated and used to illustrate a suitable filter protocol for picking Pareto-optimal candidates.
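
    A Pareto-optimal filter of the kind mentioned keeps only candidates that no other candidate beats in every property simultaneously; the sketch below applies such a non-dominated filter to assumed (flash point, stability window) pairs, both treated as to-be-maximized:

      def pareto_front(candidates):
          """Keep candidates not dominated in all (to-be-maximized) objectives."""
          front = []
          for name, scores in candidates.items():
              dominated = any(
                  all(o >= s for o, s in zip(other, scores)) and other != scores
                  for other_name, other in candidates.items() if other_name != name
              )
              if not dominated:
                  front.append(name)
          return front

      # (flash point in deg C, electrochemical window in V) -- invented numbers
      solvents = {
          "nitrile_A": (95.0, 5.8),
          "nitrile_B": (120.0, 5.1),
          "nitrile_C": (90.0, 5.5),   # dominated by nitrile_A
          "nitrile_D": (130.0, 4.9),
      }
      print(pareto_front(solvents))  # ['nitrile_A', 'nitrile_B', 'nitrile_D']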

  10. COOPEUS - connecting research infrastructures in environmental sciences

    NASA Astrophysics Data System (ADS)

    Koop-Jakobsen, Ketil; Waldmann, Christoph; Huber, Robert

    2015-04-01

    The COOPEUS project was initiated in 2012, bringing together 10 research infrastructures (RIs) in environmental sciences from the EU and US in order to improve the discovery, access, and use of environmental information and data across scientific disciplines and across geographical borders. The COOPEUS mission is to facilitate readily accessible research infrastructure data to advance our understanding of Earth systems through an international community-driven effort, by: bringing together both user communities and top-down directives to address evolving societal and scientific needs; removing technical, scientific, cultural, and geopolitical barriers to data use; and coordinating the flow, integrity, and preservation of information. A survey of data availability was conducted among the COOPEUS research infrastructures for the purpose of discovering impediments to open international and cross-disciplinary sharing of environmental data. The survey showed that the majority of data offered by the COOPEUS research infrastructures is available via the internet (>90%), but the accessibility of these data differs significantly among research infrastructures; only 45% offer open access to their data, whereas the remaining infrastructures restrict access in some form, e.g., they do not release raw or sensitive data, demand user registration, or require permission prior to the release of data. These rules and regulations are often installed as a form of standard practice, whereas formal data policies are lacking in 40% of the infrastructures, primarily in the EU. In order to improve this situation COOPEUS has installed a common data-sharing policy, agreed upon by all the COOPEUS research infrastructures. To investigate the existing opportunities for improving interoperability among environmental research infrastructures, COOPEUS explored the GEOSS common infrastructure (GCI) by holding a hands-on workshop. Through exercises directly registering resources

  11. Research infrastructure support to address ecosystem dynamics

    NASA Astrophysics Data System (ADS)

    Los, Wouter

    2014-05-01

    Predicting the response of ecosystems to climate change or human pressures is a challenge. Even understanding past or current processes is complicated as a result of the many interactions and feedbacks that occur within and between components of the system. This talk will present an example of current research on changes in landscape evolution, hydrology, soil biogeochemical processes, zoological food webs, and plant community succession, and how these affect feedbacks to components of the systems, including the climate system. Multiple observations, experiments, and simulations provide a wealth of data, but not necessarily understanding. Model development for the coupled processes on different spatial and temporal scales is sensitive to variations in data and to parameter change. Fast high-performance computing may help to visualize the effect of these changes and the potential stability (and reliability) of the models. This may then allow for iteration between data production and models, leading towards stable models that reduce uncertainty and improve the prediction of change. The role of research infrastructures becomes crucial in overcoming barriers to such research. Environmental infrastructures cover physical site facilities, dedicated instrumentation, and e-infrastructure. The LifeWatch infrastructure for biodiversity and ecosystem research will provide services for data integration, analysis, and modeling, but it has to cooperate intensively with the other kinds of infrastructures in order to support the iteration between data production and model computation. The cooperation in the ENVRI project (Common Operations of Environmental Research Infrastructures) is one of the initiatives fostering such multidisciplinary research.

  12. [Biobanks European infrastructure].

    PubMed

    Kinkorová, Judita; Topolčan, Ondřej

    2016-01-01

    Biobanks are structured repositories of human tissue samples linked to specific information. They became an integral part of personalized medicine in the new millennium. Within the European research area, biobanks have long been isolated, poorly coordinated, and weakly connected into networks. The European Commission supports the European infrastructure BBMRI-ERIC (Biobanking and Biomolecular Resources Research Infrastructure - European Research Infrastructure Consortium), a consortium of 54 members with more than 225 associated organizations, largely biobanks, from over 30 countries. The aim is to support biomedical research using stored samples. The Czech Republic is a member of the consortium through its national node, BBMRI_CZ, consisting of five partners.

  13. Proceedings of the topical meeting on advances in human factors research on man/computer interactions

    SciTech Connect

    Not Available

    1990-01-01

    This book discusses the following topics: expert systems and knowledge engineering (part 1); verification and validation of software; methods for modeling human/computer performance; man/computer interaction problems in producing procedures (parts 1 and 2); progress and problems with automation (parts 1 and 2); experience with electronic presentation of procedures (part 2); intelligent displays and monitors; modeling the user/computer interface; and computer-based human decision-making aids.

  14. Development of 3D multimedia with advanced computer animation tools for outreach activities related to Meteor Science and Meteoritics

    NASA Astrophysics Data System (ADS)

    Madiedo, J. M.

    2012-09-01

    Documentaries related to Astronomy and Planetary Sciences are a common and very attractive way to promote public interest in these areas. These educational tools can benefit from new advanced computer animation software and 3D technologies, which make such documentaries even more attractive. However, special care must be taken to guarantee that the information contained in them is rigorous and objective. In this sense, additional value is gained when the footage is produced by the researchers themselves. With this aim, a new documentary produced and directed by Prof. Madiedo has been developed. The documentary, which has been developed entirely by means of advanced computer animation tools, is dedicated to several aspects of Meteor Science and Meteoritics. The main features of this outreach and education initiative are presented here.

  15. Grand Challenges of Advanced Computing for Energy Innovation Report from the Workshop Held July 31-August 2, 2012

    SciTech Connect

    Larzelere, Alex R.; Ashby, Steven F.; Christensen, Dana C.; Crawford, Dona L.; Khaleel, Mohammad A.; John, Grosh; Stults, B. Ray; Lee, Steven L.; Hammond, Steven W.; Grover, Benjamin T.; Neely, Rob; Dudney, Lee Ann; Goldstein, Noah C.; Wells, Jack; Peltz, Jim

    2013-03-06

    On July 31-August 2 of 2012, the U.S. Department of Energy (DOE) held a workshop entitled Grand Challenges of Advanced Computing for Energy Innovation. This workshop built on three earlier workshops that clearly identified the potential for the Department and its national laboratories to enable energy innovation. The specific goal of the workshop was to identify the key challenges that the nation must overcome to apply the full benefit of taxpayer-funded advanced computing technologies to U.S. energy innovation in the ways that the country produces, moves, stores, and uses energy. Perhaps more importantly, the workshop also developed a set of recommendations to help the Department overcome those challenges. These recommendations provide an action plan for what the Department can do in the coming years to improve the nation’s energy future.

  16. Plan to mitigate infrastructure development released

    NASA Astrophysics Data System (ADS)

    Wendel, JoAnna

    2014-04-01

    Secretary of the Interior Sally Jewell released a new strategy on 10 April 2014 to advance landscape-scale, science-based management of America's public lands and wildlife. The strategy will implement mitigation policies and practices at the Department of the Interior that can encourage infrastructure development while protecting natural and cultural resources.

  17. Advanced Placement Computer Science (with Pascal). Teacher's Guide. Volume 1. Second Edition.

    ERIC Educational Resources Information Center

    Farkouh, Alice; And Others

    The purpose of this guide is to give teachers and supervisors a working knowledge of various approaches to enhancing pupil learning about computer science, particularly through the use of Pascal. It contains instructional units dealing with: (1) computer components; (2) computer languages; (3) compilers; (4) essential features of a Pascal program;…

  18. The Computation Directorate at Lawrence Livermore National Laboratory

    SciTech Connect

    Cook, L

    2006-09-07

    The Computation Directorate at Lawrence Livermore National Laboratory has four major areas of work: (1) Programmatic Support -- programs are areas which receive funding to develop solutions to problems or advance basic science in their areas (Stockpile Stewardship, Homeland Security, the Human Genome Project); computer scientists are 'matrixed' to these programs to provide computer science support. (2) Livermore Computing Center (LCC) -- development, support, and advanced planning for the large, massively parallel computers, networks, and storage facilities used throughout the Laboratory. (3) Research -- computer scientists research advanced solutions for programmatic work and for external contracts, and research new HPC hardware solutions. (4) Infrastructure -- support for thousands of desktop computers and numerous LANs, lab-wide unclassified networks, computer security, and computer-use policy.

  19. An authentication infrastructure for today and tomorrow

    SciTech Connect

    Engert, D.E.

    1996-06-01

    The Open Software Foundation's Distributed Computing Environment (OSF/DCE) was originally designed to provide a secure environment for distributed applications. By combining it with Kerberos Version 5 from MIT, it can be extended to provide network security as well. This combination can be used to build both inter- and intra-organizational infrastructures while providing single sign-on for the user with overall improved security. The ESnet community of the Department of Energy is building just such an infrastructure. ESnet has modified these systems to improve their interoperability, while encouraging the developers to incorporate these changes and to work more closely together on continuing to improve interoperability. The success of this infrastructure depends on its flexibility to meet the needs of many applications and network security requirements. The open nature of Kerberos, combined with the vendor support of OSF/DCE, provides the infrastructure for today and tomorrow.

  20. A computer program for estimating the power-density spectrum of advanced continuous simulation language generated time histories

    NASA Technical Reports Server (NTRS)

    Dunn, H. J.

    1981-01-01

    A computer program for performing frequency analysis of time-history data is presented. The program uses circular convolution and the fast Fourier transform to calculate the power density spectrum (PDS) of time-history data. The program interfaces with the Advanced Continuous Simulation Language (ACSL) so that a frequency analysis may be performed on ACSL-generated simulation variables. An example of the calculation of the PDS of a Van der Pol oscillator is presented.
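
    The ACSL program itself is not available here, but the same PDS estimate is easy to reproduce; the sketch below, an assumed re-creation in Python, integrates a Van der Pol oscillator and computes its power density spectrum via the FFT:

      import numpy as np
      from scipy.integrate import solve_ivp

      # Van der Pol oscillator: x'' - mu*(1 - x^2)*x' + x = 0
      mu = 1.0
      def vdp(t, y):
          x, v = y
          return [v, mu * (1 - x**2) * v - x]

      dt, n = 0.05, 8192
      t = np.arange(n) * dt
      sol = solve_ivp(vdp, (t[0], t[-1]), [1.0, 0.0], t_eval=t, rtol=1e-8)
      x = sol.y[0]

      # One-sided power density spectrum via the FFT (windowed periodogram).
      X = np.fft.rfft(x * np.hanning(n))
      pds = np.abs(X) ** 2 / n
      freqs = np.fft.rfftfreq(n, d=dt)
      print(f"dominant frequency: {freqs[np.argmax(pds[1:]) + 1]:.4f} Hz")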

  1. Computational structural mechanics and fluid dynamics: Advances and trends; Proceedings of the Symposium, Washington, DC, Oct. 17-19, 1988

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Editor); Dwoyer, Douglas L. (Editor)

    1988-01-01

    Recent advances in computational structural and fluid dynamics are discussed in reviews and reports. Topics addressed include fluid-structure interaction and aeroelasticity, CFD techniques for reacting flows, micromechanics, stability and eigenproblems, probabilistic methods and chaotic dynamics, and perturbation and spectral methods. Consideration is given to finite-element, finite-volume, and boundary-element methods; adaptive methods; parallel processing machines and applications; and visualization, mesh generation, and AI interfaces.

  2. Advances in computer simulation of genome evolution: toward more realistic evolutionary genomics analysis by approximate bayesian computation.

    PubMed

    Arenas, Miguel

    2015-04-01

    NGS technologies enable fast and cheap generation of genomic data. Nevertheless, ancestral genome inference is not straightforward, owing to complex evolutionary processes acting on this material such as inversions, translocations, and other genome rearrangements that, in addition to their implicit complexity, can co-occur and confound ancestral inferences. Recently, models of genome evolution that accommodate such complex genomic events are emerging. This letter explores these novel evolutionary models and proposes their incorporation into robust statistical approaches based on computer simulations, such as approximate Bayesian computation, that may produce a more realistic evolutionary analysis of genomic data. Advantages and pitfalls in using these analytical methods are discussed. Potential applications of these ancestral genomic inferences are also pointed out.
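
    For readers unfamiliar with approximate Bayesian computation, a minimal rejection-sampling sketch follows; the Poisson simulator, uniform prior, and tolerance are placeholder assumptions, not a genome-evolution model.

        import numpy as np

        rng = np.random.default_rng(0)
        observed = rng.poisson(5.0, size=100)   # stand-in "observed" data
        s_obs = observed.mean()                 # summary statistic

        accepted = []
        for _ in range(20000):
            rate = rng.uniform(0.0, 20.0)       # draw a parameter from the prior
            s_sim = rng.poisson(rate, size=100).mean()
            if abs(s_sim - s_obs) < 0.1:        # keep draws whose simulations match
                accepted.append(rate)

        print("approximate posterior mean:", np.mean(accepted))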

  3. Application of advanced computational procedures for modeling solar-wind interactions with Venus: Theory and computer code

    NASA Technical Reports Server (NTRS)

    Stahara, S. S.; Klenke, D.; Trudinger, B. C.; Spreiter, J. R.

    1980-01-01

    Computational procedures are developed and applied to the prediction of solar wind interaction with nonmagnetic terrestrial planet atmospheres, with particular emphasis on Venus. The theoretical method is based on a single fluid, steady, dissipationless, magnetohydrodynamic continuum model, and is appropriate for the calculation of axisymmetric, supersonic, super-Alfvenic solar wind flow past terrestrial planets. The procedures, which consist of finite difference codes to determine the gasdynamic properties and a variety of special purpose codes to determine the frozen magnetic field, streamlines, contours, plots, etc. of the flow, are organized into one computational program. Theoretical results based upon these procedures are reported for a wide variety of solar wind conditions and ionopause obstacle shapes. Plasma and magnetic field comparisons in the ionosheath are also provided with actual spacecraft data obtained by the Pioneer Venus Orbiter.

  4. Recent technological advances in computed tomography and the clinical impact therein.

    PubMed

    Runge, Val M; Marquez, Herman; Andreisek, Gustav; Valavanis, Anton; Alkadhi, Hatem

    2015-02-01

    Current technological advances in CT, specifically those with a major impact on clinical imaging, are discussed. The intent was to provide for both medical physicists and practicing radiologists a summary of the clinical impact of each advance, offering guidance in terms of utility and day-to-day clinical implementation, with specific attention to radiation dose reduction.

  5. Encouraging Advanced Second Language Speakers to Recognise Their Language Difficulties: A Personalised Computer-Based Approach

    ERIC Educational Resources Information Center

    Xu, Jing; Bull, Susan

    2010-01-01

    Despite holding advanced language qualifications, many overseas students studying at English-speaking universities still have difficulties in formulating grammatically correct sentences. This article introduces an "independent open learner model" for advanced second language speakers of English, which confronts students with the state of their…

  6. Computational Benefits Using an Advanced Concatenation Scheme Based on Reduced Order Models for RF Structures

    NASA Astrophysics Data System (ADS)

    Heller, Johann; Flisgen, Thomas; van Rienen, Ursula

    The computation of electromagnetic fields and parameters derived thereof for lossless radio frequency (RF) structures filled with isotropic media is an important task for the design and operation of particle accelerators. Unfortunately, these computations are often highly demanding with regard to computational effort. The entire computational demand of the problem can be reduced using decomposition schemes in order to solve the field problems on standard workstations. This paper presents one of the first detailed comparisons between the recently proposed state-space concatenation approach (SSC) and a direct computation for an accelerator cavity with coupler-elements that break the rotational symmetry.

  7. IPHE Infrastructure Workshop Proceedings

    SciTech Connect

    2010-02-01

    These proceedings contain information from the IPHE Infrastructure Workshop, a two-day interactive workshop held on February 25-26, 2010, to explore the market implementation needs for hydrogen fueling station development.

  8. Critical Infrastructure Modeling System

    2004-10-01

    The Critical Infrastructure Modeling System (CIMS) is a 3D modeling and simulation environment designed to assist users in the analysis of dependencies within an individual infrastructure and also interdependencies between multiple infrastructures. Through visual cuing and textual displays, a user can evaluate the effect of system perturbation and identify the emergent patterns that evolve. These patterns include possible outage areas from a loss of power, denial of service or access, and disruption of operations. Method of Solution: CIMS allows the user to model a system, create an overlay of information, and create 3D representative images to illustrate key infrastructure elements. A geo-referenced scene, satellite or aerial images, or technical drawings can be incorporated into the scene. Scenarios of events can be scripted, and the user can also interact during run time to alter system characteristics. CIMS operates as a discrete event simulation engine feeding a 3D visualization.
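
    The abstract does not detail the CIMS engine itself, but the core of any discrete event simulation engine is a time-ordered event queue; the Python sketch below illustrates that pattern with invented infrastructure events.

        import heapq

        events = []   # (time, sequence, description) kept in timestamp order
        seq = 0

        def schedule(t, what):
            global seq
            heapq.heappush(events, (t, seq, what))
            seq += 1

        schedule(1.0, "substation A fails")
        schedule(1.5, "feeder load shifts to substation B")
        schedule(4.0, "substation B overloads")

        while events:
            t, _, what = heapq.heappop(events)
            print(f"t={t:4.1f}: {what}")   # a 3D front end would render this state change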

  9. Building safeguards infrastructure

    SciTech Connect

    Stevens, Rebecca S; Mcclelland - Kerr, John

    2009-01-01

    Much has been written in recent years about the nuclear renaissance - the rebirth of nuclear power as a clean and safe source of electricity around the world. Those who question the nuclear renaissance often cite the risk of proliferation, accidents or an attack on a facility as concerns, all of which merit serious consideration. The integration of these three areas - sometimes referred to as 3S, for safety, security and safeguards - is essential to supporting the growth of nuclear power, and the infrastructure that supports them should be strengthened. The focus of this paper is on the role safeguards plays in the 3S concept and how to support the development of the infrastructure necessary to support safeguards. The objective of this paper has been to provide a working definition of safeguards infrastructure, and to discuss examples of how building safeguards infrastructure is presented in several models. The guidelines outlined in the milestones document provide a clear path for establishing both the safeguards and the related infrastructures needed to support the development of nuclear power. The model employed by the INSEP program of engaging with partner states on safeguards-related topics that are of current interest to the level of nuclear development in that state provides another way of approaching the concept of building safeguards infrastructure. The Next Generation Safeguards Initiative is yet another approach that underscores five principal areas for growth, and the United States commitment to working with partners to promote this growth both at home and abroad.

  10. MFC Communications Infrastructure Study

    SciTech Connect

    Michael Cannon; Terry Barney; Gary Cook; George Danklefsen, Jr.; Paul Fairbourn; Susan Gihring; Lisa Stearns

    2012-01-01

    Unprecedented growth of required telecommunications services and telecommunications applications is changing the way the INL does business today. High-speed connectivity, coupled with a high demand for telephony and network services, requires a robust communications infrastructure. The current state of the MFC communications infrastructure limits growth opportunities for current and future communication infrastructure services. This limitation is largely due to equipment capacity issues, aging cabling infrastructure (external/internal fiber and copper cable) and inadequate space for telecommunication equipment. While some communication infrastructure improvements have been implemented over time, they have been completed without a clear overall plan and technology standard. This document identifies critical deficiencies with the current state of the communication infrastructure in operation at the MFC facilities and provides an analysis of the needs and deficiencies to be addressed in order to achieve the target architectural standards defined in STD-170. The intent of STD-170 is to provide a robust, flexible, long-term solution that aligns communications capabilities with the INL mission and fits the various programmatic growth and expansion needs.

  11. Use Of The SYSCAP 2.5 Computer Analysis Program For Advanced Optical System Design And Analysis

    NASA Astrophysics Data System (ADS)

    Kleiner, C. T.

    1983-10-01

    The successful development of various electro-optical systems is highly dependent on precise electronic circuit design which must account for possible parameter drift in the various piece parts. The utilization of a comprehensive computer analysis program (SYSCAP) provides the electro-optical system designer and electro-optical management organization with a well-structured tool for a comprehensive system analysis. As a result, the techniques described in this paper can be readily used by the electro-optical design community. An improved version of the SYSCAP computer program (version 2.5) is presented which includes the following new advances: (1) the introduction of a standard macro library that permits call-up of proven mathematical models for system modeling and simulation, (2) the introduction of improved semiconductor models for bipolar junction transistors and p-n junctions, (3) multifunction modeling capability to link signals with very high speed electronic circuit models, (4) high resolution computer graphics (both interactive and batch process) for display and permanent records, and (5) compatibility and interface with advanced engineering workstations. This 2.5 version of the present SYSCAP 2 computer analysis program will be available for use through the Control Data Corporation worldwide Cybernet system in 1983. This paper provides an overview of SYSCAP modeling and simulation capabilities.

  12. Infrastructure for distributed enterprise simulation

    SciTech Connect

    Johnson, M.M.; Yoshimura, A.S.; Goldsby, M.E.

    1998-01-01

    Traditional discrete-event simulations employ an inherently sequential algorithm and are run on a single computer. However, the demands of many real-world problems exceed the capabilities of sequential simulation systems. Often the capacity of a computer's primary memory limits the size of the models that can be handled, and in some cases parallel execution on multiple processors could significantly reduce the simulation time. This paper describes the development of an Infrastructure for Distributed Enterprise Simulation (IDES) - a large-scale portable parallel simulation framework developed to support Sandia National Laboratories' mission in stockpile stewardship. IDES is based on the Breathing-Time-Buckets synchronization protocol, and maps a message-based model of distributed computing onto an object-oriented programming model. IDES is portable across heterogeneous computing architectures, including single-processor systems, networks of workstations and multi-processor computers with shared or distributed memory. The system provides a simple and sufficient application programming interface that can be used by scientists to quickly model large-scale, complex enterprise systems. In the background and without involving the user, IDES is capable of making dynamic use of idle processing power available throughout the enterprise network. 16 refs., 14 figs.
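
    As a schematic illustration of the Breathing-Time-Buckets idea (the actual IDES implementation is distributed and far more involved), the sequential Python toy below processes all pending events optimistically each cycle and commits only those at or below the "event horizon", the earliest timestamp generated during that cycle; the event model itself is invented.

        import heapq

        pending = [(1.0, "a"), (2.0, "b"), (5.0, "c")]
        heapq.heapify(pending)

        def handle(t, name):
            # Toy model: each event may schedule one follow-on event in the future.
            return [(t + 1.5, name + "*")] if t < 6.0 else []

        while pending:
            processed = []
            while pending:                      # optimistic execution of all pending events
                t, name = heapq.heappop(pending)
                processed.append((t, name, handle(t, name)))
            horizon = min((gt for _, _, gen in processed for gt, _ in gen),
                          default=float("inf"))
            for t, name, gen in processed:
                if t <= horizon:                # safe: no earlier message can affect it
                    print(f"commit t={t:.1f} {name}")
                    for ev in gen:
                        heapq.heappush(pending, ev)
                else:                           # roll back; its sends are discarded
                    heapq.heappush(pending, (t, name))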

  13. Recent Advances in Photonic Devices for Optical Computing and the Role of Nonlinear Optics-Part II

    NASA Technical Reports Server (NTRS)

    Abdeldayem, Hossin; Frazier, Donald O.; Witherow, William K.; Banks, Curtis E.; Paley, Mark S.

    2007-01-01

    The twentieth century was the era of semiconductor materials and electronic technology, while this millennium is expected to be the age of photonic materials and all-optical technology. Optical technology has led to countless optical devices that have become indispensable in our daily lives in storage area networks, parallel processing, optical switches, all-optical data networks, holographic storage devices, and biometric devices at airports. This chapter intends to bring some awareness of the state of the art of optical technologies which have potential for optical computing, and to demonstrate the role of nonlinear optics in many of these components. Our intent in this chapter is to present an overview of the current status of optical computing, and a brief evaluation of the recent advances and performance of the following key components necessary to build an optical computing system: all-optical logic gates, adders, optical processors, optical storage, holographic storage, optical interconnects, spatial light modulators and optical materials.

  14. Advances in Navy pharmacy information technology: accessing Micromedex via the Composite Healthcare Computer System and local area networks.

    PubMed

    Koerner, S D; Becker, F

    1999-07-01

    The pharmacy profession has long used technology to more effectively bring health care to the patient. Navy pharmacy has embraced technology advances in its daily operations, from computers to dispensing robots. Evolving from the traditional role of compounding and dispensing specialists, pharmacists are establishing themselves as vital team members in direct patient care: on the ward, in ambulatory clinics, in specialty clinics, and in other specialty patient care programs (e.g., smoking cessation). An important part of the evolution is the timely access to the most up-to-date information available. Micromedex, Inc. (Denver, Colorado), has developed a number of computer CD-ROM-based full-text pharmacy, toxicology, emergency medicine, and patient education products. Micromedex is a recognized leader with regard to total pharmaceutical information availability. This article discusses the implementation of Micromedex products within the established Composite Healthcare Computer System and the subsequent use by and effect on the international Navy pharmacy community.

  15. Data analysis methods for infrastructure permanent monitoring

    NASA Astrophysics Data System (ADS)

    Gargaro, Danilo; Rainieri, Carlo; Fabbrocino, Giovanni

    2016-04-01

    Safety and management of road networks is of paramount relevance not only under common operational conditions, but also in the case of extreme events, such as earthquakes or other natural hazards. Structural Health Monitoring systems may represent an attractive option for those institutions and companies in charge of maintenance. Advances in sensing technology and the miniaturization of computational devices make the installation of permanent monitoring systems feasible and reliable, and bring operators both local and global data records. In this framework, vibration-based monitoring represents an effective tool to track relevant modal properties over time by means of automated procedures. In any case, estimates of modal properties are influenced by environmental and operational factors as well as by damage, pointing out the need to discriminate between the different sources of influence. In the present paper, methodologies and strategies aimed at quantifying the influence of environmental and operational factors on modal parameters of infrastructure components are reviewed and some remarks on the application of blind source separation techniques to this end are proposed.
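
    One common way to quantify environmental influence on tracked modal parameters (offered here as a generic illustration, not necessarily the authors' method) is to regress an identified natural frequency on temperature and monitor the residuals; the numbers below are simulated assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        temp = rng.uniform(-5.0, 35.0, size=365)                # daily temperature, degC
        freq = 2.40 - 0.004 * temp + rng.normal(0, 0.002, 365)  # identified 1st-mode frequency, Hz

        # Least-squares fit of freq ~ a + b * temp
        A = np.column_stack([np.ones_like(temp), temp])
        coef, *_ = np.linalg.lstsq(A, freq, rcond=None)
        residual = freq - A @ coef

        print("temperature sensitivity:", coef[1], "Hz/degC")
        # A persistent shift in the residuals, rather than in the raw frequency,
        # would point to structural change instead of environmental variation.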

  16. Parallel-META 2.0: Enhanced Metagenomic Data Analysis with Functional Annotation, High Performance Computing and Advanced Visualization

    PubMed Central

    Song, Baoxing; Xu, Jian; Ning, Kang

    2014-01-01

    The metagenomic method directly sequences and analyses genome information from microbial communities. The main computational tasks for metagenomic analyses include taxonomical and functional structure analysis for all genomes in a microbial community (also referred to as a metagenomic sample). With the advancement of Next Generation Sequencing (NGS) techniques, the number of metagenomic samples and the data size for each sample are increasing rapidly. Current metagenomic analysis is both data- and computation-intensive, especially when there are many species in a metagenomic sample, each with a large number of sequences. As such, metagenomic analyses require extensive computational power. The increasing analytical requirements further augment the challenges for computational analysis. In this work, we have proposed Parallel-META 2.0, a metagenomic analysis software package, to cope with such needs for efficient and fast analyses of taxonomical and functional structures for microbial communities. Parallel-META 2.0 is an extended and improved version of Parallel-META 1.0, which enhances the taxonomical analysis using multiple databases, improves computation efficiency by optimized parallel computing, and supports interactive visualization of results in multiple views. Furthermore, it enables functional analysis for metagenomic samples including short-read assembly, gene prediction and functional annotation. Therefore, it can provide accurate taxonomical and functional analyses of metagenomic samples in a high-throughput manner and on a large scale. PMID:24595159
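
    Parallel-META's internals are not shown in the abstract; as a generic sketch of the sample-level parallelism such pipelines exploit, the Python fragment below fans per-sample profiling out over a process pool (the profiling function is a placeholder).

        from multiprocessing import Pool

        def analyze_sample(sample_id):
            # Placeholder for per-sample taxonomical/functional profiling.
            return sample_id, f"profile-of-{sample_id}"

        if __name__ == "__main__":
            samples = [f"sample_{i:03d}" for i in range(32)]
            with Pool(processes=8) as pool:
                for sid, profile in pool.imap_unordered(analyze_sample, samples):
                    print(sid, "->", profile)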

  17. Parallel-META 2.0: enhanced metagenomic data analysis with functional annotation, high performance computing and advanced visualization.

    PubMed

    Su, Xiaoquan; Pan, Weihua; Song, Baoxing; Xu, Jian; Ning, Kang

    2014-01-01

    The metagenomic method directly sequences and analyses genome information from microbial communities. The main computational tasks for metagenomic analyses include taxonomical and functional structure analysis for all genomes in a microbial community (also referred to as a metagenomic sample). With the advancement of Next Generation Sequencing (NGS) techniques, the number of metagenomic samples and the data size for each sample are increasing rapidly. Current metagenomic analysis is both data- and computation-intensive, especially when there are many species in a metagenomic sample, each with a large number of sequences. As such, metagenomic analyses require extensive computational power. The increasing analytical requirements further augment the challenges for computational analysis. In this work, we have proposed Parallel-META 2.0, a metagenomic analysis software package, to cope with such needs for efficient and fast analyses of taxonomical and functional structures for microbial communities. Parallel-META 2.0 is an extended and improved version of Parallel-META 1.0, which enhances the taxonomical analysis using multiple databases, improves computation efficiency by optimized parallel computing, and supports interactive visualization of results in multiple views. Furthermore, it enables functional analysis for metagenomic samples including short-read assembly, gene prediction and functional annotation. Therefore, it can provide accurate taxonomical and functional analyses of metagenomic samples in a high-throughput manner and on a large scale.

  18. An Overview of the Advanced CompuTational Software (ACTS) Collection

    SciTech Connect

    Drummond, Leroy A.; Marques, Osni A.

    2005-02-02

    The ACTS Collection brings together a number of general-purpose computational tools that were developed by independent research projects mostly funded and supported by the U.S. Department of Energy. These tools tackle a number of common computational issues found in many applications, mainly implementation of numerical algorithms, and support for code development, execution and optimization. In this article, we introduce the numerical tools in the collection and their functionalities, present a model for developing more complex computational applications on top of ACTS tools, and summarize applications that use these tools. Lastly, we present a vision of the ACTS project for deployment of the ACTS Collection by the computational sciences community.

  19. Communications, Computers and Networks.

    ERIC Educational Resources Information Center

    Dertouzos, Michael L.

    1991-01-01

    The infrastructure created by fusing computing and communications technologies is described. The effect of this infrastructure on the economy and society of the United States is discussed. The importance of knowing the value and role of information is emphasized. (KR)

  20. NASA World Wind: Infrastructure for Spatial Data

    NASA Technical Reports Server (NTRS)

    Hogan, Patrick

    2011-01-01

    The world has great need for analysis of Earth observation data, be it climate change, carbon monitoring, disaster response, national defense or simply local resource management. To best provide for spatial and time-dependent information analysis, the world benefits from an open standards and open source infrastructure for spatial data. In the spirit of NASA's motto "for the benefit of all", NASA invites the world community to collaboratively advance this core technology. The World Wind infrastructure for spatial data both unites and challenges the world community to produce innovative solutions for analyzing spatial data, while also allowing absolute command and control over any respective information exchange medium.

  1. Infrastructure Retrofit Design via Composite Mechanics

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Gotsis, Pascal K.

    1998-01-01

    Select applications are described to illustrate the concept of retrofitting reinforced concrete infrastructure with fiber-reinforced plastic laminates. The concept is first illustrated by using an axially loaded reinforced concrete column. A reinforced concrete arch and a dome are then used to illustrate the versatility of the concept. Advanced methods such as finite element structural analysis and progressive structural fracture are then used to evaluate the adequacy of the retrofitting laminate. Results obtained show that retrofits can be designed to double and even triple the as-designed load of the selected reinforced concrete infrastructures.

  2. 78 FR 59927 - Next Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-30

    ... Systems Biology [External Review Draft] AGENCY: Environmental Protection Agency (EPA). ACTION: Notice of... Molecular, Computational, and Systems Biology '' (EPA/600/R-13/214A). EPA is also announcing that Eastern..., Computational, and Systems Biology '' is available primarily via the Internet on the NCEA home page under...

  3. Projects to Measure the Effects of Computers on Campuses Make Major Advances, Leave Several Unsolved Problems.

    ERIC Educational Resources Information Center

    Turner, Judith Axler

    1989-01-01

    The most important results of computer networking experiments on campus may be increased use of technology, but the projects did not always take the directions anticipated. The projects also highlighted problems in social and educational systems that must be solved before computing can be comfortably integrated into higher education. (MSE)

  4. Advanced Scientific Computing Environment Team new scientific database management task. Progress report

    SciTech Connect

    Church, J.P.; Roberts, J.C.; Sims, R.N.; Smetana, A.O.; Westmoreland, B.W.

    1991-06-01

    The mission of the ASCENT Team is to continually keep pace with, evaluate, and select emerging computing technologies to define and implement prototypic scientific environments that maximize the ability of scientists and engineers to manage scientific data. These environments are to be implemented in a manner consistent with the site computing architecture and standards and NRTSC/SCS strategic plans for scientific computing. The major trends in computing hardware and software technology clearly indicate that the future "computer" will be a network environment that comprises supercomputers, graphics boxes, mainframes, clusters, workstations, terminals, and microcomputers. This "network computer" will have an architecturally transparent operating system allowing the applications code to run on any box supplying the required computing resources. The environment will include a distributed database and database management system(s) that permit use of relational, hierarchical, object-oriented, GIS, and other databases. To reach this goal requires a stepwise progression from the present assemblage of monolithic applications codes running on disparate hardware platforms and operating systems. The first steps include converting from the existing JOSHUA system to a new J80 system that complies with modern language standards; development of a new J90 prototype to provide JOSHUA capabilities on Unix platforms; development of portable graphics tools to greatly facilitate preparation of input and interpretation of output; and extension of "Jvv" concepts and capabilities to distributed and/or parallel computing environments.

  5. BOOK REVIEW: Advanced Topics in Computational Partial Differential Equations: Numerical Methods and Diffpack Programming

    NASA Astrophysics Data System (ADS)

    Katsaounis, T. D.

    2005-02-01

    The scope of this book is to present well known simple and advanced numerical methods for solving partial differential equations (PDEs) and how to implement these methods using the programming environment of the software package Diffpack. A basic background in PDEs and numerical methods is required by the potential reader. Further, a basic knowledge of the finite element method and its implementation in one and two space dimensions is required. The authors claim that no prior knowledge of the package Diffpack is required, which is true, but the reader should be at least familiar with an object oriented programming language like C++ in order to better comprehend the programming environment of Diffpack. Certainly, a prior knowledge or usage of Diffpack would be a great advantage to the reader. The book consists of 15 chapters, each one written by one or more authors. Each chapter is basically divided into two parts: the first part is about mathematical models described by PDEs and numerical methods to solve these models and the second part describes how to implement the numerical methods using the programming environment of Diffpack. Each chapter closes with a list of references on its subject. The first nine chapters cover well known numerical methods for solving the basic types of PDEs. Further, programming techniques on the serial as well as on the parallel implementation of numerical methods are also included in these chapters. The last five chapters are dedicated to applications, modelled by PDEs, in a variety of fields. The first chapter is an introduction to parallel processing. It covers fundamentals of parallel processing in a simple and concrete way and no prior knowledge of the subject is required. Examples of parallel implementation of basic linear algebra operations are presented using the Message Passing Interface (MPI) programming environment. Here, some knowledge of MPI routines is required by the reader. Examples solving in parallel simple PDEs using
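
    The book's parallel examples are written for Diffpack in C++; as a language-neutral sketch of the MPI pattern its first chapter introduces, the following Python fragment computes a distributed dot product with mpi4py (assumed installed), mirroring the MPI_Allreduce idiom.

        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n = 1_000_000                     # global vector length, split across ranks
        local_n = n // size
        x = np.full(local_n, 1.0)
        y = np.full(local_n, 2.0)

        local_dot = float(np.dot(x, y))   # each rank computes its partial sum
        global_dot = comm.allreduce(local_dot, op=MPI.SUM)

        if rank == 0:
            print("x . y =", global_dot)  # run with: mpiexec -n 4 python dot.py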

  6. LNG infrastructure and equipment

    SciTech Connect

    Forgash, D.J.

    1995-12-31

    Sound engineering principles have been used by every company involved in the development of the LNG infrastructure, but there is very little that is new. The same cryogenic technology that is used in the manufacture and sale of nitrogen, argon, and oxygen is used in LNG infrastructure. The key component of the refueling infrastructure is the LNG tank, which should have a capacity of at least 15,000 gallons. These stainless steel tanks are actually a tank within a tank, separated by an annular space that is void of air, creating a vacuum between the inner and outer tank where superinsulation is applied. Dispensing can be accomplished by pressure or pump; either works well and has been demonstrated in the field. Until work on NFPA 57 and the Texas Railroad Commission rules for LNG is complete, the industry is setting the standards for the safe installation of refueling infrastructure. As a new industry, the safety record to date has been outstanding.

  7. Privacy and the National Information Infrastructure.

    ERIC Educational Resources Information Center

    Rotenberg, Marc

    1994-01-01

    Explains the work of Computer Professionals for Social Responsibility regarding privacy issues in the use of electronic networks; recommends principles that should be adopted for a National Information Infrastructure privacy code; discusses the need for public education; and suggests pertinent legislative proposals. (LRW)

  8. Recent advances in the computation of vortex induced vibrations and the shielding effect

    SciTech Connect

    Vada, T.

    1995-12-31

    The paper presents some recent results obtained with the computer program VISFLO. This program computes viscous flow around circular cylinders by the vortex-in-cell method. The first problem studied is a case with two cylinders that can move independently of each other. Results are shown which indicate whether the cylinders can be expected to collide. The other type of problem studied is a large array of fixed cylinders. The aim of the study is to compute the shielding effect in steady flow. Results are shown for both model-scale and full-scale cases. The scaling effect is shown to be very important.

  9. Computer architecture for efficient algorithmic executions in real-time systems: New technology for avionics systems and advanced space vehicles

    NASA Technical Reports Server (NTRS)

    Carroll, Chester C.; Youngblood, John N.; Saha, Aindam

    1987-01-01

    Improvements and advances in the development of computer architecture now provide innovative technology for the recasting of traditional sequential solutions into high-performance, low-cost, parallel systems to increase system performance. Research conducted in the development of a specialized computer architecture for the algorithmic execution of an avionics system guidance and control problem in real time is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.
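
    Since the allocation is based on critical path analysis, a small illustrative Python sketch of that computation follows; the task names, durations, and precedence graph are invented for the example.

        from functools import lru_cache

        duration = {"sense": 2, "estimate": 5, "guide": 4, "command": 1}
        succ = {"sense": ["estimate"], "estimate": ["guide"],
                "guide": ["command"], "command": []}

        @lru_cache(maxsize=None)
        def longest_from(task):
            # Longest duration-weighted path from `task` to any sink node.
            tail = max((longest_from(s) for s in succ[task]), default=0)
            return duration[task] + tail

        critical_len = max(longest_from(t) for t in duration)
        print("critical path length:", critical_len)  # bounds the achievable latency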

  10. Computer architecture for efficient algorithmic executions in real-time systems: new technology for avionics systems and advanced space vehicles

    SciTech Connect

    Carroll, C.C.; Youngblood, J.N.; Saha, A.

    1987-12-01

    Improvements and advances in the development of computer architecture now provide innovative technology for the recasting of traditional sequential solutions into high-performance, low-cost, parallel systems to increase system performance. Research conducted in the development of a specialized computer architecture for the algorithmic execution of an avionics system guidance and control problem in real time is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.

  11. A Computational Future for Preventing HIV in Minority Communities: How Advanced Technology Can Improve Implementation of Effective Programs

    PubMed Central

    Brown, C Hendricks; Mohr, David C.; Gallo, Carlos G.; Mader, Christopher; Palinkas, Lawrence; Wingood, Gina; Prado, Guillermo; Kellam, Sheppard G.; Pantin, Hilda; Poduska, Jeanne; Gibbons, Robert; McManus, John; Ogihara, Mitsunori; Valente, Thomas; Wulczyn, Fred; Czaja, Sara; Sutcliffe, Geoff; Villamar, Juan; Jacobs, Christopher

    2013-01-01

    African Americans and Hispanics in the U.S. have much higher rates of HIV than non-minorities. There is now strong evidence that a range of behavioral interventions are efficacious in reducing sexual risk behavior in these populations. While a handful of these programs are just beginning to be disseminated widely, we still have not implemented effective programs to a level that would reduce the population incidence of HIV for minorities. We propose that innovative approaches involving computational technologies be explored for their use both in developing new interventions and in supporting wide-scale implementation of effective behavioral interventions. Mobile technologies have a place in both of these activities. First, mobile technologies can be used to sense context and to interact with the unique preferences and needs of individuals at times when intervention to reduce risk would be most impactful. Secondly, mobile technologies can be used to improve the delivery of interventions by facilitators and their agencies. Systems science methods, including social network analysis, agent-based models, computational linguistics, intelligent data analysis, and systems and software engineering, all have strategic roles that can bring about advances in HIV prevention in minority communities. Using an existing mobile technology for depression and three effective HIV prevention programs, we illustrate how eight areas in the intervention/implementation process can use innovative computational approaches to advance intervention adoption, fidelity, and sustainability. PMID:23673892

  12. A computational future for preventing HIV in minority communities: how advanced technology can improve implementation of effective programs.

    PubMed

    Brown, C Hendricks; Mohr, David C; Gallo, Carlos G; Mader, Christopher; Palinkas, Lawrence; Wingood, Gina; Prado, Guillermo; Kellam, Sheppard G; Pantin, Hilda; Poduska, Jeanne; Gibbons, Robert; McManus, John; Ogihara, Mitsunori; Valente, Thomas; Wulczyn, Fred; Czaja, Sara; Sutcliffe, Geoff; Villamar, Juan; Jacobs, Christopher

    2013-06-01

    African Americans and Hispanics in the United States have much higher rates of HIV than non-minorities. There is now strong evidence that a range of behavioral interventions are efficacious in reducing sexual risk behavior in these populations. Although a handful of these programs are just beginning to be disseminated widely, we still have not implemented effective programs to a level that would reduce the population incidence of HIV for minorities. We proposed that innovative approaches involving computational technologies be explored for their use both in developing new interventions and in supporting wide-scale implementation of effective behavioral interventions. Mobile technologies have a place in both of these activities. First, mobile technologies can be used to sense context and to interact with the unique preferences and needs of individuals at times when intervention to reduce risk would be most impactful. Second, mobile technologies can be used to improve the delivery of interventions by facilitators and their agencies. Systems science methods including social network analysis, agent-based models, computational linguistics, intelligent data analysis, and systems and software engineering all have strategic roles that can bring about advances in HIV prevention in minority communities. Using an existing mobile technology for depression and 3 effective HIV prevention programs, we illustrated how 8 areas in the intervention/implementation process can use innovative computational approaches to advance intervention adoption, fidelity, and sustainability.

  14. Verification, Validation and Credibility Assessment of a Computational Model of the Advanced Resistive Exercise Device (ARED)

    NASA Technical Reports Server (NTRS)

    Werner, C. R.; Humphreys, B. T.; Mulugeta, L.

    2014-01-01

    The Advanced Resistive Exercise Device (ARED) is the resistive exercise device used by astronauts on the International Space Station (ISS) to mitigate bone loss and muscle atrophy due to extended exposure to microgravity (micro g). The Digital Astronaut Project (DAP) has developed a multi-body dynamics model of the ARED as part of its suite of biomechanics models for use in spaceflight exercise physiology research and operations. In an effort to advance the maturity and credibility of the ARED model, the DAP performed verification, validation and credibility (VV&C) assessment of the model and its analyses in accordance with NASA-STD-7009, 'Standards for Models and Simulations'.

  15. Study of flutter related computational procedures for minimum weight structural sizing of advanced aircraft, supplemental data

    NASA Technical Reports Server (NTRS)

    Oconnell, R. F.; Hassig, H. J.; Radovcich, N. A.

    1975-01-01

    Computational aspects of (1) flutter optimization (minimization of structural mass subject to specified flutter requirements), (2) methods for solving the flutter equation, and (3) efficient methods for computing generalized aerodynamic force coefficients in the repetitive analysis environment of computer-aided structural design are discussed. Specific areas included: a two-dimensional Regula Falsi approach to solving the generalized flutter equation; method of incremented flutter analysis and its applications; the use of velocity potential influence coefficients in a five-matrix product formulation of the generalized aerodynamic force coefficients; options for computational operations required to generate generalized aerodynamic force coefficients; theoretical considerations related to optimization with one or more flutter constraints; and expressions for derivatives of flutter-related quantities with respect to design variables.
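
    As a reminder of the underlying iteration (the paper applies a two-dimensional variant to the generalized flutter equation; the scalar version and test function below are illustrative only), regula falsi repeatedly replaces one end of a bracketing interval with the root of the secant line:

        def regula_falsi(f, a, b, tol=1e-10, max_iter=100):
            fa, fb = f(a), f(b)
            assert fa * fb < 0, "root must be bracketed"
            for _ in range(max_iter):
                c = b - fb * (b - a) / (fb - fa)   # root of secant through (a, fa), (b, fb)
                fc = f(c)
                if abs(fc) < tol:
                    return c
                if fa * fc < 0:
                    b, fb = c, fc
                else:
                    a, fa = c, fc
            return c

        # e.g. locate where a damping-like quantity crosses zero
        print(regula_falsi(lambda v: v**2 - 2.0, 0.0, 2.0))   # ~1.41421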

  16. Computation Directorate Annual Report 2003

    SciTech Connect

    Crawford, D L; McGraw, J R; Ashby, S F; McCoy, M G; Michels, T C; Eltgroth, P G

    2004-03-12

    Big computers are icons: symbols of the culture, and of the larger computing infrastructure that exists at Lawrence Livermore. Through the collective effort of Laboratory personnel, they enable scientific discovery and engineering development on an unprecedented scale. For more than three decades, the Computation Directorate has supplied the big computers that enable the science necessary for Laboratory missions and programs. Livermore supercomputing is uniquely mission driven. The high-fidelity weapon simulation capabilities essential to the Stockpile Stewardship Program compel major advances in weapons codes and science, compute power, and computational infrastructure. Computation's activities align with this vital mission of the Department of Energy. Increasingly, non-weapons Laboratory programs also rely on computer simulation. World-class achievements have been accomplished by LLNL specialists working in multi-disciplinary research and development teams. In these teams, Computation personnel employ a wide array of skills, from desktop support expertise, to complex applications development, to advanced research. Computation's skilled professionals make the Directorate the success that it has become. These individuals know the importance of the work they do and the many ways it contributes to Laboratory missions. They make appropriate and timely decisions that move the entire organization forward. They make Computation a leader in helping LLNL achieve its programmatic milestones. I dedicate this inaugural Annual Report to the people of Computation in recognition of their continuing contributions. I am proud that we perform our work securely and safely. Despite increased cyber attacks on our computing infrastructure from the Internet, advanced cyber security practices ensure that our computing environment remains secure. Through Integrated Safety Management (ISM) and diligent oversight, we address safety issues promptly and aggressively. The safety of our

  17. Development of Advanced Computational Aeroelasticity Tools at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Bartels, R. E.

    2008-01-01

    NASA Langley Research Center has continued to develop its long standing computational tools to address new challenges in aircraft and launch vehicle design. This paper discusses the application and development of those computational aeroelastic tools. Four topic areas will be discussed: 1) Modeling structural and flow field nonlinearities; 2) Integrated and modular approaches to nonlinear multidisciplinary analysis; 3) Simulating flight dynamics of flexible vehicles; and 4) Applications that support both aeronautics and space exploration.

  18. Implementing computer-based testing in distance education for advanced practice nurses: lessons learned.

    PubMed

    Caudle, Patricia; Bigness, Joanne; Daniels, Judi; Gillmor-Kahn, Mickey; Knestrick, Joyce

    2011-01-01

    A distance education program utilized by graduate nursing students worldwide faces unique problems with testing. This article presents the results of a pilot study on the implementation of computer-based testing at the Frontier Nursing University. A detailed analysis of the evaluative survey completed by students in the pilot study revealed issues of bi-directional respect and trust between faculty and students and technological anxiety among students using computer-based testing. PMID:22029246

  19. Assessing Vulnerabilities, Risks, and Consequences of Damage to Critical Infrastructure

    SciTech Connect

    Suski, N; Wuest, C

    2011-02-04

    Since the publication of 'Critical Foundations: Protecting America's Infrastructure,' there has been a keen understanding of the complexity, interdependencies, and shared responsibility required to protect the nation's most critical assets that are essential to our way of life. The original 5 sectors defined in 1997 have grown to 18 Critical Infrastructures and Key Resources (CIKR), which are discussed in the 2009 National Infrastructure Protection Plan (NIPP) and its supporting sector-specific plans. The NIPP provides the structure for a national program dedicated to enhanced protection and resiliency of the nation's infrastructure. Lawrence Livermore National Laboratory (LLNL) provides in-depth, multi-disciplinary assessments of threat, vulnerability, and consequence across all 18 sectors at scales ranging from specific facilities to infrastructures spanning multi-state regions, such as the Oil and Natural Gas (ONG) sector. Like many of the CIKR sectors, the ONG sector is comprised of production, processing, distribution, and storage of highly valuable and potentially dangerous commodities. Furthermore, there are significant interdependencies with other sectors, including transportation, communication, finance, and government. Understanding the potentially devastating consequences and collateral damage resulting from a terrorist attack or natural event is an important element of LLNL's infrastructure security programs. Our work began in the energy sector in the late 1990s and quickly expanded other critical infrastructure sectors. We have performed over 600 physical assessments with a particular emphasis on those sectors that utilize, store, or ship potentially hazardous materials and for whom cyber security is important. The success of our approach is based on building awareness of vulnerabilities and risks and working directly with industry partners to collectively advance infrastructure protection. This approach consists of three phases: The Pre

  20. Integrative Utilization of Microenvironments, Biomaterials and Computational Techniques for Advanced Tissue Engineering.

    PubMed

    Shamloo, Amir; Mohammadaliha, Negar; Mohseni, Mina

    2015-10-20

    This review aims to propose the integrative implementation of microfluidic devices, biomaterials, and computational methods that can lead to significant progress in tissue engineering and regenerative medicine research. Simultaneous implementation of multiple techniques can be very helpful in addressing biological processes. Providing controllable biochemical and biomechanical cues within an artificial extracellular matrix similar to in vivo conditions is crucial in tissue engineering and regenerative medicine research. Microfluidic devices provide precise spatial and temporal control over the cell microenvironment. Moreover, generation of accurate and controllable spatial and temporal gradients of biochemical factors is attainable inside microdevices. Since biomaterials with tunable properties are a worthwhile option for constructing an artificial extracellular matrix, in vitro platforms that simultaneously utilize natural, synthetic, or engineered biomaterials inside microfluidic devices are phenomenally advantageous to experimental studies in the field of tissue engineering. Additionally, collaboration between experimental and computational methods is a useful way to predict and understand mechanisms responsible for complex biological phenomena. Computational results can be verified by using experimental platforms. Computational methods can also broaden the understanding of the mechanisms behind the biological phenomena observed during experiments. Furthermore, computational methods are powerful tools to optimize the fabrication of microfluidic devices and biomaterials with specific features. Here we present a succinct review of the benefits of microfluidic devices, biomaterials, and computational methods in the case of tissue engineering and regenerative medicine. Furthermore, some breakthroughs in biological phenomena, including neuronal axon development, cancerous cell migration, and blood vessel formation via angiogenesis, achieved by virtue of the aforementioned approaches are highlighted.

  1. Next Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational, and Systems Biology (Final Report)

    EPA Science Inventory

    This final report, "Next Generation Risk Assessment: Recent Advances in Molec...

  2. Infrastructure: A technology battlefield in the 21st century

    SciTech Connect

    Drucker, H.

    1997-12-31

    A major part of technological advancement has involved the development of complex infrastructure systems, including electric power generation, transmission, and distribution networks; oil and gas pipeline systems; highway and rail networks; and telecommunication networks. Dependence on these infrastructure systems renders them attractive targets for conflict in the twenty-first century. Hostile governments, domestic and international terrorists, criminals, and mentally distressed individuals will inevitably find some part of the infrastructure an easy target for theft, for making political statements, for disruption of strategic activities, or for making a nuisance. The current situation regarding the vulnerability of the infrastructure can be summarized in three major points: (1) our dependence on technology has made our infrastructure more important and vital to our everyday lives; this, in turn, makes us much more vulnerable to disruption in any infrastructure system; (2) technologies available for attacking infrastructure systems have changed substantially and have become much easier to obtain and use; easy accessibility to information on how to disrupt or destroy various infrastructure components means that almost anyone can be involved in this destructive process; (3) technologies for defending infrastructure systems and preventing damage have not kept pace with the capability for destroying such systems. A brief review of these points will illustrate the significance of infrastructure and the growing dangers to its various elements.

  3. The Moral Dimensions of Infrastructure.

    PubMed

    Epting, Shane

    2016-04-01

    Moral issues in urban planning involving technology, residents, marginalized groups, ecosystems, and future generations are complex cases, requiring solutions that go beyond the limits of contemporary moral theory. Aside from typical planning problems, there is incongruence between moral theory and some of the subjects that require moral assessment, such as urban infrastructure. Despite this incongruence, there is not a need to develop another moral theory. Instead, a supplemental measure that is compatible with existing moral positions will suffice. My primary goal in this paper is to explain the need for this supplemental measure, describe what one looks like, and show how it works with existing moral systems. The secondary goal is to show that creating a supplemental measure that provides congruency between moral systems that are designed to assess human action and non-human subjects advances the study of moral theory.

  4. Advanced computational simulation for design and manufacturing of lightweight material components for automotive applications

    SciTech Connect

    Simunovic, S.; Aramayo, G.A.; Zacharia, T.; Toridis, T.G.; Bandak, F.; Ragland, C.L.

    1997-04-01

    Computational vehicle models for the analysis of lightweight material performance in automobiles have been developed through collaboration between Oak Ridge National Laboratory, the National Highway Transportation Safety Administration, and George Washington University. The vehicle models have been verified against experimental data obtained from vehicle collisions. The crashed vehicles were analyzed, and the main impact energy dissipation mechanisms were identified and characterized. Important structural parts were extracted and digitized and directly compared with simulation results. High-performance computing played a key role in the model development because it allowed for rapid computational simulations and model modifications. The deformation of the computational model shows a very good agreement with the experiments. This report documents the modifications made to the computational model and relates them to the observations and findings on the test vehicle. Procedural guidelines are also provided that the authors believe need to be followed to create realistic models of passenger vehicles that could be used to evaluate the performance of lightweight materials in automotive structural components.

  5. Investigation of advanced counterrotation blade configuration concepts for high speed turboprop systems. Task 4: Advanced fan section aerodynamic analysis computer program user's manual

    NASA Technical Reports Server (NTRS)

    Crook, Andrew J.; Delaney, Robert A.

    1992-01-01

    The computer program user's manual for the ADPACAPES (Advanced Ducted Propfan Analysis Code-Average Passage Engine Simulation) program is included. The objective of the computer program is development of a three-dimensional Euler/Navier-Stokes flow analysis for fan section/engine geometries containing multiple blade rows and multiple spanwise flow splitters. An existing procedure developed by Dr. J. J. Adamczyk and associates at the NASA Lewis Research Center was modified to accept multiple spanwise splitter geometries and simulate engine core conditions. The numerical solution is based upon a finite volume technique with a four stage Runge-Kutta time marching procedure. Multiple blade row solutions are based upon the average-passage system of equations. The numerical solutions are performed on an H-type grid system, with meshes meeting the requirement of maintaining a common axisymmetric mesh for each blade row grid. The analysis was run on several geometry configurations ranging from one to five blade rows and from one to four radial flow splitters. The efficiency of the solution procedure was shown to be the same as the original analysis.

  6. Using Computer-Assisted Argumentation Mapping to develop effective argumentation skills in high school advanced placement physics

    NASA Astrophysics Data System (ADS)

    Heglund, Brian

    Educators recognize the importance of reasoning ability for development of critical thinking skills, conceptual change, metacognition, and participation in 21st century society. There is a recognized need for students to improve their skills of argumentation, however, argumentation is not explicitly taught outside logic and philosophy---subjects that are not part of the K-12 curriculum. One potential way of supporting the development of argumentation skills in the K-12 context is through incorporating Computer-Assisted Argument Mapping to evaluate arguments. This quasi-experimental study tested the effects of such argument mapping software and was informed by the following two research questions: 1. To what extent does the collaborative use of Computer-Assisted Argumentation Mapping to evaluate competing theories influence the critical thinking skill of argument evaluation, metacognitive awareness, and conceptual knowledge acquisition in high school Advanced Placement physics, compared to the more traditional method of text tables that does not employ Computer-Assisted Argumentation Mapping? 2. What are the student perceptions of the pros and cons of argument evaluation in the high school Advanced Placement physics environment? This study examined changes in critical thinking skills, including argumentation evaluation skills, as well as metacognitive awareness and conceptual knowledge, in two groups: a treatment group using Computer-Assisted Argumentation Mapping to evaluate physics arguments, and a comparison group using text tables to evaluate physics arguments. Quantitative and qualitative methods for collecting and analyzing data were used to answer the research questions. Quantitative data indicated no significant difference between the experimental groups, and qualitative data suggested students perceived pros and cons of argument evaluation in the high school Advanced Placement physics environment, such as self-reported sense of improvement in argument

  7. Infrastructure Survey 2009

    ERIC Educational Resources Information Center

    Group of Eight (NJ1), 2010

    2010-01-01

    In 2008 the Group of Eight (Go8) released a first report on the state of its buildings and infrastructure, based on a survey undertaken in 2007. A further survey was undertaken in 2009, updating some information about the assessed quality, value and condition of buildings and use of space. It also collated data related to aspects of the estate not…

  8. An Infrastructure Museum

    ERIC Educational Resources Information Center

    Roman, Harry T.

    2013-01-01

    This article invites teachers to let their students' imaginations soar as they become part of a team that will design a whole new kind of living technological museum, a facility that celebrates the world of infrastructure. In this activity, a new two-story building will be built, occupying a vacant corner parcel of land, approximately 150…

  9. Inspection of advanced computational lithography logic reticles using a 193-nm inspection system

    NASA Astrophysics Data System (ADS)

    Yu, Ching-Fang; Lin, Mei-Chun; Lai, Mei-Tsu; Hsu, Luke T. H.; Chin, Angus; Lee, S. C.; Yen, Anthony; Wang, Jim; Chen, Ellison; Wu, David; Broadbent, William H.; Huang, William; Zhu, Zinggang

    2010-09-01

    We report inspection results of early 22-nm logic reticles designed with both conventional and computational lithography methods. Inspection is performed using a state-of-the-art 193-nm reticle inspection system in the reticle-plane inspection (RPI) mode, where both a rule-based sensitivity control (RSC) method and a newer model-based sensitivity control (MSC) method are tested. The evaluation includes defect detection performance using several special test reticles designed with both conventional and computational lithography methods; the reticles contain a variety of programmed critical defects which are measured based on wafer print impact. Also included are inspection results from several full-field product reticles designed with both conventional and computational lithography methods to determine if low nuisance-defect counts can be achieved. These early reticles are largely single-die, and all inspections are performed in the die-to-database inspection mode only.

  10. High performance computing and communications: Advancing the frontiers of information technology

    SciTech Connect

    1997-12-31

    This report, which supplements the President's Fiscal Year 1997 Budget, describes the interagency High Performance Computing and Communications (HPCC) Program. The HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of accomplishments to its credit. Over its five-year history, the HPCC Program has focused on developing high performance computing and communications technologies that can be applied to computation-intensive applications. Major highlights for FY 1996: (1) High performance computing systems enable practical solutions to complex problems with accuracies not possible five years ago; (2) HPCC-funded research in very large scale networking techniques has been instrumental in the evolution of the Internet, which continues exponential growth in size, speed, and availability of information; (3) The combination of hardware capability measured in gigaflop/s, networking technology measured in gigabit/s, and new computational science techniques for modeling phenomena has demonstrated that very large scale accurate scientific calculations can be executed across heterogeneous parallel processing systems located thousands of miles apart; (4) Federal investments in HPCC software R and D support researchers who pioneered the development of parallel languages and compilers, high performance mathematical, engineering, and scientific libraries, and software tools--technologies that allow scientists to use powerful parallel systems to focus on Federal agency mission applications; and (5) HPCC support for virtual environments has enabled the development of immersive technologies, where researchers can explore and manipulate multi-dimensional scientific and engineering problems. Educational programs fostered by the HPCC Program have brought into classrooms new science and engineering curricula designed to teach computational science. This document contains a small sample of the significant HPCC Program accomplishments in FY 1996.

  11. Detecting nano-scale vibrations in rotating devices by using advanced computational methods.

    PubMed

    del Toro, Raúl M; Haber, Rodolfo E; Schmittdiel, Michael C

    2010-01-01

    This paper presents a computational method for detecting vibrations related to eccentricity in ultra precision rotation devices used for nano-scale manufacturing. The vibration is indirectly measured via a frequency domain analysis of the signal from a piezoelectric sensor attached to the stationary component of the rotating device. The algorithm searches for particular harmonic sequences associated with the eccentricity of the device rotation axis. The detected sequence is quantified and serves as input to a regression model that estimates the eccentricity. A case study presents the application of the computational algorithm during precision manufacturing processes.
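
    As a rough illustration of the frequency-domain step described above, the sketch below picks out spectral peaks at integer multiples of the rotation frequency; the function names, the window and tolerance choices, and the regression coefficients are all assumptions for illustration, not the authors' algorithm.

    ```python
    import numpy as np

    def harmonic_amplitudes(signal, fs, f_rot, n_harmonics=5, tol_hz=0.5):
        """Spectral peak amplitudes near k * f_rot for k = 1..n_harmonics.

        signal : 1-D array sampled from the piezoelectric sensor
        fs     : sampling rate in Hz
        f_rot  : rotation frequency of the device in Hz
        """
        windowed = signal * np.hanning(len(signal))   # reduce spectral leakage
        spectrum = np.abs(np.fft.rfft(windowed))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        amps = []
        for k in range(1, n_harmonics + 1):
            band = np.abs(freqs - k * f_rot) <= tol_hz
            amps.append(spectrum[band].max() if band.any() else 0.0)
        return np.array(amps)

    def estimate_eccentricity(amps, slope=1e-3, intercept=0.0):
        """Hypothetical linear model mapping harmonic content to eccentricity;
        in practice the coefficients would come from calibration runs."""
        return slope * amps.sum() + intercept
    ```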

  12. Recent computational advances in the identification of allosteric sites in proteins.

    PubMed

    Lu, Shaoyong; Huang, Wenkang; Zhang, Jian

    2014-10-01

    Allosteric modulators have the potential to fine-tune protein functional activity. Therefore, the targeting of allosteric sites, as a strategy in drug design, is gaining increasing attention. Currently, it is not trivial to find and characterize new allosteric sites by experimental approaches. Alternatively, computational approaches are useful in helping researchers analyze and select potential allosteric sites for drug discovery. Here, we review state-of-the-art computational approaches for predicting putative allosteric sites in proteins, along with examples of successes in identifying allosteric sites using these methods. We also discuss the challenges in developing reliable methods for predicting allosteric sites, together with tactics for resolving these demanding tasks. PMID:25107670

  14. Detecting Nano-Scale Vibrations in Rotating Devices by Using Advanced Computational Methods

    PubMed Central

    del Toro, Raúl M.; Haber, Rodolfo E.; Schmittdiel, Michael C.

    2010-01-01

    This paper presents a computational method for detecting vibrations related to eccentricity in ultra precision rotation devices used for nano-scale manufacturing. The vibration is indirectly measured via a frequency domain analysis of the signal from a piezoelectric sensor attached to the stationary component of the rotating device. The algorithm searches for particular harmonic sequences associated with the eccentricity of the device rotation axis. The detected sequence is quantified and serves as input to a regression model that estimates the eccentricity. A case study presents the application of the computational algorithm during precision manufacturing processes. PMID:22399918

  15. Development of Computational Approaches for Simulation and Advanced Controls for Hybrid Combustion-Gasification Chemical Looping

    SciTech Connect

    Joshi, Abhinaya; Lou, Xinsheng; Neuschaefer, Carl; Chaudry, Majid; Quinn, Joseph

    2012-07-31

    This document provides the results of the project through September 2009. The Phase I project has recently been extended from September 2009 to March 2011. The project extension will begin work on Chemical Looping (CL) prototype modeling and advanced control design exploration in preparation for a scale-up phase. The results to date include: successful development of dual-loop chemical looping process models and dynamic simulation software tools; development and testing of several advanced control concepts and applications for chemical looping transport control; and investigation of several sensor concepts, establishing two feasible sensor candidates recommended for further prototype development and controls integration. This summary and conclusions document contains three sections. Section 1 presents the project scope and objectives. Section 2 highlights the detailed accomplishments by project task area. Section 3 provides conclusions to date and recommendations for future work.

  16. Structural analysis of advanced polymeric foams by means of high resolution X-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Nacucchi, M.; De Pascalis, F.; Scatto, M.; Capodieci, L.; Albertoni, R.

    2016-06-01

    Advanced polymeric foams with enhanced thermal insulation and mechanical properties are used in a wide range of industrial applications. The properties of a foam strongly depend upon its cell structure. Traditionally, foam microstructure has been studied using 2D imaging systems based on optical or electron microscopy, with the obvious disadvantage that only the surface of the sample can be analysed. To overcome this shortcoming, the adoption of X-ray micro-tomography imaging is suggested here to allow for a complete 3D, non-destructive analysis of advanced polymeric foams. Unlike in metallic foams, the resolution of the reconstructed structural features is hampered by the low contrast in the images due to weak X-ray absorption in the polymer. In this work an advanced methodology based on high-resolution and low-contrast techniques is used to perform quantitative analyses on both closed- and open-cell foams. Local structural features of individual cells, such as equivalent diameter, sphericity, anisotropy and orientation, are statistically evaluated. In addition, the thickness and length of the struts are determined, underlining the key role played by the achieved resolution. In perspective, the quantitative description of these structural features will be used to evaluate the results of in situ mechanical and thermal tests on foam samples.
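
    By way of illustration, per-cell metrics like those listed above can be computed from a segmented (labeled) tomographic volume. The sketch below uses scikit-image and the standard definitions d_eq = (6V/pi)^(1/3) and sphericity = pi^(1/3) (6V)^(2/3) / A; it is a generic recipe under those assumptions, not the authors' pipeline.

    ```python
    import numpy as np
    from skimage import measure

    def cell_shape_metrics(labels, voxel_mm):
        """Equivalent diameter and sphericity of each cell in a labeled 3-D volume.

        labels   : integer array in which each foam cell has a unique positive label
        voxel_mm : edge length of one (isotropic) voxel, in mm
        """
        metrics = {}
        for region in measure.regionprops(labels):
            volume = region.area * voxel_mm**3  # 'area' is the voxel count in 3-D
            d_eq = (6.0 * volume / np.pi) ** (1.0 / 3.0)

            # Surface area from a marching-cubes mesh; padding closes cells
            # that touch the border of the volume.
            mask = np.pad(labels == region.label, 1).astype(np.float32)
            verts, faces, _, _ = measure.marching_cubes(mask, level=0.5,
                                                        spacing=(voxel_mm,) * 3)
            area = measure.mesh_surface_area(verts, faces)

            metrics[region.label] = {
                "equivalent_diameter_mm": d_eq,
                "sphericity": np.pi ** (1.0 / 3.0)
                              * (6.0 * volume) ** (2.0 / 3.0) / area,
            }
        return metrics
    ```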

  17. Infrastructure for Training and Partnerships: California Water and Coastal Ocean Resources

    NASA Technical Reports Server (NTRS)

    Siegel, David A.; Dozier, Jeffrey; Gautier, Catherine; Davis, Frank; Dickey, Tommy; Dunne, Thomas; Frew, James; Keller, Arturo; MacIntyre, Sally; Melack, John

    2000-01-01

    The purpose of this project was to advance the existing ICESS/Bren School computing infrastructure to allow scientists, students, and research trainees the opportunity to interact with environmental data and simulations in near-real time. Improvements made with the funding from this project have helped to strengthen the research efforts within both units, fostered graduate research training, and helped fortify partnerships with government and industry. With this funding, we were able to expand our computational environment, in which computer resources, software, and data sets are shared by ICESS/Bren School faculty researchers in all areas of Earth system science. All of the graduate and undergraduate students associated with the Donald Bren School of Environmental Science and Management and the Institute for Computational Earth System Science have benefited from the infrastructure upgrades accomplished by this project. Additionally, the upgrades fostered a significant number of research projects (attached is a list of the projects that benefited from the upgrades). As originally proposed, funding for this project provided the following infrastructure upgrades: 1) a modern file management system, capable of interoperating UNIX and NT file systems, that can scale to 6.7 TB; 2) a Qualstar 40-slot tape library with two AIT tape drives and Legato Networker backup/archive software; 3) previously unavailable import/export capability for data sets on Zip, Jaz, DAT, 8mm, CD, and DLT media, in addition to a 622 Mb/s Internet 2 connection; 4) network switches capable of 100 Mbps to 128 desktop workstations; 5) the Portable Batch System (PBS) computational task scheduler; and 6) two Compaq/Digital Alpha XP1000 compute servers, each with 1.5 GB of RAM, along with an SGI Origin 2000 (purchased partially using funds from this project along with funding from various other sources) to be used for very large computations, as required for simulation of mesoscale meteorology or climate.

  18. Dynamic Docking Test System (DDTS) active table computer program NASA Advanced Docking System (NADS)

    NASA Technical Reports Server (NTRS)

    Gates, R. M.; Jantz, R. E.

    1974-01-01

    A computer program was developed to describe the three-dimensional motion of the Dynamic Docking Test System active table. The input consists of inertia and geometry data, actuator structural data, forcing function data, hydraulics data, servo electronics data, and integration control data. The output consists of table responses, actuator bending responses, and actuator responses.

  19. 78 FR 68058 - Next Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-13

    ... Systems Biology [External Review Draft] AGENCY: Environmental Protection Agency (EPA). ACTION: Notice of... Systems Biology" (EPA/600/R-13/214A). The original Federal Register notice announcing the public comment..., computational, and systems biology data can better inform risk assessment. This draft document is available...

  20. Exploring Interactive and Dynamic Simulations Using a Computer Algebra System in an Advanced Placement Chemistry Course

    ERIC Educational Resources Information Center

    Matsumoto, Paul S.

    2014-01-01

    The article describes the use of Mathematica, a computer algebra system (CAS), in a high school chemistry course. Mathematica was used to generate a graph, where a slider controls the value of parameter(s) in the equation; thus, students can visualize the effect of the parameter(s) on the behavior of the system. Also, Mathematica can show the…
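
    For readers without access to Mathematica, the same slider-driven visualization can be approximated in Python with matplotlib's interactive widgets. The first-order kinetics curve below is a stand-in example chosen here, not one taken from the article.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.widgets import Slider

    # Fraction remaining [A]/[A]0 = exp(-k t) for a first-order reaction;
    # the rate constant k is the slider-controlled parameter.
    t = np.linspace(0.0, 10.0, 400)
    k_init = 0.5

    fig, ax = plt.subplots()
    fig.subplots_adjust(bottom=0.25)                  # leave room for the slider
    line, = ax.plot(t, np.exp(-k_init * t))
    ax.set_xlabel("time")
    ax.set_ylabel("fraction of reactant remaining")

    slider_ax = fig.add_axes([0.15, 0.10, 0.70, 0.04])
    k_slider = Slider(slider_ax, "k", valmin=0.05, valmax=2.0, valinit=k_init)

    def redraw(_):
        # Recompute the curve whenever the slider moves.
        line.set_ydata(np.exp(-k_slider.val * t))
        fig.canvas.draw_idle()

    k_slider.on_changed(redraw)
    plt.show()
    ```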