Science.gov

Sample records for advanced computational infrastructure

  1. Advanced Artificial Science. The development of an artificial science and engineering research infrastructure to facilitate innovative computational modeling, analysis, and application to interdisciplinary areas of scientific investigation.

    SciTech Connect

    Saffer, Shelley I.

    2014-12-01

    This is the final report of DOE award DE-SC0001132, Advanced Artificial Science: the development of an artificial science and engineering research infrastructure to facilitate innovative computational modeling, analysis, and application to interdisciplinary areas of scientific investigation. This document describes the achievement of these goals and the resulting research made possible by this award.

  2. Flexible Computational Science Infrastructure

    SciTech Connect

    Bergen, Ben; Moss, Nicholas; Charest, Marc Robert Joseph

    2016-04-06

    FleCSI is a compile-time configurable framework designed to support multi-physics application development. As such, FleCSI attempts to provide a very general set of infrastructure design patterns that can be specialized and extended to suit the needs of a broad variety of solver and data requirements. Current support includes multi-dimensional mesh topology, mesh geometry, and mesh adjacency information, n-dimensional hashed-tree data structures, graph partitioning interfaces, and dependency closures. FleCSI also introduces a functional programming model with control, execution, and data abstractions that are consistent with both MPI and state-of-the-art task-based runtimes such as Legion and Charm++. The FleCSI abstraction layer provides the developer with insulation from the underlying runtime, while allowing support for multiple runtime systems, including conventional models like asynchronous MPI. The intent is to give developers a concrete set of user-friendly programming tools that can be used now, while allowing flexibility in choosing runtime implementations and optimizations that can be applied to architectures and runtimes that arise in the future. The control and execution models in FleCSI also provide formal nomenclature for describing poorly understood concepts like kernels and tasks.
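
    To make the runtime-insulation idea concrete, the following minimal Python sketch shows a task registry whose registered functions can be dispatched to interchangeable execution backends. It is a schematic analogue only: FleCSI itself is a C++ framework, and every name below is hypothetical rather than part of its API.

        import concurrent.futures

        # Hypothetical illustration (not the FleCSI API): a task registry that
        # insulates application code from the underlying execution backend.
        _REGISTRY = {}

        def task(fn):
            """Register a function as a launchable task."""
            _REGISTRY[fn.__name__] = fn
            return fn

        def execute(name, backend, *args):
            """Dispatch a registered task to the chosen backend."""
            fn = _REGISTRY[name]
            if backend == "serial":
                return fn(*args)
            if backend == "threads":  # stand-in for a task-based runtime
                with concurrent.futures.ThreadPoolExecutor() as pool:
                    return pool.submit(fn, *args).result()
            raise ValueError("unknown backend: " + backend)

        @task
        def update_cells(values, dt):
            return [v + dt * v for v in values]

        print(execute("update_cells", "threads", [1.0, 2.0], 0.1))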

  3. Towards an Infrastructure for MLS Distributed Computing

    DTIC Science & Technology

    1998-01-01

    Distributed computing owes its success to the development of infrastructure, middleware, and standards (e.g., CORBA) by the computing industry. ... Government must protect national security information against unauthorized information flow. To support MLS distributed computing, an MLS infrastructure ... protection of classified information and use both the emerging distributed computing and commercial security infrastructures. The resulting infrastructure

  4. @neurIST: infrastructure for advanced disease management through integration of heterogeneous data, computing, and complex processing services.

    PubMed

    Benkner, Siegfried; Arbona, Antonio; Berti, Guntram; Chiarini, Alessandro; Dunlop, Robert; Engelbrecht, Gerhard; Frangi, Alejandro F; Friedrich, Christoph M; Hanser, Susanne; Hasselmeyer, Peer; Hose, Rod D; Iavindrasana, Jimison; Köhler, Martin; Iacono, Luigi Lo; Lonsdale, Guy; Meyer, Rodolphe; Moore, Bob; Rajasekaran, Hariharan; Summers, Paul E; Wöhrer, Alexander; Wood, Steven

    2010-11-01

    The increasing volume of data describing human disease processes and the growing complexity of understanding, managing, and sharing such data presents a huge challenge for clinicians and medical researchers. This paper presents the @neurIST system, which provides an infrastructure for biomedical research while aiding clinical care, by bringing together heterogeneous data and complex processing and computing services. Although @neurIST targets the investigation and treatment of cerebral aneurysms, the system's architecture is generic enough that it could be adapted to the treatment of other diseases. Innovations in @neurIST include confining the patient data pertaining to aneurysms inside a single environment that offers clinicians the tools to analyze and interpret patient data and make use of knowledge-based guidance in planning their treatment. Medical researchers gain access to a critical mass of aneurysm related data due to the system's ability to federate distributed information sources. A semantically mediated grid infrastructure ensures that both clinicians and researchers are able to seamlessly access and work on data that is distributed across multiple sites in a secure way in addition to providing computing resources on demand for performing computationally intensive simulations for treatment planning and research.

  5. Computational Infrastructure for Geodynamics (CIG)

    NASA Astrophysics Data System (ADS)

    Gurnis, M.; Kellogg, L. H.; Bloxham, J.; Hager, B. H.; Spiegelman, M.; Willett, S.; Wysession, M. E.; Aivazis, M.

    2004-12-01

    Solid earth geophysicists have a long tradition of writing scientific software to address a wide range of problems. In particular, computer simulations came into wide use in geophysics during the decade after the plate tectonic revolution. Solution schemes and numerical algorithms that developed in other areas of science, most notably engineering, fluid mechanics, and physics, were adapted with considerable success to geophysics. This software has largely been the product of individual efforts and although this approach has proven successful, its strength for solving problems of interest is now starting to show its limitations as we try to share codes and algorithms or when we want to recombine codes in novel ways to produce new science. With funding from the NSF, the US community has embarked on a Computational Infrastructure for Geodynamics (CIG) that will develop, support, and disseminate community-accessible software for the greater geodynamics community from model developers to end-users. The software is being developed for problems involving mantle and core dynamics, crustal and earthquake dynamics, magma migration, seismology, and other related topics. With a high level of community participation, CIG is leveraging state-of-the-art scientific computing into a suite of open-source tools and codes. The infrastructure that we are now starting to develop will consist of: (a) a coordinated effort to develop reusable, well-documented and open-source geodynamics software; (b) the basic building blocks - an infrastructure layer - of software by which state-of-the-art modeling codes can be quickly assembled; (c) extension of existing software frameworks to interlink multiple codes and data through a superstructure layer; (d) strategic partnerships with the larger world of computational science and geoinformatics; and (e) specialized training and workshops for both the geodynamics and broader Earth science communities. The CIG initiative has already started to

  6. Scientific computing infrastructure and services in Moldova

    NASA Astrophysics Data System (ADS)

    Bogatencov, P. P.; Secrieru, G. V.; Degteariov, N. V.; Iliuha, N. P.

    2016-09-01

    In recent years, distributed information processing and high-performance computing technologies (HPC, distributed Cloud, and Grid computing infrastructures) for solving complex tasks with high demands on computing resources have been developing actively. In Moldova, work on the creation of high-performance and distributed computing infrastructures started relatively recently, prompted by participation in a number of international projects. Research teams from Moldova took part in a series of regional and pan-European projects that allowed them to begin forming the national heterogeneous computing infrastructure, gain access to regional and European computing resources, and expand the range and areas of tasks they can address.

  7. Computational Infrastructure for Nuclear Astrophysics

    NASA Astrophysics Data System (ADS)

    Smith, M. S.; Lingerfelt, E. J.; Scott, J. P.; Nesaraja, C. D.; Hix, W. R.; Bardayan, D. W.; Blackmon, J. C.; Chae, K.; Guidry, M. W.; Hard, C. C.; Sharp, J. E.; Kozub, R. L.; Meyer, R. A.

    2004-12-01

    The Computational Infrastructure for Nuclear Astrophysics is a platform-independent, online suite of computer codes developed by the ORNL Nuclear Data Project that makes a rapid connection between laboratory nuclear physics results and astrophysical models. It enables users to evaluate cross sections, process them into thermonuclear reaction rates, and parameterize (with a few percent accuracy) these rates that vary by up to 30 orders of magnitude over the temperatures of interest. Users can then properly format these rates for input into astrophysical computer simulations, create and manipulate libraries of rates, as well as run and visualize sample post-processing nucleosynthesis calculations. For example, we have developed animated nuclide charts that show how predicted abundances (represented by a user-defined color scale) change in time. With this unique suite, users can within a very short time quantify the astrophysical impact of a newly measured or calculated cross section, or a newly created customized reaction rate library, and then document and share their results with the scientific community. The suite has a straightforward interface with a "Windows Wizard" motif whereby users progress through complicated calculations in a step-by-step fashion. Users can upload their own files for processing and save their work on our server, as well as work with files that other users wish to share. These tools are currently being used to investigate novae and X-ray bursts. The suite is available through nucastrodata.org, a website that also hyperlinks available nuclear data sets relevant for nuclear astrophysics research. New features are continually being added to this software, which is funded by the U.S. Department of Energy Low Energy Nuclear Physics and Nuclear Data Programs. ORNL is managed by UT-Battelle, LLC, for the U.S. Department of Energy under contract DE-AC05-00OR22725.
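
    The parameterization step can be illustrated with the seven-parameter form used by standard thermonuclear reaction-rate libraries (the REACLIB format); assuming the suite's fits are of this kind, a single smooth expression covers the many orders of magnitude involved:

        N_A \langle \sigma v \rangle (T_9) = \exp\left( a_0 + a_1 T_9^{-1} + a_2 T_9^{-1/3} + a_3 T_9^{1/3} + a_4 T_9 + a_5 T_9^{5/3} + a_6 \ln T_9 \right)

    where T_9 is the temperature in units of 10^9 K and a_0 through a_6 are the fitted parameters.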

  8. A Computing Infrastructure for Supporting Climate Studies

    NASA Astrophysics Data System (ADS)

    Yang, C.; Bambacus, M.; Freeman, S. M.; Huang, Q.; Li, J.; Sun, M.; Xu, C.; Wojcik, G. S.; Cahalan, R. F.; NASA Climate @ Home Project Team

    2011-12-01

    Climate change is one of the major challenges facing us on planet Earth in the 21st century. Scientists build many models to simulate the past and predict climate change for the coming decades or century. Most of the models run at low resolution, with some targeting high resolution in linkage to practical climate change preparedness. To calibrate and validate the models, millions of model runs are needed to find the best simulation and configuration. This paper introduces the NASA effort on the Climate@Home project to build a supercomputer based on advanced computing technologies, such as cloud computing, grid computing, and others. The Climate@Home computing infrastructure includes several aspects: 1) a cloud computing platform is utilized to manage potential spikes in access to the centralized components, such as the grid computing server for dispatching and collecting model run results; 2) a grid computing engine is developed based on MapReduce to dispatch models and model configurations, and to collect simulation results and contribution statistics; 3) a portal serves as the entry point for the project, providing management, sharing, and data exploration for end users; 4) scientists can access customized tools to configure model runs and visualize model results; 5) the public can follow Twitter and Facebook for the latest news about the project. This paper will introduce the latest progress of the project and demonstrate the operational system during the AGU fall meeting. It will also discuss how this technology can become a trailblazer for other climate studies and relevant sciences, and share how the challenges in computation and software integration were solved.
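
    The dispatch-and-collect pattern of the grid computing engine can be sketched in a few lines of Python. The stand-in below uses a local process pool in place of the project's MapReduce-based engine, and the configuration fields and scoring are invented for illustration.

        from multiprocessing import Pool

        def run_model(config):
            """Map step: run one model configuration and return its result."""
            resolution, forcing = config
            return {"config": config, "score": resolution * forcing}  # placeholder physics

        def best_run(results):
            """Reduce step: collect results and keep the best-scoring run."""
            return max(results, key=lambda r: r["score"])

        if __name__ == "__main__":
            configs = [(r, f) for r in (1, 2, 4) for f in (0.5, 1.0)]
            with Pool() as pool:
                results = pool.map(run_model, configs)  # dispatch the model runs
            print(best_run(results))                    # aggregate the statistics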

  9. In silico drug discovery approaches on grid computing infrastructures.

    PubMed

    Wolf, Antje; Shahid, Mohammad; Kasam, Vinod; Ziegler, Wolfgang; Hofmann-Apitius, Martin

    2010-02-01

    The first step in finding a "drug" is screening chemical compound databases against a protein target. In silico approaches like virtual screening by molecular docking are well established in modern drug discovery. As molecular databases of compounds and target structures become larger and more and more computational screening approaches are available, there is an increased need for compute power and more complex workflows. In this regard, computational Grids are well suited, offering seamless compute and storage capacity. In recent projects related to pharmaceutical research, the high computational and data storage demands of large-scale in silico drug discovery approaches have been addressed by using Grid computing infrastructures, in both the pharmaceutical industry and academic research. Grid infrastructures are part of the so-called eScience paradigm, where a digital infrastructure supports collaborative processes by providing relevant resources and tools for data- and compute-intensive applications. Substantial computing resources, large data collections, and services for data analysis are shared on the Grid infrastructure and can be mobilized on demand. This review gives an overview of the use of Grid computing for in silico drug discovery and tries to provide a vision of the future development of more complex and integrated workflows on Grids, spanning from target identification and target validation via protein-structure and ligand-dependent screenings to advanced mining of large-scale in silico experiments.

  10. Building an infrastructure for urgent computing.

    SciTech Connect

    Beckman, P.; Beschastnikh, I.; Nadella, S.; Trebon, N.; Mathematics and Computer Science; Univ. of Chicago; Univ. of Washington

    2007-01-01

    Large-scale scientific computing is playing an ever-increasing role in critical decision-making and dynamic, event-driven systems. While some computation can simply wait in a job queue until resources become available, key decisions concerning life-threatening situations must be made quickly. A computation to predict the flow of airborne contaminants from a ruptured railcar must guide evacuation rapidly, before the results are meaningless. Although not as urgent, on-demand computing is often required to leverage a scientific opportunity. For example, a burst of data from seismometers could trigger urgent computation that then redirects instruments to focus data collection on specific regions, before the opportunity is lost. This paper describes the challenges of building an infrastructure to support urgent computing. We focus on three areas: the needs and requirements of an urgent computing system, a prototype urgent computing system called SPRUCE currently deployed on the TeraGrid, and future technologies and directions for urgent computing. Since the days of the first supercomputers, computation has been playing an ever-increasing role in science.
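
    A minimal sketch of token-based urgent scheduling in this spirit appears below; the token mechanics are illustrative assumptions, not the actual SPRUCE protocol.

        import heapq
        import itertools

        URGENT, NORMAL = 0, 1               # lower value is served first
        VALID_TOKENS = {"right-of-way-42"}  # hypothetical pre-issued token
        counter = itertools.count()         # tie-breaker preserves submission order
        queue = []

        def submit(job, token=None):
            """Queue a job, elevating priority when a valid urgency token is presented."""
            priority = URGENT if token in VALID_TOKENS else NORMAL
            heapq.heappush(queue, (priority, next(counter), job))

        submit("routine climate run")
        submit("contaminant plume forecast", token="right-of-way-42")
        submit("parameter sweep")

        while queue:
            _, _, job = heapq.heappop(queue)
            print("running:", job)  # the urgent job runs first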

  11. National Computational Infrastructure for Lattice Gauge Theory

    SciTech Connect

    Brower, Richard C.

    2014-04-15

    This report covers the SciDAC-2 project The Secret Life of Quarks: National Computational Infrastructure for Lattice Gauge Theory, from March 15, 2011 through March 14, 2012. The objective of this project is to construct the software needed to study quantum chromodynamics (QCD), the theory of the strong interactions of sub-atomic physics, and other strongly coupled gauge field theories anticipated to be of importance in the energy regime made accessible by the Large Hadron Collider (LHC). It builds upon the successful efforts of the SciDAC-1 project National Computational Infrastructure for Lattice Gauge Theory, in which a QCD Applications Programming Interface (QCD API) was developed that enables lattice gauge theorists to make effective use of a wide variety of massively parallel computers. This project serves the entire USQCD Collaboration, which consists of nearly all the high energy and nuclear physicists in the United States engaged in the numerical study of QCD and related strongly interacting quantum field theories. All software developed in it is publicly available and can be downloaded from a link on the USQCD Collaboration web site, or directly from the GitHub repositories at http://usqcd-software.github.io

  12. Computational infrastructure for law enforcement. Final report

    SciTech Connect

    Lades, M.; Kunz, C.; Strikos, I.

    1997-02-01

    This project planned to demonstrate the leverage of enhanced computational infrastructure for law enforcement by demonstrating the face recognition capability at LLNL. The project implemented a face-finder module that extended the segmentation capabilities of the existing face recognition system so it could process different image formats and sizes, and created a pilot network-accessible image database for the demonstration of face recognition capabilities. The project was funded at $40k (2 man-months) for a feasibility study. It investigated several essential components of a networked face recognition system which could help identify, apprehend, and convict criminals.

  13. National Computational Infrastructure for Lattice Gauge Theory

    SciTech Connect

    Reed, Daniel A.

    2008-05-30

    In this document we describe work done under the SciDAC-1 project National Computational Infrastructure for Lattice Gauge Theory. The objective of this project was to construct the computational infrastructure needed to study quantum chromodynamics (QCD). Nearly all high energy and nuclear physicists in the United States working on the numerical study of QCD are involved in the project, as are Brookhaven National Laboratory (BNL), Fermi National Accelerator Laboratory (FNAL), and Thomas Jefferson National Accelerator Facility (JLab). A list of the senior participants is given in Appendix A.2. The project includes the development of community software for the effective use of terascale computers, and the research and development of commodity clusters optimized for the study of QCD. The software developed as part of this effort is publicly available, and is being widely used by physicists in the United States and abroad. The prototype clusters built with SciDAC-1 funds have been used to test the software, and are available to lattice gauge theorists in the United States on a peer-reviewed basis.

  14. Predictive Anomaly Management for Resilient Virtualized Computing Infrastructures

    DTIC Science & Technology

    2015-05-27

    Large-scale distributed virtualized computing infrastructures have become important platforms for many critical real-world ... virtualized computing infrastructures are inevitably prone to various system anomalies caused by software bugs, hardware failures, and resource contentions ...

  15. Advanced Computer Typography.

    DTIC Science & Technology

    1981-12-01

    Advanced Computer Typography. Final report (Dec 1979 - Dec 1981) by A. V. Hershey, Naval Postgraduate School, Monterey, California, December 1981. Report NPS012-81-005, unclassified; approved for public release.

  16. Virtualizing observation computing infrastructure at Subaru Telescope

    NASA Astrophysics Data System (ADS)

    Jeschke, Eric; Inagaki, Takeshi; Kackley, Russell; Schubert, Kiaina; Tait, Philip

    2016-08-01

    Subaru Telescope, an 8-meter class optical telescope located in Hawaii, has been using a high-availability commodity cluster as a platform for our Observation Control System (OCS). Until recently, we have followed a tried-and-tested practice of running the system under a native (Linux) OS installation with dedicated attached RAID systems, following a strict cluster deployment model to facilitate failover handling of hardware problems. Following the apparent benefits of virtualizing (i.e., running in Virtual Machines, VMs) many of the non-observation-critical systems at the base facility, we recently began to explore the idea of migrating other parts of the observatory's computing infrastructure to virtualized systems, including the summit OCS, data analysis systems, and even the front ends of various Instrument Control Systems. In this paper we describe our experience with the initial migration of the Observation Control System to virtual machines running on the cluster, using a new-generation tool, Ansible, to automate installation and deployment. This change has significant impacts on ease of cluster maintenance, upgrades, snapshots/backups, risk management, availability, performance, cost savings, and energy use. We discuss some of the trade-offs involved in this virtualization and some of the impacts on the above-mentioned areas, as well as the specific techniques we are using to accomplish the changeover, simplify installation, and reduce management complexity.

  17. RF Technologies for Advancing Space Communication Infrastructure

    NASA Technical Reports Server (NTRS)

    Romanofsky, Robert R.; Bibyk, Irene K.; Wintucky, Edwin G.

    2006-01-01

    This paper will address key technologies under development at the NASA Glenn Research Center designed to provide architecture-level impacts. Specifically, we will describe deployable antennas, a new type of phased array antenna, and novel power amplifiers. The evaluation of architectural influence can be conducted from two perspectives: the architecture can be analyzed from the top down, to determine the areas where technology improvements will be most beneficial, or from the bottom up, where each technology's performance advancement can affect the overall architecture's performance. This paper takes the latter approach, focusing on some technology improvement challenges and addressing architecture impacts. For example, using data rate as a performance metric, future exploration scenarios are expected to demand data rates possibly exceeding 1 Gbps. To support these advancements in a Mars scenario, as an example, Ka-band and antenna aperture sizes on the order of 10 meters will be required from Mars areostationary platforms. Key technical challenges for a large deployable antenna include maximizing the ratio of deployed-to-packaged volume, minimizing areal density, maintaining RMS surface accuracy to within 1/20 of a wavelength or better, and developing reflector rigidization techniques. Moreover, the high frequencies and large apertures manifest a problem new to microwave engineers but familiar to optical communications specialists: pointing. The fine beam widths and long ranges dictate the need for electronic or mechanical feed articulation to compensate for spacecraft attitude control limitations.
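
    The surface-accuracy requirement can be made concrete with a worked example, assuming a representative 32 GHz Ka-band carrier (the specific frequency is an assumption, not stated above):

        \lambda = \frac{c}{f} = \frac{3 \times 10^8\ \mathrm{m/s}}{32 \times 10^9\ \mathrm{Hz}} \approx 9.4\ \mathrm{mm}, \qquad \frac{\lambda}{20} \approx 0.47\ \mathrm{mm}

    so a 10-meter-class deployable reflector would need to hold roughly half-millimeter RMS surface accuracy across its entire aperture.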

  18. Network and computing infrastructure for scientific applications in Georgia

    NASA Astrophysics Data System (ADS)

    Kvatadze, R.; Modebadze, Z.

    2016-09-01

    The status of the network and computing infrastructure and the available services for the research and education community of Georgia is presented. The Research and Educational Networking Association (GRENA) provides the following network services: Internet connectivity, network services, cyber security, technical support, etc. Computing resources used by the research teams are located at GRENA and at major state universities. The GE-01-GRENA site is included in the European Grid Infrastructure. The paper also contains information about the programs of the Learning Center and the research and development projects in which GRENA is participating.

  19. Improving Mobile Infrastructure for Pervasive Personal Computing

    DTIC Science & Technology

    2007-11-01

    In the ISR model of mobile computing, users take advantage of pervasive deployments of inexpensive, mass-market PC hardware rather than carrying portable ... carry one's computing environment in a portable computer. Instead, an exact replica of the last checkpointed state of a user's entire computing ... restricted. The same assumption applies to a portable computer that is physically safeguarded by the user at all times. In contrast, physical access to

  20. 77 FR 40586 - Draft NIST Interagency Report (NISTIR) 7823, Advanced Metering Infrastructure Smart Meter...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-10

    ... Metering Infrastructure Smart Meter Upgradeability Test Framework; Request for Comments AGENCY: National... Metering Infrastructure Smart Meter Upgradeability Test Framework (Draft NISTIR 7823). This draft document... process for the Advanced Metering Infrastructure (AMI) Smart Meters. The target audience for Draft...

  21. Advanced Wireless Power Transfer Vehicle and Infrastructure Analysis (Presentation)

    SciTech Connect

    Gonder, J.; Brooker, A.; Burton, E.; Wang, J.; Konan, A.

    2014-06-01

    This presentation discusses current research at NREL on advanced wireless power transfer vehicle and infrastructure analysis. The potential benefits of E-roadway include more electrified driving miles from battery electric vehicles, plug-in hybrid electric vehicles, or even properly equipped hybrid electric vehicles (i.e., more electrified miles could be obtained from a given battery size, or electrified driving miles could be maintained while using smaller and less expensive batteries, thereby increasing cost competitiveness and potential market penetration). The system optimization aspect is key given the potential impact of this technology on the vehicles, the power grid and the road infrastructure.

  22. Advanced technology program: information infrastructure for healthcare focused program.

    PubMed

    Spivack, Richard N

    2002-01-01

    This paper describes an initiative begun by the Advanced Technology Program in 1994 referred to as the Information Infrastructure for Healthcare (IIH) focused program. The IIH focus program began with an initial exchange of ideas among members of the private and public sectors (industry's submission of "white papers"; workshops conducted by the ATP; meetings held between individuals from both groups) to identify those technologies necessary for the development of a national information infrastructure in healthcare. A discussion of the development of the focus program through a "white paper" process notes differences that existed between what the ATP had hoped to gain through this method and how the private sector responded. A statistical description of the participants as well as a brief discussion of the ATP review and selection process is included.

  23. Autonomic Management of Application Workflows on Hybrid Computing Infrastructure

    DOE PAGES

    Kim, Hyunjoo; el-Khamra, Yaakoub; Rodero, Ivan; ...

    2011-01-01

    In this paper, we present a programming and runtime framework that enables the autonomic management of complex application workflows on hybrid computing infrastructures. The framework is designed to address system and application heterogeneity and dynamics to ensure that application objectives and constraints are satisfied. The need for such autonomic system and application management is becoming critical as computing infrastructures become increasingly heterogeneous, integrating different classes of resources from high-end HPC systems to commodity clusters and clouds. For example, the framework presented in this paper can be used to provision the appropriate mix of resources based on application requirements and constraints. The framework also monitors the system/application state and adapts the application and/or resources to respond to changing requirements or environment. To demonstrate the operation of the framework and to evaluate its ability, we employ a workflow used to characterize an oil reservoir executing on a hybrid infrastructure composed of TeraGrid nodes and Amazon EC2 instances of various types. Specifically, we show how different application objectives such as acceleration, conservation and resilience can be effectively achieved while satisfying deadline and budget constraints, using an appropriate mix of dynamically provisioned resources. Our evaluations also demonstrate that public clouds can be used to complement and reinforce the scheduling and usage of traditional high performance computing infrastructure.
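
    As a toy illustration of provisioning a resource mix under deadline and budget constraints, the Python sketch below greedily adds the cheapest capacity until the deadline can be met. The resource names, throughputs, and prices are hypothetical, not taken from the paper.

        RESOURCES = [
            # name, tasks per hour, dollars per hour
            ("hpc_node", 100, 0.0),    # fixed allocation, already paid for
            ("ec2_small", 20, 0.10),
            ("ec2_large", 90, 0.50),
        ]

        def provision(tasks, deadline_h, budget):
            """Greedily add the cheapest capacity until the deadline is met."""
            chosen, rate, cost_per_h = [], 0, 0.0
            for name, throughput, price in sorted(RESOURCES, key=lambda r: r[2]):
                while rate * deadline_h < tasks and (cost_per_h + price) * deadline_h <= budget:
                    chosen.append(name)
                    rate += throughput
                    cost_per_h += price
                    if price == 0.0:  # only one free HPC allocation is available
                        break
            return chosen if rate * deadline_h >= tasks else None

        print(provision(tasks=1000, deadline_h=5, budget=3.0))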

  24. New security infrastructure model for distributed computing systems

    NASA Astrophysics Data System (ADS)

    Dubenskaya, J.; Kryukov, A.; Demichev, A.; Prikhodko, N.

    2016-02-01

    In this paper we propose a new approach to setting up a user-friendly and yet secure authentication and authorization procedure in a distributed computing system. The security concept of most heterogeneous distributed computing systems is based on a public key infrastructure along with proxy certificates, which are used for rights delegation. In practice, the contradiction between the limited lifetime of proxy certificates and the unpredictable processing time of requests is a big issue for the end users of the system. We propose to use hashes that are unlimited in time and individual for each request instead of proxy certificates. Our approach avoids the use of proxy certificates altogether, so the security infrastructure of a distributed computing system becomes easier to develop, support, and use.
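
    A minimal Python sketch of the proposed scheme, with per-request keyed hashes taking the place of proxy certificates, is given below; the key distribution and token layout are assumptions made for illustration.

        import hashlib
        import hmac
        import json

        SHARED_KEY = b"issued-to-user-out-of-band"  # assumed per-user secret

        def sign_request(user, action, request_id):
            """Client side: derive an individual, non-expiring hash for one request."""
            payload = json.dumps({"user": user, "action": action, "id": request_id},
                                 sort_keys=True).encode()
            return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

        def verify(user, action, request_id, token):
            """Service side: recompute the hash and compare in constant time."""
            return hmac.compare_digest(sign_request(user, action, request_id), token)

        t = sign_request("alice", "submit_job", "req-001")
        print(verify("alice", "submit_job", "req-001", t))   # True
        print(verify("alice", "delete_data", "req-001", t))  # False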

  25. High-throughput neuroimaging-genetics computational infrastructure.

    PubMed

    Dinov, Ivo D; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Hobel, Sam; Vespa, Paul; Woo Moon, Seok; Van Horn, John D; Franco, Joseph; Toga, Arthur W

    2014-01-01

    Many contemporary neuroscientific investigations face significant challenges in terms of data management, computational processing, data mining, and results interpretation. These four pillars define the core infrastructure necessary to plan, organize, orchestrate, validate, and disseminate novel scientific methods, computational resources, and translational healthcare findings. Data management includes protocols for data acquisition, archival, query, transfer, retrieval, and aggregation. Computational processing involves the necessary software, hardware, and networking infrastructure required to handle large amounts of heterogeneous neuroimaging, genetics, clinical, and phenotypic data and meta-data. Data mining refers to the process of automatically extracting data features, characteristics and associations, which are not readily visible by human exploration of the raw dataset. Result interpretation includes scientific visualization, community validation of findings and reproducible findings. In this manuscript we describe the novel high-throughput neuroimaging-genetics computational infrastructure available at the Institute for Neuroimaging and Informatics (INI) and the Laboratory of Neuro Imaging (LONI) at the University of Southern California (USC). INI and LONI include ultra-high-field and standard-field MRI brain scanners along with an imaging-genetics database for storing the complete provenance of the raw and derived data and meta-data. In addition, the institute provides a large number of software tools for image and shape analysis, mathematical modeling, genomic sequence processing, and scientific visualization. A unique feature of this architecture is the Pipeline environment, which integrates the data management, processing, transfer, and visualization. Through its client-server architecture, the Pipeline environment provides a graphical user interface for designing, executing, monitoring, validating, and disseminating complex protocols that utilize

  26. Integrating multiple scientific computing needs via a Private Cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Brunetti, R.; Lusso, S.; Vallero, S.

    2014-06-01

    In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It makes it possible to dynamically and efficiently allocate resources to any application and to tailor the virtual machines according to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily while minimizing the downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 site and a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC, as well as several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.

  27. Analysis of CERN computing infrastructure and monitoring data

    NASA Astrophysics Data System (ADS)

    Nieke, C.; Lassnig, M.; Menichetti, L.; Motesnitsalis, E.; Duellmann, D.

    2015-12-01

    Optimizing a computing infrastructure on the scale of the LHC requires a quantitative understanding of a complex network of many different resources and services. For this purpose the CERN IT department and the LHC experiments are collecting a multitude of logs and performance probes, which are already successfully used for short-term analysis (e.g., operational dashboards) within each group. The IT analytics working group has been created with the goal of bringing together data sources from different services and on different abstraction levels and of implementing a suitable infrastructure for mid- to long-term statistical analysis. It further provides a forum for joint optimization across single-service boundaries and for the exchange of analysis methods and tools. To simplify access to the collected data, we implemented an automated repository for cleaned and aggregated data sources based on the Hadoop ecosystem. This contribution describes some of the challenges encountered, such as dealing with heterogeneous data formats and selecting an efficient storage format for MapReduce and external access, and describes the repository user interface. Using this infrastructure we were able to quantitatively analyze the relationship between CPU/wall fraction, latency/throughput constraints of network and disk, and the effective job throughput. We first describe the design of the shared analysis infrastructure and then present a summary of first analysis results from the combined data sources.
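
    The kind of relationship studied above can be sketched as a simple aggregation. The Python fragment below computes per-site CPU/wall efficiency from invented job records, standing in for the map-reduce jobs run against the Hadoop repository.

        from collections import defaultdict

        jobs = [  # invented monitoring records
            {"site": "CERN-T0", "cpu_s": 3400, "wall_s": 3600},
            {"site": "CERN-T0", "cpu_s": 1200, "wall_s": 3600},
            {"site": "Site-B",  "cpu_s": 3500, "wall_s": 3700},
        ]

        totals = defaultdict(lambda: [0, 0])
        for j in jobs:  # aggregate CPU and wall time per site
            totals[j["site"]][0] += j["cpu_s"]
            totals[j["site"]][1] += j["wall_s"]

        for site, (cpu, wall) in sorted(totals.items()):
            print(site, "CPU/wall fraction:", round(cpu / wall, 2))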

  28. Advanced Electrical, Optical and Data Communication Infrastructure Development

    SciTech Connect

    Simon Cobb

    2011-04-30

    The implementation of electrical and IT infrastructure systems at the North Carolina Center for Automotive Research, Inc. (NCCAR) has achieved several key objectives in terms of system functionality, operational safety, and potential for ongoing research and development. Key conclusions include: (1) the proven ability to operate a high-speed wireless data network over a large 155-acre area; (2) node-to-node wireless transfers from access points are possible at speeds of more than 50 mph while maintaining high-volume bandwidth; (3) triangulation of electronic devices/users is possible in areas with multiple overlapping access points, while outdoor areas with reduced overlap of access point coverage suffer considerably reduced triangulation accuracy; (4) wireless networks can be adversely affected by tree foliage; pine needles are a particular challenge due to the needle length relative to the transmission frequency/wavelength; and (5) future research will use the project video surveillance and wireless systems to further develop automated image tracking functionality for the benefit of advanced vehicle safety monitoring and autonomous vehicle control through 'vehicle-to-vehicle' and 'vehicle-to-infrastructure' communications. A specific advantage realized from this IT implementation at NCCAR is that NC State University is implementing a similar wireless network across Centennial Campus, Raleigh, NC in 2011 and has benefited from lessons learned during this project. Consequently, students, researchers and members of the public will be able to benefit from a large-scale IT implementation with features and improvements derived from this NCCAR project.

  29. An Infrastructure to Advance the Scholarly Work of Staff Nurses

    PubMed Central

    Parkosewich, Janet A.

    2013-01-01

    The traditional role of the acute care staff nurse is changing. The new norm establishes an expectation that staff nurses base their practice on best evidence. When evidence is lacking, nurses are charged with using the research process to generate and disseminate new knowledge. This article describes the critical forces behind the transformation of this role and the organizational mission, culture, and capacity required to support practice that is based on science. The vital role of senior nursing leaders, the nurse researcher, and the nursing research committee within the context of a collaborative governance structure is highlighted. Several well-known, evidence-based practice models are presented. Finally, there is a discussion of the infrastructure created by Yale-New Haven Hospital to advance the scholarly work of the nursing staff. PMID:23482435

  30. Advances in Computational Astrophysics

    SciTech Connect

    Calder, Alan C.; Kouzes, Richard T.

    2009-03-01

    I was invited, along with a colleague from Stony Brook, to serve as guest editor for a special issue of Computing in Science and Engineering. This is the guest editors' introduction to that special issue; Alan and I wrote the introduction and served as editors for the four papers published in the special edition.

  31. Distributed Monitoring Infrastructure for Worldwide LHC Computing Grid

    NASA Astrophysics Data System (ADS)

    Andrade, P.; Babik, M.; Bhatt, K.; Chand, P.; Collados, D.; Duggal, V.; Fuente, P.; Hayashi, S.; Imamagic, E.; Joshi, P.; Kalmady, R.; Karnani, U.; Kumar, V.; Lapka, W.; Quick, R.; Tarragon, J.; Teige, S.; Triantafyllidis, C.

    2012-12-01

    The journey of a monitoring probe from its development phase to the moment its execution result is presented in an availability report is a complex process. It goes through multiple phases such as development, testing, integration, release, deployment, execution, data aggregation, computation, and reporting. Further, it involves people with different roles (developers, site managers, VO[1] managers, service managers, management), from different middleware providers (ARC[2], dCache[3], gLite[4], UNICORE[5] and VDT[6]), consortiums (WLCG[7], EMI[11], EGI[15], OSG[13]), and operational teams (GOC[16], OMB[8], OTAG[9], CSIRT[10]). The seamless harmonization of these distributed actors underpins the daily monitoring of the WLCG infrastructure. In this paper we describe the monitoring of the WLCG infrastructure from the operational perspective. We explain the complexity of the journey of a monitoring probe from its execution on a grid node to its visualization on the MyWLCG[27] portal, where it is exposed to other clients. This monitoring workflow profits from the interoperability established between the SAM[19] and RSV[20] frameworks. We show how these two distributed structures are capable of uniting technologies and hiding the complexity around them, making them easy for the community to use. Finally, the different supported deployment strategies, tailored not only for monitoring the entire infrastructure but also for monitoring sites and virtual organizations, are presented and the associated operational benefits highlighted.

  32. Performance evaluation of cognitive radio in advanced metering infrastructure communication

    NASA Astrophysics Data System (ADS)

    Hiew, Yik-Kuan; Mohd Aripin, Norazizah; Din, Norashidah Md

    2016-03-01

    Smart grid is an intelligent electricity grid system. A reliable two-way communication system is required to transmit both critical and non-critical smart grid data. However, it is difficult to locate a large chunk of dedicated spectrum for smart grid communications. Hence, cognitive radio based communication is applied. Cognitive radio allows smart grid users to access licensed spectrum opportunistically, with the constraint of not causing harmful interference to licensed users. In this paper, a cognitive radio based smart grid communication framework is proposed. The smart grid framework consists of a Home Area Network (HAN) and an Advanced Metering Infrastructure (AMI), while the AMI is made up of a Neighborhood Area Network (NAN) and a Wide Area Network (WAN). In this paper, the authors report findings for AMI communication only. AMI is the smart grid domain that comprises smart meters, a data aggregator unit, and a billing center. Meter data are collected by smart meters and transmitted to the data aggregator unit using a cognitive 802.11 technique; the data aggregator unit then relays the data to the billing center using cognitive WiMAX and TV white space. The performance of cognitive radio in AMI communication is investigated using Network Simulator 2. Simulation results show that cognitive radio improves the latency and throughput performance of AMI. Besides, cognitive radio also improves the spectrum utilization efficiency of the WiMAX band from 5.92% to 9.24% and the duty cycle of the TV band from 6.6% to 10.77%.

  33. A grid computing infrastructure for MEG data analysis.

    PubMed

    Nakagawa, S; Kosaka, T; Date, S; Shimojo, S; Tonoike, M

    2004-11-30

    Magnetoencephalography (MEG) is widely used for studying brain functions, but clinical applications of MEG have been less prevalent. One reason is that only clinicians who have highly specialized knowledge can use MEG diagnostically, and such clinicians are found at only a few major hospitals. Another reason is that MEG data analysis is becoming more and more complicated, deals with a large amount of data, and thus requires high-performance computing. These problems can be solved by the collaboration of human and computing resources distributed across multiple facilities. A new computing infrastructure for brain scientists and clinicians in distant locations was therefore developed using Grid technology, which provides virtual computing environments composed of geographically distributed computers and experimental devices. A prototype system was developed connecting an MEG system at the AIST in Japan, a Grid environment composed of PC clusters at Osaka University in Japan and Nanyang Technological University in Singapore, and user terminals in Baltimore. MEG data measured at the AIST were transferred in real time through a 1-GB/s network to the PC clusters for processing by a wavelet cross-correlation method, and then monitored in Baltimore. The current system is the basic model for remote access to MEG equipment and high-speed processing of MEG data.
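
    The wavelet cross-correlation step can be sketched with synthetic data; the sampling rate, wavelet parameters, and imposed lag below are assumptions, since the abstract does not specify the pipeline's details.

        import numpy as np

        fs = 500.0                                  # sampling rate in Hz (assumed)
        t = np.arange(0, 4.0, 1.0 / fs)
        rng = np.random.default_rng(0)
        # 10 Hz oscillation with a slow amplitude modulation, plus noise
        a = (1 + 0.5 * np.sin(2 * np.pi * 1.0 * t)) * np.sin(2 * np.pi * 10.0 * t)
        a += 0.1 * rng.standard_normal(t.size)
        b = np.roll(a, 25)                          # second channel lags by 25 samples

        def morlet_envelope(x, f0, fs, width=7.0):
            """Amplitude envelope of x around f0 via a complex Morlet wavelet."""
            sigma = width / (2 * np.pi * f0)
            tw = np.arange(-3 * sigma, 3 * sigma, 1.0 / fs)
            wavelet = np.exp(2j * np.pi * f0 * tw) * np.exp(-tw**2 / (2 * sigma**2))
            return np.abs(np.convolve(x, wavelet, mode="same"))

        ea, eb = morlet_envelope(a, 10.0, fs), morlet_envelope(b, 10.0, fs)
        xc = np.correlate(ea - ea.mean(), eb - eb.mean(), mode="full")
        lag = xc.argmax() - (eb.size - 1)
        print("peak envelope correlation at lag:", lag, "samples")  # near the 25-sample shift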

  34. The Computing and Data Grid Approach: Infrastructure for Distributed Science Applications

    NASA Technical Reports Server (NTRS)

    Johnston, William E.

    2002-01-01

    With the advent of Grids - infrastructure for using and managing widely distributed computing and data resources in the science environment - there is now an opportunity to provide a standard, large-scale, computing, data, instrument, and collaboration environment for science that spans many different projects and provides the required infrastructure and services in a relatively uniform and supportable way. Grid technology has evolved over the past several years to provide the services and infrastructure needed for building 'virtual' systems and organizations. We argue that Grid technology provides an excellent basis for the creation of the integrated environments that can combine the resources needed to support the large-scale science projects located at multiple laboratories and universities. We present some science case studies that indicate that a paradigm shift in the process of science will come about as a result of Grids providing transparent and secure access to advanced and integrated information and technologies infrastructure: powerful computing systems, large-scale data archives, scientific instruments, and collaboration tools. These changes will be in the form of services that can be integrated with the user's work environment, and that enable uniform and highly capable access to these computers, data, and instruments, regardless of the location or exact nature of these resources. These services will integrate transient-use resources, like computing systems, scientific instruments, and data caches (e.g., as they are needed to perform a simulation or analyze data from a single experiment); persistent-use resources, such as databases, data catalogues, and archives; and collaborators, whose involvement will continue for the lifetime of a project or longer. While we largely address large-scale science in this paper, Grids, particularly when combined with Web Services, will address a broad spectrum of science scenarios, both large and small scale.

  35. The Einstein Toolkit: a community computational infrastructure for relativistic astrophysics

    NASA Astrophysics Data System (ADS)

    Löffler, Frank; Faber, Joshua; Bentivegna, Eloisa; Bode, Tanja; Diener, Peter; Haas, Roland; Hinder, Ian; Mundim, Bruno C.; Ott, Christian D.; Schnetter, Erik; Allen, Gabrielle; Campanelli, Manuela; Laguna, Pablo

    2012-06-01

    We describe the Einstein Toolkit, a community-driven, freely accessible computational infrastructure intended for use in numerical relativity, relativistic astrophysics, and other applications. The toolkit, developed by a collaboration involving researchers from multiple institutions around the world, combines a core set of components needed to simulate astrophysical objects such as black holes, compact objects, and collapsing stars, as well as a full suite of analysis tools. The Einstein Toolkit is currently based on the Cactus framework for high-performance computing and the Carpet adaptive mesh refinement driver. It implements spacetime evolution via the BSSN evolution system and general relativistic hydrodynamics in a finite-volume discretization. The toolkit is under continuous development and contains many new code components that have been publicly released for the first time and are described in this paper. We discuss the motivation behind the release of the toolkit, the philosophy underlying its development, and the goals of the project. A summary of the implemented numerical techniques is included, as are results of numerical tests covering a variety of sample astrophysical problems.

  36. Advanced computations in plasma physics

    NASA Astrophysics Data System (ADS)

    Tang, W. M.

    2002-05-01

    Scientific simulation in tandem with theory and experiment is an essential tool for understanding complex plasma behavior. In this paper we review recent progress and future directions for advanced simulations in magnetically confined plasmas with illustrative examples chosen from magnetic confinement research areas such as microturbulence, magnetohydrodynamics, magnetic reconnection, and others. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales together with access to powerful new computational resources. In particular, the fusion energy science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPP's to produce three-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of turbulence self-regulation by zonal flows. It should be emphasized that these calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. The associated scientific excitement should serve to

  37. Center for Advanced Computational Technology

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    2000-01-01

    The Center for Advanced Computational Technology (ACT) was established to serve as a focal point for diverse research activities pertaining to application of advanced computational technology to future aerospace systems. These activities include the use of numerical simulations, artificial intelligence methods, multimedia and synthetic environments, and computational intelligence, in the modeling, analysis, sensitivity studies, optimization, design and operation of future aerospace systems. The Center is located at NASA Langley and is an integral part of the School of Engineering and Applied Science of the University of Virginia. The Center has four specific objectives: 1) conduct innovative research on applications of advanced computational technology to aerospace systems; 2) act as pathfinder by demonstrating to the research community what can be done (high-potential, high-risk research); 3) help in identifying future directions of research in support of the aeronautical and space missions of the twenty-first century; and 4) help in the rapid transfer of research results to industry and in broadening awareness among researchers and engineers of the state-of-the-art in applications of advanced computational technology to the analysis, design, prototyping, and operations of aerospace and other high-performance engineering systems. In addition to research, Center activities include helping in the planning and coordination of the activities of a multi-center team of NASA and JPL researchers who are developing an intelligent synthesis environment for future aerospace systems; organizing workshops and national symposia; as well as writing state-of-the-art monographs and NASA special publications on timely topics.

  38. Advanced Computation in Plasma Physics

    NASA Astrophysics Data System (ADS)

    Tang, William

    2001-10-01

    Scientific simulation in tandem with theory and experiment is an essential tool for understanding complex plasma behavior. This talk will review recent progress and future directions for advanced simulations in magnetically-confined plasmas with illustrative examples chosen from areas such as microturbulence, magnetohydrodynamics, magnetic reconnection, and others. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales together with access to powerful new computational resources. In particular, the fusion energy science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop MPP's to produce 3-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of turbulence self-regulation by zonal flows. It should be emphasized that these calculations, which typically utilized billions of particles for tens of thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract

  39. Advanced Decentralized Water/Energy Network Design for Sustainable Infrastructure

    EPA Science Inventory

    In order to provide a water infrastructure that is more sustainable into and beyond the 21st century, drinking water distribution systems and wastewater collection systems must account for our diminishing water supply, increasing demands, climate change, energy cost, and availability...

  40. Advanced flight computer. Special study

    NASA Technical Reports Server (NTRS)

    Coo, Dennis

    1995-01-01

    This report documents a special study to define a 32-bit radiation hardened, SEU tolerant flight computer architecture, and to investigate current or near-term technologies and development efforts that contribute to the Advanced Flight Computer (AFC) design and development. An AFC processing node architecture is defined. Each node may consist of a multi-chip processor as needed. The modular, building block approach uses VLSI technology and packaging methods that demonstrate a feasible AFC module in 1998 that meets the AFC goals. The defined architecture and approach demonstrate a clear low-risk, low-cost path to the 1998 production goal, with intermediate prototypes in 1996.

  2. Using cloud computing infrastructure with CloudBioLinux, CloudMan, and Galaxy.

    PubMed

    Afgan, Enis; Chapman, Brad; Jadan, Margita; Franke, Vedran; Taylor, James

    2012-06-01

    Cloud computing has revolutionized availability and access to computing and storage resources, making it possible to provision a large computational infrastructure with only a few clicks in a Web browser. However, those resources are typically provided in the form of low-level infrastructure components that need to be procured and configured before use. In this unit, we demonstrate how to utilize cloud computing resources to perform open-ended bioinformatic analyses, with fully automated management of the underlying cloud infrastructure. By combining three projects, CloudBioLinux, CloudMan, and Galaxy, into a cohesive unit, we have enabled researchers to gain access to more than 100 preconfigured bioinformatics tools and gigabytes of reference genomes on top of the flexible cloud computing infrastructure. The protocol demonstrates how to set up the available infrastructure and how to use the tools via a graphical desktop interface, a parallel command-line interface, and the Web-based Galaxy interface.
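
    The provisioning step described above can be sketched with boto3 against an EC2-compatible cloud: launch an instance whose user data carries the cluster configuration that a CloudMan-style manager reads at boot. The AMI ID, instance type, and user-data keys below are illustrative assumptions, not the projects' actual values.

        # Hedged sketch: launch a cloud instance that bootstraps a
        # CloudMan-style cluster from its user data; all identifiers
        # are placeholders.
        import boto3

        # Cluster configuration passed via instance user data; the keys are
        # placeholders for whatever the boot-time manager expects.
        user_data = "cluster_name: demo-analysis-cluster\npassword: CHANGE_ME\n"

        ec2 = boto3.resource("ec2", region_name="us-east-1")
        instances = ec2.create_instances(
            ImageId="ami-00000000",      # placeholder CloudBioLinux-style image
            InstanceType="m5.large",
            MinCount=1,
            MaxCount=1,
            UserData=user_data,
        )
        print("Launched:", instances[0].id)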

  3. CARMA: Management infrastructure and middleware for multi-paradigm computing

    NASA Astrophysics Data System (ADS)

    Troxel, Ian A.

    A recent trend toward multi-paradigm systems that couple General-Purpose Processors (GPPs) and other special-purpose devices such as Field-Programmable Gate Arrays (FPGAs) has shown impressive speedups. In fact, some researchers see such systems as vital to sustaining performance growth amid an anticipated slowing of Moore's law. However, extracting the full potential of large-scale, multi-paradigm, High-Performance Computing (HPC) systems has proven difficult due in part to a lack of traditional job management services upon which HPC users have come to rely. In this dissertation, the Comprehensive Approach to Reconfigurable Management Architecture (CARMA) is introduced and investigated. CARMA attempts to address the issue of large-scale, multi-paradigm system utilization by (1) providing a scalable, modular framework standard upon which other scholarly work can build; (2) conducting feasibility and tradeoff analysis of key components to determine strategies for reducing overhead and increasing performance and functionality; and (3) deploying initial prototypes with baseline services to receive feedback from the research community. In order to serve the needs of a wide range of HPC users, versions of CARMA tailored to two unique processing paradigms have been deployed and studied on an embedded space system and a Beowulf cluster. Several scholarly contributions have been made by developing and analyzing the first comprehensive framework to date for job and resource management and services. CARMA is a first step toward developing an open standard with a modular design that will spur inventiveness and foster industry, government and scholarly collaboration much like Condor, Globus and the Open Systems Interconnect model have done for the cluster, grid and networking environments. Developing and evaluating components to fill out the CARMA infrastructure provided key insight into how future multi-paradigm runtime management services should be structured. In addition, deploying initial prototypes

  4. The computing and data infrastructure to interconnect EEE stations

    NASA Astrophysics Data System (ADS)

    Noferini, F.

    2016-07-01

    The Extreme Energy Event (EEE) experiment is devoted to the search for high-energy cosmic rays through a network of telescopes installed in about 50 high schools distributed throughout the Italian territory. This project requires a peculiar data management infrastructure to collect data registered in stations very far from each other and to allow a coordinated analysis. Such an infrastructure is realized at INFN-CNAF, which operates a Cloud facility based on the OpenStack open-source Cloud framework and provides Infrastructure as a Service (IaaS) for its users. In 2014 EEE started to use it for collecting, monitoring and reconstructing the data acquired in all the EEE stations. For the synchronization between the stations and the INFN-CNAF infrastructure we used BitTorrent Sync, a free peer-to-peer software designed to optimize data synchronization between distributed nodes. All data folders are synchronized with the central repository in real time to allow an immediate reconstruction of the data and their publication in a monitoring webpage. We present the architecture and the functionalities of this data management system that provides a flexible environment for the specific needs of the EEE project.
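
    In such a pipeline, the reconstruction trigger can be as simple as watching the synchronized folders for newly arrived files. Below is a hedged sketch using the Python watchdog library; the directory path, file extension, and reconstruct() helper are hypothetical stand-ins, not the project's actual code.

        import time
        from watchdog.observers import Observer
        from watchdog.events import FileSystemEventHandler

        def reconstruct(path):
            # Hypothetical stand-in for submitting a reconstruction job.
            print("submitting reconstruction job for", path)

        class NewDataHandler(FileSystemEventHandler):
            def on_created(self, event):
                # React only to completed data files, not directories.
                if not event.is_directory and event.src_path.endswith(".bin"):
                    reconstruct(event.src_path)

        observer = Observer()
        observer.schedule(NewDataHandler(), "/data/eee/sync_dir", recursive=True)
        observer.start()
        try:
            while True:
                time.sleep(1)
        finally:
            observer.stop()
            observer.join()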

  5. Building a Cloud Computing and Big Data Infrastructure for Cybersecurity Research and Education

    DTIC Science & Technology

    2015-04-17

    This final report describes the design and deployment of a cloud computing and big data infrastructure to support the cybersecurity research, education, and outreach programs at Norfolk State University (reporting period through 31-Jan-2015; approved for public release, distribution unlimited). The hardware comprises twelve Dell PowerEdge R720xd data nodes, each with two six-core Intel Xeon processors and 64 GB of memory, interconnected by a switch providing 4 x 40GbE and 48 x 10GBase-T ports.

  6. Elements of a computational infrastructure for social simulation.

    PubMed

    Birkin, Mark; Procter, Rob; Allan, Rob; Bechhofer, Sean; Buchan, Iain; Goble, Carole; Hudson-Smith, Andy; Lambert, Paul; De Roure, David; Sinnott, Richard

    2010-08-28

    Applications of simulation modelling in social science domains are varied and increasingly widespread. The effective deployment of simulation models depends on access to diverse datasets, the use of analysis capabilities, the ability to visualize model outcomes and to capture, share and re-use simulations as evidence in research and policy-making. We describe three applications of e-social science that promote social simulation modelling, data management and visualization. An example is outlined in which the three components are brought together in a transport planning context. We discuss opportunities and benefits for the combination of these and other components into an e-infrastructure for social simulation and review recent progress towards the establishment of such an infrastructure.

  7. Cloud Computing and Virtual Desktop Infrastructures in Afloat Environments

    DTIC Science & Technology

    2012-06-01

    The Broadband Global Area Network (BGAN) is a global satellite Internet network system supporting applications such as data transfer and remote surveillance, among others [38]. BGAN is a viable and suitable solution for linking nodes in an afloat cloud infrastructure for several

  8. Infrastructure Suitability Assessment Modeling for Cloud Computing Solutions

    DTIC Science & Technology

    2011-09-01

    Recent technological advances, such as the widespread availability of fast computer networks and inexpensive computing power provided by small-form-factor servers, have driven increased implementations of the cloud computing paradigm, dissolving the need to co-locate user and computing power by providing desired services through the network.

  9. Building Efficient Wireless Infrastructures for Pervasive Computing Environments

    ERIC Educational Resources Information Center

    Sheng, Bo

    2010-01-01

    Pervasive computing is an emerging concept that thoroughly brings computing devices and the consequent technology into people's daily life and activities. Most of these computing devices are very small, sometimes even "invisible", and often embedded into the objects surrounding people. In addition, these devices usually are not isolated, but…

  10. A Distributed Computing Infrastructure for Computational Thermodynamic Calculations of Solid-Liquid Phase Equilibria

    NASA Astrophysics Data System (ADS)

    Ghiorso, M. S.; Kress, V. C.

    2004-12-01

    Software tools like MELTS (Ghiorso and Sack, 1995, CMP 119:197) and its derivatives (Ghiorso et al., 2002, G3 3:10.1029/2001GC000217) are sophisticated calculators used by geoscientists to quantify the chemistry of melt production, transport and storage. These tools utilize computational thermodynamics to evaluate the equilibrium state of the system under specified external conditions by minimizing a suitably constructed thermodynamic potential. Like any thermodynamically based tool, the principal advantage in employing these techniques to model igneous processes is the intrinsic ability to couple the chemistry and energetics of the evolution of the system in a self-consistent and rigorous formalism. Access to MELTS is normally accomplished via a standalone X11-based executable or as a Java-based web applet. The latter is a dedicated client-server application rooted at the University of Chicago. Our ongoing objective is the development of a distributed computing infrastructure to provide "MELTS-like" computations on demand to remote network users by utilizing a language independent client-server protocol based on CORBA. The advantages of this model are numerous. First, the burden of implementing and executing MELTS computations is centralized with a software implementation optimized to a compute cluster dedicated for that purpose. Improvements and updates to MELTS software are handled locally on the server side without intervention of the user, and the server model lessens the burden of supporting the computational code on a variety of hardware and OS platforms. Second, the client hardware platform does not incur the computational cost of performing a MELTS simulation and the remote user can focus on the task of incorporating results into their model. Third, the client user can write software in a computer language of their choosing and procedural calls to the MELTS library can be executed transparently over the network as if a local language-compatible library of
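
    The compute-on-demand pattern described above can be illustrated without CORBA tooling. The sketch below uses Python's standard XML-RPC in place of the CORBA transport; the equilibrate() interface and its inputs are illustrative assumptions, not the actual MELTS API.

        # Server-side sketch of a "MELTS-like" compute-on-demand service.
        from xmlrpc.server import SimpleXMLRPCServer

        def equilibrate(temperature_k, pressure_bar, composition):
            # A real implementation would minimize a thermodynamic potential
            # here; a dummy payload shows the call/return shape.
            return {"T": temperature_k, "P": pressure_bar,
                    "phases": ["liquid"], "inputs": composition}

        server = SimpleXMLRPCServer(("0.0.0.0", 8800), allow_none=True)
        server.register_function(equilibrate, "equilibrate")
        server.serve_forever()

    A client in any language with an RPC binding can then invoke the service as if it were a local library, e.g. xmlrpc.client.ServerProxy("http://host:8800").equilibrate(1473.0, 2000.0, {"SiO2": 48.7}) from Python.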

  11. The inhabited environment, infrastructure development and advanced urbanization in China’s Yangtze River Delta Region

    NASA Astrophysics Data System (ADS)

    Zhu, Xiaoqing; Gao, Weijun; Zhou, Nan; Kammen, Daniel M.; Wu, Yiqun; Zhang, Yao; Chen, Wei

    2016-12-01

    This paper analyzes the relationship among the inhabited environment, infrastructure development and environmental impacts in China’s heavily urbanized Yangtze River Delta region. Using primary human environment data for the period 2006-2014, we examine factors affecting the inhabited environment and infrastructure development: urban population, GDP, built-up area, energy consumption, waste emission, transportation, real estate and urban greenery. We then empirically investigate the impact of advanced urbanization, taking differences among cities into consideration. Results from this study show that the growth rate of the inhabited environment and infrastructure development is strongly influenced by regional development structure, functional orientations, traffic networks and urban size and form. The effect of advanced urbanization is more significant in large and mid-size cities than in huge and mega cities. Energy consumption, waste emission and real estate in large and mid-size cities developed at an unprecedented rate alongside rapid economic growth, whereas urban development in huge and mega cities has gradually tended toward saturation. Transitional development in these cities improved the inhabited environment and ecological protection rather than simply expanding urban construction. To maintain a sustainable advanced urbanization process, policy implications include urban sprawl control policies, ecological development mechanisms and economic restructuring for huge and mega cities, and the construction of major cross-regional infrastructure, enhancement of carrying capacity and improvement of energy efficiency and energy structure for large and mid-size cities.

  12. A secure communications infrastructure for high-performance distributed computing

    SciTech Connect

    Foster, I.; Koenig, G.; Tuecke, S.

    1997-08-01

    Applications that use high-speed networks to connect geographically distributed supercomputers, databases, and scientific instruments may operate over open networks and access valuable resources. Hence, they can require mechanisms for ensuring integrity and confidentiality of communications and for authenticating both users and resources. Security solutions developed for traditional client-server applications do not provide direct support for the program structures, programming tools, and performance requirements encountered in these applications. The authors address these requirements via a security-enhanced version of the Nexus communication library, which they use to provide secure versions of parallel libraries and languages, including the Message Passing Interface. These tools permit a fine degree of control over what, where, and when security mechanisms are applied. In particular, a single application can mix secure and nonsecure communication, allowing the programmer to make fine-grained security/performance tradeoffs. The authors present performance results that quantify the performance of their infrastructure.
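
    The core idea, mixing secure and nonsecure communication within one application, can be sketched with standard Python sockets, with TLS standing in for the Nexus security mechanisms; the hostnames and ports are placeholders.

        import socket
        import ssl

        def open_plain(host, port):
            # Bulk, non-sensitive traffic: no cryptographic overhead.
            return socket.create_connection((host, port))

        def open_secure(host, port):
            # Sensitive traffic: authenticated, encrypted TLS channel.
            ctx = ssl.create_default_context()
            sock = socket.create_connection((host, port))
            return ctx.wrap_socket(sock, server_hostname=host)

        bulk = open_plain("data.example.org", 9000)
        control = open_secure("control.example.org", 9443)
        control.sendall(b"AUTH token")    # only this link pays the security cost
        bulk.sendall(b"payload chunk")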

  13. An Infrastructure for Web-Based Computer Assisted Learning

    ERIC Educational Resources Information Center

    Joy, Mike; Muzykantskii, Boris; Rawles, Simon; Evans, Michael

    2002-01-01

    We describe an initiative under way at Warwick to provide a technical foundation for computer aided learning and computer-assisted assessment tools, which allows a rich dialogue sensitive to individual students' response patterns. The system distinguishes between dialogues for individual problems and the linking of problems. This enables a subject…

  14. Managing a tier-2 computer centre with a private cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Bagnasco, Stefano; Berzano, Dario; Brunetti, Riccardo; Lusso, Stefano; Vallero, Sara

    2014-06-01

    In a typical scientific computing centre, several applications coexist and share a single physical infrastructure. An underlying Private Cloud infrastructure eases the management and maintenance of such heterogeneous applications (such as multipurpose or application-specific batch farms, Grid sites, interactive data analysis facilities and others), allowing dynamic allocation of resources to any application. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques. Such infrastructures are being deployed in some large centres (see e.g. the CERN Agile Infrastructure project), but with several open-source tools reaching maturity this is becoming viable also for smaller sites. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, that hosts a full-fledged WLCG Tier-2 centre, an Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The private cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem and the OpenWRT Linux distribution (used for network virtualization); a future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and OCCI.
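
    Because the site exposes EC2-compatible APIs, standard cloud clients can target it by overriding the service endpoint. A hedged sketch with boto3 follows; the endpoint URL and credentials are placeholders.

        import boto3

        # Point a standard EC2 client at a private cloud's EC2-compatible
        # interface; URL and credentials are placeholders.
        ec2 = boto3.client(
            "ec2",
            endpoint_url="https://cloud.example.org:4567",
            aws_access_key_id="LOCAL_KEY",
            aws_secret_access_key="LOCAL_SECRET",
            region_name="site-local",
        )
        for reservation in ec2.describe_instances()["Reservations"]:
            for inst in reservation["Instances"]:
                print(inst["InstanceId"], inst["State"]["Name"])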

  15. The Mediated Museum: Computer-Based Technology and Museum Infrastructure.

    ERIC Educational Resources Information Center

    Sterman, Nanette T.; Allen, Brockenbrough S.

    1991-01-01

    Describes the use of computer-based tools and techniques in museums. The integration of realia with media-based advice and interpretation is described, electronic replicas of ancient Greek vases in the J. Paul Getty Museum are explained, examples of mediated exhibits are presented, and the use of hypermedia is discussed. (five references) (LRW)

  16. Aerodynamic Analyses Requiring Advanced Computers, part 2

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Papers given at the conference present the results of theoretical research on aerodynamic flow problems requiring the use of advanced computers. Topics discussed include two-dimensional configurations, three-dimensional configurations, transonic aircraft, and the space shuttle.

  17. Aerodynamic Analyses Requiring Advanced Computers, Part 1

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Papers are presented which deal with results of theoretical research on aerodynamic flow problems requiring the use of advanced computers. Topics discussed include: viscous flows, boundary layer equations, turbulence modeling and Navier-Stokes equations, and internal flows.

  18. Bringing Advanced Computational Techniques to Energy Research

    SciTech Connect

    Mitchell, Julie C

    2012-11-17

    Please find attached our final technical report for the BACTER Institute award. BACTER was created as a graduate and postdoctoral training program for the advancement of computational biology applied to questions of relevance to bioenergy research.

  19. Quantum chromodynamics with advanced computing

    SciTech Connect

    Kronfeld, Andreas S. (Fermilab)

    2008-07-01

    We survey results in lattice quantum chromodynamics from groups in the USQCD Collaboration. The main focus is on physics, but many aspects of the discussion are aimed at an audience of computational physicists.

  20. Disaster Management Based on Urgent Computing and Dynamic Data Driven Application Services over European e-Infrastructures

    NASA Astrophysics Data System (ADS)

    Schiffers, Michael; Kranzlmüller, Dieter

    2014-05-01

    The study of natural hazards in Hydro-Meteorological Research (HMR) requires the execution of various meteorological, hydrological, and hydraulic models -- either standalone or as well-orchestrated chains (workflows) -- in order to better understand the natural phenomena leading to hazardous events. When hazards turn into disasters, exposure and vulnerability models need to be considered in addition for appropriate emergency response and recovery. While it is possible to mitigate the impact of disasters through education, policies, infrastructure resilience, and real-time decision support systems, the need to go beyond the pure communication of timely information through the advanced use of optimization and simulation technology is increasingly accepted. In general, the focus of today's decision-support systems for disaster management is on pure situational awareness (i.e., communicating to decision makers the situation in the field as accurately as possible). Experience from recent disaster cases, however, underpins the necessity of a novel, holistic approach to disaster management which needs to factor in not only the dependability of high performance computing resources and current advances in computational sciences (simulation, optimization), but also the dynamic integration of widely distributed (currently collected and archival) data into executing simulations to steer both the application model and further measurements (instrumentation and control). The first paradigm is commonly subsumed under "urgent computing (UC)", the latter under "dynamic data driven application services (DDDAS)". Both paradigms present severe research challenges. In the talk we first introduce both paradigms using several disaster cases as examples. We then derive from these cases the inherent research questions and present a brief survey of the state of the art. The main part of the talk, however, will be devoted to the presentation of a "disaster management infrastructure

  1. Some recent advances of intelligent health monitoring systems for civil infrastructures in HIT

    NASA Astrophysics Data System (ADS)

    Ou, Jinping

    2005-06-01

    Intelligent health monitoring systems are increasingly becoming a technique for ensuring the health and safety of civil infrastructures, as well as an important approach for research into the damage accumulation and disaster-evolution characteristics of civil infrastructures, and they attract prodigious research and development interest from scientists and engineers, since a great number of civil infrastructures are planned and built each year in mainland China. In this paper, some recent advances in the research, development and implementation of intelligent health monitoring systems for civil infrastructures in mainland China, especially at the Harbin Institute of Technology (HIT), P.R. China, are presented. The main contents include smart sensors such as optical fiber Bragg grating (OFBG) and polyvinylidene fluoride (PVDF) sensors, fatigue life gauges, self-sensing mortar and carbon fiber reinforced polymer (CFRP), wireless sensor networks and their implementation in practical infrastructures such as offshore platform structures, hydraulic engineering structures, large-span bridges and large-space structures. Finally, the related research projects supported by the national foundation agencies of China are briefly introduced.

  2. Advanced sensor-computer technology for urban runoff monitoring

    NASA Astrophysics Data System (ADS)

    Yu, Byunggu; Behera, Pradeep K.; Ramirez Rochac, Juan F.

    2011-04-01

    The paper presents the project team's advanced sensor-computer sphere technology for real-time and continuous monitoring of wastewater runoff at the sewer discharge outfalls along the receiving water. This research significantly enhances and extends the previously proposed novel sensor-computer technology. This advanced technology offers new computation models for an innovative use of the sensor-computer sphere comprising accelerometer, programmable in-situ computer, solar power, and wireless communication for real-time and online monitoring of runoff quantity. This innovation can enable more effective planning and decision-making in civil infrastructure, natural environment protection, and water pollution related emergencies. The paper presents the following: (i) the sensor-computer sphere technology; (ii) a significant enhancement to the previously proposed discrete runoff quantity model of this technology; (iii) a new continuous runoff quantity model. Our comparative study on the two distinct models is presented. Based on this study, the paper further investigates the following: (1) energy-, memory-, and communication-efficient use of the technology for runoff monitoring; (2) possible sensor extensions for runoff quality monitoring.
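
    The contrast between the paper's discrete and continuous runoff quantity models can be made concrete with a small numeric example; the flow-rate signal and sampling interval below are invented for illustration and are not the paper's formulas.

        import numpy as np

        t = np.linspace(0.0, 600.0, 61)                    # 10 min, 10 s samples
        rate = 0.5 + 0.4 * np.sin(2 * np.pi * t / 600.0)   # flow rate, m^3/s

        # Discrete model: each sample held constant over its interval.
        dt = t[1] - t[0]
        discrete_volume = float(np.sum(rate[:-1] * dt))

        # Continuous model: trapezoidal integration of the sampled signal.
        continuous_volume = float(np.sum((rate[:-1] + rate[1:]) * 0.5 * dt))

        print("discrete:  ", round(discrete_volume, 2), "m^3")
        print("continuous:", round(continuous_volume, 2), "m^3")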

  3. National Computational Infrastructure for Lattice Gauge Theory SciDAC-2 Closeout Report Indiana University Component

    SciTech Connect

    Gottlieb, Steven Arthur; DeTar, Carleton; Toussaint, Doug

    2014-07-24

    This is the closeout report for the Indiana University portion of the National Computational Infrastructure for Lattice Gauge Theory project supported by the United States Department of Energy under the SciDAC program. It includes information about activities at Indiana University, the University of Arizona, and the University of Utah, as those three universities coordinated their activities.

  4. Network Computing Infrastructure to Share Tools and Data in Global Nuclear Energy Partnership

    NASA Astrophysics Data System (ADS)

    Kim, Guehee; Suzuki, Yoshio; Teshima, Naoya

    CCSE/JAEA (Center for Computational Science and e-Systems/Japan Atomic Energy Agency) integrated a prototype system of a network computing infrastructure for sharing tools and data to support the U.S.-Japan collaboration in GNEP (Global Nuclear Energy Partnership). We focused on three technical issues in applying our information process infrastructure: accessibility, security, and usability. In designing the prototype system, we integrated and improved both network and Web technologies. For the accessibility issue, we adopted SSL-VPN (Security Socket Layer-Virtual Private Network) technology for access beyond firewalls. For the security issue, we developed an authentication gateway based on the PKI (Public Key Infrastructure) authentication mechanism to strengthen security. Also, we set a fine-grained access control policy for shared tools and data and used a shared-key-based encryption method to protect tools and data against leakage to third parties. For the usability issue, we chose Web browsers as the user interface and developed a Web application to provide functions supporting the sharing of tools and data. By using the WebDAV (Web-based Distributed Authoring and Versioning) function, users can manipulate shared tools and data through a Windows-like folder environment. We implemented the prototype system in the Grid infrastructure for atomic energy research, AEGIS (Atomic Energy Grid Infrastructure), developed by CCSE/JAEA. The prototype system was applied for trial use in the first period of GNEP.
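
    The shared-key protection step mentioned above can be sketched with the Python cryptography package's Fernet recipe for authenticated symmetric encryption; the abstract does not specify the actual cipher used, so this is an illustrative assumption.

        from cryptography.fernet import Fernet

        key = Fernet.generate_key()          # distributed out-of-band to partners
        cipher = Fernet(key)

        data = b"shared simulation input"    # stand-in for a shared tool/data file
        token = cipher.encrypt(data)         # safe to place in the shared repository

        # Receiving side, holding the same key:
        assert cipher.decrypt(token) == data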

  5. Advanced Biomedical Computing Center (ABCC) | DSITP

    Cancer.gov

    The Advanced Biomedical Computing Center (ABCC), located in Frederick Maryland (MD), provides HPC resources for both NIH/NCI intramural scientists and the extramural biomedical research community. Its mission is to provide HPC support, to provide collaborative research, and to conduct in-house research in various areas of computational biology and biomedical research.

  6. Advanced laptop and small personal computer technology

    NASA Technical Reports Server (NTRS)

    Johnson, Roger L.

    1991-01-01

    Advanced laptop and small personal computer technology is presented in the form of viewgraphs. The following areas of hand-carried computers and mobile workstation technology are covered: background, applications, high end products, technology trends, requirements for the Control Center application, and recommendations for the future.

  7. Building an infrastructure for scientific Grid computing: status and goals of the EGEE project.

    PubMed

    Gagliardi, Fabrizio; Jones, Bob; Grey, François; Bégin, Marc-Elian; Heikkurinen, Matti

    2005-08-15

    The state of computer and networking technology today makes the seamless sharing of computing resources on an international or even global scale conceivable. Scientific computing Grids that integrate large, geographically distributed computer clusters and data storage facilities are being developed in several major projects around the world. This article reviews the status of one of these projects, Enabling Grids for E-SciencE, describing the scientific opportunities that such a Grid can provide, while illustrating the scale and complexity of the challenge involved in establishing a scientific infrastructure of this kind.

  8. Opportunities in computational mechanics: Advances in parallel computing

    SciTech Connect

    Lesar, R.A.

    1999-02-01

    In this paper, the authors will discuss recent advances in computing power and the prospects for using these new capabilities for studying plasticity and failure. They will first review the new capabilities made available with parallel computing. They will discuss how these machines perform and how well their architecture might work on materials issues. Finally, they will give some estimates on the size of problems possible using these computers.

  9. Advanced simulation for analysis of critical infrastructure : abstract cascades, the electric power grid, and Fedwire.

    SciTech Connect

    Glass, Robert John, Jr.; Stamber, Kevin Louis; Beyeler, Walter Eugene

    2004-08-01

    Critical Infrastructures are formed by a large number of components that interact within complex networks. As a rule, infrastructures contain strong feedbacks either explicitly through the action of hardware/software control, or implicitly through the action/reaction of people. Individual infrastructures influence others and grow, adapt, and thus evolve in response to their multifaceted physical, economic, cultural, and political environments. Simply put, critical infrastructures are complex adaptive systems. In the Advanced Modeling and Techniques Investigations (AMTI) subgroup of the National Infrastructure Simulation and Analysis Center (NISAC), we are studying infrastructures as complex adaptive systems. In one of AMTI's efforts, we are focusing on cascading failure as can occur with devastating results within and between infrastructures. Over the past year we have synthesized and extended the large variety of abstract cascade models developed in the field of complexity science and have started to apply them to specific infrastructures that might experience cascading failure. In this report we introduce our comprehensive model, Polynet, which simulates cascading failure over a wide range of network topologies, interaction rules, and adaptive responses as well as multiple interacting and growing networks. We first demonstrate Polynet for the classical Bak, Tang, and Wiesenfeld (BTW) sand-pile in several network topologies. We then apply Polynet to two very different critical infrastructures: the high voltage electric power transmission system which relays electricity from generators to groups of distribution-level consumers, and Fedwire which is a Federal Reserve service for sending large-value payments between banks and other large financial institutions. For these two applications, we tailor interaction rules to represent appropriate unit behavior and consider the influence of random transactions within two stylized networks: a regular homogeneous array and a
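
    A minimal version of the BTW sand-pile dynamic referenced above fits in a few lines; the sketch below uses a square grid with open boundaries and is a textbook toy, not the Polynet implementation.

        import numpy as np

        rng = np.random.default_rng(0)
        N, THRESHOLD = 20, 4
        grid = np.zeros((N, N), dtype=int)

        def drop_and_relax(grid):
            # Add one grain at a random site, then topple until stable.
            i, j = rng.integers(0, N, size=2)
            grid[i, j] += 1
            topples = 0
            while True:
                over = np.argwhere(grid >= THRESHOLD)
                if len(over) == 0:
                    return topples
                for i, j in over:
                    grid[i, j] -= 4
                    topples += 1
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < N and 0 <= nj < N:
                            grid[ni, nj] += 1   # edge grains are lost

        sizes = [drop_and_relax(grid) for _ in range(5000)]
        print("largest cascade (topplings):", max(sizes))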

  10. Spatial Data Infrastructures and Grid Computing: the GDI-Grid project

    NASA Astrophysics Data System (ADS)

    Padberg, A.; Kiehle, C.

    2009-04-01

    Distribution of spatial data through standards compliant spatial data infrastructures (SDI) is a fairly common practice nowadays. The Open Geospatial Consortium (OGC) offers a broad range of implementation specifications for accessing and presenting spatial data. In December 2007 the OGC published the Web Processing Service (WPS) specification for extending the capabilities of an SDI to include the processing of distributed data. By utilizing a WPS it is possible to shift the workload from the client to the server. Furthermore it is possible to create automated workflows that include data processing without the need for user interaction or manual computation of data via a desktop GIS. When complex processes are offered or large amounts of data are processed by a WPS, the computational power of the server might not suffice; especially when such processes are invoked by a multitude of users, the server might not be able to provide the desired performance. In this case, Grid Computing is one way to provide the required computational power by accessing great quantities of worker nodes in an existing Grid infrastructure through a Grid middleware. Due to their respective origins the paradigms of SDIs and Grid infrastructures differ significantly in several important matters. For instance security is handled differently in the scope of OWS and Grid Computing. While the OGC does not yet specify a comprehensive security concept, strict security rules are a top priority in Grid Computing, where providers need a certain degree of control over their resources and users want to process sensitive data on external resources. To create an SDI that is able to benefit from the computational power and the vast storage capacities of a Grid infrastructure it is necessary to overcome all the conceptual differences between OWS and Grid Computing. GDI-Grid (English: SDI-Grid) is a project funded by the German Ministry for Science and Education. It aims at bridging the aforementioned gaps between
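
    Invoking a WPS process programmatically reduces to an HTTP request following the OGC WPS 1.0.0 key-value-pair encoding. In the sketch below the service URL and process identifier are placeholders; a real service advertises its processes via GetCapabilities.

        import requests

        params = {
            "service": "WPS",
            "version": "1.0.0",
            "request": "Execute",
            "identifier": "buffer",   # hypothetical process name
            "datainputs": "geometry=@xlink:href=https://example.org/roads.gml;distance=100",
        }
        resp = requests.get("https://sdi.example.org/wps", params=params, timeout=60)
        resp.raise_for_status()
        print(resp.text[:500])   # ExecuteResponse XML with status/results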

  11. Advances and challenges in computational plasma science

    NASA Astrophysics Data System (ADS)

    Tang, W. M.

    2005-02-01

    Scientific simulation, which provides a natural bridge between theory and experiment, is an essential tool for understanding complex plasma behaviour. Recent advances in simulations of magnetically confined plasmas are reviewed in this paper, with illustrative examples, chosen from associated research areas such as microturbulence, magnetohydrodynamics and other topics. Progress has been stimulated, in particular, by the exponential growth of computer speed along with significant improvements in computer technology. The advances in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics have produced increasingly good agreement between experimental observations and computational modelling. This was enabled by two key factors: (a) innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales and (b) access to powerful new computational resources. Excellent progress has been made in developing codes for which computer run-time and problem-size scale well with the number of processors on massively parallel processors (MPPs). Examples include the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPPs to produce three-dimensional, general geometry, nonlinear particle simulations that have accelerated advances in understanding the nature of turbulence self-regulation by zonal flows. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In looking towards the future, the current results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. This

  12. Role of HPC in Advancing Computational Aeroelasticity

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.

    2004-01-01

    On behalf of the High Performance Computing and Modernization Program (HPCMP) and NASA Advanced Supercomputing Division (NAS) a study is conducted to assess the role of supercomputers on computational aeroelasticity of aerospace vehicles. The study is mostly based on the responses to a web based questionnaire that was designed to capture the nuances of high performance computational aeroelasticity, particularly on parallel computers. A procedure is presented to assign a fidelity-complexity index to each application. Case studies based on major applications using HPCMP resources are presented.

  13. Advances and trends in computational structural mechanics

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Atluri, Satya N.

    1987-01-01

    The development status and applicational range of techniques in computational structural mechanics (CSM) are evaluated with a view to advances in computational models for material behavior, discrete-element technology, quality assessment, the control of numerical simulations of structural response, hybrid analysis techniques, techniques for large-scale optimization, and the impact of new computing systems on CSM. Primary pacers of CSM development encompass prediction and analysis of novel materials for structural components, computational strategies for large-scale structural calculations, and the assessment of response prediction reliability together with its adaptive improvement.

  14. Enhanced Computational Infrastructure for Data Analysis at the DIII-D National Fusion Facility

    SciTech Connect

    Schissel, D.P.; Peng, Q.; Schachter, J.; Terpstra, T.B.; Casper, T.A.; Freeman, J.; Jong, R.; Keith, K.M.; Meyer, W.H.; Parker, C.T.

    1999-08-01

    Recently a number of enhancements to the computer hardware infrastructure have been implemented at the DIII-D National Fusion Facility. Utilizing these improvements to the hardware infrastructure, software enhancements are focusing on streamlined analysis, automation, and graphical user interface (GUI) systems to enlarge the user base. The adoption of the load balancing software package LSF Suite by Platform Computing has dramatically increased the availability of CPU cycles and the efficiency of their use. Streamlined analysis has been aided by the adoption of the MDSplus system to provide a unified interface to analyzed DIII-D data. The majority of MDSplus data is made available between pulses, giving the researcher critical information before setting up the next pulse. Work on data viewing and analysis tools focuses on efficient GUI design with object-oriented programming (OOP) for maximum code flexibility. Work to enhance the computational infrastructure at DIII-D has included a significant effort to aid the remote collaborator, since the DIII-D National Team consists of scientists from 9 national laboratories, 19 foreign laboratories, 16 universities, and 5 industrial partnerships. As a result of this work, DIII-D data is available on a 24 x 7 basis from a set of viewing and analysis tools that can be run either on the collaborators' or DIII-D's computer systems. Additionally, a Web based data and code documentation system has been created to aid the novice and expert user alike.
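
    The unified-access pattern MDSplus provides can be sketched with its thin-client Python API; the server name, tree, shot number, and node path below are placeholders, and the calls should be checked against the MDSplus documentation for the installed version.

        from MDSplus import Connection

        conn = Connection("mdsplus.example.org")     # remote data server
        conn.openTree("analysis_tree", 123456)       # tree name, shot number
        density = conn.get(r"\TOP.RESULTS:DENSITY")  # placeholder node path
        times = conn.get(r"dim_of(\TOP.RESULTS:DENSITY)")
        print(density.data().shape, times.data().shape)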

  15. High-Performance Computing for Advanced Smart Grid Applications

    SciTech Connect

    Huang, Zhenyu; Chen, Yousu

    2012-07-06

    The power grid is becoming far more complex as a result of the grid evolution meeting an information revolution. Due to the penetration of smart grid technologies, the grid is evolving at an unprecedented speed and the information infrastructure is fundamentally improved with a large number of smart meters and sensors that produce several orders of magnitude larger amounts of data. How to pull data in, perform analysis, and put information out in a real-time manner is a fundamental challenge in smart grid operation and planning. The future power grid requires high performance computing to be one of the foundational technologies in developing the algorithms and tools for the significantly increased complexity. New techniques and computational capabilities are required to meet the demands for higher reliability and better asset utilization, including advanced algorithms and computing hardware for large-scale modeling, simulation, and analysis. This chapter summarizes the computational challenges in the smart grid and the need for high performance computing, and presents examples of how high performance computing might be used for future smart grid operation and planning.

  16. WRF4G project: Adaptation of WRF Model to Distributed Computing Infrastructures

    NASA Astrophysics Data System (ADS)

    Cofino, Antonio S.; Fernández Quiruelas, Valvanuz; García Díez, Markel; Blanco Real, Jose C.; Fernández, Jesús

    2013-04-01

    Nowadays Grid Computing is a powerful computational tool that is ready to be used by the scientific community in different areas (such as biomedicine, astrophysics, climate, etc.). However, the use of these distributed computing infrastructures (DCI) is not yet common practice in climate research, and only a few teams and applications in this area take advantage of them. Thus, the first objective of this project is to popularize the use of this technology in the atmospheric sciences. To achieve this objective, one of the most widely used applications has been chosen: WRF, a limited-area model and successor of the MM5 model, which has a user community of more than 8000 researchers worldwide. This community pursues research in different areas and could benefit from the advantages of Grid resources (case-study simulations, regional hindcast/forecast, sensitivity studies, etc.). WRF model output is also used as input by the energy and natural-hazards communities, which will therefore benefit as well. However, Grid infrastructures have some drawbacks for the execution of applications that make intensive use of CPU and memory over long periods of time, which makes it necessary to develop a specific framework (middleware). This middleware encapsulates the application and provides appropriate services for the monitoring and management of the jobs and the data. Thus, the second objective of the project consists of the development of a generic adaptation of WRF for the Grid (WRF4G), to be distributed as open source and to be integrated into the official WRF development cycle. The use of this WRF adaptation should be transparent and useful for any of the previously described studies, and should avoid the problems of the Grid infrastructure. Moreover, it should simplify access to Grid infrastructures for research teams and free them from the technical and computational aspects of using the Grid. Finally, in order to

  17. Advances in cheminformatics methodologies and infrastructure to support the data mining of large, heterogeneous chemical datasets.

    PubMed

    Guha, Rajarshi; Gilbert, Kevin; Fox, Geoffrey; Pierce, Marlon; Wild, David; Yuan, Huapeng

    2010-03-01

    In recent years, there has been an explosion in the availability of publicly accessible chemical information, including chemical structures of small molecules, structure-derived properties and associated biological activities in a variety of assays. These data sources present us with a significant opportunity to develop and apply computational tools to extract and understand the underlying structure-activity relationships. Furthermore, by integrating chemical data sources with biological information (protein structure, gene expression and so on), we can attempt to build up a holistic view of the effects of small molecules in biological systems. Equally important is the ability for non-experts to access and utilize state-of-the-art cheminformatics methods and models. In this review we present recent developments in cheminformatics methodologies and infrastructure that provide a robust, distributed approach to mining large and complex chemical datasets. In the area of methodology development, we highlight recent work on characterizing structure-activity landscapes, Quantitative Structure Activity Relationship (QSAR) model domain applicability and the use of chemical similarity in text mining. In the area of infrastructure, we discuss a distributed web services framework that allows easy deployment and uniform access to computational (statistics, cheminformatics and computational chemistry) methods, data and models. We also discuss the development of PubChem derived databases and highlight techniques that allow us to scale the infrastructure to extremely large compound collections, by use of distributed processing on Grids. Given that the above work is applicable to arbitrary types of cheminformatics problems, we also present some case studies related to virtual screening for anti-malarials and predictions of anti-cancer activity.
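
    As an example of the kind of programmatic access to public chemical data discussed above, the sketch below queries PubChem's PUG REST interface for two properties of aspirin (CID 2244); the property names follow the public PUG REST documentation.

        import requests

        url = ("https://pubchem.ncbi.nlm.nih.gov/rest/pug/compound/cid/2244"
               "/property/MolecularFormula,MolecularWeight/JSON")
        data = requests.get(url, timeout=30).json()
        props = data["PropertyTable"]["Properties"][0]
        print(props["MolecularFormula"], props["MolecularWeight"])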

  18. Application of large-scale computing infrastructure for diverse environmental research applications using GC3Pie

    NASA Astrophysics Data System (ADS)

    Maffioletti, Sergio; Dawes, Nicholas; Bavay, Mathias; Sarni, Sofiane; Lehning, Michael

    2013-04-01

    The Swiss Experiment platform (SwissEx: http://www.swiss-experiment.ch) provides a distributed storage and processing infrastructure for environmental research experiments. The aim of the second phase project (the Open Support Platform for Environmental Research, OSPER, 2012-2015) is to develop the existing infrastructure to provide scientists with an improved workflow. This improved workflow will include pre-defined, documented and connected processing routines. A large-scale computing and data facility is required to provide reliable and scalable access to data for analysis, and it is desirable that such an infrastructure should be free of traditional data handling methods. Such an infrastructure has been developed using the cloud-based part of the Swiss national infrastructure SMSCG (http://www.smscg.ch) and Academic Cloud. The infrastructure under construction supports two main usage models: 1) Ad-hoc data analysis scripts: These scripts are simple processing scripts, written by the environmental researchers themselves, which can be applied to large data sets via the high power infrastructure. Examples of this type of script are spatial statistical analysis scripts (R-based scripts), mostly computed on raw meteorological and/or soil moisture data. These provide processed output in the form of a grid, a plot, or a kml. 2) Complex models: A more intense data analysis pipeline centered (initially) around the physical process model, Alpine3D, and the MeteoIO plugin; depending on the data set, this may require a tightly coupled infrastructure. SMSCG already supports Alpine3D executions as both regular grid jobs and as virtual software appliances. A dedicated appliance with the Alpine3D specific libraries has been created and made available through the SMSCG infrastructure. The analysis pipelines are activated and supervised by simple control scripts that, depending on the data fetched from the meteorological stations, launch new instances of the Alpine3D appliance

  19. Designing and Operating Through Compromise: Architectural Analysis of CKMS for the Advanced Metering Infrastructure

    SciTech Connect

    Duren, Mike; Aldridge, Hal; Abercrombie, Robert K; Sheldon, Frederick T

    2013-01-01

    Compromises attributable to the Advanced Persistent Threat (APT) highlight the necessity for constant vigilance. The APT provides a new perspective on security metrics (e.g., statistics based cyber security) and quantitative risk assessments. We consider design principles and models/tools that provide high assurance for energy delivery systems (EDS) operations regardless of the state of compromise. Cryptographic keys must be securely exchanged, then held and protected on either end of a communications link. This is challenging for a utility with numerous substations that must secure the intelligent electronic devices (IEDs) that may comprise a complex control system of systems. For example, distribution and management of keys among the millions of intelligent meters within the Advanced Metering Infrastructure (AMI) is being implemented as part of the National Smart Grid initiative. Without a means for a secure cryptographic key management system (CKMS), no cryptographic solution can be widely deployed to protect the EDS infrastructure from cyber-attack. We consider 1) how security modeling is applied to key management and cyber security concerns on a continuous basis from design through operation, 2) how trusted models and key management architectures greatly impact failure scenarios, and 3) how hardware-enabled trust is a critical element in detecting, surviving, and recovering from attack.
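
    One common CKMS building block at this scale is deriving a unique per-device key from a protected master key, so that compromising one meter does not expose the others. The sketch below uses HKDF from the Python cryptography package; it illustrates the scale problem and is not the architecture analyzed in the report.

        import os
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.kdf.hkdf import HKDF

        master_key = os.urandom(32)   # held in the utility's protected key store

        def meter_key(meter_id: str) -> bytes:
            # Bind each derived key to a single device identity.
            return HKDF(
                algorithm=hashes.SHA256(),
                length=32,
                salt=None,
                info=meter_id.encode(),
            ).derive(master_key)

        assert meter_key("meter-000001") != meter_key("meter-000002")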

  20. Advances and Challenges in Computational Plasma Science

    SciTech Connect

    W.M. Tang; V.S. Chan

    2005-01-03

    Scientific simulation, which provides a natural bridge between theory and experiment, is an essential tool for understanding complex plasma behavior. Recent advances in simulations of magnetically-confined plasmas are reviewed in this paper with illustrative examples chosen from associated research areas such as microturbulence, magnetohydrodynamics, and other topics. Progress has been stimulated in particular by the exponential growth of computer speed along with significant improvements in computer technology.

  1. Geospatial-enabled Data Exploration and Computation through Data Infrastructure Building Blocks

    NASA Astrophysics Data System (ADS)

    Song, C. X.; Biehl, L. L.; Merwade, V.; Villoria, N.

    2015-12-01

    Geospatial data are present everywhere today with the proliferation of location-aware computing devices and sensors. This is especially true in the scientific community where large amounts of data are driving research and education activities in many domains. Collaboration over geospatial data, for example in modeling, data analysis and visualization, must still overcome the barriers of specialized software and expertise, among other challenges. The GABBs project aims at enabling broader access to geospatial data exploration and computation by developing spatial data infrastructure building blocks that leverage the capabilities of the end-to-end application service and virtualized computing framework in HUBzero. Funded by the NSF Data Infrastructure Building Blocks (DIBBS) initiative, GABBs provides a geospatial data architecture that integrates spatial data management, mapping and visualization and will make it available as open source. The outcome of the project will enable users to rapidly create and share geospatial data and tools on the web for interactive exploration of data, without requiring significant software development skills, GIS expertise or IT administrative privileges. This presentation will describe the development of the geospatial data infrastructure building blocks and the scientific use cases that help drive the software development, as well as seek feedback from the user communities.

  2. Advanced Computing Architectures for Cognitive Processing

    DTIC Science & Technology

    2009-07-01

    Gregory D. Peterson, Advanced Computing Division.

  3. A proposed computational biomechanics cyber-infrastructure for multi-phase and multi-scale problems: delivering biomechanics to the surgeon.

    PubMed

    Impelluso, Thomas J

    2006-04-01

    This paper presents a new direction for practitioners of computational biomechanics. It provides a description of three prototype software platforms, which demonstrate how the cyber-infrastructure can be used to integrate the algorithms of computational biomechanics to solve multi-phase and multi-scale problems. Then, a development platform is presented. This platform can also deliver integrated biomechanics into the surgical ward for surgical planning. This platform performs these tasks without the need for advanced network software tools: an appendix provides all the simple and fundamental open source and CI technologies that are required. The overarching goal of this paper is to make the potential of the emerging cyber-infrastructure comprehensible and accessible to practitioners of computational biomechanics.

  4. Predictive Dynamic Security Assessment through Advanced Computing

    SciTech Connect

    Huang, Zhenyu; Diao, Ruisheng; Jin, Shuangshuang; Chen, Yousu

    2014-11-30

    Traditional dynamic security assessment is limited by several factors and thus falls short in providing real-time information to be predictive for power system operation. These factors include the steady-state assumption of current operating points, static transfer limits, and low computational speed. This paper addresses these factors and frames predictive dynamic security assessment. The primary objective of predictive dynamic security assessment is to enhance the functionality and computational process of dynamic security assessment through the use of high-speed phasor measurements and the application of advanced computing technologies for faster-than-real-time simulation. This paper presents algorithms, computing platforms, and simulation frameworks that constitute the predictive dynamic security assessment capability. Examples of phasor application and fast computation for dynamic security assessment are included to demonstrate the feasibility and speed enhancement for real-time applications.

  5. Probability Distributome: A Web Computational Infrastructure for Exploring the Properties, Interrelations, and Applications of Probability Distributions.

    PubMed

    Dinov, Ivo D; Siegrist, Kyle; Pearl, Dennis K; Kalinin, Alexandr; Christou, Nicolas

    2016-06-01

    Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome, which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the
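
    One inter-distributional relation of the kind the Distributome catalogs: the sum of n independent Exponential(scale) variables follows a Gamma(n, scale) distribution. The sketch below checks this empirically with SciPy; the parameter values are arbitrary.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        n, scale, samples = 5, 2.0, 100_000

        # Sum n exponentials per row, then compare against Gamma(n, scale).
        sums = rng.exponential(scale, size=(samples, n)).sum(axis=1)
        ks = stats.kstest(sums, stats.gamma(a=n, scale=scale).cdf)
        print("KS statistic vs Gamma:", round(ks.statistic, 4))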

  6. Advanced Neuropsychological Diagnostics Infrastructure (ANDI): A Normative Database Created from Control Datasets

    PubMed Central

    de Vent, Nathalie R.; Agelink van Rentergem, Joost A.; Schmand, Ben A.; Murre, Jaap M. J.; Huizenga, Hilde M.

    2016-01-01

    In the Advanced Neuropsychological Diagnostics Infrastructure (ANDI), datasets of several research groups are combined into a single database, containing scores on neuropsychological tests from healthy participants. For the most popular neuropsychological tests, the quantity and range of these data surpass those of traditional normative data, thereby enabling more accurate neuropsychological assessment. Because of the unique structure of the database, it facilitates normative comparison methods that were not feasible before, in particular those in which entire profiles of scores are evaluated. In this article, we describe the steps that were necessary to combine the separate datasets into a single database. These steps involve matching variables from multiple datasets, removing outlying values, determining the influence of demographic variables, and finding appropriate transformations to normality. Also, a brief description of the current contents of the ANDI database is given. PMID:27812340
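
    Two of the harmonization steps described above, removing outlying values and transforming scores toward normality, can be sketched with pandas and SciPy. The column name, the synthetic data, and the 3-standard-deviation trimming rule are illustrative choices, not ANDI's actual procedure.

        import numpy as np
        import pandas as pd
        from scipy import stats

        rng = np.random.default_rng(2)
        df = pd.DataFrame(
            {"naming_score": rng.lognormal(mean=3.0, sigma=0.4, size=500)}
        )

        # 1) Remove outlying values (here: beyond 3 SD from the mean).
        z = np.abs(stats.zscore(df["naming_score"]))
        clean = df.loc[z < 3.0].copy()

        # 2) Transform toward normality (Box-Cox requires positive data).
        clean["naming_bc"], lam = stats.boxcox(clean["naming_score"])
        print("Box-Cox lambda:", round(lam, 2))
        print("skew before:", round(stats.skew(clean["naming_score"]), 2),
              "after:", round(stats.skew(clean["naming_bc"]), 2))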

  7. Simulation methods for advanced scientific computing

    SciTech Connect

    Booth, T.E.; Carlson, J.A.; Forster, R.A.

    1998-11-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The objective of the project was to create effective new algorithms for solving N-body problems by computer simulation. The authors concentrated on developing advanced classical and quantum Monte Carlo techniques. For simulations of phase transitions in classical systems, they produced a framework generalizing the famous Swendsen-Wang cluster algorithms for Ising and Potts models. For spin-glass-like problems, they demonstrated the effectiveness of an extension of the multicanonical method for the two-dimensional, random bond Ising model. For quantum mechanical systems, they generated a new method to compute the ground-state energy of systems of interacting electrons. They also improved methods to compute excited states when the diffusion quantum Monte Carlo method is used and to compute longer time dynamics when the stationary phase quantum Monte Carlo method is used.

  8. Advances in Computational Capabilities for Hypersonic Flows

    NASA Technical Reports Server (NTRS)

    Kumar, Ajay; Gnoffo, Peter A.; Moss, James N.; Drummond, J. Philip

    1997-01-01

    The paper reviews the growth and advances in computational capabilities for hypersonic applications over the period from the mid-1980s to the present day. The current status of code development issues such as surface and field grid generation, algorithms, physical and chemical modeling, and validation is summarized. A brief description of some of the major codes being used at NASA Langley Research Center for hypersonic continuum and rarefied flows is provided, along with their capabilities and deficiencies. A number of application examples are presented, and future areas of research to enhance the accuracy, reliability, efficiency, and robustness of computational codes are discussed.

  9. The GMOS cyber(e)-infrastructure: advanced services for supporting science and policy.

    PubMed

    Cinnirella, S; D'Amore, F; Bencardino, M; Sprovieri, F; Pirrone, N

    2014-03-01

    The need for coordinated, systematized and catalogued databases on mercury in the environment is of paramount importance, as improved information can help in assessing the effectiveness of measures established to phase out and ban mercury. Long-term monitoring sites have been established in a number of regions and countries for the measurement of mercury in ambient air and wet deposition. Long-term measurements of mercury concentration in biota have also produced a huge amount of information, but such initiatives are far from constituting a global, systematic and interoperable approach. To address these weaknesses, the on-going Global Mercury Observation System (GMOS) project ( www.gmos.eu ) established a coordinated global observation system for mercury and also retrieved historical data ( www.gmos.eu/sdi ). To manage such a large amount of information, a technological infrastructure was planned. This high-performance back-end resource, associated with sophisticated client applications, enables data storage, computing services, telecommunications networks and all services necessary to support the activity. This paper reports the architecture definition of the GMOS Cyber(e)-Infrastructure and the services developed to support science and policy, including the United Nations Environment Programme. It finally describes new possibilities in data analysis and data management through client applications.

  10. Software Infrastructure for Computer-aided Drug Discovery and Development, a Practical Example with Guidelines.

    PubMed

    Moretti, Loris; Sartori, Luca

    2016-09-01

    In the field of Computer-Aided Drug Discovery and Development (CADDD), the proper software infrastructure is essential for everyday investigations. The creation of such an environment should be carefully planned and implemented with certain features in order to be productive and efficient. Here we describe a solution to integrate standard computational services into a functional unit that empowers modelling applications for drug discovery. This system allows users with various levels of expertise to run in silico experiments automatically and without the burden of file formatting for different software, managing the actual computation, keeping track of the activities and graphical rendering of the structural outcomes. To showcase the potential of this approach, the performance of five different docking programs on an HIV-1 protease test set is presented.
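
    The automation pattern described (format conversion, computation, and activity tracking chained behind one call) can be sketched as follows; the stages below are hypothetical stand-ins, not real docking tools:

        # A minimal sketch (stand-in stages, not real docking software) of the
        # automation pattern: each step is run, timed, and logged so the user
        # never handles intermediate files by hand.
        import json, time

        def run_pipeline(ligand, stages, log_path="activity.json"):
            log, data = [], ligand
            for stage in stages:
                t0 = time.time()
                data = stage(data)
                log.append({"stage": stage.__name__,
                            "secs": round(time.time() - t0, 4)})
            with open(log_path, "w") as f:
                json.dump(log, f, indent=2)   # activity tracking
            return data

        # Hypothetical stages; a production system would call format converters
        # and a docking engine here.
        def to_engine_format(s): return s.upper()
        def dock(s): return {"ligand": s, "score": -7.3}

        print(run_pipeline("c1ccccc1", [to_engine_format, dock]))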

  11. Research Institute for Advanced Computer Science

    NASA Technical Reports Server (NTRS)

    Gross, Anthony R. (Technical Monitor); Leiner, Barry M.

    2000-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center. It currently operates under a multiple year grant/cooperative agreement that began on October 1, 1997 and is up for renewal in the year 2002. Ames has been designated NASA's Center of Excellence in Information Technology. In this capacity, Ames is charged with the responsibility to build an Information Technology Research Program that is preeminent within NASA. RIACS serves as a bridge between NASA Ames and the academic community, and RIACS scientists and visitors work in close collaboration with NASA scientists. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. RIACS research focuses on the three cornerstones of information technology research necessary to meet the future challenges of NASA missions: (1) Automated Reasoning for Autonomous Systems. Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth; (2) Human-Centered Computing. Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities; (3) High Performance Computing and Networking. Advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to data analysis of large datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply information technology research to a

  12. a Holistic Approach for Inspection of Civil Infrastructures Based on Computer Vision Techniques

    NASA Astrophysics Data System (ADS)

    Stentoumis, C.; Protopapadakis, E.; Doulamis, A.; Doulamis, N.

    2016-06-01

    In this work, the 2D recognition and 3D modelling of concrete tunnel cracks through visual cues is examined. At present, the structural integrity inspection of large-scale infrastructures is mainly performed through visual observations by human inspectors, who identify structural defects, rate them and then categorize their severity. The described approach targets minimal human intervention, for autonomous inspection of civil infrastructures. The shortfalls of existing approaches to crack assessment are addressed by proposing a novel detection scheme. Although efforts have been made in the field, synergies among proposed techniques are still missing. The holistic approach of this paper exploits state-of-the-art techniques of pattern recognition and stereo matching in order to build accurate 3D crack models. The innovation lies in the hybrid approach for the CNN detector initialization, and the use of the modified census transformation for stereo matching, along with a binary fusion of two state-of-the-art optimization schemes. The described approach manages to deal with images of harsh radiometry, along with severe radiometric differences in the stereo pair. The effectiveness of this workflow is evaluated on a real dataset gathered in highway and railway tunnels. What is promising is that the computer vision workflow described in this work can be transferred, with appropriate adaptations, to other infrastructure such as pipelines, bridges and large industrial facilities that are in need of continuous state assessment during their operational life cycle.
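
    The census transformation mentioned above is what makes the matching robust to harsh radiometry; a minimal version (plain 3x3 census with a Hamming-distance cost, simplified from the paper's modified variant) is:

        # A minimal sketch (3x3 window; the paper uses a modified variant) of
        # the census transform and its Hamming matching cost.
        import numpy as np

        def census_3x3(img):
            """8-bit signature per interior pixel: neighbour < centre tests."""
            h, w = img.shape
            centre = img[1:-1, 1:-1]
            code = np.zeros((h - 2, w - 2), dtype=np.uint8)
            shifts = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                      (0, 1), (1, -1), (1, 0), (1, 1)]
            for bit, (dy, dx) in enumerate(shifts):
                neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
                code |= np.where(neighbour < centre, 1 << bit, 0).astype(np.uint8)
            return code

        def matching_cost(a, b):
            return int(np.unpackbits(a ^ b).sum())   # Hamming distance

        # Census codes survive monotonic brightness changes, which is why the
        # matching tolerates harsh radiometry: doubling intensity changes nothing.
        img = np.random.default_rng(3).integers(0, 256, (10, 10)).astype(np.int32)
        assert np.array_equal(census_3x3(img), census_3x3(img * 2))
        print(matching_cost(census_3x3(img), census_3x3(np.roll(img, 2, axis=1))))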

  13. Airborne Advanced Reconfigurable Computer System (ARCS)

    NASA Technical Reports Server (NTRS)

    Bjurman, B. E.; Jenkins, G. M.; Masreliez, C. J.; Mcclellan, K. L.; Templeman, J. E.

    1976-01-01

    A digital computer subsystem fault-tolerant concept was defined, and the potential benefits and costs of such a subsystem were assessed when used as the central element of a new transport's flight control system. The derived advanced reconfigurable computer system (ARCS) is a triple-redundant computer subsystem that automatically reconfigures, under multiple fault conditions, from triplex to duplex to simplex operation, with redundancy recovery if the fault condition is transient. The study included criteria development covering factors at the aircraft's operation level that would influence the design of a fault-tolerant system for commercial airline use. A new reliability analysis tool was developed for evaluating redundant, fault-tolerant system availability and survivability; and a stringent digital system software design methodology was used to achieve design/implementation visibility.
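
    The triplex-to-duplex-to-simplex degradation with transient-fault recovery can be sketched as a voting policy; the strike threshold and interface below are hypothetical, not Boeing's design:

        # A minimal sketch (hypothetical policy, not the ARCS design) of
        # majority voting with degradation: a persistently disagreeing channel
        # is dropped (triplex -> duplex -> simplex); renewed agreement after a
        # transient fault clears its strike count.
        from collections import Counter

        class ReconfigurableVoter:
            def __init__(self, strike_limit=3):
                self.active = {0, 1, 2}                  # start in triplex
                self.strikes = {0: 0, 1: 0, 2: 0}
                self.strike_limit = strike_limit

            def vote(self, outputs):
                """outputs: dict channel -> value for the active channels."""
                values = [outputs[c] for c in self.active]
                majority, count = Counter(values).most_common(1)[0]
                if count > len(self.active) // 2 and len(self.active) > 1:
                    for c in list(self.active):
                        if outputs[c] != majority:
                            self.strikes[c] += 1        # possible persistent fault
                            if self.strikes[c] >= self.strike_limit:
                                self.active.discard(c)  # reconfigure: drop channel
                        else:
                            self.strikes[c] = 0         # transient fault recovered
                return majority

        voter = ReconfigurableVoter()
        print(voter.vote({0: 1.0, 1: 1.0, 2: 9.9}))     # 1.0; channel 2 is struck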

  14. A simple grid implementation with Berkeley Open Infrastructure for Network Computing using BLAST as a model

    PubMed Central

    Pinthong, Watthanai; Muangruen, Panya

    2016-01-01

    Development of high-throughput technologies, such as next-generation sequencing, allows thousands of experiments to be performed simultaneously while reducing resource requirements. Consequently, a massive amount of experimental data is now rapidly generated. Nevertheless, the data are not readily usable or meaningful until they are further analysed and interpreted. Due to the size of the data, a high-performance computer (HPC) is required for the analysis and interpretation. However, an HPC is expensive and difficult to access. Other means were developed to allow researchers to acquire the power of HPC without needing to purchase and maintain one, such as cloud computing services and grid computing systems. In this study, we implemented grid computing in a computer training center environment using Berkeley Open Infrastructure for Network Computing (BOINC) as a job distributor and data manager, combining all desktop computers to virtualize the HPC. Fifty desktop computers were used for setting up a grid system during the off-hours. In order to test the performance of the grid system, we adapted the Basic Local Alignment Search Tool (BLAST) to the BOINC system. Sequencing results from the Illumina platform were aligned to the human genome database by BLAST on the grid system. The result and processing time were compared to those from a single desktop computer and an HPC. The estimated durations of BLAST analysis for 4 million sequence reads on a desktop PC, the HPC and the grid system were 568, 24 and 5 days, respectively. Thus, the grid implementation of BLAST by BOINC is an efficient alternative to the HPC for sequence alignment. The grid implementation by BOINC also helped tap unused computing resources during the off-hours and could be easily modified for other available bioinformatics software. PMID:27547555

  15. A simple grid implementation with Berkeley Open Infrastructure for Network Computing using BLAST as a model.

    PubMed

    Pinthong, Watthanai; Muangruen, Panya; Suriyaphol, Prapat; Mairiang, Dumrong

    2016-01-01

    Development of high-throughput technologies, such as next-generation sequencing, allows thousands of experiments to be performed simultaneously while reducing resource requirements. Consequently, a massive amount of experimental data is now rapidly generated. Nevertheless, the data are not readily usable or meaningful until they are further analysed and interpreted. Due to the size of the data, a high-performance computer (HPC) is required for the analysis and interpretation. However, an HPC is expensive and difficult to access. Other means were developed to allow researchers to acquire the power of HPC without needing to purchase and maintain one, such as cloud computing services and grid computing systems. In this study, we implemented grid computing in a computer training center environment using Berkeley Open Infrastructure for Network Computing (BOINC) as a job distributor and data manager, combining all desktop computers to virtualize the HPC. Fifty desktop computers were used for setting up a grid system during the off-hours. In order to test the performance of the grid system, we adapted the Basic Local Alignment Search Tool (BLAST) to the BOINC system. Sequencing results from the Illumina platform were aligned to the human genome database by BLAST on the grid system. The result and processing time were compared to those from a single desktop computer and an HPC. The estimated durations of BLAST analysis for 4 million sequence reads on a desktop PC, the HPC and the grid system were 568, 24 and 5 days, respectively. Thus, the grid implementation of BLAST by BOINC is an efficient alternative to the HPC for sequence alignment. The grid implementation by BOINC also helped tap unused computing resources during the off-hours and could be easily modified for other available bioinformatics software.
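
    The quoted durations imply the following throughput and speed-up figures; the arithmetic below only restates the study's numbers:

        # Restating the quoted durations as throughput and speed-up figures.
        reads = 4_000_000
        days = {"desktop PC": 568, "HPC": 24, "BOINC grid (50 PCs)": 5}

        for platform, d in days.items():
            print(f"{platform:>20}: {reads / d:>10,.0f} reads/day, "
                  f"{days['desktop PC'] / d:6.1f}x vs desktop")
        # The 50-node grid shows ~114x over one desktop, nominally super-linear;
        # off-hours scheduling and I/O differences make this the study's own
        # estimate rather than a general scaling law.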

  16. AstroCloud, a Cyber-Infrastructure for Astronomy Research: Cloud Computing Environments

    NASA Astrophysics Data System (ADS)

    Li, C.; Wang, J.; Cui, C.; He, B.; Fan, D.; Yang, Y.; Chen, J.; Zhang, H.; Yu, C.; Xiao, J.; Wang, C.; Cao, Z.; Fan, Y.; Hong, Z.; Li, S.; Mi, L.; Wan, W.; Wang, J.; Yin, S.

    2015-09-01

    AstroCloud is a cyber-infrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) under funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences). Based on CloudStack, an open-source software platform, we set up the cloud computing environment for the AstroCloud project. It consists of five distributed nodes across mainland China. Users can use and analyse data in this cloud computing environment. Based on GlusterFS, we built a scalable cloud storage system. Each user has a private space, which can be shared among different virtual machines and desktop systems. With this environment, astronomers can easily access astronomical data collected by different telescopes and data centers, and data producers can archive their datasets safely.

  17. 75 FR 64720 - DOE/Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-20

    .../Advanced Scientific Computing Advisory Committee AGENCY: Department of Energy, Office of Science. ACTION: Notice of open meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific Computing... Baker, Office of Advanced Scientific Computing Research; SC-21/Germantown Building; U. S. Department...

  18. 75 FR 9887 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-04

    ... Advanced Scientific Computing Advisory Committee AGENCY: Department of Energy, Office of Science. ACTION: Notice of open meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific Computing... Baker, Office of Advanced Scientific Computing Research; SC-21/Germantown Building; U.S. Department...

  19. 78 FR 6087 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-29

    ... Advanced Scientific Computing Advisory Committee AGENCY: Department of Energy, Office of Science. ACTION: Notice of open meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific Computing..., Office of Advanced Scientific Computing Research; SC-21/Germantown Building; U. S. Department of...

  20. 75 FR 43518 - Advanced Scientific Computing Advisory Committee; Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-26

    ... Advanced Scientific Computing Advisory Committee; Meeting AGENCY: Office of Science, DOE. ACTION: Notice of open meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific Computing Advisory..., Office of Advanced Scientific Computing Research; SC-21/Germantown Building; U. S. Department of...

  1. 76 FR 41234 - Advanced Scientific Computing Advisory Committee Charter Renewal

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-13

    ... Advanced Scientific Computing Advisory Committee Charter Renewal AGENCY: Department of Energy, Office of... Administration, notice is hereby given that the Advanced Scientific Computing Advisory Committee will be renewed... concerning the Advanced Scientific Computing program in response only to charges from the Director of...

  2. 76 FR 9765 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-22

    ... Advanced Scientific Computing Advisory Committee AGENCY: Office of Science, Department of Energy. ACTION: Notice of open meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific Computing..., Office of Advanced Scientific Computing Research, SC-21/Germantown Building, U.S. Department of...

  3. 77 FR 45345 - DOE/Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-31

    .../Advanced Scientific Computing Advisory Committee AGENCY: Office of Science, Department of Energy. ACTION: Notice of open meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific Computing... Baker, Office of Advanced Scientific Computing Research; SC-21/Germantown Building; U.S. Department...

  4. 78 FR 41046 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-09

    ... Advanced Scientific Computing Advisory Committee AGENCY: Department of Energy, Office of Science. ACTION... hereby given that the Advanced Scientific Computing Advisory Committee will be renewed for a two-year... (DOE), on the Advanced Scientific Computing Research Program managed by the Office of...

  5. A roadmap towards advanced space weather science to protect society's technological infrastructure: Panel Discussion 2

    NASA Astrophysics Data System (ADS)

    Schrijver, Carolus; Kauristie, Kirsti

    This single 90-minute slot will follow on from the morning plenary presentation of the roadmap, providing an opportunity for further discussion of the panel’s findings with an invited panel of key stakeholders. --- As mankind’s technological capabilities grow, society constructs a rapidly deepening insight into the workings of the universe at large, being guided by exploring space near to our home. But at the same time our societal dependence on technology increases and with that comes a growing appreciation of the challenges presented by the phenomena that occur in that space around our home planet: Magnetic explosions on the Sun and their counterparts in the geomagnetic field can in extreme cases endanger our all-pervasive electrical infrastructure. Powerful space storms occasionally lower the reliability of the globe-spanning satellite navigation systems and interrupt radio communications. Energetic particle storms lead to malfunctions and even failures in satellites that are critical to the flow of information in the globally connected economies. These and other Sun-driven effects on Earth’s environment, collectively known as space weather, resemble some other natural hazards in the sense that they pose a risk for the safe and efficient functioning of society that needs to be understood, quantified, and - ultimately - mitigated against. The complexity of the coupled Sun-Earth system, the sparseness by which it can be covered by remote-sensing and in-situ instrumentation, and the costs of the required observational and computational infrastructure warrant a well-planned and well-coordinated approach with cost-efficient solutions. Our team is tasked with the development of a roadmap with the goal of demonstrably improving our observational capabilities, scientific understanding, and the ability to forecast. This paper summarizes the accomplishments of the roadmap team in identifying the highest-priority challenges to achieve these goals.

  6. A roadmap towards advanced space weather science to protect society's technological infrastructure

    NASA Astrophysics Data System (ADS)

    Schrijver, Carolus

    As mankind’s technological capabilities grow, society constructs a rapidly deepening insight into the workings of the universe at large, being guided by exploring space near to our home. But at the same time our societal dependence on technology increases and with that comes a growing appreciation of the challenges presented by the phenomena that occur in that space around our home planet: Magnetic explosions on the Sun and their counterparts in the geomagnetic field can in extreme cases endanger our all-pervasive electrical infrastructure. Powerful space storms occasionally lower the reliability of the globe-spanning satellite navigation systems and interrupt radio communications. Energetic particle storms lead to malfunctions and even failures in satellites that are critical to the flow of information in the globally connected economies. These and other Sun-driven effects on Earth’s environment, collectively known as space weather, resemble some other natural hazards in the sense that they pose a risk for the safe and efficient functioning of society that needs to be understood, quantified, and - ultimately - mitigated against. The complexity of the coupled Sun-Earth system, the sparseness by which it can be covered by remote-sensing and in-situ instrumentation, and the costs of the required observational and computational infrastructure warrant a well-planned and well-coordinated approach with cost-efficient solutions. Our team is tasked with the development of a roadmap with the goal of demonstrably improving our observational capabilities, scientific understanding, and the ability to forecast. This paper summarizes the accomplishments of the roadmap team in identifying the highest-priority challenges to achieve these goals.

  7. A roadmap towards advanced space weather science to protect society's technological infrastructure: Panel Discussion 1

    NASA Astrophysics Data System (ADS)

    Schrijver, Carolus; Kauristie, Kirsti

    This single 90-minute slot will follow on from the morning plenary presentation of the roadmap, providing an opportunity for further discussion of the panel’s findings with an invited panel of key stakeholders. --- As mankind’s technological capabilities grow, society constructs a rapidly deepening insight into the workings of the universe at large, being guided by exploring space near to our home. But at the same time our societal dependence on technology increases and with that comes a growing appreciation of the challenges presented by the phenomena that occur in that space around our home planet: Magnetic explosions on the Sun and their counterparts in the geomagnetic field can in extreme cases endanger our all-pervasive electrical infrastructure. Powerful space storms occasionally lower the reliability of the globe-spanning satellite navigation systems and interrupt radio communications. Energetic particle storms lead to malfunctions and even failures in satellites that are critical to the flow of information in the globally connected economies. These and other Sun-driven effects on Earth’s environment, collectively known as space weather, resemble some other natural hazards in the sense that they pose a risk for the safe and efficient functioning of society that needs to be understood, quantified, and - ultimately - mitigated against. The complexity of the coupled Sun-Earth system, the sparseness by which it can be covered by remote-sensing and in-situ instrumentation, and the costs of the required observational and computational infrastructure warrant a well-planned and well-coordinated approach with cost-efficient solutions. Our team is tasked with the development of a roadmap with the goal of demonstrably improving our observational capabilities, scientific understanding, and the ability to forecast. This paper summarizes the accomplishments of the roadmap team in identifying the highest-priority challenges to achieve these goals.

  8. A roadmap towards advanced space weather science to protect society's technological infrastructure: Panel Discussion 3

    NASA Astrophysics Data System (ADS)

    Schrijver, Carolus; Kauristie, Kirsti

    This single 90-minute slot will follow on from the morning plenary presentation of the roadmap, providing an opportunity for further discussion of the panel’s findings with an invited panel of key stakeholders. --- As mankind’s technological capabilities grow, society constructs a rapidly deepening insight into the workings of the universe at large, being guided by exploring space near to our home. But at the same time our societal dependence on technology increases and with that comes a growing appreciation of the challenges presented by the phenomena that occur in that space around our home planet: Magnetic explosions on the Sun and their counterparts in the geomagnetic field can in extreme cases endanger our all-pervasive electrical infrastructure. Powerful space storms occasionally lower the reliability of the globe-spanning satellite navigation systems and interrupt radio communications. Energetic particle storms lead to malfunctions and even failures in satellites that are critical to the flow of information in the globally connected economies. These and other Sun-driven effects on Earth’s environment, collectively known as space weather, resemble some other natural hazards in the sense that they pose a risk for the safe and efficient functioning of society that needs to be understood, quantified, and - ultimately - mitigated against. The complexity of the coupled Sun-Earth system, the sparseness by which it can be covered by remote-sensing and in-situ instrumentation, and the costs of the required observational and computational infrastructure warrant a well-planned and well-coordinated approach with cost-efficient solutions. Our team is tasked with the development of a roadmap with the goal of demonstrably improving our observational capabilities, scientific understanding, and the ability to forecast. This paper summarizes the accomplishments of the roadmap team in identifying the highest-priority challenges to achieve these goals.

  9. Advanced Refrigerant-Based Cooling Technologies for Information and Communication Infrastructure (ARCTIC)

    SciTech Connect

    Salamon, Todd

    2012-12-13

    efficiency and carbon footprint reduction for our nation's Information and Communications Technology (ICT) infrastructure. The specific objectives of the ARCTIC project focused on the following three areas: i) advanced research innovations that dramatically enhance the ability to deal with ever-increasing device heat densities and footprint reduction by bringing the liquid cooling much closer to the actual heat sources; ii) manufacturing optimization of key components; and iii) ensuring rapid market acceptance by reducing cost, thoroughly understanding system-level performance, and developing viable commercialization strategies. The project involved participants with expertise in all aspects of commercialization, including research & development, manufacturing, sales & marketing and end users. The team was led by Alcatel-Lucent, and included subcontractors Modine and USHose.

  10. Computational Design of Advanced Nuclear Fuels

    SciTech Connect

    Savrasov, Sergey; Kotliar, Gabriel; Haule, Kristjan

    2014-06-03

    The objective of the project was to develop a method for the theoretical understanding of nuclear fuel materials whose physical and thermophysical properties can be predicted from first principles using a novel dynamical mean field method for electronic structure calculations. We concentrated our study on uranium, plutonium, their oxides, nitrides, carbides, as well as some rare earth materials whose 4f electrons provide a simplified framework for understanding the complex behavior of the f electrons. We addressed the issues connected to the electronic structure, lattice instabilities, phonon and magnon dynamics as well as thermal conductivity. This allowed us to evaluate characteristics of advanced nuclear fuel systems using computer-based simulations and avoid costly experiments.

  11. ATCA for Machines-- Advanced Telecommunications Computing Architecture

    SciTech Connect

    Larsen, R.S.; /SLAC

    2008-04-22

    The Advanced Telecommunications Computing Architecture is a new industry open standard for electronics instrument modules and shelves being evaluated for the International Linear Collider (ILC). It is the first industrial standard designed for High Availability (HA). ILC availability simulations have shown clearly that the capabilities of ATCA are needed in order to achieve acceptable integrated luminosity. The ATCA architecture looks attractive for beam instruments and detector applications as well. This paper provides an overview of ongoing R&D including application of HA principles to power electronics systems.

  12. λ-augmented tree for robust data collection in Advanced Metering Infrastructure

    DOE PAGES

    Kamto, Joseph; Qian, Lijun; Li, Wei; ...

    2016-01-01

    A tree multicast configuration of smart meters (SMs) can maintain connectivity and meet the latency requirements of the Advanced Metering Infrastructure (AMI). However, such a topology is extremely weak, as any single failure suffices to break its connectivity. On the other hand, the impact of an SM node failure can be more or less significant: a non-cut SM node will have a limited local impact compared to a cut SM node, whose failure will break the network connectivity. In this work, we design a highly connected tree with a set of backup links to minimize the weakness of the tree topology of SMs. A topology repair scheme is proposed to address the impact of an SM node failure on the connectivity of the augmented tree network. It relies on a loop detection scheme to define the criticality of an SM node and specifically targets cut SM nodes by selecting backup parent SMs to cover their children. Detailed algorithms to create such an AMI tree and related theoretical and complexity analysis are provided with insightful simulation results: sufficient redundancy is provided to alleviate data loss at the cost of signaling overhead. It is, however, observed that a biconnected tree provides the best compromise between the two.

  13. λ-augmented tree for robust data collection in Advanced Metering Infrastructure

    SciTech Connect

    Kamto, Joseph; Qian, Lijun; Li, Wei; Han, Zhu

    2016-01-01

    A tree multicast configuration of smart meters (SMs) can maintain connectivity and meet the latency requirements of the Advanced Metering Infrastructure (AMI). However, such a topology is extremely weak, as any single failure suffices to break its connectivity. On the other hand, the impact of an SM node failure can be more or less significant: a non-cut SM node will have a limited local impact compared to a cut SM node, whose failure will break the network connectivity. In this work, we design a highly connected tree with a set of backup links to minimize the weakness of the tree topology of SMs. A topology repair scheme is proposed to address the impact of an SM node failure on the connectivity of the augmented tree network. It relies on a loop detection scheme to define the criticality of an SM node and specifically targets cut SM nodes by selecting backup parent SMs to cover their children. Detailed algorithms to create such an AMI tree and related theoretical and complexity analysis are provided with insightful simulation results: sufficient redundancy is provided to alleviate data loss at the cost of signaling overhead. It is, however, observed that a biconnected tree provides the best compromise between the two.
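
    The cut versus non-cut distinction corresponds to articulation points of the communication graph; a standard DFS-based check (a textbook algorithm, not the paper's exact loop detection scheme) is sketched below, including the effect of adding one backup link:

        # A minimal sketch (textbook DFS algorithm, not the paper's scheme) for
        # classifying SM nodes: a cut node is an articulation point, i.e. its
        # failure disconnects the network.
        def articulation_points(adj):
            disc, low, cut = {}, {}, set()
            timer = [0]

            def dfs(u, parent):
                disc[u] = low[u] = timer[0]
                timer[0] += 1
                children = 0
                for v in adj[u]:
                    if v == parent:
                        continue
                    if v in disc:                      # back edge
                        low[u] = min(low[u], disc[v])
                    else:
                        children += 1
                        dfs(v, u)
                        low[u] = min(low[u], low[v])
                        if parent is not None and low[v] >= disc[u]:
                            cut.add(u)                 # v's subtree needs u
                if parent is None and children > 1:
                    cut.add(u)                         # root joining subtrees

            for u in adj:
                if u not in disc:
                    dfs(u, None)
            return cut

        # In a pure collection tree every internal SM is a cut node:
        tree = {0: {1, 2}, 1: {0, 3}, 2: {0}, 3: {1}}
        print(articulation_points(tree))    # {0, 1}
        # One backup link (2-3) closes a loop and removes all cut nodes:
        ring = {0: {1, 2}, 1: {0, 3}, 2: {0, 3}, 3: {1, 2}}
        print(articulation_points(ring))    # set()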

  14. Feasibility of a Networked Air Traffic Infrastructure Validation Environment for Advanced NextGen Concepts

    NASA Technical Reports Server (NTRS)

    McCormack, Michael J.; Gibson, Alec K.; Dennis, Noah E.; Underwood, Matthew C.; Miller, Lana B.; Ballin, Mark G.

    2013-01-01

    Next Generation Air Transportation System (NextGen) applications reliant upon aircraft data links such as Automatic Dependent Surveillance-Broadcast (ADS-B) offer a sweeping modernization of the National Airspace System (NAS), but the aviation stakeholder community has not yet established a positive business case for equipage, and message content standards remain in flux. It is necessary to transition promising Air Traffic Management (ATM) Concepts of Operations (ConOps) from simulation environments to full-scale flight tests in order to validate user benefits and solidify message standards. However, flight tests are prohibitively expensive, and message standards for Commercial-off-the-Shelf (COTS) systems cannot support many advanced ConOps. It is therefore proposed to simulate future aircraft surveillance and communications equipage and employ an existing commercial data link to exchange data during dedicated flight tests. This capability, referred to as the Networked Air Traffic Infrastructure Validation Environment (NATIVE), would emulate aircraft data links such as ADS-B using in-flight Internet and easily installed test equipment. By utilizing low-cost equipment that is easy to install and certify for testing, advanced ATM ConOps can be validated, message content standards can be solidified, and new standards can be established through full-scale flight trials without unnecessary or expensive equipage or extensive flight test preparation. This paper presents results of a feasibility study of the NATIVE concept. To determine requirements, six NATIVE design configurations were developed for two NASA ConOps that rely on ADS-B. The performance characteristics of three existing in-flight Internet services were investigated to determine whether performance is adequate to support the concept. Next, a study of requisite hardware and software was conducted to examine whether and how the NATIVE concept might be realized. Finally, to determine a business case

  15. Towards sustainable infrastructure management: knowledge-based service-oriented computing framework for visual analytics

    NASA Astrophysics Data System (ADS)

    Vatcha, Rashna; Lee, Seok-Won; Murty, Ajeet; Tolone, William; Wang, Xiaoyu; Dou, Wenwen; Chang, Remco; Ribarsky, William; Liu, Wanqiu; Chen, Shen-en; Hauser, Edd

    2009-05-01

    Infrastructure management (and its associated processes) is complex to understand and perform, and it is therefore hard to make efficient, effective, and informed decisions. The management involves a multi-faceted operation that requires robust data fusion, visualization and decision making. In order to protect and build sustainable critical assets, we present our on-going multi-disciplinary large-scale project that establishes the Integrated Remote Sensing and Visualization (IRSV) system with a focus on supporting bridge structure inspection and management. This project involves specific expertise from civil engineers, computer scientists, geographers, and real-world practitioners from industry, local and federal government agencies. IRSV is being designed to accommodate the essential needs from the following aspects: 1) Better understanding and enforcement of the complex inspection process, bridging the gap between evidence gathering and decision making through the implementation of an ontological knowledge engineering system; 2) Aggregation, representation and fusion of complex multi-layered heterogeneous data (i.e. infrared imaging, aerial photos and ground-mounted LIDAR etc.) with domain application knowledge to support a machine-understandable recommendation system; 3) Robust visualization techniques with large-scale analytical and interactive visualizations that support users' decision making; and 4) Integration of these needs through the flexible Service-oriented Architecture (SOA) framework to compose and provide services on-demand. IRSV is expected to serve as a management and data visualization tool for construction deliverable assurance and infrastructure monitoring both periodically (annually, monthly, even daily if needed) as well as after extreme events.

  16. A Comprehensive and Cost-Effective Computer Infrastructure for K-12 Schools

    NASA Technical Reports Server (NTRS)

    Warren, G. P.; Seaton, J. M.

    1996-01-01

    Since 1993, NASA Langley Research Center has been developing and implementing a low-cost Internet connection model, including system architecture, training, and support, to provide Internet access for an entire network of computers. This infrastructure allows local area networks which exceed 50 machines per school to independently access the complete functionality of the Internet by connecting to a central site, using state-of-the-art commercial modem technology, through a single standard telephone line. By locating high-cost resources at this central site and sharing these resources and their costs among the school districts throughout a region, a practical, efficient, and affordable infrastructure for providing scalable Internet connectivity has been developed. As the demand for faster Internet access grows, the model has a simple expansion path that eliminates the need to replace major system components and re-train personnel. Observations of typical Internet usage within an environment, particularly school classrooms, have shown that after an initial period of 'surfing,' the Internet traffic becomes repetitive. By automatically storing requested Internet information on a high-capacity networked disk drive at the local site (network-based disk caching), then updating this information only when it changes, well over 80 percent of the Internet traffic that leaves a location can be eliminated by retrieving the information from the local disk cache.
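
    The caching arithmetic is easy to reproduce: with repetitive classroom traffic, hit rates above 80 percent follow directly. A toy sketch (illustrative numbers, not Langley's implementation):

        # A minimal sketch (illustrative numbers) of network-based disk caching
        # on a shared phone line: repeat requests are served locally; only
        # first-time or changed pages cross the modem link.
        cache = {}
        stats = {"hits": 0, "misses": 0}

        def fetch(url, origin):
            if url in cache:            # served from the local disk cache
                stats["hits"] += 1
            else:
                stats["misses"] += 1    # only misses use the phone line
                cache[url] = origin[url]
            return cache[url]

        origin = {f"page{i}": b"x" * 20_000 for i in range(10)}
        for _ in range(10):             # 10 unique pages, each visited 10 times
            for url in origin:
                fetch(url, origin)

        hit_rate = stats["hits"] / (stats["hits"] + stats["misses"])
        print(f"external traffic eliminated: {hit_rate:.0%}")   # 90% here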

  17. Monitoring of infrastructural sites by means of advanced multi-temporal DInSAR methods

    NASA Astrophysics Data System (ADS)

    Vollrath, Andreas; Zucca, Francesco; Stramondo, Salvatore

    2013-10-01

    With the launch of Sentinel-1, advanced interferometric measurements will become more applicable than ever. The foreseen standard Wide Area Product (WAP), with its higher spatial and temporal resolution than comparable SAR missions, will provide the basis for new wide-scale and multitemporal analyses. To date, SAR interferometry methods for risk assessment have mainly been applied to active tectonic zones, plate boundaries, volcanoes and urban areas, where local surface movement rates exceed the expected error and enough pixels per area contain a relatively stable phase. This study, in contrast, focuses on infrastructural sites that are located outside cities and are therefore surrounded by rural landscapes. The impetus came from the European Commission's communication letter regarding the stress tests of nuclear power plants in Europe in 2012, which notes that continuously re-evaluated risk and safety assessments are necessary to guarantee the highest possible security for European citizens and the environment. This is also true for other infrastructural sites that are prone to diverse geophysical hazards. In combination with GPS and broadband seismology, multitemporal Differential Interferometric SAR approaches have demonstrated great potential in contributing valuable information on surface movement phenomena. At this stage of the project, first results of the StaMPS-MTI approach (combined PSInSAR and SBAS) are presented for the industrial area around Priolo Gargallo in south-east Sicily, using ENVISAT ASAR IM mode data from 2003-2010. This area is located between the Malta Escarpment fault system and the Hyblean plateau and is prone to earthquake and tsunami risk. It features a high density of oil refineries located directly at the coast. The general potential of these techniques with respect to the SENTINEL-1 mission is shown for this area, together with a road-map for further improvements

  18. Education as eHealth Infrastructure: Considerations in Advancing a National Agenda for eHealth

    ERIC Educational Resources Information Center

    Hilberts, Sonya; Gray, Kathleen

    2014-01-01

    This paper explores the role of education as infrastructure in large-scale ehealth strategies--in theory, in international practice and in one national case study. Education is often invisible in the documentation of ehealth infrastructure. Nevertheless a review of international practice shows that there is significant educational investment made…

  19. Event heap: a coordination infrastructure for dynamic heterogeneous application interactions in ubiquitous computing environments

    DOEpatents

    Johanson, Bradley E.; Fox, Armando; Winograd, Terry A.; Hanrahan, Patrick M.

    2010-04-20

    An efficient and adaptive middleware infrastructure called the Event Heap system dynamically coordinates application interactions and communications in a ubiquitous computing environment, e.g., an interactive workspace, having heterogeneous software applications running on various machines and devices across different platforms. Applications exchange events via the Event Heap. Each event is characterized by a set of unordered, named fields. Events are routed by matching certain attributes in the fields. The source and target versions of each field are automatically set when an event is posted or used as a template. The Event Heap system implements a unique combination of features, both intrinsic to tuplespaces and specific to the Event Heap, including content based addressing, support for routing patterns, standard routing fields, limited data persistence, query persistence/registration, transparent communication, self-description, flexible typing, logical/physical centralization, portable client API, at most once per source first-in-first-out ordering, and modular restartability.
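
    The content-based addressing described can be illustrated in a few lines; this is a conceptual sketch, not the patented implementation:

        # A minimal conceptual sketch (not the patented implementation) of
        # Event Heap-style coordination: events are sets of named fields and
        # consumers retrieve them by content, not by machine address.
        class EventHeap:
            def __init__(self):
                self.events = []

            def post(self, **fields):
                self.events.append(fields)

            def get(self, **template):
                """Return and remove the first event matching the template."""
                for i, ev in enumerate(self.events):
                    if all(ev.get(k) == v for k, v in template.items()):
                        return self.events.pop(i)
                return None

        heap = EventHeap()
        heap.post(type="DisplayRequest", target="projector1",
                  url="http://example.org/slides")
        event = heap.get(type="DisplayRequest", target="projector1")
        print(event["url"])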

  20. High-Performance Compute Infrastructure in Astronomy: 2020 Is Only Months Away

    NASA Astrophysics Data System (ADS)

    Berriman, B.; Deelman, E.; Juve, G.; Rynge, M.; Vöckler, J. S.

    2012-09-01

    By 2020, astronomy will be awash with as much as 60 PB of public data. Full scientific exploitation of such massive volumes of data will require high-performance computing on server farms co-located with the data. Development of this computing model will be a community-wide enterprise that has profound cultural and technical implications. Astronomers must be prepared to develop environment-agnostic applications that support parallel processing. The community must investigate the applicability and cost-benefit of emerging technologies such as cloud computing to astronomy, and must engage the Computer Science community to develop science-driven cyberinfrastructure such as workflow schedulers and optimizers. We report here the results of collaborations between a science center, IPAC, and a Computer Science research institute, ISI. These collaborations may be considered pathfinders in developing a high-performance compute infrastructure in astronomy. These collaborations investigated two exemplar large-scale science-driver workflow applications: 1) Calculation of an infrared atlas of the Galactic Plane at 18 different wavelengths by placing data from multiple surveys on a common plate scale and co-registering all the pixels; 2) Calculation of an atlas of periodicities present in the public Kepler data sets, which currently contain 380,000 light curves. These products have been generated with two workflow applications, written in C for performance and designed to support parallel processing on multiple environments and platforms, but with different compute resource needs: the Montage image mosaic engine is I/O-bound, and the NASA Star and Exoplanet Database periodogram code is CPU-bound. Our presentation will report cost and performance metrics and lessons-learned for continuing development. Applicability of Cloud Computing: Commercial Cloud providers generally charge for all operations, including processing, transfer of input and output data, and for storage of data

  1. An APEL Tool Based CPU Usage Accounting Infrastructure for Large Scale Computing Grids

    NASA Astrophysics Data System (ADS)

    Jiang, Ming; Novales, Cristina Del Cano; Mathieu, Gilles; Casson, John; Rogers, William; Gordon, John

    APEL (Accounting Processor for Event Logs) is the fundamental tool of the CPU usage accounting infrastructure deployed within the WLCG and EGEE Grids. In these Grids, jobs are submitted by users to computing resources via a Grid resource broker (e.g. the gLite Workload Management System). As a log-processing tool, APEL interprets Grid gatekeeper logs (e.g. globus) and batch-system logs (e.g. PBS, LSF, SGE and Condor) to produce CPU job accounting records identified with Grid identities. These records provide a complete description of the usage of computing resources by users' jobs. APEL publishes accounting records into an accounting record repository at a Grid Operations Centre (GOC) for access from a GUI web tool. The functions of log-file parsing, record generation and publication are implemented by the APEL Parser, APEL Core and APEL Publisher components, respectively. Within the distributed accounting infrastructure, accounting records are transported from APEL Publishers at Grid sites to either a regionalised accounting system or the central one, by choice, via a common ActiveMQ message broker network. This provides an open transport layer for other accounting systems to publish relevant accounting data to a central accounting repository via a unified interface provided by an APEL Publisher, and also gives regional/National Grid Initiative (NGI) Grids flexibility in their choice of accounting system. The robust and secure delivery of accounting record messages at an NGI level, and between NGI accounting instances and the central one, is achieved by using configurable APEL Publishers and an ActiveMQ message broker network.
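
    The core join APEL performs, matching gatekeeper identity to batch-system usage by local job id, can be sketched as follows; the log format here is hypothetical, not the globus or PBS formats APEL actually parses:

        # A minimal sketch (hypothetical log format, not APEL's real parsers)
        # of the pattern APEL implements: join gatekeeper records (grid
        # identity) with batch-system records (CPU usage) on the local job id.
        gatekeeper_log = [
            "job=1234 dn=/DC=org/CN=alice",
            "job=1235 dn=/DC=org/CN=bob",
        ]
        batch_log = [
            "job=1234 cpu_secs=7200 wall_secs=7600",
            "job=1235 cpu_secs=300 wall_secs=450",
        ]

        def parse(line):
            return dict(field.split("=", 1) for field in line.split())

        identity = {r["job"]: r["dn"] for r in map(parse, gatekeeper_log)}
        records = [
            {"grid_identity": identity[r["job"]],
             "cpu_secs": int(r["cpu_secs"]),
             "wall_secs": int(r["wall_secs"])}
            for r in map(parse, batch_log)
        ]
        print(records[0])   # ready to publish to the repository via the broker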

  2. Sustaining a Community Computing Infrastructure for Online Teacher Professional Development: A Case Study of Designing Tapped In

    NASA Astrophysics Data System (ADS)

    Farooq, Umer; Schank, Patricia; Harris, Alexandra; Fusco, Judith; Schlager, Mark

    Community computing has recently grown to become a major research area in human-computer interaction. One of the objectives of community computing is to support computer-supported cooperative work among distributed collaborators working toward shared professional goals in online communities of practice. A core issue in designing and developing community computing infrastructures — the underlying sociotechnical layer that supports communitarian activities — is sustainability. Many community computing initiatives fail because the underlying infrastructure does not meet end-user requirements; the community is unable to maintain a critical mass of users consistently over time; it generates insufficient social capital to support significant contributions by members of the community; or, as typically happens with funded initiatives, financial and human capital resources become unavailable to further maintain the infrastructure. On the basis of more than 9 years of design experience with Tapped In — an online community of practice for education professionals — we present a case study that discusses four design interventions that have sustained the Tapped In infrastructure and its community to date. These interventions represent broader design strategies for developing online environments for professional communities of practice.

  3. Defense Science Board Report on Advanced Computing

    DTIC Science & Technology

    2009-03-01

    complex computational issues are pursued, and that several vendors remain at the leading edge of supercomputing capability in the U.S. In... pursuing the ASC program to help assure that HPC advances are available to the broad national security community. As in the past, many... apply HPC to technical problems related to weapons physics, but that are entirely unclassified. Examples include explosive astrophysical

  4. Advanced high-performance computer system architectures

    NASA Astrophysics Data System (ADS)

    Vinogradov, V. I.

    2007-02-01

    Computer systems and communication technologies are converging on switched, high-performance modular system architectures based on high-speed switched interconnections. Multi-core processors have become a more promising route to high-performance systems, and traditional parallel-bus system architectures (VME/VXI, cPCI/PXI) are giving way to new, higher-speed serial switched interconnections. Fundamental to system architecture development are a compact modular component strategy, low-power processors, new serial high-speed interface chips on the board, and high-speed switched fabrics for SAN architectures. An overview of advanced modular concepts and of new international standards for developing high-performance embedded and compact modular systems for real-time applications is given.

  5. Making Advanced Computer Science Topics More Accessible through Interactive Technologies

    ERIC Educational Resources Information Center

    Shao, Kun; Maher, Peter

    2012-01-01

    Purpose: Teaching advanced technical concepts in a computer science program to students of different technical backgrounds presents many challenges. The purpose of this paper is to present a detailed experimental pedagogy in teaching advanced computer science topics, such as computer networking, telecommunications and data structures using…

  6. 75 FR 57742 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-22

    ... Advanced Scientific Computing Advisory Committee AGENCY: Department of Energy, Office of Science. ACTION... Scientific Computing Advisory Committee (ASCAC). Federal Advisory Committee Act (Pub. L. 92-463, 86 Stat. 770...: Melea Baker, Office of Advanced Scientific Computing Research; SC-21/Germantown Building;...

  7. 76 FR 45786 - Advanced Scientific Computing Advisory Committee; Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-01

    ... Advanced Scientific Computing Advisory Committee; Meeting AGENCY: Office of Science, Department of Energy... Computing Advisory Committee (ASCAC). Federal Advisory Committee Act (Pub. L. 92-463, 86 Stat. 770) requires... INFORMATION CONTACT: Melea Baker, Office of Advanced Scientific Computing Research; SC-21/Germantown...

  8. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure

    NASA Astrophysics Data System (ADS)

    Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei

    2011-09-01

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. This work was presented in part at the 2010 Annual Meeting of the American Association of Physicists in Medicine (AAPM), Philadelphia, PA.

  9. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure.

    PubMed

    Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei

    2011-09-07

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed.
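
    The master/worker pattern the authors describe, and the quoted 47x figure, can be illustrated locally; the "physics" below is a placeholder tally, not EGS5:

        # A minimal sketch (local stand-in for the cloud cluster; placeholder
        # physics, not EGS5) of the master/worker pattern, plus a check of the
        # quoted 47x speed-up.
        import multiprocessing as mp
        import random

        def simulate(args):
            n_histories, seed = args          # worker: tally its share
            rng = random.Random(seed)
            return sum(rng.random() for _ in range(n_histories))

        if __name__ == "__main__":
            total, workers = 1_000_000, 4
            chunks = [(total // workers, seed) for seed in range(workers)]
            with mp.Pool(workers) as pool:    # master distributes and aggregates
                tally = sum(pool.map(simulate, chunks))
            print(f"aggregated tally: {tally:.1f}")
            print(f"quoted speed-up: {2.58 * 3600 / (3.3 * 60):.0f}x")  # ~47x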

  10. Large-Scale Data Collection Metadata Management at the National Computation Infrastructure

    NASA Astrophysics Data System (ADS)

    Wang, J.; Evans, B. J. K.; Bastrakova, I.; Ryder, G.; Martin, J.; Duursma, D.; Gohar, K.; Mackey, T.; Paget, M.; Siddeswara, G.

    2014-12-01

    Data Collection management has become an essential activity at the National Computation Infrastructure (NCI) in Australia. NCI's partners (CSIRO, Bureau of Meteorology, Australian National University, and Geoscience Australia), supported by the Australian Government and Research Data Storage Infrastructure (RDSI), have established a national data resource that is co-located with high-performance computing. This paper addresses the metadata management of these data assets over their lifetime. NCI manages 36 data collections (10+ PB) categorised as earth system sciences, climate and weather model data assets and products, earth and marine observations and products, geosciences, terrestrial ecosystem, water management and hydrology, astronomy, social science and biosciences. The data is largely sourced from NCI partners, the custodians of many of the national scientific records, and major research community organisations. The data is made available in a HPC and data-intensive environment - a ~56000 core supercomputer, virtual labs on a 3000 core cloud system, and data services. By assembling these large national assets, new opportunities have arisen to harmonise the data collections, making a powerful cross-disciplinary resource. To support the overall management, a Data Management Plan (DMP) has been developed to record the workflows, procedures, the key contacts and responsibilities. The DMP has fields that can be exported to the ISO19115 schema and to the collection level catalogue of GeoNetwork. The subset or file level metadata catalogues are linked with the collection level through parent-child relationship definition using UUID. A number of tools have been developed that support interactive metadata management, bulk loading of data, and support for computational workflows or data pipelines. NCI creates persistent identifiers for each of the assets. The data collection is tracked over its lifetime, and the recognition of the data providers, data owners, data
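
    The parent-child linkage between collection-level and file-level records via UUIDs can be sketched as follows; the field names are assumptions, not NCI's actual schema:

        # A minimal sketch (field names assumed, not NCI's schema) of linking
        # file-level records to a collection-level parent via UUIDs.
        import uuid

        collection = {"uuid": str(uuid.uuid4()),
                      "title": "climate model output collection",
                      "parent_uuid": None}      # top of the hierarchy

        granules = [{"uuid": str(uuid.uuid4()),
                     "title": f"temperature, year {year}",
                     "parent_uuid": collection["uuid"]}
                    for year in (2000, 2001, 2002)]

        # Resolving a granule back to its parent collection record:
        catalogue = {rec["uuid"]: rec for rec in [collection, *granules]}
        print(catalogue[granules[0]["parent_uuid"]]["title"])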

  11. OPENING REMARKS: Scientific Discovery through Advanced Computing

    NASA Astrophysics Data System (ADS)

    Strayer, Michael

    2006-01-01

    Good morning. Welcome to SciDAC 2006 and Denver. I share greetings from the new Undersecretary for Energy, Ray Orbach. Five years ago SciDAC was launched as an experiment in computational science. The goal was to form partnerships among science applications, computer scientists, and applied mathematicians to take advantage of the potential of emerging terascale computers. This experiment has been a resounding success. SciDAC has emerged as a powerful concept for addressing some of the biggest challenges facing our world. As significant as these successes were, I believe there is also significance in the teams that achieved them. In addition to their scientific aims, these teams have advanced the overall field of computational science and set the stage for even larger accomplishments as we look ahead to SciDAC-2. I am sure that many of you are expecting to hear about the results of our current solicitation for SciDAC-2. I’m afraid we are not quite ready to make that announcement. Decisions are still being made and we will announce the results later this summer. Nearly 250 unique proposals were received and evaluated, involving literally thousands of researchers, postdocs, and students. These collectively requested more than five times our expected budget. This response is a testament to the success of SciDAC in the community. In SciDAC-2 our budget has been increased to about $70 million for FY 2007 and our partnerships have expanded to include the Environment and National Security missions of the Department. The National Science Foundation has also joined as a partner. These new partnerships are expected to expand the application space of SciDAC, and broaden the impact and visibility of the program. We have, with our recent solicitation, expanded to turbulence, computational biology, and groundwater reactive modeling and simulation. We are currently talking with the Department’s applied energy programs about risk assessment, optimization of complex systems - such

  12. Stimulated dual-band infrared computed tomography: A tool to inspect the aging infrastructure

    SciTech Connect

    Del Grande, N.K.; Durbin, P.F.

    1995-06-27

    The authors have developed stimulated dual-band infrared (IR) computed tomography as a tool to inspect the aging infrastructure. The system has the potential to locate and quantify structural damage within airframes and bridge decks. Typically, dual-band IR detection methods improve the signal-to-noise ratio by a factor of ten, compared to single-band IR detection methods. They conducted a demonstration at Boeing using a uniform pulsed-heat source to stimulate IR images of hidden defects in the 727 fuselage. The dual-band IR camera and image processing system produced temperature, thermal inertia, and cooling-rate maps. In combination, these maps characterized the defect site, size, depth, thickness and type. The authors quantified the percent metal loss from corrosion above a threshold of 5%, with overall uncertainties of 3%. Also, they conducted a feasibility study of dual-band IR thermal imaging for bridge deck inspections. They determined the sites and relative concrete displacement of 2-in. and 4-in. deep delaminations from thin styrofoam implants in asphalt-covered concrete slabs. They demonstrated the value of dual-band IR computed tomography to quantify structural damage within flash-heated airframes and naturally-heated bridge decks.
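
    One of the reductions mentioned above, the cooling-rate map, can be sketched as a per-pixel slope fit over the post-flash image stack. The snippet below uses synthetic data and a temperature-versus-log-time fit, a common flash-thermography reduction; it is an assumption for illustration, not the authors' exact processing chain.

        import numpy as np

        # Synthetic stand-in for a post-flash IR sequence: 20 frames of a
        # 64 x 64 scene captured at times t after the heat pulse.
        t = np.linspace(0.1, 2.0, 20)                  # seconds after flash
        frames = 25.0 + np.random.rand(20, 64, 64)     # hypothetical temperatures

        # Per-pixel cooling rate: slope of temperature against log-time;
        # defects such as corrosion or delaminations appear as anomalous slopes.
        coeffs = np.polyfit(np.log(t), frames.reshape(len(t), -1), deg=1)
        cooling_rate = coeffs[0].reshape(64, 64)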

  13. Application of advanced electronics to a future spacecraft computer design

    NASA Technical Reports Server (NTRS)

    Carney, P. C.

    1980-01-01

    Advancements in hardware and software technology are summarized with specific emphasis on spacecraft computer capabilities. Available state-of-the-art technology is reviewed, and candidate architectures are defined.

  14. 77 FR 12823 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-02

    ... final report; Advanced Networking update; status from the Computer Science COV; Early Career technical talks; summary of the Applied Math and Computer Science workshops; ASCR's new SBIR awards; Data-intensive...

  15. [Design and study of parallel computing environment of Monte Carlo simulation for particle therapy planning using a public cloud-computing infrastructure].

    PubMed

    Yokohama, Noriya

    2013-07-01

    This report aimed to design the architecture, and to measure the performance, of a parallel computing environment for Monte Carlo simulation in particle therapy planning, built on a high-performance computing (HPC) instance within a public cloud-computing infrastructure. Performance measurements showed an approximately 28-fold speed-up over a single-threaded architecture, combined with improved stability. A study of methods for optimizing system operations also indicated lower cost.
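
    The quoted figure reduces to the usual speed-up and parallel-efficiency metrics. The sketch below shows the arithmetic; the worker count is a hypothetical illustration, since the instance configuration is not given here.

        # Speed-up and parallel efficiency from wall-clock times.
        def speedup(t_serial, t_parallel):
            return t_serial / t_parallel

        def efficiency(speedup_factor, n_workers):
            return speedup_factor / n_workers

        s = speedup(28.0, 1.0)       # e.g. 28 h single-threaded vs 1 h parallel
        print(s)                     # 28.0, matching the reported ~28x
        print(efficiency(s, 32))     # 0.875 -- IF 32 workers were used (assumed)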

  16. Developing an Advanced Environment for Collaborative Computing

    NASA Technical Reports Server (NTRS)

    Becerra-Fernandez, Irma; Stewart, Helen; DelAlto, Martha; Knight, Chris

    1999-01-01

    Knowledge management in general tries to organize and make available important know-how, whenever and wherever it is needed. Today, organizations rely on decision-makers to produce "mission critical" decisions that are based on inputs from multiple domains. The ideal decision-maker has a profound understanding of the specific domains that influence the decision-making process, coupled with the experience that allows them to act quickly and decisively on the information. In addition, learning companies benefit by not repeating costly mistakes and by reducing time-to-market in Research & Development projects. Group decision-making tools can help companies make better decisions by capturing the knowledge from groups of experts. Furthermore, companies that capture their customers' preferences can improve their customer service, which translates to larger profits. Therefore, collaborative computing provides a common communication space, improves sharing of knowledge, provides a mechanism for real-time feedback on the tasks being performed, helps to optimize processes, and results in a centralized knowledge warehouse. This paper presents the research directions of a project which seeks to augment an advanced collaborative web-based environment called Postdoc with workflow capabilities. Postdoc is a "government-off-the-shelf" document management software package developed at NASA Ames Research Center (ARC).

  17. Computational and design methods for advanced imaging

    NASA Astrophysics Data System (ADS)

    Birch, Gabriel C.

    This dissertation merges the optical design and computational aspects of imaging systems to create novel devices that solve engineering problems in optical science, and attempts to expand the solution space available to the optical designer. This dissertation is divided into two parts: the first discusses a new active illumination depth sensing modality, while the second part discusses a passive illumination system called plenoptic, or lightfield, imaging. The new depth sensing modality introduced in part one is called depth through controlled aberration. This technique illuminates a target with a known, aberrated projected pattern and takes an image using a traditional, unmodified imaging system. Knowing how the added aberration in the projected pattern changes as a function of depth, we are able to quantitatively determine the depth of a series of points from the camera. A major advantage of this method is that the illumination and imaging axes can be coincident. Plenoptic cameras capture both spatial and angular data simultaneously. This dissertation presents a new set of parameters that permit the design and comparison of plenoptic devices outside the traditionally published plenoptic 1.0 and plenoptic 2.0 configurations. Additionally, a series of engineering advancements are presented, including full system raytraces of raw plenoptic images, Zernike compression techniques for raw image files, and non-uniform lenslet arrays to compensate for plenoptic system aberrations. Finally, a new snapshot imaging spectrometer is proposed based on the plenoptic configuration.
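
    The simultaneous spatial/angular capture can be made concrete with the standard plenoptic 1.0 decoding step, which reshapes the raw sensor image into sub-aperture views. The sketch below is generic decoding under an assumed lenslet pitch and synthetic data, not the dissertation's generalized parameterization.

        import numpy as np

        p = 8                                   # assumed lenslet pitch in pixels
        raw = np.random.rand(64 * p, 64 * p)    # synthetic stand-in for a raw capture

        # Group the sensor into p x p tiles (one tile per lenslet), then reorder
        # so the first two axes select a pixel position (u, v) under every
        # lenslet: each (u, v) slice is one sub-aperture view of the scene.
        h, w = raw.shape
        lightfield = raw.reshape(h // p, p, w // p, p).transpose(1, 3, 0, 2)
        central_view = lightfield[p // 2, p // 2]   # 64 x 64 central sub-aperture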

  18. Siberia Integrated Regional Study information-computational and instrumental infrastructure development

    NASA Astrophysics Data System (ADS)

    Gordov, E. P.; Kabanov, M. V.; Krutikov, V. A.; Kuzin, V. I.; Lykosov, V. N.; Okladnikov, I.; Titov, A. G.; Vaganov, E. A.

    2011-12-01

    Several important steps are reported in the development of the information-computational and instrumental infrastructure of the NEESPI mega-project Siberia Integrated Regional Study, which is devoted to investigating the impact of global change on the Siberian environment and related feedbacks. Firstly, development of the scientific and technological basis and creation of a reference network for monitoring climatic changes in Siberia are planned for 2012-2017. The network will include 12 reference monitoring stations equipped with modern instrumentation, spread across Siberia, as well as a data center for storing instrumental and modeling data and providing access to them. The stations will be created at the following sites: Barnaul (Aktru), Chita (Arakhley), Irkutsk (Mondy), Khanty-Mansiisk (Shapsha), Krasnoyarsk (Zotino), Kyzyl (Dolinnaya), Nadym (Polyarnaya), Novosibirsk (Chany), Tomsk (Vasyuganie), Tomsk (Akademgorodok), Ulan-Ude (Istomino) and Yakutsk (Spasskaya Pad'), and will be supported in operation by the relevant SB RAS research institutes and Siberian universities. A suite of models is also under development, comprising global and regional climatic and meteorological models run at the Siberian Supercomputer Center. The CLEARS (CLimate and Environment Analysis and Research System) information-computational web-GIS is planned to be deployed at the data center and used for analysis of recent and future climatic and environmental changes in Siberia. Altogether these components will form an SB RAS megascience facility for detailed monitoring of ongoing natural and climatic processes in this territory and forecasting of their future dynamics. It should create an information basis for decision-making on the future socio-economic development of Siberia, and will also significantly improve the efficiency of international scientific cooperation in Siberia.

  19. Advanced flight computers for planetary exploration

    NASA Technical Reports Server (NTRS)

    Stephenson, R. Rhoads

    1988-01-01

    Research concerning flight computers for use on interplanetary probes is reviewed. The history of these computers from the Viking mission to the present is outlined. The differences between ground commercial computers and computers for planetary exploration are listed. The development of a computer for the Mariner Mark II comet rendezvous asteroid flyby mission is described. Various aspects of recently developed computer systems are examined, including the Max real-time embedded computer, a hypercube distributed supercomputer, a SAR data processor, a processor for the High Resolution IR Imaging Spectrometer, and a robotic vision multiresolution pyramid machine for processing images obtained by a Mars Rover.

  20. The National Alliance for Medical Image Computing, a roadmap initiative to build a free and open source software infrastructure for translational research in medical image analysis.

    PubMed

    Kapur, Tina; Pieper, Steve; Whitaker, Ross; Aylward, Stephen; Jakab, Marianna; Schroeder, Will; Kikinis, Ron

    2012-01-01

    The National Alliance for Medical Image Computing (NA-MIC) is a multi-institutional, interdisciplinary community of researchers who share the recognition that modern health care demands improved technologies to ease suffering and prolong productive life. NA-MIC was organized under the National Centers for Biomedical Computing 7 years ago; its mission is to implement a robust and flexible open-source infrastructure for developing and applying advanced imaging technologies across a range of important biomedical research disciplines. As a measure of its success, NA-MIC is now applying this technology to diseases that have immense impact on the duration and quality of life: cancer, heart disease, trauma, and degenerative genetic diseases. The targets of this technology range from group comparisons to subject-specific analysis.

  1. Some Recent Advances in Computer Graphics.

    ERIC Educational Resources Information Center

    Whitted, Turner

    1982-01-01

    General principles of computer graphics are reviewed, including discussions of display hardware, geometric modeling, algorithms, and applications in science, computer-aided design, flight training, communications, business, art, and entertainment. (JN)

  2. Computing Advances in the Teaching of Chemistry.

    ERIC Educational Resources Information Center

    Baskett, W. P.; Matthews, G. P.

    1984-01-01

    Discusses three trends in computer-oriented chemistry instruction: (1) availability of interfaces to integrate computers with experiments; (2) impact of the development of higher resolution graphics and greater memory capacity; and (3) role of videodisc technology on computer assisted instruction. Includes program listings for auto-titration and…

  3. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING: APPLICATION OF COMPUTATIONAL BIOPHYSICAL TRANSPORT, COMPUTATIONAL CHEMISTRY, AND COMPUTATIONAL BIOLOGY

    EPA Science Inventory

    Computational toxicology (CompTox) leverages the significant gains in computing power and computational techniques (e.g., numerical approaches, structure-activity relationships, bioinformatics) realized over the last few years, thereby reducing costs and increasing efficiency i...

  4. A social-ecological database to advance research on infrastructure development impacts in the Brazilian Amazon.

    PubMed

    Tucker Lima, Joanna M; Valle, Denis; Moretto, Evandro Mateus; Pulice, Sergio Mantovani Paiva; Zuca, Nadia Lucia; Roquetti, Daniel Rondinelli; Beduschi, Liviam Elizabeth Cordeiro; Praia, Amanda Salles; Okamoto, Claudia Parucce Franco; da Silva Carvalhaes, Vinicius Leite; Branco, Evandro Albiach; Barbezani, Bruna; Labandera, Emily; Timpe, Kelsie; Kaplan, David

    2016-08-30

    Recognized as one of the world's most vital natural and cultural resources, the Amazon faces a wide variety of threats from natural resource and infrastructure development. Within this context, rigorous scientific study of the region's complex social-ecological system is critical to inform and direct decision-making toward more sustainable environmental and social outcomes. Given the Amazon's tightly linked social and ecological components and the scope of potential development impacts, effective study of this system requires an easily accessible resource that provides a broad and reliable data baseline. This paper brings together multiple datasets from diverse disciplines (including human health, socio-economics, environment, hydrology, and energy) to provide investigators with a variety of baseline data to explore the multiple long-term effects of infrastructure development in the Brazilian Amazon.

  5. A social-ecological database to advance research on infrastructure development impacts in the Brazilian Amazon

    NASA Astrophysics Data System (ADS)

    Tucker Lima, Joanna M.; Valle, Denis; Moretto, Evandro Mateus; Pulice, Sergio Mantovani Paiva; Zuca, Nadia Lucia; Roquetti, Daniel Rondinelli; Beduschi, Liviam Elizabeth Cordeiro; Praia, Amanda Salles; Okamoto, Claudia Parucce Franco; da Silva Carvalhaes, Vinicius Leite; Branco, Evandro Albiach; Barbezani, Bruna; Labandera, Emily; Timpe, Kelsie; Kaplan, David

    2016-08-01

    Recognized as one of the world’s most vital natural and cultural resources, the Amazon faces a wide variety of threats from natural resource and infrastructure development. Within this context, rigorous scientific study of the region’s complex social-ecological system is critical to inform and direct decision-making toward more sustainable environmental and social outcomes. Given the Amazon’s tightly linked social and ecological components and the scope of potential development impacts, effective study of this system requires an easily accessible resource that provides a broad and reliable data baseline. This paper brings together multiple datasets from diverse disciplines (including human health, socio-economics, environment, hydrology, and energy) to provide investigators with a variety of baseline data to explore the multiple long-term effects of infrastructure development in the Brazilian Amazon.

  6. A social-ecological database to advance research on infrastructure development impacts in the Brazilian Amazon

    PubMed Central

    Tucker Lima, Joanna M.; Valle, Denis; Moretto, Evandro Mateus; Pulice, Sergio Mantovani Paiva; Zuca, Nadia Lucia; Roquetti, Daniel Rondinelli; Beduschi, Liviam Elizabeth Cordeiro; Praia, Amanda Salles; Okamoto, Claudia Parucce Franco; da Silva Carvalhaes, Vinicius Leite; Branco, Evandro Albiach; Barbezani, Bruna; Labandera, Emily; Timpe, Kelsie; Kaplan, David

    2016-01-01

    Recognized as one of the world’s most vital natural and cultural resources, the Amazon faces a wide variety of threats from natural resource and infrastructure development. Within this context, rigorous scientific study of the region’s complex social-ecological system is critical to inform and direct decision-making toward more sustainable environmental and social outcomes. Given the Amazon’s tightly linked social and ecological components and the scope of potential development impacts, effective study of this system requires an easily accessible resource that provides a broad and reliable data baseline. This paper brings together multiple datasets from diverse disciplines (including human health, socio-economics, environment, hydrology, and energy) to provide investigators with a variety of baseline data to explore the multiple long-term effects of infrastructure development in the Brazilian Amazon. PMID:27575915

  7. Advanced Digital Video and the National Information Infrastructure. Report of the Information Infrastructure Task Force, Committee on Applications and Technology, Technology Policy Working Group. Draft for Public Comment.

    ERIC Educational Resources Information Center

    Office of Science and Technology Policy, Washington, DC.

    The National Information Infrastructure (NII) vision encompasses an infrastructure providing seamless, interactive, user driven access to the widest range of information. Video will play a key role in distribution of educational information, government data, manufacturing information, and access to health care data and services. The Technology…

  8. Implementation issues for mobile-wireless infrastructure and mobile health care computing devices for a hospital ward setting.

    PubMed

    Heslop, Liza; Weeding, Stephen; Dawson, Linda; Fisher, Julie; Howard, Andrew

    2010-08-01

    mWard is a project whose purpose is to enhance existing clinical and administrative decision support and to consider mobile computers, connected via a wireless network, for bringing clinical information to the point of care. The mWard project allowed a limited number of users to test and evaluate a selected range of mobile-wireless infrastructure and mobile health care computing devices at the neuroscience ward at Southern Health's Monash Medical Centre, Victoria, Australia. Before the project commenced, the ward had two PCs, which were used as terminals by all ward-based staff and the numerous multi-disciplinary staff who visited the ward each day. The first stage of the research, outlined in this paper, evaluates a selected range of mobile-wireless infrastructure.

  9. Advancing crime scene computer forensics techniques

    NASA Astrophysics Data System (ADS)

    Hosmer, Chet; Feldman, John; Giordano, Joe

    1999-02-01

    Computers and network technology have become inexpensive and powerful tools that can be applied to a wide range of criminal activity. Computers have changed the world's view of evidence because computers are used more and more as tools in committing `traditional crimes' such as embezzlement, theft, extortion and murder. This paper focuses on reviewing the current state of the art of the data recovery and evidence construction tools used in both the field and the laboratory for prosecution purposes.

  10. Application of advanced computational technology to propulsion CFD

    NASA Astrophysics Data System (ADS)

    Szuch, John R.

    The Internal Fluid Mechanics Division of the NASA Lewis Research Center is combining the key elements of computational fluid dynamics, aerothermodynamic experiments, and advanced computational technology to bring internal computational fluid dynamics (ICFM) to a state of practical application for aerospace propulsion system design. This paper presents an overview of efforts underway at NASA Lewis to advance and apply computational technology to ICFM. These efforts include the use of modern, software engineering principles for code development, the development of an AI-based user-interface for large codes, the establishment of a high-performance, data communications network to link ICFM researchers and facilities, and the application of parallel processing to speed up computationally intensive and/or time-critical ICFM problems. A multistage compressor flow physics program is cited as an example of efforts to use advanced computational technology to enhance a current NASA Lewis ICFM research program.

  11. Computation of Viscous Flow about Advanced Projectiles.

    DTIC Science & Technology

    1983-09-09

    Domain". Journal of Comp. Physics, Vol. 8, 1971, pp. 392-408. 10. Thompson, J. F., Thames, F. C., and Mastin, C. M., "Automatic Numerical Generation of...computations, USSR Comput. Math. Math. Phys., 12, 2 (1972), 182-195. 18. Thompson, J. F., F. C. Thames, and C. M. Mastin, Automatic

  12. The multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) high performance computing infrastructure: applications in neuroscience and neuroinformatics research

    PubMed Central

    Goscinski, Wojtek J.; McIntosh, Paul; Felzmann, Ulrich; Maksimenko, Anton; Hall, Christopher J.; Gureyev, Timur; Thompson, Darren; Janke, Andrew; Galloway, Graham; Killeen, Neil E. B.; Raniga, Parnesh; Kaluza, Owen; Ng, Amanda; Poudel, Govinda; Barnes, David G.; Nguyen, Toan; Bonnington, Paul; Egan, Gary F.

    2014-01-01

    The Multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) is a national imaging and visualization facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software, and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computed tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research. PMID:24734019

  13. Computing Algorithms for Nuffield Advanced Physics.

    ERIC Educational Resources Information Center

    Summers, M. K.

    1978-01-01

    Defines all recurrence relations used in the Nuffield course to solve first- and second-order differential equations, and describes a typical algorithm for computer generation of solutions. (Author/GA)
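
    One plausible rendering of such a recurrence for a second-order equation is sketched below, using simple harmonic motion x'' = -(k/m)x; the velocity-then-position stepping and the constants are assumptions for illustration, not the course's published listing.

        # Recurrence solution of x'' = -(k/m) x by repeated small time steps.
        k, m = 4.0, 1.0          # spring constant and mass (illustrative values)
        dt = 0.01                # time step
        x, v = 1.0, 0.0          # initial displacement and velocity

        for _ in range(1000):
            a = -(k / m) * x     # acceleration from the current displacement
            v = v + a * dt       # recurrence for velocity
            x = x + v * dt       # recurrence for position (uses updated velocity)

        print(x, v)              # state after 10 s of simulated motion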

  14. Aerodynamic optimization studies on advanced architecture computers

    NASA Technical Reports Server (NTRS)

    Chawla, Kalpana

    1995-01-01

    The approach to carrying out multi-discipline aerospace design studies in the future, especially in massively parallel computing environments, comprises choosing (1) suitable solvers to compute solutions to the equations characterizing a discipline, and (2) efficient optimization methods. In addition, for aerodynamic optimization problems, (3) smart methodologies must be selected to modify the surface shape. In this research effort, a 'direct' optimization method is implemented on the Cray C-90 to improve aerodynamic design. It is coupled with an existing implicit Navier-Stokes solver, OVERFLOW, to compute flow solutions. The optimization method is chosen such that it can accommodate multi-discipline optimization in future computations. In this work, however, only single-discipline aerodynamic optimization is included.
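
    The coupling of a 'direct' optimization method to a flow solver can be sketched as a finite-difference gradient loop around a solver call. In the sketch below, drag() is a stand-in for an OVERFLOW run, and the one-variable steepest-descent update is purely illustrative of the coupling, not the study's actual method.

        # Finite-difference 'direct' optimization wrapped around a solver call.
        def drag(shape_param):
            # Placeholder objective: in the real study, evaluating the design
            # would trigger a full implicit Navier-Stokes solve.
            return (shape_param - 0.3) ** 2 + 0.05

        x, step, h = 0.0, 0.1, 1e-4
        for _ in range(50):
            grad = (drag(x + h) - drag(x - h)) / (2 * h)  # central difference
            x -= step * grad                              # shape-parameter update

        print("optimized parameter:", x)   # converges toward the minimum at 0.3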

  15. Advanced Computational Techniques for Power Tube Design.

    DTIC Science & Technology

    1986-07-01

    fixturing applications, in addition to the existing computer-aided engineering capabilities. Helix TWT Manufacturing has implemented a tooling and fixturing...illustrates the major features of this computer network. The backbone of our system is a Sytek Broadband Network (LAN) which interconnects terminals and...automatic network analyzer (FANA) which electrically characterizes the slow-wave helices of traveling-wave tubes (TWTs) -- both for engineering design

  16. Advanced Crew Personal Support Computer (CPSC) task

    NASA Technical Reports Server (NTRS)

    Muratore, Debra

    1991-01-01

    The topics are presented in view graph form and include: background; objectives of task; benefits to the Space Station Freedom (SSF) Program; technical approach; baseline integration; and growth and evolution options. The objective is to: (1) introduce new computer technology into the SSF Program; (2) augment core computer capabilities to meet additional mission requirements; (3) minimize risk in upgrading technology; and (4) provide a low cost way to enhance crew and ground operations support.

  17. Frontiers of research in advanced computations

    SciTech Connect

    1996-07-01

    The principal mission of the Institute for Scientific Computing Research is to foster interactions among LLNL researchers, universities, and industry on selected topics in scientific computing. In the area of computational physics, the Institute has developed a new algorithm, GaPH, to help scientists understand the chemistry of turbulent and driven plasmas or gases at far less cost than other methods. New low-frequency electromagnetic models better describe the plasma etching and deposition characteristics of a computer chip in the making. A new method for modeling realistic curved boundaries within an orthogonal mesh is resulting in a better understanding of the physics associated with such boundaries and much quicker solutions. All these capabilities are being developed for massively parallel implementation, which is an ongoing focus of Institute researchers. Other groups within the Institute are developing novel computational methods to address a range of other problems. Examples include feature detection and motion recognition by computer, improved monitoring of blood oxygen levels, and entirely new models of human joint mechanics and prosthetic devices.

  18. An Open Computing Infrastructure that Facilitates Integrated Product and Process Development from a Decision-Based Perspective

    NASA Technical Reports Server (NTRS)

    Hale, Mark A.

    1996-01-01

    Computer applications for design have evolved rapidly over the past several decades, and significant payoffs are being achieved by organizations through reductions in design cycle times. These applications are overwhelmed by the requirements imposed during complex, open engineering systems design. Organizations are faced with a number of different methodologies, numerous legacy disciplinary tools, and a very large amount of data. Yet they are also faced with few interdisciplinary tools for design collaboration or methods for achieving the revolutionary product designs required to maintain a competitive advantage in the future. These organizations are looking for a software infrastructure that integrates current corporate design practices with newer simulation and solution techniques. Such an infrastructure must be robust to changes in both corporate needs and enabling technologies. In addition, this infrastructure must be user-friendly, modular and scalable. This need is the motivation for the research described in this dissertation. The research is focused on the development of an open computing infrastructure that facilitates product and process design. In addition, this research explicitly deals with human interactions during design through a model that focuses on the role of a designer as that of decision-maker. The research perspective here is taken from that of design as a discipline with a focus on Decision-Based Design, Theory of Languages, Information Science, and Integration Technology. Given this background, a Model of IPPD is developed and implemented along the lines of a traditional experimental procedure: with the steps of establishing context, formalizing a theory, building an apparatus, conducting an experiment, reviewing results, and providing recommendations. Based on this Model, Design Processes and Specification can be explored in a structured and implementable architecture. An architecture for exploring design called DREAMS (Developing Robust

  19. RECENT ADVANCES IN COMPUTATIONAL MECHANICS FOR CIVIL ENGINEERING

    NASA Astrophysics Data System (ADS)

    Applied Mechanics Committee, Computational Mechanics Subcommittee,

    In order to clarify mechanical phenomena in civil engineering, it is necessary to improve computational theory and techniques in consideration of the particular characteristics of the objects to be analyzed, and to keep computational mechanics up to date with a focus on practical use. In addition to the analysis of infrastructure, damage prediction for natural disasters such as earthquakes, tsunamis and floods must reflect the broad ranges in space and time inherent to the fields of civil engineering, as well as material properties, so it is important to develop new computational methods suited to the particular conditions of these fields. In this context, this paper reviews research trends in computational mechanics methods that are noteworthy for resolving the complex mechanics problems of civil engineering.

  20. Advanced Computational Techniques in Regional Wave Studies

    DTIC Science & Technology

    1990-01-03

    the new GERESS data. The dissertation work emphasized the development and use of advanced computational techniques for studying regional seismic...hand, the possibility of new data sources at regional distances permits using previously ignored signals. Unfortunately, these regional signals will...the Green's function G_nk(x, t; r, τ) around this new reference point contains the propagation effects (equation 2), and V is the source volume, where f_k

  1. Advances in Computer-Supported Learning

    ERIC Educational Resources Information Center

    Neto, Francisco; Brasileiro, Francisco

    2007-01-01

    The Internet and growth of computer networks have eliminated geographic barriers, creating an environment where education can be brought to a student no matter where that student may be. The success of distance learning programs and the availability of many Web-supported applications and multimedia resources have increased the effectiveness of…

  2. Advanced Computing Tools and Models for Accelerator Physics

    SciTech Connect

    Ryne, Robert D.

    2008-06-11

    This paper is based on a transcript of my EPAC'08 presentation on advanced computing tools for accelerator physics. Following an introduction I present several examples, provide a history of the development of beam dynamics capabilities, and conclude with thoughts on the future of large scale computing in accelerator physics.

  3. Evaluation of Advanced Computing Techniques and Technologies: Reconfigurable Computing

    NASA Technical Reports Server (NTRS)

    Wells, B. Earl

    2003-01-01

    The focus of this project was to survey the technology of reconfigurable computing and determine its level of maturity and suitability for NASA applications; to better understand and assess the effectiveness of the reconfigurable design paradigm utilized within the HAL-15 reconfigurable computer system, which was made available to NASA MSFC for this purpose by Star Bridge Systems, Inc.; and to implement at least one application that would benefit from the performance levels possible with reconfigurable hardware. It was originally proposed that experiments in fault tolerance and dynamic reconfigurability would be performed, but time constraints mandated that these be pursued as future research.

  4. ASDA - Advanced Suit Design Analyzer computer program

    NASA Technical Reports Server (NTRS)

    Bue, Grant C.; Conger, Bruce C.; Iovine, John V.; Chang, Chi-Min

    1992-01-01

    An ASDA model developed to evaluate the heat and mass transfer characteristics of advanced pressurized suit design concepts for low-pressure or vacuum planetary applications is presented. The model is based on a generalized 3-layer suit that uses the Systems Improved Numerical Differencing Analyzer '85 in conjunction with a 41-node FORTRAN routine. The latter simulates the transient heat transfer and respiratory processes of a human body in a suited environment. The user options for the suit encompass a liquid cooled garment, a removable jacket, a CO2/H2O permeable layer, and a phase change layer.
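
    The kind of lumped-parameter transient calculation such a nodal routine performs can be sketched with an explicit update over a small thermal network; the node count, conductances, capacitances, and loads below are invented for illustration and are far simpler than the 41-node model.

        import numpy as np

        C = np.array([50.0, 80.0, 60.0])      # node heat capacities (J/K)
        G = np.array([[0.0, 2.0, 0.0],        # inter-node conductances (W/K)
                      [2.0, 0.0, 1.5],
                      [0.0, 1.5, 0.0]])
        T = np.array([310.0, 300.0, 295.0])   # initial node temperatures (K)
        Q = np.array([10.0, 0.0, -5.0])       # applied loads, e.g. metabolic heat
                                              # input and liquid-garment cooling (W)

        dt = 0.1                              # explicit time step (s)
        for _ in range(1000):
            flow = G * (T[None, :] - T[:, None])    # heat into each node (W)
            T = T + dt * (flow.sum(axis=1) + Q) / C

        print("node temperatures after 100 s:", T)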

  5. Developing Research Infrastructure: The Institute for the Advancement of Social Work Research

    ERIC Educational Resources Information Center

    Zlotnik, Joan Levy; Solt, Barbara E.

    2008-01-01

    This article reviews the 15 years of research development efforts of the Institute for the Advancement of Social Work Research (IASWR); delineates IASWR's roles in relation to the social work practice, education, and research communities; presents the transdisciplinary and transorganizational partnerships in which IASWR engages to influence…

  6. Advancing vector biology research: a community survey for future directions, research applications and infrastructure requirements.

    PubMed

    Kohl, Alain; Pondeville, Emilie; Schnettler, Esther; Crisanti, Andrea; Supparo, Clelia; Christophides, George K; Kersey, Paul J; Maslen, Gareth L; Takken, Willem; Koenraadt, Constantianus J M; Oliva, Clelia F; Busquets, Núria; Abad, F Xavier; Failloux, Anna-Bella; Levashina, Elena A; Wilson, Anthony J; Veronesi, Eva; Pichard, Maëlle; Arnaud Marsh, Sarah; Simard, Frédéric; Vernick, Kenneth D

    2016-01-01

    Vector-borne pathogens impact public health, animal production, and animal welfare. Research on arthropod vectors such as mosquitoes, ticks, sandflies, and midges which transmit pathogens to humans and economically important animals is crucial for the development of new control measures that target transmission by the vector. While insecticides are an important part of this arsenal, the appearance of resistance mechanisms is increasingly common. Novel tools for genetic manipulation of vectors, use of Wolbachia endosymbiotic bacteria, and other biological control mechanisms to prevent pathogen transmission have led to promising new intervention strategies, adding to strong interest in vector biology and genetics as well as vector-pathogen interactions. Vector research is therefore at a crucial juncture, and strategic decisions on future research directions and research infrastructure investment should be informed by the research community. A survey initiated by the European Horizon 2020 INFRAVEC-2 consortium set out to canvass priorities in the vector biology research community and to determine the key activities needed for researchers to efficiently study vectors and vector-pathogen interactions, as well as to access the structures and services that allow such activities to be carried out. We summarize the most important findings of the survey, which in particular reflect the priorities of researchers in European countries, and which will be of use to stakeholders including researchers, government, and research organizations.

  7. A Revolution in the Making: Advances in Materials That May Transform Future Exploration Infrastructures and Missions

    NASA Technical Reports Server (NTRS)

    Harris, Charles E.; Dicus, Dennis L.; Shuart, Mark J.

    2001-01-01

    The NASA Strategic Plan identifies the long-term goal to provide safe and affordable space access, orbital transfer, and interplanetary transportation capabilities to enable research, human exploration, and the commercial development of space; and to conduct human and robotic missions to planets and other bodies in our solar system. Numerous scientific and engineering breakthroughs will be required to develop the technology necessary to achieve this goal. Critical technologies include advanced vehicle primary and secondary structure, radiation protection, propulsion and power systems, fuel storage, electronics and devices, sensors and science instruments, and medical diagnostics and treatment. Advanced materials with revolutionary new capabilities are an essential element of each of these technologies. This paper discusses those materials best suited for aerospace vehicle structure and highlights the enormous potential of one revolutionary new material, carbon nanotubes.

  8. Computational Approaches to Enhance Nanosafety and Advance Nanomedicine

    NASA Astrophysics Data System (ADS)

    Mendoza, Eduardo R.

    With the increasing use of nanoparticles in food processing, filtration/purification and consumer products, as well as the huge potential of their use in nanomedicine, a quantitative understanding of the effects of nanoparticle uptake and transport is needed. We provide examples of novel methods for modeling complex bio-nano interactions which are based on stochastic process algebras. Since model construction presumes sufficient availability of experimental data, recent developments in "nanoinformatics", an emerging discipline analogous to bioinformatics, in building an accessible information infrastructure are subsequently discussed. Both computational areas offer opportunities for Filipinos to engage in collaborative, cutting-edge research in this impactful field.

  9. Intelligent Software Tools for Advanced Computing

    SciTech Connect

    Baumgart, C.W.

    2001-04-03

    Feature extraction and evaluation are two procedures common to the development of any pattern recognition application. These features are the primary pieces of information which are used to train the pattern recognition tool, whether that tool is a neural network, a fuzzy logic rulebase, or a genetic algorithm. Careful selection of the features to be used by the pattern recognition tool can significantly streamline the overall development and training of the solution for the pattern recognition application. This report summarizes the development of an integrated, computer-based software package called the Feature Extraction Toolbox (FET), which can be used for the development and deployment of solutions to generic pattern recognition problems. This toolbox integrates a number of software techniques for signal processing, feature extraction and evaluation, and pattern recognition, all under a single, user-friendly development environment. The toolbox has been developed to run on a laptop computer, so that it may be taken to a site and used to develop pattern recognition applications in the field. A prototype version of this toolbox has been completed and is currently being used for applications development on several projects in support of the Department of Energy.

  10. Advances in computer imaging/applications in facial plastic surgery.

    PubMed

    Papel, I D; Jiannetto, D F

    1999-01-01

    Rapidly progressing computer technology, ever-increasing expectations of patients, and a confusing medicolegal environment require a clarification of the role of computer imaging/applications. Advances in computer technology and its applications are reviewed. A brief historical discussion is included for perspective. Improvements in both hardware and software with the advent of digital imaging have allowed great increases in speed and accuracy in patient imaging. This facilitates doctor-patient communication and possibly realistic patient expectations. Patients seeking cosmetic surgery now often expect preoperative imaging. Although society in general has become more litigious, a literature search up to 1998 reveals no lawsuits directly involving computer imaging. It appears that conservative utilization of computer imaging by the facial plastic surgeon may actually reduce liability and promote communication. Recent advances have significantly enhanced the value of computer imaging in the practice of facial plastic surgery, and these technological advances appear to contribute a useful technique to the field. Inclusion of computer imaging should be given serious consideration as an adjunct to clinical practice.

  11. TOPICAL REVIEW: Advances and challenges in computational plasma science

    NASA Astrophysics Data System (ADS)

    Tang, W. M.; Chan, V. S.

    2005-02-01

    Scientific simulation, which provides a natural bridge between theory and experiment, is an essential tool for understanding complex plasma behaviour. Recent advances in simulations of magnetically confined plasmas are reviewed in this paper, with illustrative examples, chosen from associated research areas such as microturbulence, magnetohydrodynamics and other topics. Progress has been stimulated, in particular, by the exponential growth of computer speed along with significant improvements in computer technology. The advances in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics have produced increasingly good agreement between experimental observations and computational modelling. This was enabled by two key factors: (a) innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales and (b) access to powerful new computational resources. Excellent progress has been made in developing codes for which computer run-time and problem-size scale well with the number of processors on massively parallel processors (MPPs). Examples include the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPPs to produce three-dimensional, general geometry, nonlinear particle simulations that have accelerated advances in understanding the nature of turbulence self-regulation by zonal flows. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In looking towards the future, the current results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. This

  12. Advanced Simulation and Computing Fiscal Year 2011-2012 Implementation Plan, Revision 0

    SciTech Connect

    McCoy, M; Phillips, J; Hopson, J; Meisner, R

    2010-04-22

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering (D&E) programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model

  13. Advanced Simulation and Computing FY08-09 Implementation Plan Volume 2 Revision 0

    SciTech Connect

    McCoy, M; Kusnezov, D; Bickel, T; Hopson, J

    2007-04-25

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future nonnuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one

  14. Advanced Simulation and Computing FY07-08 Implementation Plan Volume 2

    SciTech Connect

    Kusnezov, D; Hale, A; McCoy, M; Hopson, J

    2006-06-22

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future nonnuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program will require the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from

  15. Advanced Simulation & Computing FY09-FY10 Implementation Plan Volume 2, Rev. 0

    SciTech Connect

    Meisner, R; Peery, J; McCoy, M; Hopson, J

    2008-04-30

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future nonnuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC)1 is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one

  16. Advanced Simulation and Computing FY09-FY10 Implementation Plan, Volume 2, Revision 0.5

    SciTech Connect

    Meisner, R; Hopson, J; Peery, J; McCoy, M

    2008-10-07

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC)1 is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one

  17. Advanced Simulation and Computing FY08-09 Implementation Plan, Volume 2, Revision 0.5

    SciTech Connect

    Kusnezov, D; Bickel, T; McCoy, M; Hopson, J

    2007-09-13

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC)1 is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from

  18. Advanced Simulation and Computing FY09-FY10 Implementation Plan Volume 2, Rev. 1

    SciTech Connect

    Kissel, L

    2009-04-01

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one that

  19. Advanced Simulation and Computing FY10-FY11 Implementation Plan Volume 2, Rev. 0.5

    SciTech Connect

    Meisner, R; Peery, J; McCoy, M; Hopson, J

    2009-09-08

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering (D&E) programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model

  20. Downscaling seasonal to centennial simulations on distributed computing infrastructures using WRF model. The WRF4G project

    NASA Astrophysics Data System (ADS)

    Cofino, A. S.; Fernández Quiruelas, V.; Blanco Real, J. C.; García Díez, M.; Fernández, J.

    2013-12-01

    Grid computing is now a powerful computational tool that is ready for use by the scientific community in different areas (such as biomedicine, astrophysics, climate, etc.). However, the use of these distributed computing infrastructures (DCIs) is not yet common practice in climate research, and only a few teams and applications in this area take advantage of them. The objective of the WRF4G project is therefore to popularize the use of this technology in the atmospheric sciences. To achieve this objective, one of the most widely used applications has been taken (WRF, a limited-area model and successor of the MM5 model), which has a user community of more than 8000 researchers worldwide. This community develops its research activity in different areas and could benefit from the advantages of Grid resources (case-study simulations, regional hindcast/forecast, sensitivity studies, etc.). The WRF model is used by many groups in the climate research community to carry out downscaling simulations, so this community will also benefit. However, Grid infrastructures have some drawbacks for the execution of applications that make intensive use of CPU and memory over long periods of time, which makes it necessary to develop a specific framework (middleware). This middleware encapsulates the application and provides appropriate services for the monitoring and management of the simulations and their data. Thus, another objective of the WRF4G project is the development of a generic adaptation of WRF to DCIs, which should simplify access to the DCIs for researchers and free them from the technical and computational aspects of using these infrastructures. Finally, to demonstrate the ability of WRF4G to solve actual scientific challenges of interest and relevance to climate science (implying a high computational cost), we show results from different kinds of downscaling experiments, such as ERA-Interim re-analysis, CMIP5 models
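
    The record above describes the WRF4G middleware only at a high level. Purely as an illustration of the encapsulation idea, and not the actual WRF4G interface, a job wrapper on a DCI might chunk a long simulation into restartable pieces along these lines; every name here (wrf.exe, the checkpoint file, the chunk length) is an assumption:

        # Hypothetical sketch of the middleware idea: wrap a long model run so it
        # can survive preemption on a distributed computing infrastructure by
        # checkpointing and resubmitting itself in chunks. Not the WRF4G API.
        import os
        import subprocess

        CHECKPOINT = "wrfrst_latest"   # restart file assumed written by the model
        CHUNK_HOURS = 24               # simulate this much per grid job (assumed)

        def run_wrf_chunk(start_hour: int) -> int:
            """Run one chunk, restarting from a checkpoint if one exists."""
            env = dict(os.environ,
                       RESTART="true" if os.path.exists(CHECKPOINT) else "false")
            # A real system would also stage input data in, launch on a remote
            # worker node, and stage the restart/output files back out.
            return subprocess.call(["./wrf.exe"], env=env)

        def drive(total_hours: int) -> None:
            for start in range(0, total_hours, CHUNK_HOURS):
                if run_wrf_chunk(start) != 0:
                    raise RuntimeError(f"chunk at hour {start} failed; resubmit")

        if __name__ == "__main__":
            drive(total_hours=240)   # a 10-day hindcast split into 10 grid jobs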

  1. On the Development of a Computing Infrastructure that Facilitates IPPD from a Decision-Based Design Perspective

    NASA Technical Reports Server (NTRS)

    Hale, Mark A.; Craig, James I.; Mistree, Farrokh; Schrage, Daniel P.

    1995-01-01

    Integrated Product and Process Development (IPPD) embodies the simultaneous application of both system and quality engineering methods throughout an iterative design process. The use of IPPD results in the time-conscious, cost-saving development of engineering systems. Georgia Tech has proposed the development of an Integrated Design Engineering Simulator that will merge Integrated Product and Process Development with interdisciplinary analysis techniques and state-of-the-art computational technologies. To implement IPPD, a Decision-Based Design perspective is encapsulated in an approach that focuses on the role of the human designer in product development. The approach has two parts and is outlined in this paper. First, an architecture, called DREAMS, is being developed that facilitates design from a decision-based perspective. Second, a supporting computing infrastructure, called IMAGE, is being designed. The current status of development is given and future directions are outlined.

  2. Technology Advancements in the Next Generation of Domain Agnostic Spatial Data Infrastructures

    NASA Astrophysics Data System (ADS)

    Golodoniuc, Pavel; Rankine, Terry; Box, Paul; Atkinson, Rob; Kostanski, Laura

    2013-04-01

    Spatial Data Infrastructures (SDI) are typically composed of a suite of products focused on improving spatial information discovery and access. The proliferation of SDI initiatives has caused the "Yet Another Portal" (YAP) syndrome to emerge, with each initiative providing a new mechanism for cataloguing and enabling users to search for spatial information resources. The often coarse-grained and incomplete metadata available via these SDIs renders them analogous to an antiquated library catalogue. We posit that the successful use of SDI resources requires attention to be focused on various semantic aspects of the information contained within - particularly the information models and vocabularies. Currently it is common for understanding of these models and vocabularies to be built into portals. This does not enhance interoperability between SDIs, nor does it provide a means for referencing or searching for a specific feature (e.g., the City of Sydney) without first knowing the location of the information source for the feature and the form in which it is represented. SDI interfaces, such as OGC WFS, provide data from a spatial representation perspective, but do not provide identifiers that can easily be cited or used across system boundaries. The lack of mechanisms to provide stable identifiers for a feature renders it permanently scoped to a particular dataset. Three other important aspects are commonly lacking in SDIs: feature-level metadata is handled inadequately and is commonly insufficient for more than the most basic data discovery; features delivered through SDIs are not well integrated with information systems that deliver statistical information about those features; and, importantly, there are inadequate mechanisms to reconcile and associate multiple identities and representations of the same real-world feature. In this paper we present an extended view of an SDI architecture with integrated support for information

  3. ATOS-1: Designing the infrastructure for an advanced spacecraft operations system

    NASA Technical Reports Server (NTRS)

    Poulter, K. J.; Smith, H. N.

    1993-01-01

    The space industry has identified the need to use artificial intelligence and knowledge-based system techniques as integrated, central, symbolic processing components of future mission design, support and operations systems. Various practical and commercial constraints require that off-the-shelf applications, and their knowledge bases, are reused where appropriate and that different mission contractors, potentially using different KBS technologies, can provide application and knowledge sub-modules of an overall integrated system. In order to achieve this integration, which we call knowledge sharing and distributed reasoning, there needs to be agreement on knowledge representations, knowledge interchange formats, knowledge-level communication protocols, and ontology. Research indicates that the latter is most important, providing the applications with a common conceptualization of the domain, in our case spacecraft operations, mission design, and planning. Agreement on ontology permits applications that employ different knowledge representations to interwork through mediators which we refer to as knowledge agents. This creates the illusion of a shared model without the constraints, both technical and commercial, that occur in centralized or uniform architectures. This paper explains how these matters are being addressed within the ATOS program at ESOC, using techniques which draw upon ideas and standards emerging from the DARPA Knowledge Sharing Effort. In particular, we explain how the project is developing an electronic Ontology of Spacecraft Operations and how this can be used as an enabling component within space support systems that employ advanced software engineering. We indicate our hope and expectation that the core ontology developed in ATOS will permit the full development of standards for such systems throughout the space industry.

  4. ATOS-1: Designing the infrastructure for an advanced spacecraft operations system

    NASA Astrophysics Data System (ADS)

    Poulter, K. J.; Smith, H. N.

    1993-03-01

    The space industry has identified the need to use artificial intelligence and knowledge-based system techniques as integrated, central, symbolic processing components of future mission design, support and operations systems. Various practical and commercial constraints require that off-the-shelf applications, and their knowledge bases, are reused where appropriate and that different mission contractors, potentially using different KBS technologies, can provide application and knowledge sub-modules of an overall integrated system. In order to achieve this integration, which we call knowledge sharing and distributed reasoning, there needs to be agreement on knowledge representations, knowledge interchange formats, knowledge-level communication protocols, and ontology. Research indicates that the latter is most important, providing the applications with a common conceptualization of the domain, in our case spacecraft operations, mission design, and planning. Agreement on ontology permits applications that employ different knowledge representations to interwork through mediators which we refer to as knowledge agents. This creates the illusion of a shared model without the constraints, both technical and commercial, that occur in centralized or uniform architectures. This paper explains how these matters are being addressed within the ATOS program at ESOC, using techniques which draw upon ideas and standards emerging from the DARPA Knowledge Sharing Effort. In particular, we explain how the project is developing an electronic Ontology of Spacecraft Operations and how this can be used as an enabling component within space support systems that employ advanced software engineering. We indicate our hope and expectation that the core ontology developed in ATOS will permit the full development of standards for such systems throughout the space industry.
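
    Both ATOS records describe knowledge agents that mediate between applications via a shared ontology. A toy sketch of that mediation pattern, with entirely invented vocabularies and concept names, might look like this:

        # Toy illustration (not the ATOS design itself) of a "knowledge agent"
        # mediating between two applications that use different vocabularies by
        # routing each term through a shared ontology of spacecraft operations.
        SHARED_ONTOLOGY = {"Manoeuvre", "GroundStationPass", "TelemetryParameter"}

        # Per-application mappings from local terms to shared ontology concepts.
        PLANNER_TERMS = {"burn": "Manoeuvre", "pass": "GroundStationPass"}
        MONITOR_TERMS = {"OrbitCorrection": "Manoeuvre", "TM_Point": "TelemetryParameter"}

        def to_shared(term: str, local_map: dict) -> str:
            concept = local_map.get(term)
            if concept not in SHARED_ONTOLOGY:
                raise KeyError(f"{term!r} has no agreed meaning in the shared ontology")
            return concept

        def translate(term: str, source_map: dict, target_map: dict) -> str:
            """Route a term from one application's vocabulary to another's."""
            concept = to_shared(term, source_map)
            inverse = {v: k for k, v in target_map.items()}
            return inverse[concept]   # raises if the target has no equivalent term

        print(translate("burn", PLANNER_TERMS, MONITOR_TERMS))  # -> "OrbitCorrection"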

  5. Computer-Assisted Foreign Language Teaching and Learning: Technological Advances

    ERIC Educational Resources Information Center

    Zou, Bin; Xing, Minjie; Wang, Yuping; Sun, Mingyu; Xiang, Catherine H.

    2013-01-01

    Computer-Assisted Foreign Language Teaching and Learning: Technological Advances highlights new research and an original framework that brings together foreign language teaching, experiments and testing practices that utilize the most recent and widely used e-learning resources. This comprehensive collection of research will offer linguistic…

  6. 76 FR 64330 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-18

    ... Advanced Scientific Computing Advisory Committee AGENCY: Department of Energy, Office of Science. ACTION... Reliability, Diffusion on Complex Networks, and Reversible Software Execution Systems Report from Applied Math... at: (301) 903-7486 or by email at: Melea.Baker@science.doe.gov . You must make your request for...

  7. 78 FR 56871 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-16

    ... Advanced Scientific Computing Advisory Committee AGENCY: Office of Science, Department of Energy. ACTION... Exascale technical approaches subcommittee Facilities update Report from Applied Math Committee of Visitors...: ( Melea.Baker@science.doe.gov ). You must make your request for an oral statement at least five...

  8. The Federal Government's Role in Advancing Computer Technology

    ERIC Educational Resources Information Center

    Information Hotline, 1978

    1978-01-01

    As part of the Federal Data Processing Reorganization Study submitted by the Science and Technology Team, the Federal Government's role in advancing and diffusing computer technology is discussed. Findings and conclusions assess the state-of-the-art in government and in industry, and five recommendations provide directions for government policy…

  9. Advanced computational research in materials processing for design and manufacturing

    SciTech Connect

    Zacharia, T.

    1994-12-31

    The computational requirements for the design and manufacture of automotive components have increased dramatically with the drive to produce automobiles with three times the mileage. Automotive component design systems are becoming increasingly reliant on structural analysis, requiring larger and more complex analyses overall: more three-dimensional analyses, larger model sizes, and routine consideration of transient and non-linear effects. Such analyses must be performed rapidly to minimize delays in the design and development process, which drives the need for parallel computing. This paper briefly describes advanced computational research in superplastic forming and automotive crashworthiness.

  10. Advanced computational tools for 3-D seismic analysis

    SciTech Connect

    Barhen, J.; Glover, C.W.; Protopopescu, V.A.

    1996-06-01

    The global objective of this effort is to develop advanced computational tools for 3-D seismic analysis, and to test the products using a model dataset developed under the joint aegis of the United States' Society of Exploration Geophysicists (SEG) and the European Association of Exploration Geophysicists (EAEG). The goal is to enhance the value to the oil industry of the SEG/EAEG modeling project, carried out with US Department of Energy (DOE) funding in FY 1993-95. The primary objective of the ORNL Center for Engineering Systems Advanced Research (CESAR) is to spearhead the computational innovations and techniques that would enable a revolutionary advance in 3-D seismic analysis. The CESAR effort is carried out in collaboration with world-class domain experts from leading universities, and in close coordination with other national laboratories and oil industry partners.

  11. An expanded framework for the advanced computational testing and simulation toolkit

    SciTech Connect

    Marques, Osni A.; Drummond, Leroy A.

    2003-11-09

    The Advanced Computational Testing and Simulation (ACTS) Toolkit is a set of computational tools developed primarily at DOE laboratories and is aimed at simplifying the solution of common and important computational problems. The use of the tools reduces the development time for new codes and the tools provide functionality that might not otherwise be available. This document outlines an agenda for expanding the scope of the ACTS Project based on lessons learned from current activities. Highlights of this agenda include peer-reviewed certification of new tools; finding tools to solve problems that are not currently addressed by the Toolkit; working in collaboration with other software initiatives and DOE computer facilities; expanding outreach efforts; promoting interoperability; furthering development of the tools; and improving the functionality of the ACTS Information Center, among other tasks. The ultimate goal is to make the ACTS tools more widely used and more effective in solving DOE's and the nation's scientific problems through the creation of a reliable software infrastructure for scientific computing.

  12. QuakeSim Computational Infrastructure for Integrating DESDynI and UAVSAR Data into Earthquake Models (Invited)

    NASA Astrophysics Data System (ADS)

    Donnellan, A.; Rundle, J. B.; Grant Ludwig, L.; McLeod, D.; Pierce, M.; Fox, G.; Al-Ghanmi, R. A.; Parker, J. W.; Granat, R. A.; Lyzenga, G. A.; Ma, Y.; Glasscoe, M. T.; Ji, J.; Wang, J.; Gao, X.; Quakesim Team

    2010-12-01

    QuakeSim is a computational infrastructure for studying, modeling, and forecasting earthquakes from a system perspective. QuakeSim takes into account the entire earthquake cycle of strain accumulation and release, requiring crustal deformation data as a key data source. Interferometric Synthetic Aperture Radar (InSAR) and Global Positioning System (GPS) data provide current crustal deformation rates, while paleoseismic data provide long-term fault slip rates and earthquake history. The QuakeTables federated multimedia database contains spaceborne and UAVSAR InSAR data for the California region as well as paleoseismic fault data from a number of self-consistent datasets, such as the Uniform California Earthquake Rupture Forecast (UCERF), California Geological Survey (CGS), and Virtual California. Access to QuakeTables is provided through a web interface and a Web Services-based application programming interface (API) for data delivery. Data are categorized into self-consistent datasets that can be queried in their original form or a derivation therefrom. QuakeTables provides access to mapping features through a web interface that gives users direct access to the QuakeTables federated data. Users can browse, map and navigate the available datasets. QuakeSim applications include crustal deformation modeling and pattern analysis. The crustal deformation tools include forward elastic dislocation models (DISLOC) and 3D viscoelastic finite element models (GeoFEST), and elastic inversions of crustal deformation data (SIMPLEX). The tools support mapping and applications for visualizing results in vector or interferometric form. Virtual California simulates interacting fault systems. Pattern analysis tools include RDAHMM for identifying state changes in time series data, and RIPI for identifying hotspot locations of increased probabilities for magnitude 5 and above earthquakes. The QuakeSim infrastructure automatically posts UAVSAR data to QuakeTables for storage and
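
    The abstract names RDAHMM as the tool for detecting state changes in time series. As a generic stand-in for that idea, and not RDAHMM's actual implementation, a Gaussian hidden Markov model from the hmmlearn library can segment a synthetic GPS-like displacement series:

        # Fit a 2-state Gaussian HMM to a synthetic 1-D displacement series
        # (quiet period followed by a step, e.g. a coseismic offset) and report
        # where the inferred hidden state switches. Illustrative, not RDAHMM.
        import numpy as np
        from hmmlearn.hmm import GaussianHMM

        rng = np.random.default_rng(0)
        series = np.concatenate([rng.normal(0.0, 0.5, 300),
                                 rng.normal(5.0, 0.5, 300)]).reshape(-1, 1)

        model = GaussianHMM(n_components=2, covariance_type="diag",
                            n_iter=100, random_state=0)
        model.fit(series)
        states = model.predict(series)
        change_points = np.flatnonzero(np.diff(states)) + 1
        print("state changes detected near samples:", change_points)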

  13. The Integrated Safety-Critical Advanced Avionics Communication and Control (ISAACC) System Concept: Infrastructure for ISHM

    NASA Technical Reports Server (NTRS)

    Gwaltney, David A.; Briscoe, Jeri M.

    2005-01-01

    Integrated System Health Management (ISHM) architectures for spacecraft will include hard real-time, critical subsystems and soft real-time monitoring subsystems. Interaction between these subsystems will be necessary and an architecture supporting multiple criticality levels will be required. Demonstration hardware for the Integrated Safety-Critical Advanced Avionics Communication & Control (ISAACC) system has been developed at NASA Marshall Space Flight Center. It is a modular system using a commercially available time-triggered protocol, TTP/C, that supports hard real-time distributed control systems independent of the data transmission medium. The protocol is implemented in hardware and provides guaranteed low-latency messaging with inherent fault-tolerance and fault-containment. Interoperability between modules and systems of modules using the TTP/C is guaranteed through definition of messages and the precise message schedule implemented by the master-less Time Division Multiple Access (TDMA) communications protocol. "Plug-and-play" capability for sensors and actuators provides automatically configurable modules supporting sensor recalibration and control algorithm re-tuning without software modification. Modular components of controlled physical system(s) critical to control algorithm tuning, such as pumps or valve components in an engine, can be replaced or upgraded as "plug and play" components without modification to the ISAACC module hardware or software. ISAACC modules can communicate with other vehicle subsystems through time-triggered protocols or other communications protocols implemented over Ethernet, MIL-STD-1553 and RS-485/422. Other communication bus physical layers and protocols can be included as required. In this way, the ISAACC modules can be part of a system-of-systems in a vehicle with multi-tier subsystems of varying criticality. The goal of the ISAACC architecture development is control and monitoring of safety critical systems of a
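
    The master-less TDMA arbitration described above can be sketched in a few lines: each node owns fixed slots in a cyclic schedule known at design time, so every node can compute locally who may transmit. The slot length and node names below are invented for illustration:

        # Toy static TDMA schedule in the spirit of TTP/C (not the real protocol).
        SLOT_US = 500                  # slot length in microseconds (assumed)
        ROUND = ["engine_ctrl", "pump_ctrl", "valve_ctrl", "health_monitor"]

        def slot_owner(t_us: int) -> str:
            """Return which node may transmit at absolute time t_us."""
            slot_index = (t_us // SLOT_US) % len(ROUND)
            return ROUND[slot_index]

        # Every node computes the same answer locally, which is what makes the
        # protocol master-less: the schedule itself arbitrates the bus.
        for t in range(0, 4 * SLOT_US, SLOT_US):
            print(t, "->", slot_owner(t))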

  14. [Activities of Research Institute for Advanced Computer Science

    NASA Technical Reports Server (NTRS)

    Gross, Anthony R. (Technical Monitor); Leiner, Barry M.

    2001-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center, Moffett Field, California. RIACS research focuses on the three cornerstones of IT research necessary to meet the future challenges of NASA missions: 1. Automated Reasoning for Autonomous Systems: Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth. 2. Human-Centered Computing: Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities. 3. High Performance Computing and Networking: Advances in the performance of computing and networking continue to have a major impact on a variety of NASA endeavors, ranging from modeling and simulation to analysis of large scientific datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply IT research to a variety of NASA application domains. RIACS also engages in other activities, such as workshops, seminars, visiting scientist programs and student summer programs, designed to encourage and facilitate collaboration between the university and NASA IT research communities.

  15. Advances in Numerical Boundary Conditions for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.

    1997-01-01

    Advances in Computational Aeroacoustics (CAA) depend critically on the availability of accurate, nondispersive, minimally dissipative computation algorithms as well as high quality numerical boundary treatments. This paper focuses on recent developments in numerical boundary conditions. In a typical CAA problem, one often encounters two types of boundaries. Because a finite computation domain is used, there are external boundaries. On the external boundaries, boundary conditions simulating the solution outside the computation domain are to be imposed. Inside the computation domain, there may be internal boundaries. On these internal boundaries, boundary conditions simulating the presence of an object or surface with specific acoustic characteristics are to be applied. Numerical boundary conditions, both external and internal, developed for simple model problems are reviewed and examined. Numerical boundary conditions for real aeroacoustic problems are also discussed through specific examples. The paper concludes with a description of some much-needed research in numerical boundary conditions for CAA.
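
    As a concrete example of the external boundary treatments this literature develops, a widely cited far-field radiation condition of the Tam-Webb type for a two-dimensional uniform mean flow takes roughly the following form; the exact coefficients vary with the formulation, so treat this as a sketch rather than the paper's own statement:

        % Representative 2-D far-field radiation condition (Tam--Webb type).
        % V(\theta) is the radial group velocity of acoustic waves in the
        % uniform mean flow (\bar{u}, \bar{v}) with mean sound speed \bar{a}.
        \[
          \left( \frac{1}{V(\theta)}\frac{\partial}{\partial t}
               + \frac{\partial}{\partial r}
               + \frac{1}{2r} \right)
          \begin{pmatrix} \rho \\ u \\ v \\ p \end{pmatrix} = 0,
          \qquad
          V(\theta) = \bar{u}\cos\theta + \bar{v}\sin\theta
                    + \sqrt{\bar{a}^{2}
                            - \left(\bar{v}\cos\theta - \bar{u}\sin\theta\right)^{2}} .
        \]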

  16. Evolving a lingua franca and associated software infrastructure for computational systems biology: the Systems Biology Markup Language (SBML) project.

    PubMed

    Hucka, M; Finney, A; Bornstein, B J; Keating, S M; Shapiro, B E; Matthews, J; Kovitz, B L; Schilstra, M J; Funahashi, A; Doyle, J C; Kitano, H

    2004-06-01

    Biologists are increasingly recognising that computational modelling is crucial for making sense of the vast quantities of complex experimental data that are now being collected. The systems biology field needs agreed-upon information standards if models are to be shared, evaluated and developed cooperatively. Over the last four years, our team has been developing the Systems Biology Markup Language (SBML) in collaboration with an international community of modellers and software developers. SBML has become a de facto standard format for representing formal, quantitative and qualitative models at the level of biochemical reactions and regulatory networks. In this article, we summarise the current and upcoming versions of SBML and our efforts at developing software infrastructure for supporting and broadening its use. We also provide a brief overview of the many SBML-compatible software tools available today.
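
    To make the exchange format concrete, here is a minimal model built and serialised with the Python bindings of libSBML, a widely used library from the SBML community; the level/version numbers and identifiers below are illustrative:

        # Build a tiny SBML model (one compartment, one species, one reaction)
        # and print it as exchangeable XML.
        import libsbml

        doc = libsbml.SBMLDocument(3, 1)      # SBML Level 3 Version 1 (assumed)
        model = doc.createModel()
        model.setId("toy_model")

        comp = model.createCompartment()
        comp.setId("cell")
        comp.setSize(1.0)

        species = model.createSpecies()
        species.setId("A")
        species.setCompartment("cell")
        species.setInitialAmount(10.0)

        reaction = model.createReaction()
        reaction.setId("decay_of_A")
        reactant = reaction.createReactant()
        reactant.setSpecies("A")

        print(libsbml.writeSBMLToString(doc))  # XML any SBML-aware tool can read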

  17. MOBBED: a computational data infrastructure for handling large collections of event-rich time series datasets in MATLAB.

    PubMed

    Cockfield, Jeremy; Su, Kyungmin; Robbins, Kay A

    2013-01-01

    Experiments to monitor human brain activity during active behavior record a variety of modalities (e.g., EEG, eye tracking, motion capture, respiration monitoring) and capture a complex environmental context leading to large, event-rich time series datasets. The considerable variability of responses within and among subjects in more realistic behavioral scenarios requires experiments to assess many more subjects over longer periods of time. This explosion of data requires better computational infrastructure to more systematically explore and process these collections. MOBBED is a lightweight, easy-to-use, extensible toolkit that allows users to incorporate a computational database into their normal MATLAB workflow. Although capable of storing quite general types of annotated data, MOBBED is particularly oriented to multichannel time series such as EEG that have event streams overlaid with sensor data. MOBBED directly supports access to individual events, data frames, and time-stamped feature vectors, allowing users to ask questions such as what types of events or features co-occur under various experimental conditions. A database provides several advantages not available to users who process one dataset at a time from the local file system. In addition to archiving primary data in a central place to save space and avoid inconsistencies, such a database allows users to manage, search, and retrieve events across multiple datasets without reading the entire dataset. The database also provides infrastructure for handling more complex event patterns that include environmental and contextual conditions. The database can also be used as a cache for expensive intermediate results that are reused in such activities as cross-validation of machine learning algorithms. MOBBED is implemented over PostgreSQL, a widely used open source database, and is freely available under the GNU general public license at http://visual.cs.utsa.edu/mobbed. Source and issue reports for MOBBED
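
    MOBBED itself is a MATLAB toolkit over PostgreSQL; to illustrate the kind of cross-dataset event query such a backing store makes cheap, here is a Python sketch against an invented events table (this is not MOBBED's actual schema):

        # Find stimuli followed by a button press within 500 ms, across all
        # stored datasets, without reading any raw recording from disk.
        import psycopg2

        query = """
        SELECT a.dataset_id, a.onset_s
        FROM events AS a
        JOIN events AS b
          ON a.dataset_id = b.dataset_id
         AND b.onset_s BETWEEN a.onset_s AND a.onset_s + 0.5
        WHERE a.event_type = %s AND b.event_type = %s;
        """

        with psycopg2.connect("dbname=eeg_studies") as conn:   # placeholder DSN
            with conn.cursor() as cur:
                cur.execute(query, ("stimulus", "button_press"))
                for dataset_id, onset in cur.fetchall():
                    print(dataset_id, onset)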

  18. The Design and Implementation of NASA's Advanced Flight Computing Module

    NASA Technical Reports Server (NTRS)

    Alkakaj, Leon; Straedy, Richard; Jarvis, Bruce

    1995-01-01

    This paper describes a working flight computer Multichip Module developed jointly by JPL and TRW under their respective research programs in a collaborative fashion. The MCM is fabricated by nCHIP and is packaged within a 2 by 4 inch Al package from Coors. This flight computer module is one of three modules under development by NASA's Advanced Flight Computer (AFC) program. Further development of the Mass Memory and the programmable I/O MCM modules will follow. The three building block modules will then be stacked into a 3D MCM configuration. The mass and volume of the flight computer MCM achieved at 89 grams and 1.5 cubic inches respectively, represent a major enabling technology for future deep space as well as commercial remote sensing applications.

  19. Advanced Simulation and Computing FY17 Implementation Plan, Version 0

    SciTech Connect

    McCoy, Michel; Archer, Bill; Hendrickson, Bruce; Wade, Doug; Hoang, Thuc

    2016-08-29

    The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced set of resources, including technical staff, hardware, simulation software, and computer science solutions. ASC is now focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), and quantifying critical margins and uncertainties. Resolving each issue requires increasingly difficult analyses because the aging process has progressively moved the stockpile further away from the original test base. Where possible, the program also enables the use of high performance computing (HPC) and simulation tools to address broader national security needs, such as foreign nuclear weapon assessments and counter nuclear terrorism.

  20. An infrastructure with a unified control plane to integrate IP into optical metro networks to provide flexible and intelligent bandwidth on demand for cloud computing

    NASA Astrophysics Data System (ADS)

    Yang, Wei; Hall, Trevor

    2012-12-01

    The Internet is entering an era of cloud computing to provide more cost-effective, eco-friendly and reliable services to consumer and business users, and the nature of Internet traffic will undergo a fundamental transformation. Consequently, the current Internet will no longer suffice for serving cloud traffic in metro areas. This work proposes an infrastructure with a unified control plane that integrates simple packet aggregation technology with optical express through the interoperation between IP routers and electrical traffic controllers in optical metro networks. The proposed infrastructure provides flexible, intelligent, and eco-friendly bandwidth on demand for cloud computing in metro areas.

  1. Activities of the Research Institute for Advanced Computer Science

    NASA Technical Reports Server (NTRS)

    Oliger, Joseph

    1994-01-01

    The Research Institute for Advanced Computer Science (RIACS) was established by the Universities Space Research Association (USRA) at the NASA Ames Research Center (ARC) on June 6, 1983. RIACS is privately operated by USRA, a consortium of universities with research programs in the aerospace sciences, under contract with NASA. The primary mission of RIACS is to provide research and expertise in computer science and scientific computing to support the scientific missions of NASA ARC. The research carried out at RIACS must change its emphasis from year to year in response to NASA ARC's changing needs and technological opportunities. Research at RIACS is currently being done in the following areas: (1) parallel computing; (2) advanced methods for scientific computing; (3) high performance networks; and (4) learning systems. RIACS technical reports are usually preprints of manuscripts that have been submitted to research journals or conference proceedings. A list of these reports for the period January 1, 1994 through December 31, 1994 is in the Reports and Abstracts section of this report.

  2. Soft computing in design and manufacturing of advanced materials

    NASA Technical Reports Server (NTRS)

    Cios, Krzysztof J.; Baaklini, George Y; Vary, Alex

    1993-01-01

    The potential of fuzzy sets and neural networks, often referred to as soft computing, for aiding in all aspects of manufacturing of advanced materials like ceramics is addressed. In design and manufacturing of advanced materials, it is desirable to find which of the many processing variables contribute most to the desired properties of the material. There is also interest in real time quality control of parameters that govern material properties during processing stages. The concepts of fuzzy sets and neural networks are briefly introduced and it is shown how they can be used in the design and manufacturing processes. These two computational methods are alternatives to other methods such as the Taguchi method. The two methods are demonstrated by using data collected at NASA Lewis Research Center. Future research directions are also discussed.
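
    As a toy illustration of the fuzzy-set side of soft computing described above, a processing variable can be graded by overlapping membership functions rather than a hard threshold; all numbers here are invented:

        # Grade a hypothetical sintering temperature by triangular fuzzy sets.
        import numpy as np

        def triangular(x, a, b, c):
            """Triangular membership function peaking at b, zero outside [a, c]."""
            return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

        temperature = 1425.0   # degrees C, hypothetical reading
        memberships = {
            "too_low":  triangular(temperature, 1200, 1300, 1400),
            "optimal":  triangular(temperature, 1350, 1450, 1550),
            "too_high": triangular(temperature, 1500, 1600, 1700),
        }
        print(memberships)   # the reading is graded 0.75 "optimal", 0 elsewhere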

  3. Advanced computer modeling techniques expand belt conveyor technology

    SciTech Connect

    Alspaugh, M.

    1998-07-01

    Increased mining production continues to challenge engineers and manufacturers to keep up, and the pressure to produce larger and more versatile equipment is increasing. This paper shows some recent major projects in the belt conveyor industry that have pushed the limits of design and engineering technology. It also discusses the systems engineering discipline and the advanced computer modeling tools that have helped make these achievements possible. Several examples of technologically advanced designs are reviewed. However, new technology can sometimes produce increased problems with equipment availability and reliability if not carefully developed. Computer modeling techniques that help one design larger equipment can also compound operational headaches if engineering processes and algorithms are not carefully analyzed every step of the way.

  4. A computational infrastructure for evaluating Care-Coordination and Telehealth services in Europe.

    PubMed

    Natsiavas, Pantelis; Filos, Dimitiris; Maramis, Christos; Chouvarda, Ioanna; Schonenberg, Helen; Pauws, Steffen; Bescos, Cristina; Westerteicher, Christoph; Maglaveras, Nicos

    2014-01-01

    This paper presents the computational framework that is employed for the analysis of health-related key drivers and indicators within ACT, a project aiming to improve the deployment of Care Coordination and Telehealth services/programmes across Europe through an iterative evidence collection-evaluation-refinement process. An open-source solution is proposed, combining a series of established software technologies. The paper focuses on the technical aspects of the framework and presents a worked example of a usage scenario.

  5. Advances in Cross-Cutting Ideas for Computational Climate Science

    SciTech Connect

    Ng, Esmond; Evans, Katherine J.; Caldwell, Peter; Hoffman, Forrest M.; Jackson, Charles; Kerstin, Van Dam; Leung, Ruby; Martin, Daniel F.; Ostrouchov, George; Tuminaro, Raymond; Ullrich, Paul; Wild, S.; Williams, Samuel

    2017-01-01

    This report presents results from the DOE-sponsored workshop titled "Advancing X-Cutting Ideas for Computational Climate Science Workshop," known as AXICCS, held on September 12-13, 2016 in Rockville, MD. The workshop brought together experts in climate science, computational climate science, computer science, and mathematics to discuss interesting but unsolved science questions regarding climate modeling and simulation, promote collaboration among the diverse scientists in attendance, and brainstorm about possible tools and capabilities that could be developed to help address them. Several research opportunities that the group felt could significantly advance climate science emerged from the discussions. These include (1) process-resolving models to provide insight into important processes and features of interest and to inform the development of advanced physical parameterizations, (2) a community effort to develop and provide integrated model credibility, (3) approaches for including, organizing, and managing increasingly connected model components that increase model fidelity but also complexity, and (4) treating Earth system models as one interconnected organism without numerical or data-based boundaries that limit interactions. The group also identified several cross-cutting advances in mathematics, computer science, and computational science that would be needed to enable one or more of these big ideas. It is critical to address the need for organized, verified, and optimized software, which enables the models to grow and continue to provide solutions in which the community can have confidence. Effectively utilizing the newest computer hardware enables simulation efficiency and the ability to handle output from increasingly complex and detailed models. This will be accomplished through hierarchical multiscale algorithms in tandem with new strategies for data handling, analysis, and storage. These big ideas and cross-cutting technologies for enabling

  6. The advanced computational testing and simulation toolkit (ACTS)

    SciTech Connect

    Drummond, L.A.; Marques, O.

    2002-05-21

    During the past decades there has been a continuous growth in the number of physical and societal problems that have been successfully studied and solved by means of computational modeling and simulation. Distinctively, a number of these are important scientific problems ranging in scale from the atomic to the cosmic. For example, ionization is a phenomenon as ubiquitous in modern society as the glow of fluorescent lights and the etching on silicon computer chips; but it was not until 1999 that researchers finally achieved a complete numerical solution to the simplest example of ionization, the collision of a hydrogen atom with an electron. On the opposite scale, cosmologists have long wondered whether the expansion of the Universe, which began with the Big Bang, would ever reverse itself, ending the Universe in a Big Crunch. In 2000, analysis of new measurements of the cosmic microwave background radiation showed that the geometry of the Universe is flat, and thus the Universe will continue expanding forever. Both of these discoveries depended on high performance computer simulations that utilized computational tools included in the Advanced Computational Testing and Simulation (ACTS) Toolkit. The ACTS Toolkit is an umbrella project that brought together a number of general purpose computational tool development projects funded and supported by the U.S. Department of Energy (DOE). These tools, which have been developed independently, mainly at DOE laboratories, make it easier for scientific code developers to write high performance applications for parallel computers. They tackle a number of computational issues that are common to a large number of scientific applications, mainly implementation of numerical algorithms, and support for code development, execution and optimization. The ACTS Toolkit Project enables the use of these tools by a much wider community of computational scientists, and promotes code portability, reusability, reduction of duplicate efforts

  7. CSDMS2.0: Computational Infrastructure for Community Surface Dynamics Modeling

    NASA Astrophysics Data System (ADS)

    Syvitski, J. P.; Hutton, E.; Peckham, S. D.; Overeem, I.; Kettner, A.

    2012-12-01

    The Community Surface Dynamics Modeling System (CSDMS) is an NSF-supported, international and community-driven program that seeks to transform the science and practice of earth-surface dynamics modeling. CSDMS integrates a diverse community of more than 850 geoscientists representing 360 international institutions (academic, government, industry) from 60 countries and is supported by a CSDMS Interagency Committee (22 Federal agencies) and a CSDMS Industrial Consortium (18 companies). CSDMS presently distributes more than 200 open-source models and modeling tools, access to high performance computing clusters in support of developing and running models, and a suite of products for education and knowledge transfer. The CSDMS software architecture employs frameworks and services that convert stand-alone models into flexible "plug-and-play" components to be assembled into larger applications. CSDMS2.0 will support model applications within a web browser, on a wider variety of computational platforms, and on other high performance computing clusters to ensure robustness and sustainability of the framework. Conversion of stand-alone models into "plug-and-play" components will employ automated wrapping tools. Methods for quantifying model uncertainty are being adapted as part of the modeling framework. Benchmarking data are being incorporated into the CSDMS modeling framework to support model inter-comparison. Finally, a robust mechanism for ingesting and utilizing semantic mediation databases is being developed within the modeling framework. Six new community initiatives are being pursued: 1) an earth-ecosystem modeling initiative to capture ecosystem dynamics and ensuing interactions with landscapes, 2) a geodynamics initiative to investigate the interplay among climate, geomorphology, and tectonic processes, 3) an Anthropocene modeling initiative to incorporate mechanistic models of human influences, 4) a coastal vulnerability modeling initiative, with emphasis on deltas and
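
    The "plug-and-play" component idea can be made concrete with a minimal sketch in the spirit of CSDMS's Basic Model Interface (BMI): a model is wrapped behind a small, uniform set of calls so a framework can drive it without knowing its internals. The toy diffusion model and the exact method signatures below are illustrative, not the normative BMI:

        # Minimal BMI-style wrapper: initialize / update / get_value / finalize.
        import numpy as np

        class ToyDiffusionBMI:
            def initialize(self, config: dict) -> None:
                self.dt = config.get("dt", 1.0)
                self.kappa = config.get("kappa", 0.1)
                self.z = np.asarray(config["initial_profile"], dtype=float)
                self.time = 0.0

            def update(self) -> None:
                # One explicit diffusion step on a 1-D profile with fixed ends.
                lap = np.zeros_like(self.z)
                lap[1:-1] = self.z[:-2] - 2 * self.z[1:-1] + self.z[2:]
                self.z += self.kappa * self.dt * lap
                self.time += self.dt

            def get_value(self, name: str) -> np.ndarray:
                assert name == "surface_elevation"
                return self.z.copy()

            def finalize(self) -> None:
                pass

        # A framework only needs the interface, not the model's internals:
        model = ToyDiffusionBMI()
        model.initialize({"initial_profile": [0, 0, 5, 0, 0]})
        for _ in range(10):
            model.update()
        print(model.get_value("surface_elevation"))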

  8. Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system

    NASA Astrophysics Data System (ADS)

    Meier, Konrad; Fleig, Georg; Hauth, Thomas; Janczyk, Michael; Quast, Günter; von Suchodoletz, Dirk; Wiebelt, Bernd

    2016-10-01

    Experiments in high-energy physics (HEP) rely on elaborate hardware, software and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and the Super-KEKB accelerators, respectively. These detectors share the requirement that enormous amounts of measurement data must be processed and analyzed, and that a comparable amount of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves to be challenging for HEP groups, due to their specialized software setup which includes a custom OS (often Scientific Linux), libraries and applications. To overcome this hurdle, the EKP and the data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, in this hybrid setup no static partitioning of the cluster into a physical and a virtualized segment is required. As a unique feature, the placement of the virtual machine on the cluster nodes is scheduled by Moab and the job lifetime is coupled to the lifetime of the virtual machine. This allows for a seamless integration with the jobs sent by other user groups and honors the fairshare
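
    A hedged sketch of the request path described above, using the openstacksdk Python library: a HEP user asks the OpenStack layer for a VM built from a specialised machine image. The cloud, image, flavor, and network names are placeholders, and the Moab coupling is only indicated in a comment:

        # Request a worker VM from an OpenStack virtualization layer.
        import openstack

        conn = openstack.connect(cloud="hpc-cluster")   # credentials from clouds.yaml

        image = conn.compute.find_image("scientific-linux-hep")   # placeholder
        flavor = conn.compute.find_flavor("c8-m32")               # placeholder
        network = conn.network.find_network("batch-net")          # placeholder

        server = conn.compute.create_server(
            name="hep-worker-001",
            image_id=image.id,
            flavor_id=flavor.id,
            networks=[{"uuid": network.id}],
        )
        # Block until the VM is active; in the setup described above, placement
        # is scheduled by Moab and the VM's lifetime is tied to the batch job.
        server = conn.compute.wait_for_server(server)
        print(server.status)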

  9. PRACE - The European HPC Infrastructure

    NASA Astrophysics Data System (ADS)

    Stadelmeyer, Peter

    2014-05-01

    The mission of PRACE (Partnership for Advanced Computing in Europe) is to enable high impact scientific discovery and engineering research and development across all disciplines to enhance European competitiveness for the benefit of society. PRACE seeks to realize this mission by offering world class computing and data management resources and services through a peer review process. This talk gives a general overview about PRACE and the PRACE research infrastructure (RI). PRACE is established as an international not-for-profit association and the PRACE RI is a pan-European supercomputing infrastructure which offers access to computing and data management resources at partner sites distributed throughout Europe. Besides a short summary about the organization, history, and activities of PRACE, it is explained how scientists and researchers from academia and industry from around the world can access PRACE systems and which education and training activities are offered by PRACE. The overview also contains a selection of PRACE contributions to societal challenges and ongoing activities. Examples of the latter are beside others petascaling, application benchmark suite, best practice guides for efficient use of key architectures, application enabling / scaling, new programming models, and industrial applications. The Partnership for Advanced Computing in Europe (PRACE) is an international non-profit association with its seat in Brussels. The PRACE Research Infrastructure provides a persistent world-class high performance computing service for scientists and researchers from academia and industry in Europe. The computer systems and their operations accessible through PRACE are provided by 4 PRACE members (BSC representing Spain, CINECA representing Italy, GCS representing Germany and GENCI representing France). The Implementation Phase of PRACE receives funding from the EU's Seventh Framework Programme (FP7/2007-2013) under grant agreements RI-261557, RI-283493 and RI

  10. A Geometry Based Infra-Structure for Computational Analysis and Design

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1998-01-01

    The computational steps traditionally taken for most engineering analysis suites (computational fluid dynamics (CFD), structural analysis, heat transfer, etc.) are: (1) Surface Generation -- usually by employing a Computer-Aided Design (CAD) system; (2) Grid Generation -- preparing the volume for the simulation; (3) Flow Solver -- producing the results at the specified operational point; (4) Post-processing Visualization -- interactively attempting to understand the results. For structural analysis, integrated systems can be obtained from a number of commercial vendors. These vendors couple directly to a number of CAD systems and are executed from within the CAD Graphical User Interface (GUI). It should be noted that the structural analysis problem is more tractable than CFD; there are fewer mesh topologies used and the grids are not as fine (this problem space does not have the length scaling issues of fluids). For CFD, these steps have worked well in the past for simple steady-state simulations at the expense of much user interaction. The data was transmitted between phases via files. In most cases, the output from a CAD system could go to Initial Graphics Exchange Specification (IGES) or Standard Exchange Program (STEP) files. The outputs from Grid Generators and Solvers do not really have standards, though there are a couple of file formats that can be used for a subset of the gridding (i.e. PLOT3D data formats). The user would have to patch up the data or translate from one format to another to move to the next step. Sometimes this could take days. Specifically the problems with this procedure are: (1) File based -- Information flows from one step to the next via data files with formats specified for that procedure. File standards, when they exist, are wholly inadequate. For example, geometry from CAD systems (transmitted via IGES files) is defined as disjoint surfaces and curves (as well as masses of other information of no interest for the Grid Generator

  11. Experimental and computing strategies in advanced material characterization problems

    SciTech Connect

    Bolzon, G.

    2015-10-28

    The mechanical characterization of materials relies more and more often on sophisticated experimental methods that permit the acquisition of large amounts of data while, at the same time, reducing the invasiveness of the tests. This evolution accompanies the growing demand for non-destructive diagnostic tools that assess the safety level of components in use in structures and infrastructures, for instance in the strategic energy sector. Advanced material systems and properties that are not amenable to traditional techniques, for instance thin layered structures and their adhesion to the relevant substrates, can also be characterized by means of combined experimental-numerical tools that elaborate data acquired by full-field measurement techniques. In this context, parameter identification procedures involve the repeated simulation of the laboratory or in situ tests by sophisticated and usually expensive non-linear analyses, while in some situations reliable and accurate results are required in real time. The effectiveness and filtering capabilities of reduced models based on decomposition and interpolation techniques can be profitably exploited to meet these conflicting requirements. This communication summarizes some results recently achieved in this field by the author and her co-workers. The aim is to foster further interaction between the engineering and mathematical communities.
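
    The reduced models based on decomposition that the abstract mentions can be illustrated with a proper orthogonal decomposition (POD) computed via the SVD: many expensive full-field snapshots are compressed into a small basis, and identification then iterates in the reduced space. The snapshot data below are synthetic:

        # POD reduced basis from a synthetic snapshot matrix.
        import numpy as np

        rng = np.random.default_rng(1)
        # Columns are full-field responses from 50 simulations, 10000 dofs each,
        # generated from 3 underlying modes plus noise.
        modes = rng.normal(size=(10_000, 3))
        weights = rng.normal(size=(3, 50))
        snapshots = modes @ weights + 0.01 * rng.normal(size=(10_000, 50))

        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        energy = np.cumsum(s**2) / np.sum(s**2)
        r = int(np.searchsorted(energy, 0.999)) + 1   # basis size, 99.9% energy
        basis = U[:, :r]

        # Any new response is now represented by r coefficients instead of
        # 10000 values; identification iterates in this reduced space.
        new_response = snapshots[:, 0]
        coeffs = basis.T @ new_response
        print(r, np.linalg.norm(new_response - basis @ coeffs))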

  12. National Computational Infrastructure for Lattice Gauge Theory SciDAC-2 Closeout Report

    SciTech Connect

    Sun, Xian-He

    2013-08-01

    As part of this project work, researchers from Vanderbilt University, Fermi National Laboratory and Illinois Institute of Technology developed a real-time, fault-tolerant cluster monitoring framework. This framework is open source and is available for download upon request. This work has also been used at Fermi Laboratory, Vanderbilt University and Mississippi State University across projects other than LQCD. The goal for the scientific workflow project is to investigate and develop domain-specific workflow tools for LQCD to help effectively orchestrate, in parallel, computational campaigns consisting of many loosely-coupled batch processing jobs. Major requirements for an LQCD workflow system include: a system to manage input metadata (e.g., physics parameters such as masses); a system to manage and permit the reuse of templates describing workflows; a system to capture data provenance information; a system to manage produced data; a means of monitoring workflow progress and status; a means of resuming or extending a stopped workflow; and fault-tolerance features to enhance the reliability of running workflows. Requirements for an LQCD workflow system are available in documentation.
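
    The parameter-management and templating requirements listed above can be pictured with a small sketch: physics parameters arrive as JSON and a stored template is expanded into input text for an application such as MILC. The parameter names and template body are invented for illustration, not the project's actual format:

        # Expand a JSON parameter set into application input text.
        import json
        from string import Template

        params_json = '{"beta": 6.76, "mass_light": 0.005, "lattice": "32x32x32x64"}'
        params = json.loads(params_json)

        milc_template = Template(
            "prompt 0\n"
            "nx $nx\nny $ny\nnz $nz\nnt $nt\n"
            "beta $beta\n"
            "mass $mass_light\n"
        )
        nx, ny, nz, nt = params["lattice"].split("x")
        print(milc_template.substitute(params, nx=nx, ny=ny, nz=nz, nt=nt))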

  13. A Geometry Based Infra-structure for Computational Analysis and Design

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1997-01-01

    The computational steps traditionally taken for most engineering analysis (CFD, structural analysis, etc.) are: Surface Generation - usually by employing a CAD system; Grid Generation - preparing the volume for the simulation; Flow Solver - producing the results at the specified operational point; and Post-processing Visualization - interactively attempting to understand the results. For structural analysis, integrated systems can be obtained from a number of commercial vendors. For CFD, these steps have worked well in the past for simple steady-state simulations at the expense of much user interaction. The data was transmitted between phases via files. Specifically the problems with this procedure are: (1) File based. Information flows from one step to the next via data files with formats specified for that procedure. (2) 'Good' Geometry. A bottleneck in getting results from a solver is the construction of proper geometry to be fed to the grid generator. With 'good' geometry a grid can be constructed in tens of minutes (even with a complex configuration) using unstructured techniques. (3) One-Way communication. All information travels on from one phase to the next. Until this process can be automated, more complex problems, such as multi-disciplinary analysis or using the above procedure for design, become prohibitive.

  14. Computation of the tip vortex flowfield for advanced aircraft propellers

    NASA Technical Reports Server (NTRS)

    Tsai, Tommy M.; Dejong, Frederick J.; Levy, Ralph

    1988-01-01

    The tip vortex flowfield plays a significant role in the performance of advanced aircraft propellers. The flowfield in the tip region is complex, three-dimensional and viscous with large secondary velocities. An analysis is presented using an approximate set of equations which contains the physics required by the tip vortex flowfield, but which does not require the resources of the full Navier-Stokes equations. A computer code was developed to predict the tip vortex flowfield of advanced aircraft propellers. A grid generation package was developed to allow specification of a variety of advanced aircraft propeller shapes. Calculations of the tip vortex generation on an SR3 type blade at high Reynolds numbers were made using this code and a parametric study was performed to show the effect of tip thickness on tip vortex intensity. In addition, calculations of the tip vortex generation on a NACA 0012 type blade were made, including the flowfield downstream of the blade trailing edge. Comparison of flowfield calculations with experimental data from an F4 blade was made. A user's manual was also prepared for the computer code (NASA CR-182178).

  15. XII Advanced Computing and Analysis Techniques in Physics Research

    NASA Astrophysics Data System (ADS)

    Speer, Thomas; Carminati, Federico; Werlen, Monique

    November 2008 will be a few months after the official start of the LHC, when the highest-energy particle collisions ever produced by mankind will be observed by the most complex piece of scientific equipment ever built. The LHC will open a new era in physics research and push the frontier of knowledge further. This achievement has been made possible by new technological developments in many fields, but computing is certainly the technology that has made this whole enterprise possible. Accelerator and detector design, construction management, data acquisition, detector monitoring, data analysis, event simulation and theoretical interpretation are all computing-based HEP activities, and the same is true in many other research fields. Computing is everywhere and forms the common link between all the scientists and engineers involved. The ACAT workshop series, created back in 1990 as AIHENP (Artificial Intelligence in High Energy and Nuclear Physics), has been covering the tremendous evolution of computing in its most advanced topics, seeking to build bridges between computer science and experimental and theoretical physics. Conference web-site: http://acat2008.cern.ch/ Programme and presentations: http://indico.cern.ch/conferenceDisplay.py?confId=34666

  16. National Computational Infrastructure for Lattice Gauge Theory SciDAC-2 Closeout Report

    SciTech Connect

    Bapty, Theodore; Dubey, Abhishek

    2013-07-18

    As part of the reliability project work, researchers from Vanderbilt University, Fermi National Laboratory and the Illinois Institute of Technology developed a real-time, fault-tolerant cluster monitoring framework. The goal of the scientific workflow project is to investigate and develop domain-specific workflow tools for LQCD that help orchestrate, in parallel, computational campaigns consisting of many loosely-coupled batch processing jobs. Major requirements for an LQCD workflow system include: a system to manage input metadata (e.g., physics parameters such as masses); a system to manage and permit the reuse of templates describing workflows; a system to capture data provenance information; a system to manage produced data; a means of monitoring workflow progress and status; a means of resuming or extending a stopped workflow; and fault-tolerance features to enhance the reliability of running workflows. In summary, these achievements are reported: • Implemented a software system to manage parameters. This includes a parameter set language based on a superset of the JSON data-interchange format, parsers in multiple languages (C++, Python, Ruby), and a web-based interface tool. It also includes a templating system that can produce input text for LQCD applications like MILC. • Implemented a monitoring sensor framework in software that is in production on the Fermilab USQCD facility. This includes equipment health, process accounting, MPI/QMP process tracking, and batch system (Torque) job monitoring. All sensor data are available from databases, and various query tools can be used to extract common data patterns and perform ad hoc searches. Common batch system queries such as job status are available in command line tools and are used in actual workflow-based production by a subset of Fermilab users. • Developed a formal state machine model for scientific workflow and reliability systems. This includes the use of Vanderbilt’s Generic Modeling
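
    To make the parameters-plus-templates idea concrete, here is a minimal sketch. The report's actual parameter language is a superset of JSON with its own parsers; plain JSON and Python's string.Template stand in for it here, and the MILC-style input keys are illustrative placeholders, not the real input grammar:

```python
# Sketch only: parameters stored as structured data, rendered into an
# application input file through a reusable template.
import json
from string import Template

params_json = '{"beta": 6.0, "mass_light": 0.01, "lattice": [24, 24, 24, 64]}'
params = json.loads(params_json)

# Hypothetical template for a lattice-QCD application input file.
milc_template = Template(
    "prompt 0\n"
    "nx $nx\nny $ny\nnz $nz\nnt $nt\n"
    "beta $beta\n"
)

nx, ny, nz, nt = params["lattice"]
print(milc_template.substitute(nx=nx, ny=ny, nz=nz, nt=nt,
                               beta=params["beta"]))
```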

  17. High resolution computed tomography of advanced composite and ceramic materials

    NASA Technical Reports Server (NTRS)

    Yancey, R. N.; Klima, S. J.

    1991-01-01

    Advanced composite and ceramic materials are being developed for use in many new defense and commercial applications. In order to achieve the desired mechanical properties of these materials, the structural elements must be carefully analyzed and engineered. A study was conducted to evaluate the use of high resolution computed tomography (CT) as a macrostructural analysis tool for advanced composite and ceramic materials. Several samples were scanned using a laboratory high resolution CT scanner. Samples were also destructively analyzed at the locations of the scans and the nondestructive and destructive results were compared. The study provides useful information outlining the strengths and limitations of this technique and the prospects for further research in this area.

  18. Advances in Computational Stability Analysis of Composite Aerospace Structures

    SciTech Connect

    Degenhardt, R.; Araujo, F. C. de

    2010-09-30

    The European aircraft industry demands reduced development and operating costs. Structural weight reduction by exploitation of structural reserves in composite aerospace structures contributes to this aim; however, it requires accurate and experimentally validated stability analysis of real structures under realistic loading conditions. This paper presents several advances in the computational stability analysis of composite aerospace structures which contribute to that field. For stringer-stiffened panels, the main results of the completed EU project COCOMAT are given; it investigated the exploitation of reserves in primary fibre composite fuselage structures through an accurate and reliable simulation of postbuckling and collapse. For unstiffened cylindrical composite shells, a proposal for a new design method is presented.

  19. Recent Advances in Computed Tomographic Technology: Cardiopulmonary Imaging Applications.

    PubMed

    Tabari, Azadeh; Lo Gullo, Roberto; Murugan, Venkatesh; Otrakji, Alexi; Digumarthy, Subba; Kalra, Mannudeep

    2017-03-01

    Cardiothoracic diseases result in substantial morbidity and mortality. Chest computed tomography (CT) has been an imaging modality of choice for assessing a host of chest diseases, and technologic advances have enabled the emergence of coronary CT angiography as a robust noninvasive test for cardiac imaging. Technologic developments in CT have also enabled the application of dual-energy CT scanning for assessing pulmonary vascular and neoplastic processes. Concerns over increasing radiation dose from CT scanning are being addressed with the introduction of more dose-efficient wide-area detector arrays and iterative reconstruction techniques. This review article discusses the technologic innovations in CT and their effect on cardiothoracic applications.

  20. Advanced Computer Science on Internal Ballistics of Solid Rocket Motors

    NASA Astrophysics Data System (ADS)

    Shimada, Toru; Kato, Kazushige; Sekino, Nobuhiro; Tsuboi, Nobuyuki; Seike, Yoshio; Fukunaga, Mihoko; Daimon, Yu; Hasegawa, Hiroshi; Asakawa, Hiroya

    This paper describes the development of a numerical simulation system, which we call “Advanced Computer Science on SRM Internal Ballistics” (ACSSIB), for improving the performance and reliability of solid rocket motors (SRM). The ACSSIB system consists of a casting simulation code for solid propellant slurry, a correlation database relating the local burning rate of cured propellant to local slurry flow characteristics, and a numerical code for the internal ballistics of SRM, as well as the relevant hardware. This paper mainly describes the objectives, the contents of this R&D, and the output of fiscal year 2008.

  1. Advances in Electromagnetic Modelling through High Performance Computing

    SciTech Connect

    Ko, K.; Folwell, N.; Ge, L.; Guetz, A.; Lee, L.; Li, Z.; Ng, C.; Prudencio, E.; Schussman, G.; Uplenchwar, R.; Xiao, L.; /SLAC

    2006-03-29

    Under the DOE SciDAC project on Accelerator Science and Technology, a suite of electromagnetic codes based on unstructured grids for higher accuracy, and using parallel processing to enable large-scale simulation, has been under development at SLAC. The new modeling capability is supported by SciDAC collaborations on meshing, solvers, refinement, optimization and visualization. These advances in computational science are described, and the application of the parallel eigensolver Omega3P to the cavity design for the International Linear Collider is discussed.
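
    For context (a standard textbook formulation, not taken from this report): finite-element cavity-design codes of this kind typically solve the generalized eigenvalue problem that arises from discretizing the curl-curl equation for the cavity modes,

```latex
% K is the stiffness matrix, M the mass matrix, k the wavenumber,
% and N_i are the (vector) finite-element basis functions.
K\mathbf{x} = k^2 M\mathbf{x},
\qquad
K_{ij} = \int_\Omega (\nabla\times\mathbf{N}_i)\cdot(\nabla\times\mathbf{N}_j)\,d\Omega,
\quad
M_{ij} = \int_\Omega \mathbf{N}_i\cdot\mathbf{N}_j\,d\Omega
```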

  2. Computational methods of the Advanced Fluid Dynamics Model

    SciTech Connect

    Bohl, W.R.; Wilhelm, D.; Parker, F.R.; Berthier, J.; Maudlin, P.J.; Schmuck, P.; Goutagny, L.; Ichikawa, S.; Ninokata, H.; Luck, L.B.

    1987-01-01

    To more accurately treat severe accidents in fast reactors, a program has been set up to investigate new computational models and approaches. The product of this effort is a computer code, the Advanced Fluid Dynamics Model (AFDM). This paper describes some of the basic features of the numerical algorithm used in AFDM. Aspects receiving particular emphasis are the fractional-step method of time integration, the semi-implicit pressure iteration, the virtual mass inertial terms, the use of three velocity fields, higher order differencing, convection of interfacial area with source and sink terms, multicomponent diffusion processes in heat and mass transfer, the SESAME equation of state, and vectorized programming. A calculated comparison with an isothermal tetralin/ammonia experiment is performed. We conclude that significant improvements are possible in reliably calculating the progression of severe accidents with further development.
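
    As a toy illustration of the fractional-step idea mentioned above (a generic operator-splitting sketch, not AFDM's actual algorithm, which adds a semi-implicit pressure iteration, three velocity fields and much more):

```python
# First-order Lie splitting for du/dt = A(u) + B(u): advance one
# operator per sub-step instead of both at once.
import numpy as np

def step_fractional(u, dt, A, B):
    """One fractional time step: apply A, then B, each for dt."""
    u_star = u + dt * A(u)            # sub-step 1 (e.g. convection)
    return u_star + dt * B(u_star)    # sub-step 2 (e.g. diffusion)

# Example: du/dt = -u split into two -0.5*u halves; compare to exp(-1).
u = np.array([1.0])
for _ in range(100):
    u = step_fractional(u, 0.01, lambda v: -0.5 * v, lambda v: -0.5 * v)
print(u[0], np.exp(-1.0))  # both approximately 0.368
```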

  3. The virtual machine (VM) scaler: an infrastructure manager supporting environmental modeling on IaaS clouds

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Infrastructure-as-a-service (IaaS) clouds provide a new medium for deployment of environmental modeling applications. Harnessing advancements in virtualization, IaaS clouds can provide dynamic scalable infrastructure to better support scientific modeling computational demands. Providing scientific m...

  4. Computational ocean acoustics: Advances in 3D ocean acoustic modeling

    NASA Astrophysics Data System (ADS)

    Schmidt, Henrik; Jensen, Finn B.

    2012-11-01

    The numerical models of ocean acoustic propagation developed in the 1980s are still in widespread use today, and the field of computational ocean acoustics is often considered a mature field. However, the explosive increase in computational power available to the community has created opportunities for modeling phenomena that were earlier beyond reach. Most notably, three-dimensional propagation and scattering problems have been computationally prohibitive, but are now addressed routinely using brute-force numerical approaches such as the Finite Element Method, in particular for target scattering problems, where they are being combined with the traditional wave theory propagation models in hybrid modeling frameworks. Also, recent years have seen the development of hybrid approaches coupling oceanographic circulation models with acoustic propagation models, enabling the forecasting of sonar performance uncertainty in dynamic ocean environments. These and other advances made over the last couple of decades support the notion that the field of computational ocean acoustics is far from mature. [Work supported by the Office of Naval Research, Code 321OA].

  5. Computational cardiology: how computer simulations could be used to develop new therapies and advance existing ones

    PubMed Central

    Trayanova, Natalia A.; O'Hara, Thomas; Bayer, Jason D.; Boyle, Patrick M.; McDowell, Kathleen S.; Constantino, Jason; Arevalo, Hermenegild J.; Hu, Yuxuan; Vadakkumpadan, Fijoy

    2012-01-01

    This article reviews the latest developments in computational cardiology. It focuses on the contribution of cardiac modelling to the development of new therapies as well as the advancement of existing ones for cardiac arrhythmias and pump dysfunction. Reviewed are cardiac modelling efforts aimed at advancing and optimizing existing therapies for cardiac disease (defibrillation, ablation of ventricular tachycardia, and cardiac resynchronization therapy) and at suggesting novel treatments, including novel molecular targets, as well as efforts to use cardiac models in the stratification of patients likely to benefit from a given therapy, and the use of models in diagnostic procedures. PMID:23104919

  6. Strategies for casualty mitigation programs by using advanced tsunami computation

    NASA Astrophysics Data System (ADS)

    IMAI, K.; Imamura, F.

    2012-12-01

    1. Purpose of the study In this study, based on scenarios of great earthquakes along the Nankai trough, we aim to estimate, with high accuracy, the tsunami run-up and inundation process in coastal areas, including rivers. Using a practical tsunami analysis model that takes into account detailed topography, land use, and climate change in a realistic present and expected future environment, we examined the run-up and tsunami inundation process. Using these results we estimated the damage due to tsunami and obtained information for the mitigation of human casualties. Considering the time series from the occurrence of the earthquake and the risk of tsunami damage, we provide disaster risk information, displayed in a tsunami hazard and risk map, to help mitigate casualties. 2. Creating a tsunami hazard and risk map Using a practical analytical tsunami model (a long-wave approximation; the standard equations are sketched after this abstract) together with high-resolution (5 m) topography including detailed shoreline, river, building and house data, we present an advanced analysis of tsunami inundation that takes land use into account. Based on the results of the tsunami inundation analysis, a tsunami hazard and risk map can be drawn with information on human casualties, building damage estimates, drifting vehicles, etc. 3. Contents of disaster prevention information To improve the distribution of hazard, risk and evacuation information, three steps are necessary. (1) Provide basic information, such as tsunami attack information, evacuation areas and routes, and the locations of tsunami evacuation facilities. (2) Provide, as additional information, the time when inundation starts, observed inundation results, the locations of facilities with hazardous materials, and public facilities and underground areas that require evacuation. (3) Provide information to support disaster response, such as infrastructure and traffic network damage prediction
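
    For reference, the "long wave approximated model" mentioned above is conventionally the nonlinear shallow-water system; in standard one-dimensional form with Manning bottom friction (a textbook statement, not quoted from this abstract):

```latex
% eta: free-surface elevation, M: depth-integrated flux,
% D = h + eta: total water depth, g: gravity, n: Manning roughness.
\frac{\partial \eta}{\partial t} + \frac{\partial M}{\partial x} = 0,
\qquad
\frac{\partial M}{\partial t}
 + \frac{\partial}{\partial x}\!\left(\frac{M^2}{D}\right)
 + gD\frac{\partial \eta}{\partial x}
 + \frac{g n^2}{D^{7/3}} M\,|M| = 0
```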

  7. DOE Advanced Scientific Computing Advisory Committee (ASCAC) Report: Exascale Computing Initiative Review

    SciTech Connect

    Reed, Daniel; Berzins, Martin; Pennington, Robert; Sarkar, Vivek; Taylor, Valerie

    2015-08-01

    On November 19, 2014, the Advanced Scientific Computing Advisory Committee (ASCAC) was charged with reviewing the Department of Energy’s conceptual design for the Exascale Computing Initiative (ECI). In particular, this included assessing whether there are significant gaps in the ECI plan or areas that need to be given priority or extra management attention. Given the breadth and depth of previous reviews of the technical challenges inherent in exascale system design and deployment, the subcommittee focused its assessment on organizational and management issues, considering technical issues only as they informed organizational or management priorities and structures. This report presents the observations and recommendations of the subcommittee.

  8. Recent advances in computational mechanics of the human knee joint.

    PubMed

    Kazemi, M; Dabiri, Y; Li, L P

    2013-01-01

    Computational mechanics has been advanced in every area of orthopedic biomechanics. The objective of this paper is to provide a general review of the computational models used in the analysis of the mechanical function of the knee joint in different loading and pathological conditions. Major review articles published in related areas are summarized first. The constitutive models for soft tissues of the knee are briefly discussed to facilitate understanding the joint modeling. A detailed review of the tibiofemoral joint models is presented thereafter. The geometry reconstruction procedures as well as some critical issues in finite element modeling are also discussed. Computational modeling can be a reliable and effective method for the study of mechanical behavior of the knee joint, if the model is constructed correctly. Single-phase material models have been used to predict the instantaneous load response for the healthy knees and repaired joints, such as total and partial meniscectomies, ACL and PCL reconstructions, and joint replacements. Recently, poromechanical models accounting for fluid pressurization in soft tissues have been proposed to study the viscoelastic response of the healthy and impaired knee joints. While the constitutive modeling has been considerably advanced at the tissue level, many challenges still exist in applying a good material model to three-dimensional joint simulations. A complete model validation at the joint level seems impossible presently, because only simple data can be obtained experimentally. Therefore, model validation may be concentrated on the constitutive laws using multiple mechanical tests of the tissues. Extensive model verifications at the joint level are still crucial for the accuracy of the modeling.

  10. NASA Trapezoidal Wing Computations Including Transition and Advanced Turbulence Modeling

    NASA Technical Reports Server (NTRS)

    Rumsey, C. L.; Lee-Rausch, E. M.

    2012-01-01

    Flow about the NASA Trapezoidal Wing is computed with several turbulence models by using grids from the first High Lift Prediction Workshop in an effort to advance understanding of computational fluid dynamics modeling for this type of flowfield. Transition is accounted for in many of the computations. In particular, a recently-developed 4-equation transition model is utilized and works well overall. Accounting for transition tends to increase lift and decrease moment, which improves the agreement with experiment. Upper surface flap separation is reduced, and agreement with experimental surface pressures and velocity profiles is improved. The predicted shape of wakes from upstream elements is strongly influenced by grid resolution in regions above the main and flap elements. Turbulence model enhancements to account for rotation and curvature have the general effect of increasing lift and improving the resolution of the wing tip vortex as it convects downstream. However, none of the models improve the prediction of surface pressures near the wing tip, where more grid resolution is needed.

  11. OpenCMISS: a multi-physics & multi-scale computational infrastructure for the VPH/Physiome project.

    PubMed

    Bradley, Chris; Bowery, Andy; Britten, Randall; Budelmann, Vincent; Camara, Oscar; Christie, Richard; Cookson, Andrew; Frangi, Alejandro F; Gamage, Thiranja Babarenda; Heidlauf, Thomas; Krittian, Sebastian; Ladd, David; Little, Caton; Mithraratne, Kumar; Nash, Martyn; Nickerson, David; Nielsen, Poul; Nordbø, Oyvind; Omholt, Stig; Pashaei, Ali; Paterson, David; Rajagopal, Vijayaraghavan; Reeve, Adam; Röhrle, Oliver; Safaei, Soroush; Sebastián, Rafael; Steghöfer, Martin; Wu, Tim; Yu, Ting; Zhang, Heye; Hunter, Peter

    2011-10-01

    The VPH/Physiome Project is developing the model encoding standards CellML (cellml.org) and FieldML (fieldml.org) as well as web-accessible model repositories based on these standards (models.physiome.org). Freely available open source computational modelling software is also being developed to solve the partial differential equations described by the models and to visualise results. The OpenCMISS code (opencmiss.org), described here, has been developed by the authors over the last six years to replace the CMISS code that has supported a number of organ system Physiome projects. OpenCMISS is designed to encompass multiple sets of physical equations and to link subcellular and tissue-level biophysical processes into organ-level processes. In the Heart Physiome project, for example, the large deformation mechanics of the myocardial wall need to be coupled to both ventricular flow and embedded coronary flow, and the reaction-diffusion equations that govern the propagation of electrical waves through myocardial tissue need to be coupled with equations that describe the ion channel currents that flow through the cardiac cell membranes. In this paper we discuss the design principles and distributed memory architecture behind the OpenCMISS code. We also discuss the design of the interfaces that link the sets of physical equations across common boundaries (such as fluid-structure coupling), or between spatial fields over the same domain (such as coupled electromechanics), and the concepts behind CellML and FieldML that are embodied in the OpenCMISS data structures. We show how all of these provide a flexible infrastructure for combining models developed across the VPH/Physiome community.
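
    As one concrete instance of the coupling described above (a generic textbook form, not OpenCMISS's specific formulation): the propagation of electrical waves through myocardial tissue is commonly modelled by a monodomain reaction-diffusion equation coupled to a system of cell-membrane ODEs,

```latex
% V_m: transmembrane potential, chi: surface-to-volume ratio,
% C_m: membrane capacitance, sigma: conductivity tensor,
% I_ion: ionic current from a cell model with state vector s.
\chi\left(C_m \frac{\partial V_m}{\partial t}
 + I_{\mathrm{ion}}(V_m,\mathbf{s})\right)
 = \nabla\cdot(\boldsymbol{\sigma}\,\nabla V_m),
\qquad
\frac{d\mathbf{s}}{dt} = \mathbf{f}(V_m,\mathbf{s})
```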

  12. Computational methods for predicting the response of critical as-built infrastructure to dynamic loads (architectural surety)

    SciTech Connect

    Preece, D.S.; Weatherby, J.R.; Attaway, S.W.; Swegle, J.W.; Matalucci, R.V.

    1998-06-01

    Coupled blast-structural computational simulations using supercomputer capabilities will significantly advance the understanding of how complex structures respond under dynamic loads caused by explosives and earthquakes, an understanding with application to the surety of both federal and nonfederal buildings. Simulation of the effects of explosives on structures is a challenge because the explosive response can best be simulated using Eulerian computational techniques, while structural behavior is best modeled using Lagrangian methods. Due to the different methodologies of the two computational techniques and code architecture requirements, they are usually implemented in different computer programs. Modeling the explosive and the structure in two different codes makes it difficult or next to impossible to do coupled explosive/structure interaction simulations. Sandia National Laboratories has developed two techniques for solving this problem. The first is called Smoothed Particle Hydrodynamics (SPH), a relatively new gridless method, comparable to Eulerian techniques, that is especially suited to treating liquids and gases such as those produced by an explosive. The SPH capability has been fully implemented into the transient dynamics finite element (Lagrangian) codes PRONTO-2D and -3D. A PRONTO-3D/SPH simulation of the effect of a blast on a protective-wall barrier is presented in this paper. The second technique employed at Sandia National Laboratories uses a relatively new code called ALEGRA, an ALE (Arbitrary Lagrangian-Eulerian) wave code with specific emphasis on large deformation and shock propagation. ALEGRA is capable of solving many shock-wave physics problems and is especially suited for modeling problems involving the interaction of decoupled explosives with structures.
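
    As a generic illustration of the SPH interpolation idea (a textbook cubic-spline kernel in one dimension, not the PRONTO-3D/SPH implementation): a field such as density is evaluated as a kernel-weighted sum over neighboring particles.

```python
# Density estimate at each particle: rho_i = sum_j m_j W(x_i - x_j, h).
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 1D cubic spline smoothing kernel with support 2h."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)  # 1D normalization so the kernel integrates to 1
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
         np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

x = np.linspace(0.0, 1.0, 101)   # particle positions, spacing 0.01
m = np.full_like(x, 0.01)        # particle masses
h = 0.02                         # smoothing length
rho = np.array([np.sum(m * cubic_spline_kernel(xi - x, h)) for xi in x])
print(rho[50])  # ~1.0 in the interior for this uniform arrangement
```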

  13. High performance parallel computers for science: New developments at the Fermilab advanced computer program

    SciTech Connect

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1988-08-01

    Fermilab's Advanced Computer Program (ACP) has been developing highly cost-effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors, each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN- or C-programmable pipelined 20 MFlops (peak), 10 MByte single-board computer. These are plugged into a 16-port crossbar switch crate which handles both inter- and intra-crate communication. The crates are connected in a hypercube. Site-oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256-node, 5 GFlop system is under construction. 10 refs., 7 figs.

  14. Advanced information processing system: Inter-computer communication services

    NASA Technical Reports Server (NTRS)

    Burkhardt, Laura; Masotto, Tom; Sims, J. Terry; Whittredge, Roy; Alger, Linda S.

    1991-01-01

    The purpose is to document the functional requirements and detailed specifications for the Inter-Computer Communications Services (ICCS) of the Advanced Information Processing System (AIPS). An introductory section is provided to outline the overall architecture and functional requirements of the AIPS and to present an overview of the ICCS. An overview of the AIPS architecture as well as a brief description of the AIPS software is given. The guarantees of the ICCS are provided, and the ICCS is described as a seven-layered International Standards Organization (ISO) Model. The ICCS functional requirements, functional design, and detailed specifications as well as each layer of the ICCS are also described. A summary of results and suggestions for future work are presented.

  15. Recent advances in computational structural reliability analysis methods

    NASA Technical Reports Server (NTRS)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-01-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given that many different modes of failure are usually possible, achieving this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety-factor approach lacks a quantitative basis in that the level of reliability is never known, and it usually results in overly conservative designs because of compounding conservatisms. Furthermore, the problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community has been seen recently, much of it directed towards the prediction of failure probabilities for single-mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
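
    For reference, the central quantity these methods estimate is the failure probability (a standard textbook definition, not specific to this report):

```latex
% g(X) is the limit-state function (failure when g <= 0) and
% f_X is the joint probability density of the random inputs X.
P_f = \Pr[g(\mathbf{X}) \le 0]
    = \int_{g(\mathbf{x}) \le 0} f_{\mathbf{X}}(\mathbf{x})\, d\mathbf{x}
```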

  16. Reliability of an interactive computer program for advance care planning.

    PubMed

    Schubart, Jane R; Levi, Benjamin H; Camacho, Fabian; Whitehead, Megan; Farace, Elana; Green, Michael J

    2012-06-01

    Despite widespread efforts to promote advance directives (ADs), completion rates remain low. Making Your Wishes Known: Planning Your Medical Future (MYWK) is an interactive computer program that guides individuals through the process of advance care planning, explains health conditions and interventions that commonly involve life-or-death decisions, helps them articulate their values/goals, and translates users' preferences into a detailed AD document. The purpose of this study was to demonstrate that (in the absence of major life changes) the AD generated by MYWK reliably reflects an individual's values/preferences. English speakers ≥30 years old completed MYWK twice, 4 to 6 weeks apart. Reliability indices were assessed for three AD components: General Wishes; Specific Wishes for treatment; and Quality-of-Life values (QoL). Twenty-four participants completed the study. Both the Specific Wishes and QoL scales had high internal consistency in both time periods (Kuder-Richardson Formula 20 [KR-20] = 0.83-0.95 and 0.86-0.89). Test-retest reliability was perfect for General Wishes (κ=1), high for QoL (Pearson's correlation coefficient = 0.83), but lower for Specific Wishes (Pearson's correlation coefficient = 0.57). MYWK generates an AD in which General Wishes and QoL (but not Specific Wishes) statements remain consistent over time.
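
    For reference, the internal-consistency statistic cited above is defined as follows (the standard formula for k dichotomous items, not specific to this study):

```latex
% p_i: proportion answering item i one way, q_i = 1 - p_i,
% sigma^2: variance of the total scores across respondents.
\mathrm{KR\text{-}20} = \frac{k}{k-1}
  \left(1 - \frac{\sum_{i=1}^{k} p_i q_i}{\sigma^2}\right)
```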

  17. Using advanced computer vision algorithms on small mobile robots

    NASA Astrophysics Data System (ADS)

    Kogut, G.; Birchmore, F.; Biagtan Pacis, E.; Everett, H. R.

    2006-05-01

    The Technology Transfer project employs a spiral development process to enhance the functionality and autonomy of mobile robot systems in the Joint Robotics Program (JRP) Robotic Systems Pool by converging existing component technologies onto a transition platform for optimization. An example of this approach is the implementation of advanced computer vision algorithms on small mobile robots. We demonstrate the implementation and testing of the following two algorithms useful on mobile robots: 1) object classification using a boosted cascade of classifiers trained with the AdaBoost algorithm, and 2) human presence detection from a moving platform. Object classification is performed with an AdaBoost training system developed at the University of California, San Diego (UCSD) Computer Vision Lab. This classification algorithm has been used to successfully detect the license plates of automobiles in motion in real time. While working towards a solution to increase the robustness of this system for generic object recognition, this paper demonstrates an extension of this application by detecting soda cans in a cluttered indoor environment. The human presence detection from a moving platform system uses a data fusion algorithm which combines results from a scanning laser and a thermal imager. The system is able to detect the presence of humans while both the humans and the robot are moving simultaneously. In both systems, the two aforementioned algorithms were implemented on embedded hardware and optimized for use in real time. Test results are shown for a variety of environments.
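
    As a hedged sketch of how such a boosted cascade detector is typically run (using OpenCV's CascadeClassifier, a widely available implementation of the same Viola-Jones/AdaBoost cascade idea; this is not the UCSD lab's code, and both file names below are hypothetical):

```python
# Run a trained boosted-cascade detector over a still image.
import cv2

cascade = cv2.CascadeClassifier("soda_can_cascade.xml")  # hypothetical model
frame = cv2.imread("indoor_scene.jpg", cv2.IMREAD_GRAYSCALE)

# Scan the image at multiple scales; each hit is an (x, y, w, h) box.
detections = cascade.detectMultiScale(frame, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in detections:
    print(f"candidate object at ({x}, {y}), size {w}x{h}")
```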

  18. Advances in the computational study of language acquisition.

    PubMed

    Brent, M R

    1996-01-01

    This paper provides a tutorial introduction to computational studies of how children learn their native languages. Its aim is to make recent advances accessible to the broader research community, and to place them in the context of current theoretical issues. The first section locates computational studies and behavioral studies within a common theoretical framework. The next two sections review two papers that appear in this volume: one on learning the meanings of words and one on learning the sounds of words. The following section highlights an idea which emerges independently in these two papers and which I have dubbed autonomous bootstrapping. Classical bootstrapping hypotheses propose that children begin to get a toehold in a particular linguistic domain, such as syntax, by exploiting information from another domain, such as semantics. Autonomous bootstrapping complements the cross-domain acquisition strategies of classical bootstrapping with strategies that apply within a single domain. Autonomous bootstrapping strategies work by representing partial and/or uncertain linguistic knowledge and using it to analyze the input. The next two sections review two more contributions to this special issue: one on learning word meanings via selectional preferences and one on algorithms for setting grammatical parameters. The final section suggests directions for future research.

  19. Securing Infrastructure from High Explosive Threats

    SciTech Connect

    Glascoe, L; Noble, C; Reynolds, J; Kuhl, A; Morris, J

    2009-03-20

    Lawrence Livermore National Laboratory (LLNL) is working with the Department of Homeland Security's Science and Technology Directorate, the Transportation Security Administration, and several infrastructure partners to characterize and help mitigate principal structural vulnerabilities to explosive threats. Given the importance of infrastructure to the nation's security and economy, there is a clear need for applied research and analyses (1) to improve understanding of the vulnerabilities of these systems to explosive threats and (2) to provide decision makers with time-critical technical assistance concerning countermeasure and mitigation options. Fully-coupled high performance calculations of structural response to ideal and non-ideal explosives help bound and quantify specific critical vulnerabilities, and help identify possible corrective schemes. Experimental validation of modeling approaches and methodologies builds confidence in the prediction, while advanced stochastic techniques allow for optimal use of scarce computational resources to efficiently provide infrastructure owners and decision makers with timely analyses.

  20. Recovery Act: Advanced Direct Methanol Fuel Cell for Mobile Computing

    SciTech Connect

    Fletcher, James H.; Cox, Philip; Harrington, William J; Campbell, Joseph L

    2013-09-03

    ABSTRACT Project Title: Recovery Act: Advanced Direct Methanol Fuel Cell for Mobile Computing PROJECT OBJECTIVE The objective of the project was to advance portable fuel cell system technology towards the commercial targets of power density, energy density and lifetime. These targets were laid out in the DOE’s R&D roadmap to develop an advanced direct methanol fuel cell power supply that meets commercial entry requirements. Such a power supply will enable mobile computers to operate non-stop, unplugged from the wall power outlet, by using the high energy density of methanol fuel contained in a replaceable fuel cartridge. Specifically this project focused on balance-of-plant component integration and miniaturization, as well as extensive component, subassembly and integrated system durability and validation testing. This design has resulted in a pre-production power supply design and a prototype that meet the rigorous demands of consumer electronic applications. PROJECT TASKS The proposed work plan was designed to meet the project objectives, which corresponded directly with the objectives outlined in the Funding Opportunity Announcement: To engineer the fuel cell balance-of-plant and packaging to meet the needs of consumer electronic systems, specifically at power levels required for mobile computing. UNF used existing balance-of-plant component technologies developed under its current US Army CERDEC project, as well as a previous DOE project completed by PolyFuel, to further refine them to both miniaturize and integrate their functionality to increase the system power density and energy density. Benefits of UNF’s novel passive water recycling MEA (membrane electrode assembly) and the simplified system architecture it enabled formed the foundation of the design approach. The package design was hardened to address orientation independence, shock, vibration, and environmental requirements. Fuel cartridge and fuel subsystems were improved to ensure effective fuel

  1. Design and Implementation of a Computation Server for Optimization with Application to the Analysis of Critical Infrastructure

    DTIC Science & Technology

    2013-06-01

    confidentiality, integrity, and availability (Salehi et al., 2007). The most important tactic to be used at the architectural level will be improving...analyzing critical infrastructure interdependencies. IEEE Control Systems, 21(6):11–25. Salehi, P., Jaferian, P., and Barforoush, A. (2007). Modeling

  2. Critical Infrastructure Protection II, The International Federation for Information Processing, Volume 290.

    NASA Astrophysics Data System (ADS)

    Papa, Mauricio; Shenoi, Sujeet

    The information infrastructure -- comprising computers, embedded devices, networks and software systems -- is vital to day-to-day operations in every sector: information and telecommunications, banking and finance, energy, chemicals and hazardous materials, agriculture, food, water, public health, emergency services, transportation, postal and shipping, government and defense. Global business and industry, governments, indeed society itself, cannot function effectively if major components of the critical information infrastructure are degraded, disabled or destroyed. Critical Infrastructure Protection II describes original research results and innovative applications in the interdisciplinary field of critical infrastructure protection. Also, it highlights the importance of weaving science, technology and policy in crafting sophisticated, yet practical, solutions that will help secure information, computer and network assets in the various critical infrastructure sectors. Areas of coverage include: - Themes and Issues - Infrastructure Security - Control Systems Security - Security Strategies - Infrastructure Interdependencies - Infrastructure Modeling and Simulation This book is the second volume in the annual series produced by the International Federation for Information Processing (IFIP) Working Group 11.10 on Critical Infrastructure Protection, an international community of scientists, engineers, practitioners and policy makers dedicated to advancing research, development and implementation efforts focused on infrastructure protection. The book contains a selection of twenty edited papers from the Second Annual IFIP WG 11.10 International Conference on Critical Infrastructure Protection held at George Mason University, Arlington, Virginia, USA in the spring of 2008.

  3. Quantitative Computed Tomography and Image Analysis for Advanced Muscle Assessment

    PubMed Central

    Edmunds, Kyle Joseph; Gíslason, Magnus K.; Arnadottir, Iris D.; Marcante, Andrea; Piccione, Francesco; Gargiulo, Paolo

    2016-01-01

    Medical imaging is of particular interest in the field of translational myology, as the extant literature describes the use of a wide variety of techniques to non-invasively capture and quantify various internal and external tissue morphologies. In the clinical context, medical imaging remains a vital tool for diagnostics and investigative assessment. This review outlines the results from several investigations on the use of computed tomography (CT) and image analysis techniques to assess muscle conditions and degenerative processes due to aging or pathological conditions. Herein, we detail the acquisition of spiral CT images and the use of advanced image analysis tools to characterize muscles in 2D and 3D. Results from these studies capture changes in tissue composition within muscles, as visualized by the association of tissue types with specified Hounsfield Unit (HU) values for fat, loose connective tissue or atrophic muscle, and normal muscle, including fascia and tendon. We show how results from these analyses can be presented both as average HU values and as compositions with respect to total muscle volume, demonstrating the reliability of these tools to monitor, assess and characterize muscle degeneration. PMID:27478562
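
    A minimal sketch of this kind of HU-based composition analysis (the threshold ranges below are illustrative assumptions, not the published values, and the data is a random stand-in for a segmented muscle volume):

```python
# Classify voxels by Hounsfield Unit band and report volume fractions.
import numpy as np

hu = np.random.default_rng(0).normal(loc=30, scale=40, size=100_000)

bands = {
    "fat":                       (-200, -10),  # illustrative HU range
    "loose connective/atrophic": (-9, 30),     # illustrative HU range
    "normal muscle":             (31, 100),    # illustrative HU range
}
for tissue, (lo, hi) in bands.items():
    frac = np.mean((hu >= lo) & (hu <= hi))
    print(f"{tissue}: {100 * frac:.1f}% of muscle volume")
```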

  4. Distributed telemedicine for the National Information Infrastructure

    SciTech Connect

    Forslund, D.W.; Lee, Seong H.; Reverbel, F.C.

    1997-08-01

    TeleMed is an advanced system that provides a distributed multimedia electronic medical record available over a wide area network. It uses object-based computing, distributed data repositories, advanced graphical user interfaces, and visualization tools along with innovative concept extraction of image information for storing and accessing medical records developed in a separate project from 1994-5. In 1996, we began the transition to Java, extended the infrastructure, and worked to begin deploying TeleMed-like technologies throughout the nation. Other applications are mentioned.

  5. Security-Oriented and Load-Balancing Wireless Data Routing Game in the Integration of Advanced Metering Infrastructure Network in Smart Grid

    SciTech Connect

    He, Fulin; Cao, Yang; Zhang, Jun Jason; Wei, Jiaolong; Zhang, Yingchen; Muljadi, Eduard; Gao, Wenzhong

    2016-11-21

    Since ensuring flexible and reliable data routing is indispensable for the integration of Advanced Metering Infrastructure (AMI) networks, we propose a security-oriented and load-balancing wireless data routing scheme. A novel utility function is designed based on the security routing scheme. We then model the interactive security-oriented routing strategy among meter-data concentrators or smart grid meters as a mixed-strategy network formation game. Finally, the problem results in a stable probabilistic routing scheme obtained with the proposed distributed learning algorithm. One contribution is a study of how different types of applications affect the routing selection strategy and its tendency. Another contribution is that the chosen strategy of our mixed routing can adaptively converge to a new mixed-strategy Nash equilibrium (MSNE) during the learning process in the smart grid.
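
    To illustrate what learning a mixed (probabilistic) routing strategy can look like in code, here is a generic multiplicative-weights dynamic with an invented utility function; it is not the paper's distributed algorithm or its actual utility:

```python
# Learn a probability distribution over routes from repeated play.
import numpy as np

rng = np.random.default_rng(1)
n_routes = 3
weights = np.ones(n_routes)
eta = 0.1  # learning rate

def route_utility(route, load):
    """Hypothetical utility: higher for secure, lightly loaded routes."""
    security = np.array([0.9, 0.6, 0.8])[route]  # invented security scores
    return security - 0.5 * load[route]

load = np.zeros(n_routes)
for step in range(1000):
    probs = weights / weights.sum()         # current mixed strategy
    route = rng.choice(n_routes, p=probs)   # sample a route to use
    load = 0.9 * load                       # old load decays
    load[route] += 0.1                      # chosen route gains load
    weights[route] *= np.exp(eta * route_utility(route, load))

print(np.round(weights / weights.sum(), 3))  # converged mixed strategy
```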

  6. Recent advances in data assimilation in computational geodynamic models

    NASA Astrophysics Data System (ADS)

    Ismail-Zadeh, Alik

    2010-05-01

    The QRV method was most recently introduced in geodynamic modelling (Ismail-Zadeh et al., 2007, 2008; Tantsyrev, 2008; Glisovic et al., 2009). The advances in computational geodynamics and in data assimilation are attracting the interest of the community dealing with lithosphere, mantle and core dynamics.

  7. Green Infrastructure

    EPA Science Inventory

    Large paved surfaces keep rain from infiltrating the soil and recharging groundwater supplies. Alternatively, Green infrastructure uses natural processes to reduce and treat stormwater in place by soaking up and storing water. These systems provide many environmental, social, an...

  8. The Design and Transfer of Advanced Command and Control (C2) Computer-Based Systems

    DTIC Science & Technology

    1980-03-31

    TECHNICAL REPORT 80-02 QUARTERLY TECHNICAL REPORT: THE DESIGN AND TRANSFER OF ADVANCED COMMAND AND CONTROL (C2) COMPUTER-BASED SYSTEMS ARPA...The Tasks/Objectives and/or Purposes of the overall project are connected with the design, development, demonstration and transfer of advanced command and control (C2) computer-based systems; this report covers work in the computer-based design and transfer areas only. The Technical Problems thus

  9. First 3 years of operation of RIACS (Research Institute for Advanced Computer Science) (1983-1985)

    NASA Technical Reports Server (NTRS)

    Denning, P. J.

    1986-01-01

    The focus of the Research Institute for Advanced Computer Science (RIACS) is to explore matches between advanced computing architectures and the processes of scientific research. An architecture evaluation of the MIT static dataflow machine, specification of a graphical language for expressing distributed computations, and specification of an expert system for aiding in grid generation for two-dimensional flow problems were initiated. Research projects for 1984 and 1985 are summarized.

  10. RecceMan: an interactive recognition assistance for image-based reconnaissance: synergistic effects of human perception and computational methods for object recognition, identification, and infrastructure analysis

    NASA Astrophysics Data System (ADS)

    El Bekri, Nadia; Angele, Susanne; Ruckhäberle, Martin; Peinsipp-Byma, Elisabeth; Haelke, Bruno

    2015-10-01

    This paper introduces an interactive recognition assistance system for imaging reconnaissance. This system supports aerial image analysts on missions in two main tasks: object recognition and infrastructure analysis. Object recognition concentrates on the classification of a single object. Infrastructure analysis deals with the description of the components of an infrastructure and the recognition of the infrastructure type (e.g. military airfield). Based on satellite or aerial images, aerial image analysts are able to extract single object features and thereby recognize different object types. It is one of the most challenging tasks in imaging reconnaissance. Currently, no high-potential ATR (automatic target recognition) applications are available; as a consequence, the human observer cannot be replaced entirely. State-of-the-art ATR applications cannot match human perception and interpretation in equal measure. Why is this still such a critical issue? First, cluttered and noisy images make it difficult to automatically extract, classify and identify object types. Second, due to changed warfare and the rise of asymmetric threats, it is nearly impossible to create an underlying data set containing all features, objects or infrastructure types. Many other factors, such as environmental parameters or aspect angles, further complicate the application of ATR. Due to the lack of suitable ATR procedures, the human factor is still important and so far irreplaceable. In order to use the potential benefits of human perception and computational methods in a synergistic way, both are unified in an interactive assistance system. RecceMan® (Reconnaissance Manual) offers two different modes for aerial image analysts on missions: the object recognition mode and the infrastructure analysis mode. The aim of the object recognition mode is to recognize a certain object type based on the object features that originate from the image signatures. The

  11. Generalized Advanced Propeller Analysis System (GAPAS). Volume 2: Computer program user manual

    NASA Technical Reports Server (NTRS)

    Glatt, L.; Crawford, D. R.; Kosmatka, J. B.; Swigart, R. J.; Wong, E. W.

    1986-01-01

    The Generalized Advanced Propeller Analysis System (GAPAS) computer code is described. GAPAS was developed to analyze advanced technology multi-bladed propellers which operate on aircraft with speeds up to Mach 0.8 and altitudes up to 40,000 feet. GAPAS includes technology for analyzing aerodynamic, structural, and acoustic performance of propellers. The computer code was developed for the CDC 7600 computer and is currently available for industrial use on the NASA Langley computer. A description of all the analytical models incorporated in GAPAS is included. Sample calculations are also described as well as users requirements for modifying the analysis system. Computer system core requirements and running times are also discussed.

  12. Recent advances in computational methods for nuclear magnetic resonance data processing.

    PubMed

    Gao, Xin

    2013-02-01

    Although three-dimensional protein structure determination using nuclear magnetic resonance (NMR) spectroscopy is a computationally costly and tedious process that would benefit from advanced computational techniques, it has not garnered much research attention from specialists in bioinformatics and computational biology. In this paper, we review recent advances in computational methods for NMR protein structure determination. We summarize the advantages of and bottlenecks in the existing methods and outline some open problems in the field. We also discuss current trends in NMR technology development and suggest directions for research on future computational methods for NMR.

  13. Computer architectures for computational physics work done by Computational Research and Technology Branch and Advanced Computational Concepts Group

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Slides are reproduced that describe the importance of having high performance number crunching and graphics capability. They also indicate the types of research and development underway at Ames Research Center to ensure that, in the near term, Ames is a smart buyer and user, and in the long-term that Ames knows the best possible solutions for number crunching and graphics needs. The drivers for this research are real computational physics applications of interest to Ames and NASA. They are concerned with how to map the applications, and how to maximize the physics learned from the results of the calculations. The computer graphics activities are aimed at getting maximum information from the three-dimensional calculations by using the real time manipulation of three-dimensional data on the Silicon Graphics workstation. Work is underway on new algorithms that will permit the display of experimental results that are sparse and random, the same way that the dense and regular computed results are displayed.

  14. The Advance of Computing from the Ground to the Cloud

    ERIC Educational Resources Information Center

    Breeding, Marshall

    2009-01-01

    A trend toward the abstraction of computing platforms that has been developing in the broader IT arena over the last few years is just beginning to make inroads into the library technology scene. Cloud computing offers for libraries many interesting possibilities that may help reduce technology costs and increase capacity, reliability, and…

  15. Computational Enzyme Design: Advances, hurdles and possible ways forward

    PubMed Central

    Linder, Mats

    2012-01-01

    This mini review addresses recent developments in computational enzyme design. Successful protocols as well as known issues and limitations are discussed from an energetic perspective. It will be argued that improved results can be obtained by including a dynamic treatment in the design protocol. Finally, a molecular dynamics-based approach for evaluating and refining computational designs is presented. PMID:24688650

  16. The use of regional advance mitigation planning (RAMP) to integrate transportation infrastructure impacts with sustainability; a perspective from the USA

    NASA Astrophysics Data System (ADS)

    Thorne, James H.; Huber, Patrick R.; O'Donoghue, Elizabeth; Santos, Maria J.

    2014-05-01

    Globally, urban areas are expanding, and their regional, spatially cumulative, environmental impacts from transportation projects are not typically assessed. However, incorporation of a Regional Advance Mitigation Planning (RAMP) framework can promote more effective, ecologically sound, and less expensive environmental mitigation. As a demonstration of the first phase of the RAMP framework, we assessed environmental impacts from 181 planned transportation projects in the 19 368 km2 San Francisco Bay Area. We found that 107 road and railroad projects will impact 2411-3490 ha of habitat supporting 30-43 threatened or endangered species. In addition, 1175 ha of impacts to agriculture and native vegetation are expected, as well as 125 crossings of waterways supporting anadromous fish species. The extent of these spatially cumulative impacts shows the need for a regional approach to associated environmental offsets. Many of the impacts were comprised of numerous small projects, where project-by-project mitigation would result in increased transaction costs, land costs, and lost project time. Ecological gains can be made if a regional approach is taken through the avoidance of small-sized reserves and the ability to target parcels for acquisition that fit within conservation planning designs. The methods are straightforward, and can be used in other metropolitan areas.

  17. A Study into Advanced Guidance Laws Using Computational Methods

    DTIC Science & Technology

    2011-12-01

    Indexed excerpts from the report: source-code comments ("% computing aerodynamic forces and moments. Except where noted, all dimensions in MKS system. % Inputs...") and bibliography entries, including [9] R. L. Shaw, Fighter Combat: Tactics and Maneuvering. Annapolis, MD: Naval Institute Press, 1988, and [10] U. S. Shukla and P. R. Mahapatra

  18. Advances in Domain Mapping of Massively Parallel Scientific Computations

    SciTech Connect

    Leland, Robert W.; Hendrickson, Bruce A.

    2015-10-01

    One of the most important concerns in parallel computing is the proper distribution of workload across processors. For most scientific applications on massively parallel machines, the best approach to this distribution is to employ data parallelism; that is, to break the data structures supporting a computation into pieces and then to assign those pieces to different processors. Collectively, these partitioning and assignment tasks comprise the domain mapping problem.
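
    A minimal sketch of the simplest such mapping (contiguous block partitioning of a one-dimensional index space; real domain mappers use graph partitioning to balance work while also minimizing communication between pieces):

```python
# Split n_items indices into n_procs contiguous pieces of near-equal size.
def block_partition(n_items, n_procs):
    """Return (start, end) index ranges, one per processor, sizes within 1."""
    base, extra = divmod(n_items, n_procs)
    ranges, start = [], 0
    for p in range(n_procs):
        size = base + (1 if p < extra else 0)
        ranges.append((start, start + size))
        start += size
    return ranges

print(block_partition(10, 3))  # [(0, 4), (4, 7), (7, 10)]
```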

  19. The cloud services innovation platform- enabling service-based environmental modelling using infrastructure-as-a-service cloud computing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Service oriented architectures allow modelling engines to be hosted over the Internet abstracting physical hardware configuration and software deployments from model users. Many existing environmental models are deployed as desktop applications running on user's personal computers (PCs). Migration ...

  20. Advances in computational design and analysis of airbreathing propulsion systems

    NASA Technical Reports Server (NTRS)

    Klineberg, John M.

    1989-01-01

    The development of commercial and military aircraft depends, to a large extent, on engine manufacturers being able to achieve significant increases in propulsion capability through improved component aerodynamics, materials, and structures. The recent history of propulsion has been marked by efforts to develop computational techniques that can speed up the propulsion design process and produce superior designs. The availability of powerful supercomputers, such as the NASA Numerical Aerodynamic Simulator, and the potential for even higher performance offered by parallel computer architectures, have opened the door to the use of multi-dimensional simulations to study complex physical phenomena in propulsion systems that have previously defied analysis or experimental observation. An overview of several NASA Lewis research efforts is provided that are contributing toward the long-range goal of a numerical test-cell for the integrated, multidisciplinary design, analysis, and optimization of propulsion systems. Specific examples in Internal Computational Fluid Mechanics, Computational Structural Mechanics, Computational Materials Science, and High Performance Computing are cited and described in terms of current capabilities, technical challenges, and future research directions.

  1. Development Model for Research Infrastructures

    NASA Astrophysics Data System (ADS)

    Wächter, Joachim; Hammitzsch, Martin; Kerschke, Dorit; Lauterjung, Jörn

    2015-04-01

    Research infrastructures (RIs) are platforms integrating facilities, resources and services used by the research communities to conduct research and foster innovation. RIs include scientific equipment, e.g., sensor platforms, satellites or other instruments, but also scientific data, sample repositories or archives. E-infrastructures on the other hand provide the technological substratum and middleware to interlink distributed RI components with computing systems and communication networks. The resulting platforms provide the foundation for the design and implementation of RIs and play an increasing role in the advancement and exploitation of knowledge and technology. RIs are regarded as essential to achieve and maintain excellence in research and innovation crucial for the European Research Area (ERA). The implementation of RIs has to be considered as a long-term, complex development process often over a period of 10 or more years. The ongoing construction of Spatial Data Infrastructures (SDIs) provides a good example for the general complexity of infrastructure development processes especially in system-of-systems environments. A set of directives issued by the European Commission provided a framework of guidelines for the implementation processes addressing the relevant content and the encoding of data as well as the standards for service interfaces and the integration of these services into networks. Additionally, a time schedule for the overall construction process has been specified. As a result this process advances with a strong participation of member states and responsible organisations. Today, SDIs provide the operational basis for new digital business processes in both national and local authorities. Currently, the development of integrated RIs in Earth and Environmental Sciences is characterised by the following properties: • A high number of parallel activities on European and national levels with numerous institutes and organisations participating

  2. OPENING REMARKS: SciDAC: Scientific Discovery through Advanced Computing

    NASA Astrophysics Data System (ADS)

    Strayer, Michael

    2005-01-01

    Good morning. Welcome to SciDAC 2005 and San Francisco. SciDAC is all about computational science and scientific discovery. In a large sense, computational science characterizes SciDAC and its intent is change. It transforms both our approach and our understanding of science. It opens new doors and crosses traditional boundaries while seeking discovery. In terms of twentieth century methodologies, computational science may be said to be transformational. There are a number of examples to this point. First are the sciences that encompass climate modeling. The application of computational science has in essence created the field of climate modeling. This community is now international in scope and has provided precision results that are challenging our understanding of our environment. A second example is that of lattice quantum chromodynamics. Lattice QCD, while adding precision and insight to our fundamental understanding of strong interaction dynamics, has transformed our approach to particle and nuclear science. The individual investigator approach has evolved to teams of scientists from different disciplines working side-by-side towards a common goal. SciDAC is also undergoing a transformation. This meeting is a prime example. Last year it was a small programmatic meeting tracking progress in SciDAC. This year, we have a major computational science meeting with a variety of disciplines and enabling technologies represented. SciDAC 2005 should position itself as a new cornerstone for computational science and its impact on science. As we look to the immediate future, FY2006 will bring a new cycle to SciDAC. Most of the program elements of SciDAC will be re-competed in FY2006. The re-competition will involve new instruments for computational science, new approaches for collaboration, as well as new disciplines. There will be new opportunities for virtual experiments in carbon sequestration, fusion, and nuclear power and nuclear waste, as well as collaborations

  3. Multi-Hazard Sustainability: Towards Infrastructure Resilience

    NASA Astrophysics Data System (ADS)

    Lin, T.

    2015-12-01

    Natural and anthropogenic hazards pose significant challenges to civil infrastructure. This presents opportunities in investigating site-specific hazards in structural engineering to aid mitigation and adaptation efforts. This presentation will highlight: (a) recent advances in hazard-consistent ground motion selection methodology for nonlinear dynamic analyses, (b) ongoing efforts in validation of earthquake simulations and their effects on tall buildings, and (c) a pilot study on probabilistic sea-level rise hazard analysis incorporating aleatory and epistemic uncertainties. High performance computing and visualization further facilitate research and outreach to improve resilience under multiple hazards in the face of climate change.
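
    As a toy illustration of the distinction drawn in (c), the sketch below (all parameter values invented) propagates epistemic uncertainty in a mean sea-level rise rate together with aleatory year-to-year variability through a Monte Carlo simulation.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000
        # Epistemic: uncertain mean rise rate (mm/yr), one draw per sample.
        mean_rate = rng.normal(4.0, 1.0, n)
        # Aleatory: variability around the 50-year projection (mm).
        rise_mm = rng.normal(mean_rate * 50, 30.0)

        print(f"median 50-yr rise: {np.median(rise_mm):.0f} mm")
        print(f"P(rise > 300 mm) = {np.mean(rise_mm > 300):.2f}")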

  4. Computer vision-based technologies and commercial best practices for the advancement of the motion imagery tradecraft

    NASA Astrophysics Data System (ADS)

    Phipps, Marja; Capel, David; Srinivasan, James

    2014-06-01

    Motion imagery capabilities within the Department of Defense/Intelligence Community (DoD/IC) have advanced significantly over the last decade, attempting to meet continuously growing data collection, video processing and analytical demands in operationally challenging environments. The motion imagery tradecraft has evolved accordingly, enabling teams of analysts to effectively exploit data and generate intelligence reports across multiple phases in structured Full Motion Video (FMV) Processing Exploitation and Dissemination (PED) cells. Yet now the operational requirements are drastically changing. The exponential growth in motion imagery data continues, but to this the community adds multi-INT data, interoperability with existing and emerging systems, expanded data access, nontraditional users, collaboration, automation, and support for ad hoc configurations beyond the current FMV PED cells. To break from the legacy system lifecycle, we look towards a technology application and commercial adoption model course which will meet these future Intelligence, Surveillance and Reconnaissance (ISR) challenges. In this paper, we explore the application of cutting edge computer vision technology to meet existing FMV PED shortfalls and address future capability gaps. For example, real-time georegistration services developed from computer-vision-based feature tracking, multiple-view geometry, and statistical methods allow the fusion of motion imagery with other georeferenced information sources - providing unparalleled situational awareness. We then describe how these motion imagery capabilities may be readily deployed in a dynamically integrated analytical environment; employing an extensible framework, leveraging scalable enterprise-wide infrastructure and following commercial best practices.
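
    A minimal sketch of the georegistration ingredient described above, using off-the-shelf OpenCV feature matching and RANSAC homography fitting (file names are hypothetical; operational FMV pipelines add tracking, sensor models, and geometric verification):

        import cv2
        import numpy as np

        frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)      # video frame
        base = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)   # georeferenced image

        orb = cv2.ORB_create(2000)
        kf, df = orb.detectAndCompute(frame, None)
        kb, db = orb.detectAndCompute(base, None)

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(df, db), key=lambda m: m.distance)[:200]

        src = np.float32([kf[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kb[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        print("frame-to-reference homography:\n", H)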

  5. Advanced Simulation and Computing Co-Design Strategy

    SciTech Connect

    Ang, James A.; Hoang, Thuc T.; Kelly, Suzanne M.; McPherson, Allen; Neely, Rob

    2015-11-01

    This ASC Co-design Strategy lays out the full continuum and components of the co-design process, based on what we have experienced thus far and what we wish to do more in the future to meet the program’s mission of providing high performance computing (HPC) and simulation capabilities for NNSA to carry out its stockpile stewardship responsibility.

  6. Connecting Performance Analysis and Visualization to Advance Extreme Scale Computing

    SciTech Connect

    Bremer, Peer-Timo; Mohr, Bernd; Schulz, Martin; Pascucci, Valerio; Gamblin, Todd; Brunst, Holger

    2015-07-29

    The characterization, modeling, analysis, and tuning of software performance has been a central topic in High Performance Computing (HPC) since its early beginnings. The overall goal is to make HPC software run faster on particular hardware, either through better scheduling, on-node resource utilization, or more efficient distributed communication.

  7. Advanced Computer Image Generation Techniques Exploiting Perceptual Characteristics. Final Report.

    ERIC Educational Resources Information Center

    Stenger, Anthony J.; And Others

    This study suggests and identifies computer image generation (CIG) algorithms for visual simulation that improve the training effectiveness of CIG simulators and identifies areas of basic research in visual perception that are significant for improving CIG technology. The first phase of the project entailed observing three existing CIG simulators.…

  8. MAX - An advanced parallel computer for space applications

    NASA Technical Reports Server (NTRS)

    Lewis, Blair F.; Bunker, Robert L.

    1991-01-01

    MAX is a fault-tolerant multicomputer hardware and software architecture designed to meet the needs of NASA spacecraft systems. It consists of conventional computing modules (computers) connected via a dual network topology. One network is used to transfer data among the computers and between computers and I/O devices. This network's topology is arbitrary. The second network operates as a broadcast medium for operating system synchronization messages and supports the operating system's Byzantine resilience. A fully distributed operating system supports multitasking in an asynchronous event and data driven environment. A large grain dataflow paradigm is used to coordinate the multitasking and provide easy control of concurrency. It is the basis of the system's fault tolerance and allows both static and dynamical location of tasks. Redundant execution of tasks with software voting of results may be specified for critical tasks. The dataflow paradigm also supports simplified software design, test and maintenance. A unique feature is a method for reliably patching code in an executing dataflow application.
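
    The redundant-execution-with-voting idea reads naturally as code; the sketch below (hypothetical names, not MAX's actual implementation) runs a critical task on several simulated modules and accepts the majority result.

        from collections import Counter

        def vote(results):
            # Majority vote over redundant task outputs; no majority => fault.
            value, count = Counter(results).most_common(1)[0]
            if count <= len(results) // 2:
                raise RuntimeError("no majority -- task flagged as faulty")
            return value

        def critical_task(x):
            return x * x  # stand-in for a critical computation

        replicas = [critical_task(7) for _ in range(3)]  # three redundant runs
        print(vote(replicas))  # -> 49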

  9. Advanced Computational Aeroacoustics Methods for Fan Noise Prediction

    NASA Technical Reports Server (NTRS)

    Envia, Edmane (Technical Monitor); Tam, Christopher

    2003-01-01

    Direct computation of fan noise is presently not possible. One of the major difficulties is the geometrical complexity of the problem. In the case of fan noise, the blade geometry is critical to the loading on the blade and hence the intensity of the radiated noise. The precise geometry must be incorporated into the computation. In computational fluid dynamics (CFD), there are two general ways to handle problems with complex geometry. One way is to use unstructured grids. The other is to use body-fitted overset grids. In the overset grid method, accurate data transfer is of utmost importance. For acoustic computation, it is not clear that the currently used data transfer methods are sufficiently accurate as not to contaminate the very small amplitude acoustic disturbances. In CFD, low order schemes are, invariably, used in conjunction with unstructured grids. However, low order schemes are known to be numerically dispersive and dissipative; such errors are extremely undesirable for acoustic wave problems. The objective of this project is to develop a high order unstructured grid Dispersion-Relation-Preserving (DRP) scheme that would minimize numerical dispersion and dissipation errors. This report contains the results of the funded portion of the project. A DRP scheme on an unstructured grid has been developed; it is constructed in the wave number space. The characteristics of the scheme can be improved by the inclusion of additional constraints. Stability of the scheme has been investigated, and it can be improved by adopting an upwinding strategy.
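
    For context, a DRP scheme of the standard Tam-Webb type (sketched here in its one-dimensional finite-difference form, not the project's unstructured-grid variant) approximates

        \frac{\partial u}{\partial x}\Big|_{x_0} \approx \frac{1}{\Delta x} \sum_{j=-N}^{N} a_j \, u(x_0 + j\,\Delta x),
        \qquad
        \overline{k}\,\Delta x = -\mathrm{i} \sum_{j=-N}^{N} a_j \, e^{\mathrm{i} j k \Delta x},

    and chooses the coefficients a_j to minimize the integrated mismatch between the effective and exact wavenumbers,

        E = \int_{-\eta}^{\eta} \bigl| \overline{k}\,\Delta x - k\,\Delta x \bigr|^{2} \, \mathrm{d}(k\,\Delta x),

    subject to a few Taylor-accuracy constraints. Optimizing in wavenumber space in this way is what suppresses the dispersion and dissipation errors mentioned above.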

  10. Project T.E.A.M. (Technical Education Advancement Modules). Introduction to Computers.

    ERIC Educational Resources Information Center

    Ellis, Brenda

    This instructional guide, one of a series developed by the Technical Education Advancement Modules (TEAM) project, is a 3-hour introduction to computers. The purpose is to develop the following competencies: (1) orientation to data processing; (2) use of data entry devices; (3) use of computer menus; and (4) entry of data with accuracy and…

  11. Advanced Computational Thermal Studies and their Assessment for Supercritical-Pressure Reactors (SCRs)

    SciTech Connect

    D. M. McEligot; J. Y. Yoo; J. S. Lee; S. T. Ro; E. Lurien; S. O. Park; R. H. Pletcher; B. L. Smith; P. Vukoslavcevic; J. M. Wallace

    2009-04-01

    The goal of this laboratory / university collaboration of coupled computational and experimental studies is the improvement of predictive methods for supercritical-pressure reactors. The general objective is to develop the supporting knowledge needed for advanced computational techniques for the technology development of the concepts and their safety systems.

  12. Teaching Advanced Concepts in Computer Networks: VNUML-UM Virtualization Tool

    ERIC Educational Resources Information Center

    Ruiz-Martinez, A.; Pereniguez-Garcia, F.; Marin-Lopez, R.; Ruiz-Martinez, P. M.; Skarmeta-Gomez, A. F.

    2013-01-01

    In the teaching of computer networks the main problem that arises is the high price and limited number of network devices the students can work with in the laboratories. Nowadays, with virtualization we can overcome this limitation. In this paper, we present a methodology that allows students to learn advanced computer network concepts through…

  13. Advances on modelling of ITER scenarios: physics and computational challenges

    NASA Astrophysics Data System (ADS)

    Giruzzi, G.; Garcia, J.; Artaud, J. F.; Basiuk, V.; Decker, J.; Imbeaux, F.; Peysson, Y.; Schneider, M.

    2011-12-01

    Methods and tools for design and modelling of tokamak operation scenarios are discussed with particular application to ITER advanced scenarios. Simulations of hybrid and steady-state scenarios performed with the integrated tokamak modelling suite of codes CRONOS are presented. The advantages of a possible steady-state scenario based on cyclic operations, alternating phases of positive and negative loop voltage, with no magnetic flux consumption on average, are discussed. For regimes in which current alignment is an issue, a general method for scenario design is presented, based on the characteristics of the poloidal current density profile.

  14. National facility for advanced computational science: A sustainable path to scientific discovery

    SciTech Connect

    Simon, Horst; Kramer, William; Saphir, William; Shalf, John; Bailey, David; Oliker, Leonid; Banda, Michael; McCurdy, C. William; Hules, John; Canning, Andrew; Day, Marc; Colella, Philip; Serafini, David; Wehner, Michael; Nugent, Peter

    2004-04-02

    Lawrence Berkeley National Laboratory (Berkeley Lab) proposes to create a National Facility for Advanced Computational Science (NFACS) and to establish a new partnership between the American computer industry and a national consortium of laboratories, universities, and computing facilities. NFACS will provide leadership-class scientific computing capability to scientists and engineers nationwide, independent of their institutional affiliation or source of funding. This partnership will bring into existence a new class of computational capability in the United States that is optimal for science and will create a sustainable path towards petaflops performance.

  15. Using Advanced Computer Vision Algorithms on Small Mobile Robots

    DTIC Science & Technology

    2006-04-20

    Record excerpt: a classification algorithm developed at the lab has been used to successfully detect the license plates of automobiles in motion in real time, with test results shown for a variety of environments. When detecting the make and model of automobiles, SIFT can be used to achieve very high detection rates at the expense of a hefty performance cost. KEYWORDS: robotics, computer vision, car/license plate detection, SIFT.

  16. Cogeneration computer model assessment: Advanced cogeneration research study

    NASA Technical Reports Server (NTRS)

    Rosenberg, L.

    1983-01-01

    Cogeneration computer simulation models were assessed to recommend the most desirable models or their components for use by the Southern California Edison Company (SCE) in evaluating potential cogeneration projects. Existing cogeneration modeling capabilities are described, preferred models are identified, and an approach to the development of a code which will best satisfy SCE requirements is recommended. Five models (CELCAP, COGEN 2, CPA, DEUS, and OASIS) are recommended for further consideration.

  17. Advanced Computational Methods for Security Constrained Financial Transmission Rights

    SciTech Connect

    Kalsi, Karanjit; Elbert, Stephen T.; Vlachopoulou, Maria; Zhou, Ning; Huang, Zhenyu

    2012-07-26

    Financial Transmission Rights (FTRs) are financial insurance tools to help power market participants reduce price risks associated with transmission congestion. FTRs are issued through a process of solving a constrained optimization problem with the objective of maximizing the FTR social welfare under power flow security constraints. Security constraints for different FTR categories (monthly, seasonal, or annual) are usually coupled, and the number of constraints increases exponentially with the number of categories. Commercial software for FTR calculation can only provide limited categories of FTRs due to the inherent computational challenges mentioned above. In this paper, first an innovative mathematical reformulation of the FTR problem is presented which dramatically improves the computational efficiency of the optimization problem. After having re-formulated the problem, a novel non-linear dynamic system (NDS) approach is proposed to solve the optimization problem. The new formulation and performance of the NDS solver are benchmarked against widely used linear programming (LP) solvers like CPLEX™ and tested on both standard IEEE test systems and large-scale systems using data from the Western Electricity Coordinating Council (WECC). The performance of the NDS is demonstrated to be comparable to, and in some cases to outperform, the widely used CPLEX algorithms. The proposed formulation and NDS-based solver are also easily parallelizable, enabling further computational improvement.
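
    A toy version of the underlying optimization (an invented three-line network, not the paper's reformulation or WECC data) can be written as a linear program: maximize bid-weighted FTR awards subject to line-flow security limits.

        import numpy as np
        from scipy.optimize import linprog

        bids = np.array([30.0, 25.0, 20.0])      # $/MW bid for each FTR request
        ptdf = np.array([[0.5, 0.3, -0.2],       # power transfer distribution factors
                         [0.2, -0.4, 0.6],       # mapping awarded MW to line flows
                         [-0.3, 0.1, 0.4]])
        limits = np.array([100.0, 80.0, 90.0])   # MW limits on three lines

        # linprog minimizes, so negate the objective; enforce |ptdf @ x| <= limits.
        res = linprog(c=-bids,
                      A_ub=np.vstack([ptdf, -ptdf]),
                      b_ub=np.concatenate([limits, limits]),
                      bounds=[(0, 200)] * 3)
        print("awarded MW per FTR:", res.x.round(1))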

  18. Parallel computing in genomic research: advances and applications

    PubMed Central

    Ocaña, Kary; de Oliveira, Daniel

    2015-01-01

    Today’s genomic experiments have to process the so-called “biological big data” that is now reaching the size of terabytes and petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analysis of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article brings a systematic review of literature that surveys the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that can be considered by scientists when running their genomic experiments to benefit from parallelism techniques and HPC capabilities. PMID:26604801
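
    The basic pattern the review surveys can be shown in a few lines: split sequence records across worker processes and compute a per-read statistic in parallel (toy reads; production pipelines distribute files or cluster/cloud tasks instead).

        from multiprocessing import Pool

        def gc_content(seq):
            return (seq.count("G") + seq.count("C")) / len(seq)

        reads = ["ACGTGGCA", "TTACGCGT", "GGGCCCAT", "ATATATCG"]

        if __name__ == "__main__":
            with Pool(processes=4) as pool:
                print(pool.map(gc_content, reads))  # one result per read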

  19. Parallel computing in genomic research: advances and applications.

    PubMed

    Ocaña, Kary; de Oliveira, Daniel

    2015-01-01

    Today's genomic experiments have to process the so-called "biological big data" that is now reaching the size of terabytes and petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analysis of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article brings a systematic review of literature that surveys the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that can be considered by scientists when running their genomic experiments to benefit from parallelism techniques and HPC capabilities.

  20. Vision 20/20: Automation and advanced computing in clinical radiation oncology

    SciTech Connect

    Moore, Kevin L. Moiseenko, Vitali; Kagadis, George C.; McNutt, Todd R.; Mutic, Sasa

    2014-01-15

    This Vision 20/20 paper considers what computational advances are likely to be implemented in clinical radiation oncology in the coming years and how the adoption of these changes might alter the practice of radiotherapy. Four main areas of likely advancement are explored: cloud computing, aggregate data analyses, parallel computation, and automation. As these developments promise both new opportunities and new risks to clinicians and patients alike, the potential benefits are weighed against the hazards associated with each advance, with special considerations regarding patient safety under new computational platforms and methodologies. While the concerns of patient safety are legitimate, the authors contend that progress toward next-generation clinical informatics systems will bring about extremely valuable developments in quality improvement initiatives, clinical efficiency, outcomes analyses, data sharing, and adaptive radiotherapy.

  1. A Cloud-Based Infrastructure for Feedback-Driven Training and Image Recognition.

    PubMed

    Abedini, Mani; von Cavallar, Stefan; Chakravorty, Rajib; Davis, Matthew; Garnavi, Rahil

    2015-01-01

    Advanced techniques in machine learning combined with scalable "cloud" computing infrastructure are driving the creation of new and innovative health diagnostic applications. We describe a service and application for performing image training and recognition, tailored to dermatology and melanoma identification. The system implements new machine learning approaches to provide a feedback-driven training loop. This training sequence enhances classification performance by incrementally retraining the classifier model from expert responses. To easily provide this application and associated web service to clinical practices, we also describe a scalable cloud infrastructure, deployable in public cloud infrastructure and private, on-premise systems.
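
    The feedback-driven training loop can be sketched with any incrementally trainable classifier; below, a scikit-learn SGDClassifier stands in for the paper's dermatology-specific models, and random vectors stand in for image features.

        import numpy as np
        from sklearn.linear_model import SGDClassifier

        rng = np.random.default_rng(0)
        model = SGDClassifier()

        # Initial batch of labeled feature vectors (stand-ins for image features).
        X0, y0 = rng.normal(size=(100, 8)), rng.integers(0, 2, 100)
        model.partial_fit(X0, y0, classes=[0, 1])

        # Each round of expert feedback refines the model without full retraining.
        for _ in range(5):
            X_fb, y_fb = rng.normal(size=(10, 8)), rng.integers(0, 2, 10)
            model.partial_fit(X_fb, y_fb)

        print("training accuracy:", model.score(X0, y0))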

  2. First Responders Guide to Computer Forensics: Advanced Topics

    DTIC Science & Technology

    2005-09-01

    server of the sender, the mail server of the receiver, and the computer that receives the email. Assume that Alice wants to send an email to her friend Bob. The SMTP exchange then proceeds along these lines:

        250 ... pleased to meet you
        MAIL FROM: alice.price@alphanet.com
        250 alice.price@alphanet.com... Sender ok
        RCPT TO: bob.doe@betanet.com
        250 bob.doe@betanet.com... Recipient ok
        DATA
        354 Please start mail input
        From: alice.price@alphanet.com
        To: bob.doe@betanet.com
        Subject: Lunch

        Bob, It was good

  3. Advanced Computational Methods for Thermal Radiative Heat Transfer

    SciTech Connect

    Tencer, John; Carlberg, Kevin Thomas; Larsen, Marvin E.; Hogan, Roy E.

    2016-10-01

    Participating media radiation (PMR) calculations in weapon safety analyses for abnormal thermal environments are too costly to perform routinely. This cost may be substantially reduced by applying reduced order modeling (ROM) techniques. The application of ROM to PMR is a new and unique approach for this class of problems. This approach was investigated by the authors and shown to provide significant reductions in the computational expense associated with typical PMR simulations. Once this technology is migrated into production heat transfer analysis codes, this capability will enable the routine use of PMR heat transfer in higher-fidelity simulations of weapon response in fire environments.
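
    As a generic illustration of projection-based ROM (not necessarily the authors' formulation), the full-order state is approximated in a low-dimensional basis and the governing system is projected onto it:

        u \approx V q, \qquad V \in \mathbb{R}^{N \times r}, \; r \ll N,
        \qquad
        (V^{\mathsf{T}} A V)\, q = V^{\mathsf{T}} b,

    so that each solve involves r unknowns instead of N, which is the source of the cost reductions reported above.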

  4. Advanced Computational Framework for Environmental Management ZEM, Version 1.x

    SciTech Connect

    Vesselinov, Velimir V.; O'Malley, Daniel; Pandey, Sachin

    2016-11-04

    Typically, environmental management problems require analysis of large and complex data sets originating from concurrent data streams with different data collection frequencies and pedigrees. These big data sets require on-the-fly integration into a series of models of varying complexity, in which the data are applied as soft and hard model constraints; this is needed to provide fast iterative model analyses based on the latest available data to guide decision-making. Furthermore, the data and models are associated with uncertainties, both probabilistic (e.g., measurement errors) and non-probabilistic (unknowns, e.g., alternative conceptual models characterizing site conditions). To address all of these issues, we have developed ZEM, an integrated framework for real-time data and model analyses for environmental decision-making. The framework allows for seamless, on-the-fly integration of data and modeling results for robust and scientifically defensible decision-making, applying advanced decision analysis tools such as Bayesian Information-Gap Decision Theory (BIG-DT). The framework also includes advanced optimization methods capable of dealing with a large number of unknown model parameters, and surrogate (reduced order) modeling capabilities based on support vector regression techniques. The framework is coded in Julia, a state-of-the-art high-performance programming language (http://julialang.org). ZEM is open-source, released under the GPL v3 license, and can be applied to any environmental management site.
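
    One ingredient listed above, a surrogate model built with support vector regression, is easy to sketch (shown in Python with an invented toy response; ZEM itself is written in Julia and wraps real transport models):

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(1)
        params = rng.uniform(0, 1, size=(200, 3))        # uncertain model parameters
        response = np.sin(params @ [2.0, 1.0, 0.5])      # stand-in for expensive model output

        surrogate = SVR(kernel="rbf", C=10.0).fit(params, response)
        new_points = rng.uniform(0, 1, size=(5, 3))
        print("surrogate predictions:", surrogate.predict(new_points).round(3))

    Once trained, the surrogate can be evaluated thousands of times inside optimization or decision analysis at negligible cost compared with the original model.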

  5. FabSim: Facilitating computational research through automation on large-scale and distributed e-infrastructures

    NASA Astrophysics Data System (ADS)

    Groen, Derek; Bhati, Agastya P.; Suter, James; Hetherington, James; Zasada, Stefan J.; Coveney, Peter V.

    2016-10-01

    We present FabSim, a toolkit developed to simplify a range of computational tasks for researchers in diverse disciplines. FabSim is flexible, adaptable, and allows users to perform a wide range of tasks with ease. It also provides a systematic way to automate the use of resources, including HPC and distributed machines, and to make tasks easier to repeat by recording contextual information. To demonstrate this, we present three use cases where FabSim has enhanced our research productivity. These include simulating cerebrovascular bloodflow, modelling clay-polymer nanocomposites across multiple scales, and calculating ligand-protein binding affinities.

  6. Computational Efforts in Support of Advanced Coal Research

    SciTech Connect

    Suljo Linic

    2006-08-17

    The focus of this project was to employ first principles computational methods to study the underlying molecular elementary processes that govern hydrogen diffusion through Pd membranes, as well as the elementary processes that govern the CO- and S-poisoning of these membranes. Our computational methodology integrated a multiscale hierarchical modeling approach, wherein a molecular understanding of the interactions between various species is gained from ab-initio quantum chemical Density Functional Theory (DFT) calculations, while a mesoscopic statistical mechanical model like kinetic Monte Carlo is employed to predict key macroscopic membrane properties such as permeability. The key developments are: (1) We have systematically coupled the ab initio calculations with kinetic Monte Carlo (KMC) simulations to model hydrogen diffusion through Pd-based membranes. The tracer diffusivity of hydrogen atoms through the bulk Pd lattice predicted by the KMC simulations is in excellent agreement with experiments. (2) The KMC simulations of dissociative adsorption of H₂ over the Pd(111) surface indicate that for thin membranes (less than 10 μm thick), the diffusion of hydrogen from the surface to the first subsurface layer is rate limiting. (3) Sulfur poisons the Pd surface by altering the electronic structure of the Pd atoms in the vicinity of the S atom. The KMC simulations indicate that increasing sulfur coverage drastically reduces the hydrogen coverage on the Pd surface and hence the driving force for diffusion through the membrane.
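
    The flavor of a kinetic Monte Carlo diffusion estimate can be conveyed in a few lines (rate and hop distance are invented placeholders, not the report's DFT-derived values; a single walker is shown, whereas real estimates average many):

        import numpy as np

        rng = np.random.default_rng(2)
        rate = 1.0e9       # hop rate per direction, 1/s (hypothetical)
        a = 2.75e-10       # hop distance, m (hypothetical)
        steps = 200_000

        t = steps / (6 * rate)            # mean elapsed time for `steps` hops (6 directions)
        hops = rng.integers(0, 6, steps)  # pick a lattice direction per hop
        moves = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                          [0, -1, 0], [0, 0, 1], [0, 0, -1]]) * a
        r = moves[hops].sum(axis=0)       # net displacement of the walker

        D = (r @ r) / (6 * t)             # Einstein relation: <r^2> = 6 D t
        print(f"single-walker diffusivity estimate: {D:.2e} m^2/s")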

  7. Activities and operations of the Advanced Computing Research Facility, July-October 1986

    SciTech Connect

    Pieper, G.W.

    1986-01-01

    Research activities and operations of the Advanced Computing Research Facility (ACRF) at Argonne National Laboratory are discussed for the period from July 1986 through October 1986. The facility is currently supported by the Department of Energy and is operated by the Mathematics and Computer Science Division at Argonne. Over the past four-month period, a new commercial multiprocessor, the Intel iPSC-VX/d4 hypercube, was installed. In addition, four other commercial multiprocessors continue to be available for research - an Encore Multimax, a Sequent Balance 21000, an Alliant FX/8, and an Intel iPSC/d5 - as well as a locally designed multiprocessor, the Lemur. These machines are being actively used by scientists at Argonne and throughout the nation in a wide variety of projects concerning computer systems with parallel and vector architectures. A variety of classes, workshops, and seminars have been sponsored to train researchers on computing techniques for the advanced computer systems at the facility. For example, courses were offered on writing programs for parallel computer systems, and the facility hosted the first annual Alliant users group meeting. A Sequent users group meeting and a two-day workshop on performance evaluation of parallel computers and programs are also being organized.

  8. Advances in x-ray computed microtomography at the NSLS

    SciTech Connect

    Dowd, B.A.; Andrews, A.B.; Marr, R.B.; Siddons, D.P.; Jones, K.W.; Peskin, A.M.

    1998-08-01

    The X-Ray Computed Microtomography workstation at beamline X27A at the NSLS has been utilized by scientists from a broad range of disciplines, from industrial materials processing to environmental science. The most recent applications are presented here, as well as a description of the facility that has evolved to accommodate a wide variety of materials and sample sizes. One of the most exciting new developments reported here resulted from a pursuit of faster reconstruction techniques. A Fast Filtered Back Transform (FFBT) reconstruction program has been developed and implemented that is based on a refinement of the gridding algorithm first developed for use with radio astronomical data. This program has reduced the reconstruction time to 8.5 s for a 929 × 929 pixel² slice on an R10,000 CPU, more than an 8× reduction compared with the Filtered Back-Projection method.
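
    For comparison with the gridding-based FFBT approach described above, the standard filtered back-projection baseline takes only a few lines with scikit-image (phantom and angles are illustrative):

        import numpy as np
        from skimage.data import shepp_logan_phantom
        from skimage.transform import radon, iradon

        image = shepp_logan_phantom()
        theta = np.linspace(0.0, 180.0, 180, endpoint=False)

        sinogram = radon(image, theta=theta)     # simulate projections
        recon = iradon(sinogram, theta=theta)    # filtered back-projection
        print("RMS reconstruction error:", np.sqrt(np.mean((recon - image) ** 2)))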

  9. ADVANCES IN X-RAY COMPUTED MICROTOMOGRAPHY AT THE NSLS.

    SciTech Connect

    DOWD,B.A.

    1998-08-07

    The X-Ray Computed Microtomography workstation at beamline X27A at the NSLS has been utilized by scientists from a broad range of disciplines, from industrial materials processing to environmental science. The most recent applications are presented here, as well as a description of the facility that has evolved to accommodate a wide variety of materials and sample sizes. One of the most exciting new developments reported here resulted from a pursuit of faster reconstruction techniques. A Fast Filtered Back Transform (FFBT) reconstruction program has been developed and implemented that is based on a refinement of the "gridding" algorithm first developed for use with radio astronomical data. This program has reduced the reconstruction time to 8.5 s for a 929 × 929 pixel² slice on an R10,000 CPU, more than an 8× reduction compared with the Filtered Back-Projection method.

  10. National Energy Research Scientific Computing Center (NERSC): Advancing the frontiers of computational science and technology

    SciTech Connect

    Hules, J.

    1996-11-01

    The National Energy Research Scientific Computing Center (NERSC) provides researchers with high-performance computing tools to tackle science's biggest and most challenging problems. Founded in 1974 by DOE/ER, the Controlled Thermonuclear Research Computer Center was the first unclassified supercomputer center and was the model for those that followed. Over the years the center's name was changed to the National Magnetic Fusion Energy Computer Center and then to NERSC; it was relocated to LBNL. NERSC, one of the largest unclassified scientific computing resources in the world, is the principal provider of general-purpose computing services to DOE/ER programs: Magnetic Fusion Energy, High Energy and Nuclear Physics, Basic Energy Sciences, Health and Environmental Research, and the Office of Computational and Technology Research. NERSC users are a diverse community located throughout the US and in several foreign countries. This brochure describes: the NERSC advantage, its computational resources and services, future technologies, scientific resources, and computational science of scale (interdisciplinary research over a decade or longer; examples: combustion in engines, waste management chemistry, global climate change modeling).

  11. Making green infrastructure healthier infrastructure.

    PubMed

    Lõhmus, Mare; Balbus, John

    2015-01-01

    Increasing urban green and blue structure is often pointed out to be critical for sustainable development and climate change adaptation, which has led to the rapid expansion of greening activities in cities throughout the world. This process is likely to have a direct impact on the citizens' quality of life and public health. However, alongside numerous benefits, green and blue infrastructure also has the potential to create unexpected, undesirable side-effects for health. This paper considers several potential harmful public health effects that might result from increased urban biodiversity, urban bodies of water, and urban tree cover projects. It does so with the intent of improving awareness and motivating preventive measures when designing and initiating such projects. Although biodiversity has been found to be associated with physiological benefits for humans in several studies, efforts to increase the biodiversity of urban environments may also promote the introduction and survival of vector or host organisms for infectious pathogens with resulting spread of a variety of diseases. In addition, more green connectivity in urban areas may potentiate the role of rats and ticks in the spread of infectious diseases. Bodies of water and wetlands play a crucial role in the urban climate adaptation and mitigation process. However, they also provide habitats for mosquitoes and toxic algal blooms. Finally, increasing urban green space may also adversely affect citizens allergic to pollen. Increased awareness of the potential hazards of urban green and blue infrastructure should not be a reason to stop or scale back projects. Instead, incorporating public health awareness and interventions into urban planning at the earliest stages can help ensure that green and blue infrastructure achieves its full potential for health promotion.

  12. Making green infrastructure healthier infrastructure

    PubMed Central

    Lõhmus, Mare; Balbus, John

    2015-01-01

    Increasing urban green and blue structure is often pointed out to be critical for sustainable development and climate change adaptation, which has led to the rapid expansion of greening activities in cities throughout the world. This process is likely to have a direct impact on the citizens’ quality of life and public health. However, alongside numerous benefits, green and blue infrastructure also has the potential to create unexpected, undesirable side-effects for health. This paper considers several potential harmful public health effects that might result from increased urban biodiversity, urban bodies of water, and urban tree cover projects. It does so with the intent of improving awareness and motivating preventive measures when designing and initiating such projects. Although biodiversity has been found to be associated with physiological benefits for humans in several studies, efforts to increase the biodiversity of urban environments may also promote the introduction and survival of vector or host organisms for infectious pathogens with resulting spread of a variety of diseases. In addition, more green connectivity in urban areas may potentiate the role of rats and ticks in the spread of infectious diseases. Bodies of water and wetlands play a crucial role in the urban climate adaptation and mitigation process. However, they also provide habitats for mosquitoes and toxic algal blooms. Finally, increasing urban green space may also adversely affect citizens allergic to pollen. Increased awareness of the potential hazards of urban green and blue infrastructure should not be a reason to stop or scale back projects. Instead, incorporating public health awareness and interventions into urban planning at the earliest stages can help ensure that green and blue infrastructure achieves its full potential for health promotion. PMID:26615823

  13. Modeling, Simulation and Analysis of Public Key Infrastructure

    NASA Technical Reports Server (NTRS)

    Liu, Yuan-Kwei; Tuey, Richard; Ma, Paul (Technical Monitor)

    1998-01-01

    Security is an essential part of network communication. Advances in cryptography have provided solutions to many network security requirements. Public Key Infrastructure (PKI) is the foundation of cryptography applications. The main objective of this research is to design a model to simulate a reliable, scalable, manageable, and high-performance public key infrastructure. We built a model to simulate the NASA public key infrastructure using SimProcess and MATLAB software. The simulation extends from the top level down to the computations needed for encryption, decryption, digital signatures, and a secure web server; the secure web server application could be utilized in wireless communications. The results of the simulation are analyzed and confirmed using queueing theory.
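
    The queueing-theory flavor of such an analysis can be illustrated with a back-of-envelope M/M/1 model of a certificate server (rates invented; the paper's simulation is far more detailed):

        lam = 40.0   # signing/verification requests arriving per second
        mu = 50.0    # requests the server can process per second

        rho = lam / mu               # server utilization
        W = 1.0 / (mu - lam)         # mean time in system, M/M/1 (s)
        Lq = rho**2 / (1.0 - rho)    # mean number waiting in queue

        print(f"utilization={rho:.0%}, mean response={W * 1000:.0f} ms, "
              f"mean queue length={Lq:.1f}")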

  14. CART V: recent advancements in computer-aided camouflage assessment

    NASA Astrophysics Data System (ADS)

    Müller, Thomas; Müller, Markus

    2011-05-01

    In order to facilitate systematic, computer-aided improvements of camouflage and concealment assessment methods, the software system CART (Camouflage Assessment in Real-Time) was built for the camouflage assessment of objects in multispectral image sequences (see contributions to SPIE 2007-2010 [1], [2], [3], [4]). It comprises semi-automatic marking of target objects (ground truth generation), including their propagation over the image sequence, and evaluation via user-defined feature extractors, as well as methods to assess an object's movement conspicuity. In this fifth part of an annual series at the SPIE conference in Orlando, this paper presents the enhancements of the past year and addresses the camouflage assessment of static and moving objects in multispectral image data that can show noise or image artefacts. The presented methods fathom the correlations between image processing and camouflage assessment. A novel algorithm is presented, based on template matching, to assess the structural inconspicuity of an object objectively and quantitatively. The results can easily be combined with an MTI (moving target indication) based movement conspicuity assessment function in order to explore the influence of object movement on the camouflage effect in different environments. As the results show, the presented methods contribute a significant benefit to the field of camouflage assessment.
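
    The template-matching idea behind the structural-inconspicuity measure can be sketched with OpenCV (file names hypothetical; CART's actual assessment combines this with ground truth and further feature extractors):

        import cv2

        scene = cv2.imread("background.png", cv2.IMREAD_GRAYSCALE)
        chip = cv2.imread("target_chip.png", cv2.IMREAD_GRAYSCALE)

        # Normalized cross-correlation of the object chip against the scene:
        # a high best score means the object's structure resembles its
        # surroundings, i.e., better structural camouflage.
        scores = cv2.matchTemplate(scene, chip, cv2.TM_CCOEFF_NORMED)
        _, best, _, loc = cv2.minMaxLoc(scores)
        print(f"best background match {best:.2f} at {loc}")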

  15. Block sparse Cholesky algorithms on advanced uniprocessor computers

    SciTech Connect

    Ng, E.G.; Peyton, B.W.

    1991-12-01

    As with many other linear algebra algorithms, devising a portable implementation of sparse Cholesky factorization that performs well on the broad range of computer architectures currently available is a formidable challenge. Even after limiting our attention to machines with only one processor, as we have done in this report, there are still several interesting issues to consider. For dense matrices, it is well known that block factorization algorithms are the best means of achieving this goal. We take this approach for sparse factorization as well. This paper has two primary goals. First, we examine two sparse Cholesky factorization algorithms, the multifrontal method and a blocked left-looking sparse Cholesky method, in a systematic and consistent fashion, both to illustrate the strengths of the blocking techniques in general and to obtain a fair evaluation of the two approaches. Second, we assess the impact of various implementation techniques on time and storage efficiency, paying particularly close attention to the work-storage requirement of the two methods and their variants.
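
    A dense toy version of the blocked left-looking idea (illustrative only; the report's methods operate on sparse matrices with elimination trees and supernodes):

        import numpy as np
        from scipy.linalg import cholesky, solve_triangular

        def blocked_cholesky(A, nb=2):
            # Return lower-triangular L with A = L @ L.T, factored block column
            # by block column, applying updates from previously factored panels.
            n = A.shape[0]
            L = np.zeros_like(A)
            for k in range(0, n, nb):
                k2 = min(k + nb, n)
                # Left-looking: update the current panel from prior block columns.
                panel = A[k:, k:k2] - L[k:, :k] @ L[k:k2, :k].T
                L[k:k2, k:k2] = cholesky(panel[:k2 - k], lower=True)
                if k2 < n:
                    L[k2:, k:k2] = solve_triangular(
                        L[k:k2, k:k2], panel[k2 - k:].T, lower=True).T
            return L

        M = np.random.default_rng(3).normal(size=(6, 6))
        A = M @ M.T + 6 * np.eye(6)       # symmetric positive definite test matrix
        L = blocked_cholesky(A)
        print("max factorization error:", np.abs(L @ L.T - A).max())

    Blocking matters because the panel updates become dense matrix-matrix products, which run near peak speed on cached uniprocessors.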

  16. Advances in computed radiography systems and their physical imaging characteristics.

    PubMed

    Cowen, A R; Davies, A G; Kengyelics, S M

    2007-12-01

    Radiological imaging is progressing towards an all-digital future, across the spectrum of medical imaging techniques. Computed radiography (CR) has provided a ready pathway from screen film to digital radiography and a convenient entry point to PACS. This review briefly revisits the principles of modern CR systems and their physical imaging characteristics. Wide dynamic range and digital image enhancement are well-established benefits of CR, which lend themselves to improved image presentation and reduced rates of repeat exposures. However, in its original form CR offered limited scope for reducing the radiation dose per radiographic exposure, compared with screen film. Recent innovations in CR, including the use of dual-sided image readout and channelled storage phosphor have eased these concerns. For example, introduction of these technologies has improved detective quantum efficiency (DQE) by approximately 50 and 100%, respectively, compared with standard CR. As a result CR currently affords greater scope for reducing patient dose, and provides a more substantive challenge to the new solid-state, flat-panel, digital radiography detectors.

  17. U.S. Department of Energy Vehicle Technologies Program -- Advanced Vehicle Testing Activity -- Plug-in Hybrid Electric Vehicle Charging Infrastructure Review

    SciTech Connect

    Kevin Morrow; Donald Darner; James Francfort

    2008-11-01

    Plug-in hybrid electric vehicles (PHEVs) are under evaluation by various stakeholders to better understand their capability and potential benefits. PHEVs could allow users to significantly improve fuel economy over a standard HEV and, in some cases, depending on daily driving requirements and vehicle design, eliminate fuel consumption entirely for daily vehicle trips. The cost of providing charging infrastructure for PHEVs, along with the additional costs of the on-board power electronics and added battery requirements associated with PHEV technology, will be a key factor in the success of PHEVs. This report analyzes the infrastructure requirements for PHEVs in single-family residential, multi-family residential, and commercial situations. Costs associated with this infrastructure are tabulated, providing an estimate of the infrastructure costs associated with PHEV deployment.

  18. Uncertainty Analyses of Advanced Fuel Cycles

    SciTech Connect

    Laurence F. Miller; J. Preston; G. Sweder; T. Anderson; S. Janson; M. Humberstone; J. MConn; J. Clark

    2008-12-12

    The Department of Energy is developing technology, experimental protocols, computational methods, systems analysis software, and many other capabilities in order to advance the nuclear power infrastructure through the Advanced Fuel Cycle Initiative (AFCI). Our project is intended to facilitate well-informed decision making in the selection of fuel cycle options and facilities for development.

  19. WAATS: A computer program for Weights Analysis of Advanced Transportation Systems

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.

    1974-01-01

    A historical weight estimating technique for advanced transportation systems is presented. The classical approach to weight estimation is discussed, and sufficient data are presented to estimate weights for a large spectrum of flight vehicles, including horizontal and vertical takeoff aircraft, boosters, and reentry vehicles. A computer program, WAATS (Weights Analysis for Advanced Transportation Systems), embracing the techniques discussed, has been written, and user instructions are presented. The program was developed for use in the ODIN (Optimal Design Integration System) system.
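
    Classical weight-estimating relationships are commonly power-law regressions against historical data; a minimal sketch of that idea (invented numbers, not WAATS's actual equations):

        import numpy as np

        X = np.array([10.0, 20.0, 40.0, 80.0])      # design parameter (e.g., area)
        W = np.array([120.0, 210.0, 380.0, 700.0])  # historical component weights

        # Fit W = a * X**b by linear regression in log-log space.
        b, log_a = np.polyfit(np.log(X), np.log(W), 1)
        a = np.exp(log_a)
        print(f"W ~ {a:.1f} * X^{b:.2f}; prediction at X=60: {a * 60**b:.0f}")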

  20. ADVANCED METHODS FOR THE COMPUTATION OF PARTICLE BEAM TRANSPORT AND THE COMPUTATION OF ELECTROMAGNETIC FIELDS AND MULTIPARTICLE PHENOMENA

    SciTech Connect

    Alex J. Dragt

    2012-08-31

    Since 1980, under the grant DEFG02-96ER40949, the Department of Energy has supported the educational and research work of the University of Maryland Dynamical Systems and Accelerator Theory (DSAT) Group. The primary focus of this educational/research group has been on the computation and analysis of charged-particle beam transport using Lie algebraic methods, and on advanced methods for the computation of electromagnetic fields and multiparticle phenomena. This Final Report summarizes the accomplishments of the DSAT Group from its inception in 1980 through its end in 2011.

  1. ADVANCED COMPUTATIONAL MODEL FOR THREE-PHASE SLURRY REACTORS

    SciTech Connect

    Goodarz Ahmadi

    2004-10-01

    In this project, an Eulerian-Lagrangian formulation for analyzing three-phase slurry flows in a bubble column was developed. The approach used an Eulerian analysis of liquid flows in the bubble column and made use of Lagrangian trajectory analysis for the bubble and particle motions. Bubble-bubble and particle-particle collisions are included in the model. The model predictions were compared with the experimental data, and good agreement was found. An experimental setup for studying two-dimensional bubble columns was developed. The multiphase flow conditions in the bubble column were measured using optical image processing and Particle Image Velocimetry (PIV) techniques. A simple shear flow device for studying bubble motion in a constant shear flow field was also developed, and the flow conditions in it were studied using the PIV method. The concentration and velocity of particles of different sizes near a wall in a duct flow were also measured, using phase-Doppler anemometry. An Eulerian volume-of-fluid (VOF) computational model for the flow conditions in the two-dimensional bubble column was also developed; the liquid and bubble motions were analyzed, and the results were compared with observed flow patterns in the experimental setup. Solid-fluid mixture flows in ducts and passages at different angles of orientation were also analyzed; the model predictions were compared with the experimental data, and good agreement was found. Gravity chute flows of solid-liquid mixtures were also studied, and the simulation results were compared with the experimental data and discussed. Finally, a thermodynamically consistent model for multiphase slurry flows, with and without chemical reaction, in a state of turbulent motion was developed; the balance laws were obtained and the constitutive laws established.

  2. AFDX Based Data Infrastructures in Space

    NASA Astrophysics Data System (ADS)

    Vogel, Torsten

    2010-08-01

    An assessment of the requirements for the avionics infrastructure of future spacecraft, such as the Advanced Re-Entry Vehicle (ARV), has identified a distinct lack of performance in traditional MIL-STD-1553B installations and thus initiated a quest for an equally reliable but much more powerful successor. Ever-increasing processing performance available to onboard computers opens opportunities for advanced applications such as real-time image processing for close-range navigation. In a distributed data handling system, these functions rely heavily on an equally advanced and powerful network architecture. This paper will identify the requirements for modern avionics data buses and evaluate the suitability of possible candidates. Subsequently, a dedicated focus will be given to an Avionics Full DupleX Switched Ethernet (AFDX) infrastructure as the backbone for future spacecraft control applications. A detailed section will focus on the central aspects that make AFDX an interesting choice for space network installations, covering specifically the enhancements of the underlying IEEE 802.3 Ethernet technology.

  3. Advances and perspectives in lung cancer imaging using multidetector row computed tomography.

    PubMed

    Coche, Emmanuel

    2012-10-01

    The introduction of multidetector row computed tomography (CT) into clinical practice has revolutionized many aspects of the clinical work-up. Lung cancer imaging has benefited from various breakthroughs in computing technology, with advances in the field of lung cancer detection, tissue characterization, lung cancer staging and response to therapy. Our paper discusses the problems of radiation, image visualization and CT examination comparison. It also reviews the most significant advances in lung cancer imaging and highlights the emerging clinical applications that use state of the art CT technology in the field of lung cancer diagnosis and follow-up.

  4. Advanced entry guidance algorithm with landing footprint computation

    NASA Astrophysics Data System (ADS)

    Leavitt, James Aaron

    -determined angle of attack profile. The method is also capable of producing orbital footprints using an automatically-generated set of angle of attack profiles of varying range, with the lowest profile designed for near-maximum range in the absence of an active heat load constraint. The accuracy of the footprint method is demonstrated by direct comparison with footprints computed independently by an optimization program.

  5. The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report

    SciTech Connect

    Diachin, L F; Garaizar, F X; Henson, V E; Pope, G

    2009-10-12

    In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand its role as a NEAMS user facility.

  6. Computational Modeling and High Performance Computing in Advanced Materials Processing, Synthesis, and Design

    DTIC Science & Technology

    2014-12-07

    Record excerpt: research efforts in this project focused on the synergistic coupling of computational materials science and mechanics of hybrid and lightweight polymeric composite structures, with topics including atomistic modeling of polymer nanocomposite systems.

  7. Projected role of advanced computational aerodynamic methods at the Lockheed-Georgia company

    NASA Technical Reports Server (NTRS)

    Lores, M. E.

    1978-01-01

    Experience with advanced computational methods being used at the Lockheed-Georgia Company to aid in the evaluation and design of new and modified aircraft indicates that large, specialized computers will be needed to make advanced three-dimensional viscous aerodynamic computations practical. The Numerical Aerodynamic Simulation Facility should provide a tool for designing better aerospace vehicles while reducing development costs, both by performing computations with Navier-Stokes solution algorithms and by permitting less sophisticated but nevertheless complex calculations to be made efficiently. Configuration definition procedures and data output formats can probably best be defined in cooperation with industry; therefore, the computer should handle many remote terminals efficiently. The capability of transferring data to and from other computers needs to be provided. Because of the significant amount of input and output associated with 3-D viscous flow calculations, and because of the exceedingly fast computation speed envisioned for the computer, special attention should be paid to providing rapid, diversified, and efficient input and output.

  8. Volumes to learn: advancing therapeutics with innovative computed tomography image data analysis.

    PubMed

    Maitland, Michael L

    2010-09-15

    Semi-automated methods for calculating tumor volumes from computed tomography images are a new tool for advancing the development of cancer therapeutics. Volumetric measurements, relying on already widely available standard clinical imaging techniques, could shorten the observation intervals needed to identify cohorts of patients sensitive or resistant to treatment.

  9. Next Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational, and Systems Biology (Final Report)

    EPA Science Inventory

    EPA announced the release of the final report, Next Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational, and Systems Biology. This report describes new approaches that are faster, less resource-intensive, and more robust, and that can help ...

  10. Response to House Joint Resolution No. 118 [To Advance Computer-Assisted Instruction].

    ERIC Educational Resources Information Center

    Virginia State General Assembly, Richmond.

    This response by the Virginia Department of Education to House Joint Resolution No. 118 of the General Assembly of Virginia, which requested the Department of Education to study initiatives to advance computer-assisted instruction, is based on input from state and national task forces and on a 1986 survey of 80 Virginia school divisions. The…

  11. PARTNERING WITH DOE TO APPLY ADVANCED BIOLOGICAL, ENVIRONMENTAL, AND COMPUTATIONAL SCIENCE TO ENVIRONMENTAL ISSUES

    EPA Science Inventory

    On February 18, 2004, the U.S. Environmental Protection Agency and Department of Energy signed a Memorandum of Understanding to expand the research collaboration of both agencies to advance biological, environmental, and computational sciences for protecting human health and the ...

  12. COMPUTATIONAL TOXICOLOGY ADVANCES: EMERGING CAPABILITIES FOR DATA EXPLORATION AND SAR MODEL DEVELOPMENT

    EPA Science Inventory

    Computational Toxicology Advances: Emerging capabilities for data exploration and SAR model development
    Ann M. Richard and ClarLynda R. Williams, National Health & Environmental Effects Research Laboratory, US EPA, Research Triangle Park, NC, USA; email: richard.ann@epa.gov

  13. Computers-for-edu: An Advanced Business Application Programming (ABAP) Teaching Case

    ERIC Educational Resources Information Center

    Boyle, Todd A.

    2007-01-01

    The "Computers-for-edu" case is designed to provide students with hands-on exposure to creating Advanced Business Application Programming (ABAP) reports and dialogue programs, as well as navigating various mySAP Enterprise Resource Planning (ERP) transactions needed by ABAP developers. The case requires students to apply a wide variety…

  14. Scientific Discovery through Advanced Computing (SciDAC-3) Partnership Project Annual Report

    SciTech Connect

    Hoffman, Forest M.; Bochev, Pavel B.; Cameron-Smith, Philip J.; Easter, Richard C.; Elliott, Scott M.; Ghan, Steven J.; Liu, Xiaohong; Lowrie, Robert B.; Lucas, Donald D.; Ma, Po-lun; Sacks, William J.; Shrivastava, Manish; Singh, Balwinder; Tautges, Timothy J.; Taylor, Mark A.; Vertenstein, Mariana; Worley, Patrick H.

    2014-01-15

    The Applying Computationally Efficient Schemes for BioGeochemical Cycles (ACES4BGC) project is advancing the predictive capabilities of Earth System Models (ESMs) by reducing two of the largest sources of uncertainty, aerosols and biospheric feedbacks, with a highly efficient computational approach. In particular, this project is implementing and optimizing new computationally efficient tracer advection algorithms for large numbers of tracer species; adding important biogeochemical interactions between the atmosphere, land, and ocean models; and applying uncertainty quantification (UQ) techniques to constrain process parameters and evaluate uncertainties in feedbacks between biogeochemical cycles and the climate system.

  15. Research in Computational Aeroscience Applications Implemented on Advanced Parallel Computing Systems

    NASA Technical Reports Server (NTRS)

    Wigton, Larry

    1996-01-01

    Improvements to the numerical linear algebra routines used in new Navier-Stokes codes, specifically Tim Barth's unstructured-grid code with spin-offs to TRANAIR, are reported. A fast distance calculation routine was written for Navier-Stokes codes that use the new one-equation turbulence models. The primary focus of this work was improving matrix-iterative methods. New algorithms have been developed which activate the full potential of classical Cray-class computers as well as distributed-memory parallel computers.
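
    The report does not reproduce the distance routine itself, but the idea behind a fast wall-distance calculation (needed by one-equation turbulence models such as Spalart-Allmaras) can be sketched with a spatial search structure in place of a brute-force double loop. In this minimal Python illustration the point sets, sizes, and seed are invented:

      import numpy as np
      from scipy.spatial import cKDTree

      rng = np.random.default_rng(0)
      wall_pts = rng.random((1_000, 3))    # stand-in for wall surface nodes
      field_pts = rng.random((50_000, 3))  # stand-in for volume grid nodes

      # A k-d tree reduces the naive O(N_field * N_wall) search to roughly
      # O(N_field log N_wall); each field node gets its nearest-wall distance.
      tree = cKDTree(wall_pts)
      d_wall, _ = tree.query(field_pts)
      print(d_wall.min(), d_wall.mean(), d_wall.max())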

  16. Impact of computer advances on future finite elements computations. [for aircraft and spacecraft design

    NASA Technical Reports Server (NTRS)

    Fulton, Robert E.

    1985-01-01

    Research performed over the past 10 years in engineering data base management and parallel computing is discussed, and certain opportunities for research toward the next generation of structural analysis capability are proposed. Particular attention is given to data base management associated with the IPAD project and parallel processing associated with the Finite Element Machine project, both sponsored by NASA, and a near term strategy for a distributed structural analysis capability based on relational data base management software and parallel computers for a future structural analysis system.

  17. A first attempt to bring computational biology into advanced high school biology classrooms.

    PubMed

    Gallagher, Suzanne Renick; Coon, William; Donley, Kristin; Scott, Abby; Goldberg, Debra S

    2011-10-01

    Computer science has become ubiquitous in many areas of biological research, yet most high school and even college students are unaware of this. As a result, many college biology majors graduate without adequate computational skills for contemporary fields of biology. The absence of a computational element in secondary school biology classrooms is of growing concern to the computational biology community and to biology teachers who would like to acquaint their students with updated approaches in the discipline. We present a first attempt to correct this absence by introducing a computational biology element on genetic evolution into advanced biology classes in two local high schools. Our primary goal was to show students how computation is used in biology and why a basic understanding of computation is necessary for research in many fields of biology. This curriculum is intended to be taught by a computational biologist who has worked with a high school advanced biology teacher to adapt the unit for his/her classroom, but a motivated high school teacher comfortable with mathematics and computing may be able to teach it alone. In this paper, we present our curriculum, which takes into consideration the constraints of the required curriculum, and discuss our experiences teaching it. We describe the successes and challenges we encountered while bringing this unit to high school students, discuss how we addressed these challenges, and make suggestions for future versions of this curriculum. We believe that our curriculum can be a valuable seed for further development of computational activities aimed at high school biology students. Further, our experiences may be of value to others teaching computational biology at this level. Our curriculum can be obtained at http://ecsite.cs.colorado.edu/?page_id=149#biology or by contacting the authors.

  18. Building an advanced climate model: Program plan for the CHAMMP (Computer Hardware, Advanced Mathematics, and Model Physics) Climate Modeling Program

    SciTech Connect

    Not Available

    1990-12-01

    The issue of global warming and related climatic changes from increasing concentrations of greenhouse gases in the atmosphere has received prominent attention during the past few years. The Computer Hardware, Advanced Mathematics, and Model Physics (CHAMMP) Climate Modeling Program is designed to contribute directly to the rapid improvement of climate models. The goal of the CHAMMP Climate Modeling Program is to develop, verify, and apply a new generation of climate models within a coordinated framework that incorporates the best available scientific and numerical approaches to represent physical, biogeochemical, and ecological processes; that fully utilizes the hardware and software capabilities of new computer architectures; that probes the limits of climate predictability; and finally that can be used to address the challenging problem of understanding the greenhouse climate issue through the ability of the models to simulate time-dependent climatic changes over extended times and with regional resolution.

  19. FY05-FY06 Advanced Simulation and Computing Implementation Plan, Volume 2

    SciTech Connect

    Baron, A L

    2004-07-19

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program will require the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapon design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile life extension programs and the resolution of significant finding investigations (SFIs). This requires a balanced system of technical staff, hardware, simulation software, and computer science solutions.

  20. Agile Infrastructure Monitoring

    NASA Astrophysics Data System (ADS)

    Andrade, P.; Ascenso, J.; Fedorko, I.; Fiorini, B.; Paladin, M.; Pigueiras, L.; Santos, M.

    2014-06-01

    At the present time, data centres are facing a massive rise in virtualisation and cloud computing. The Agile Infrastructure (AI) project is working to deliver new solutions to ease the management of CERN data centres. Part of the solution consists in a new "shared monitoring architecture" which collects and manages monitoring data from all data centre resources. In this article, we present the building blocks of this new monitoring architecture, the different open source technologies selected for each architecture layer, and how we are building a community around this common effort.
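
    As a toy illustration of the "shared monitoring architecture" pattern described above (producers on every resource publishing into a common transport, with a consuming layer that aggregates), the Python sketch below uses an in-process queue to stand in for the messaging layer; resource names and metrics are invented:

      import json, queue, time

      metric_bus = queue.Queue()  # stand-in for a real message broker

      def emit(resource, metric, value):
          """Producer side: any data-centre resource publishes a timestamped metric."""
          metric_bus.put(json.dumps(
              {"resource": resource, "metric": metric, "value": value, "ts": time.time()}))

      def aggregate():
          """Consumer side: the shared layer drains the bus toward storage/dashboards."""
          while not metric_bus.empty():
              doc = json.loads(metric_bus.get())
              print(f"{doc['resource']}/{doc['metric']} = {doc['value']}")

      emit("node042", "cpu_load", 0.73)
      emit("node117", "disk_free_gb", 512)
      aggregate()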

  1. Computation of Loads on the McDonnell Douglas Advanced Bearingless Rotor

    NASA Technical Reports Server (NTRS)

    Nguyen, Khanh; Lauzon, Dan; Anand, Vaidyanathan

    1994-01-01

    Computed results from UMARC and DART analyses are compared with blade bending moments and vibratory hub loads data obtained from a full-scale wind tunnel test of the McDonnell Douglas five-bladed advanced bearingless rotor. The 5-per-rev vibratory hub loads data are corrected using results from a dynamic calibration of the rotor balance. Agreement between UMARC-computed blade bending moments and the data at different flight conditions is poor to fair, while DART results are fair to good. Using its free wake module, UMARC adequately computes the 5P vibratory hub loads for this rotor, capturing both their magnitude and their variation with forward speed. DART employs a uniform-inflow wake model and does not adequately compute the 5P vibratory hub loads for this rotor.

  2. California Hydrogen Infrastructure Project

    SciTech Connect

    Heydorn, Edward C

    2013-03-12

    Air Products and Chemicals, Inc. has completed a comprehensive, multiyear project to demonstrate a hydrogen infrastructure in California. The specific primary objective of the project was to demonstrate a model of a real-world retail hydrogen infrastructure and acquire sufficient data within the project to assess the feasibility of achieving the nation's hydrogen infrastructure goals. The project helped to advance hydrogen station technology, including the vehicle-to-station fueling interface, through consumer experiences and feedback. By encompassing a variety of fuel cell vehicles, customer profiles and fueling experiences, this project was able to obtain a complete portrait of real market needs. The project also opened its stations to other qualified vehicle providers at the appropriate time to promote widespread use and gain even broader public understanding of a hydrogen infrastructure. The project engaged major energy companies to provide a fueling experience similar to traditional gasoline station sites to foster public acceptance of hydrogen. Work over the course of the project was focused in multiple areas. With respect to the equipment needed, technical design specifications (including both safety and operational considerations) were written, reviewed, and finalized. After finalizing individual equipment designs, complete station designs were started including process flow diagrams and systems safety reviews. Material quotes were obtained, and in some cases, depending on the project status and the lead time, equipment was placed on order and fabrication began. Consideration was given for expected vehicle usage and station capacity, standard features needed, and the ability to upgrade the station at a later date. In parallel with work on the equipment, discussions were started with various vehicle manufacturers to identify vehicle demand (short- and long-term needs). Discussions included identifying potential areas most suited for hydrogen fueling stations

  3. Intelligent systems technology infrastructure for integrated systems

    NASA Technical Reports Server (NTRS)

    Lum, Henry

    1991-01-01

    A system infrastructure must be properly designed and integrated from the conceptual development phase to accommodate evolutionary intelligent technologies. Several technology development activities were identified that may have application to rendezvous and capture systems. Optical correlators in conjunction with fuzzy logic control might be used for the identification, tracking, and capture of either cooperative or non-cooperative targets without the intensive computational requirements associated with vision processing. A hybrid digital/analog system was developed and tested with a robotic arm. An aircraft refueling application demonstration is planned within two years. Initially this demonstration will be ground based with a follow-on air based demonstration. System dependability measurement and modeling techniques are being developed for fault management applications. This involves usage of incremental solution/evaluation techniques and modularized systems to facilitate reuse and to take advantage of natural partitions in system models. Though not yet commercially available and currently subject to accuracy limitations, technology is being developed to perform optical matrix operations to enhance computational speed. Optical terrain recognition using camera image sequencing processed with optical correlators is being developed to determine position and velocity in support of lander guidance. The system is planned for testing in conjunction with Dryden Flight Research Facility. Advanced architecture technology is defining open architecture design constraints, test bed concepts (processors, multiple hardware/software and multi-dimensional user support, knowledge/tool sharing infrastructure), and software engineering interface issues.

  4. Advanced Simulation & Computing FY15 Implementation Plan Volume 2, Rev. 0.5

    SciTech Connect

    McCoy, Michel; Archer, Bill; Matzen, M. Keith

    2014-09-16

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, along with the computational enhancements to support them. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, the study of advanced nuclear weapons design and manufacturing processes, the analysis of accident scenarios and weapons aging, and the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. As the program approaches the end of its second decade, ASC is intently focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (with sufficient resolution, dimensionality, and scientific detail), quantifying critical margins and uncertainties, and resolving increasingly difficult analyses needed for the SSP. Where possible, the program also enables the use of high-performance simulation and computing tools to address broader national security needs, such as foreign nuclear weapon assessments and counternuclear terrorism.

  5. PREFACE: 16th International workshop on Advanced Computing and Analysis Techniques in physics research (ACAT2014)

    NASA Astrophysics Data System (ADS)

    Fiala, L.; Lokajicek, M.; Tumova, N.

    2015-05-01

    This volume of the IOP Conference Series is dedicated to scientific contributions presented at the 16th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2014), whose motto this year was "bridging disciplines". The conference took place on September 1-5, 2014, at the Faculty of Civil Engineering, Czech Technical University in Prague, Czech Republic. The 16th edition of ACAT explored the boundaries of computing system architectures, data analysis algorithmics, automatic calculations, and theoretical calculation technologies. It provided a forum for confronting and exchanging ideas among these fields, where new approaches in computing technologies for scientific research were explored and promoted. This year's edition of the workshop brought together over 140 participants from all over the world. The workshop's 16 invited speakers presented key topics on advanced computing and analysis techniques in physics. During the workshop, 60 talks and 40 posters were presented in three tracks: Computing Technology for Physics Research; Data Analysis - Algorithms and Tools; and Computations in Theoretical Physics: Techniques and Methods. The round table enabled discussions on expanding software, knowledge sharing and scientific collaboration in the respective areas. ACAT 2014 was generously sponsored by Western Digital, Brookhaven National Laboratory, Hewlett Packard, DataDirect Networks, M Computers, Bright Computing, Huawei and PDV-Systemhaus. Special appreciation goes to the track liaisons Lorenzo Moneta, Axel Naumann and Grigory Rubtsov for their work on the scientific program and the publication preparation. ACAT's IACC would also like to express its gratitude to all referees for their work in making sure the contributions could be published in the proceedings. Our thanks extend to the conference liaisons Andrei Kataev and Jerome Lauret, who worked with the local contacts and made this conference possible, as well as to the program

  6. Advances in computer-aided design and computer-aided manufacture technology.

    PubMed

    Calamia, J R

    1994-01-01

    Although the development of computer-aided design (CAD) and computer-aided manufacture (CAM) technology and the benefits of increased productivity became obvious in the automobile and aerospace industries in the 1970s, investigations of this technology's application in the field of dentistry did not begin until the 1980s. Only now are we beginning to see the fruits of this work with the commercial availability of some systems; the potential for this technology seems boundless. This article reviews the recent literature with emphasis on the period from June 1992 to May 1993. This review should familiarize the reader with some of the latest developments in this technology, including a brief description of some systems currently available and the clinical and economic rationale for their acceptance into the dental mainstream. This article concentrates on a particular system, the Cerec (Siemens/Pelton and Crane, Charlotte, NC) system, for three reasons: first, this system has been available since 1985 and, as a result, has a track record of almost 7 years of data. Most of these data have only recently been released; consequently, much of this year's literature on CAD-CAM is monopolized by studies using this system. Second, this system was developed as a mobile, affordable, direct chairside CAD-CAM restorative method. As such, it is of special interest to the dentist who will offer this new technology directly to the patient, providing a one-visit restoration. Third, the author is currently engaged in research using this particular system and has a working knowledge of this system's capabilities.

  7. DOE Advanced Scientific Computing Advisory Subcommittee (ASCAC) Report: Top Ten Exascale Research Challenges

    SciTech Connect

    Lucas, Robert; Ang, James; Bergman, Keren; Borkar, Shekhar; Carlson, William; Carrington, Laura; Chiu, George; Colwell, Robert; Dally, William; Dongarra, Jack; Geist, Al; Haring, Rud; Hittinger, Jeffrey; Hoisie, Adolfy; Klein, Dean (Micron); Kogge, Peter; Lethin, Richard; Sarkar, Vivek; Schreiber, Robert; Shalf, John; Sterling, Thomas; Stevens, Rick; Bashor, Jon; Brightwell, Ron; Coteus, Paul; Debenedictus, Erik; Hiller, Jon; Kim, K. H.; Langston, Harper; Murphy, Richard (Micron); Webster, Clayton; Wild, Stefan; Grider, Gary; Ross, Rob; Leyffer, Sven; Laros III, James

    2014-02-10

    Exascale computing systems are essential for the scientific fields that will transform the 21st century global economy, including energy, biotechnology, nanotechnology, and materials science. Progress in these fields is predicated on the ability to perform advanced scientific and engineering simulations, and analyze the deluge of data. On July 29, 2013, ASCAC was charged by Patricia Dehmer, the Acting Director of the Office of Science, to assemble a subcommittee to provide advice on exascale computing. This subcommittee was directed to return a list of no more than ten technical approaches (hardware and software) that will enable the development of a system that achieves the Department's goals for exascale computing. Numerous reports over the past few years have documented the technical challenges and the non-viability of simply scaling existing computer designs to reach exascale. The technical challenges revolve around energy consumption, memory performance, resilience, extreme concurrency, and big data. Drawing from these reports and more recent experience, this ASCAC subcommittee has identified the top ten computing technology advancements that are critical to making a capable, economically viable, exascale system.

  8. Recovery Act: Advanced Interaction, Computation, and Visualization Tools for Sustainable Building Design

    SciTech Connect

    Greenberg, Donald P.; Hencey, Brandon M.

    2013-08-20

    Current building energy simulation technology requires excessive labor, time and expertise to create building energy models, excessive computational time for accurate simulations and difficulties with the interpretation of the results. These deficiencies can be ameliorated using modern graphical user interfaces and algorithms which take advantage of modern computer architectures and display capabilities. To prove this hypothesis, we developed an experimental test bed for building energy simulation. This novel test bed environment offers an easy-to-use interactive graphical interface, provides access to innovative simulation modules that run at accelerated computational speeds, and presents new graphics visualization methods to interpret simulation results. Our system offers the promise of dramatic ease of use in comparison with currently available building energy simulation tools. Its modular structure makes it suitable for early stage building design, as a research platform for the investigation of new simulation methods, and as a tool for teaching concepts of sustainable design. Improvements in the accuracy and execution speed of many of the simulation modules are based on the modification of advanced computer graphics rendering algorithms. Significant performance improvements are demonstrated in several computationally expensive energy simulation modules. The incorporation of these modern graphical techniques should advance the state of the art in the domain of whole building energy analysis and building performance simulation, particularly at the conceptual design stage when decisions have the greatest impact. More importantly, these better simulation tools will enable the transition from prescriptive to performative energy codes, resulting in better, more efficient designs for our future built environment.
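
    The abstract does not detail the simulation modules themselves; as a hedged sketch of the kind of whole-building calculation such tools accelerate, the lumped-parameter (1R1C) zone model below steps a zone temperature through one simulated hour in Python. All parameter values are assumed for illustration:

      # Minimal 1R1C thermal-zone sketch (illustrative values, not project code).
      R = 0.005          # K/W, envelope thermal resistance (assumed)
      C = 5.0e6          # J/K, zone thermal capacitance (assumed)
      dt = 60.0          # s, time step
      T_zone, T_out, Q_int = 20.0, 0.0, 800.0  # degC, degC, W internal gains

      for _ in range(60):  # one simulated hour
          # Heat balance: conduction through the envelope plus internal gains.
          T_zone += ((T_out - T_zone) / R + Q_int) * dt / C
      print(round(T_zone, 2))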

  9. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    SciTech Connect

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. The authors present Synergia's design principles and its performance on HPC platforms.
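
    Synergia itself covers the full spectrum of beam-dynamics effects; as a far simpler hedged sketch of the single-particle linear piece of such simulations, the Python snippet below tracks a transverse coordinate through identical lattice cells using a 2x2 transfer matrix (the phase advance and beta function are invented, and alpha = 0 is assumed at the observation point):

      import numpy as np

      mu = 2 * np.pi * 0.28   # assumed phase advance per cell
      beta = 10.0             # assumed beta function at the observation point, m
      M = np.array([[np.cos(mu),          beta * np.sin(mu)],
                    [-np.sin(mu) / beta,  np.cos(mu)]])

      state = np.array([1e-3, 0.0])  # 1 mm initial offset, zero angle (x, x')
      for _ in range(100):           # track through 100 cells
          state = M @ state
      print("final (x, x'):", state)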

  10. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    DOE PAGES

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. The authors present Synergia's design principles and its performance on HPC platforms.

  11. Recent Advances in Cardiac Computed Tomography: Dual Energy, Spectral and Molecular CT Imaging

    PubMed Central

    Danad, Ibrahim; Fayad, Zahi A.; Willemink, Martin J.; Min, James K.

    2015-01-01

    Computed tomography (CT) has evolved into a powerful diagnostic tool, and it is impossible to imagine current clinical practice without CT imaging. Due to its widespread availability, ease of clinical application, superb sensitivity for detection of CAD, and non-invasive nature, CT has become a valuable tool within the armamentarium of the cardiologist. In the last few years, numerous technological advances in CT have occurred, including dual-energy CT (DECT), spectral CT, and CT-based molecular imaging. By harnessing these advances in technology, cardiac CT has moved beyond the mere evaluation of coronary stenosis to an imaging modality that permits accurate plaque characterization, assessment of myocardial perfusion, and even probing of molecular processes involved in coronary atherosclerosis. Novel innovations in CT contrast agents and pre-clinical spectral CT devices have paved the way for CT-based molecular imaging. PMID:26068288

  12. Advances in Single-Photon Emission Computed Tomography Hardware and Software.

    PubMed

    Piccinelli, Marina; Garcia, Ernest V

    2016-02-01

    Nuclear imaging techniques remain today's most reliable modality for the assessment and quantification of myocardial perfusion. In recent years, the field has experienced tremendous progress both in terms of dedicated cameras for cardiac applications and software techniques for image reconstruction. The most recent advances in single-photon emission computed tomography hardware and software are reviewed, focusing on how these improvements have resulted in an even more powerful diagnostic tool with reduced injected radiation dose and acquisition time.

  13. Condition monitoring through advanced sensor and computational technology : final report (January 2002 to May 2005).

    SciTech Connect

    Kim, Jung-Taek; Luk, Vincent K.

    2005-05-01

    The overall goal of this joint research project was to develop and demonstrate advanced sensors and computational technology for continuous monitoring of the condition of components, structures, and systems in advanced and next-generation nuclear power plants (NPPs). This project included investigating and adapting several advanced sensor technologies from Korean and US national laboratory research communities, some of which were developed and applied in non-nuclear industries. The project team investigated and developed sophisticated signal processing, noise reduction, and pattern recognition techniques and algorithms. The researchers installed sensors and conducted condition monitoring tests on two test loops, a check valve (an active component) and a piping elbow (a passive component), to demonstrate the feasibility of using advanced sensors and computational technology to achieve the project goal. Acoustic emission (AE) devices, optical fiber sensors, accelerometers, and ultrasonic transducers (UTs) were used to detect mechanical vibratory response of check valve and piping elbow in normal and degraded configurations. Chemical sensors were also installed to monitor the water chemistry in the piping elbow test loop. Analysis results of processed sensor data indicate that it is feasible to differentiate between the normal and degraded (with selected degradation mechanisms) configurations of these two components from the acquired sensor signals, but it is questionable that these methods can reliably identify the level and type of degradation. Additional research and development efforts are needed to refine the differentiation techniques and to reduce the level of uncertainties.
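
    The project's own algorithms are not given in the abstract; one generic way to differentiate normal from degraded vibratory response, sketched below in Python with invented signals and band limits, is to compare RMS spectral energy in a frequency band where degradation adds content:

      import numpy as np

      def band_rms(signal, fs, f_lo, f_hi):
          """RMS spectral energy of a vibration signal within [f_lo, f_hi] Hz."""
          spec = np.fft.rfft(signal * np.hanning(len(signal)))
          freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
          band = (freqs >= f_lo) & (freqs <= f_hi)
          return np.sqrt(np.mean(np.abs(spec[band]) ** 2))

      fs = 10_000  # assumed sampling rate, Hz
      t = np.arange(0, 1.0, 1.0 / fs)
      healthy = np.sin(2 * np.pi * 120 * t) + 0.05 * np.random.randn(t.size)
      degraded = healthy + 0.4 * np.sin(2 * np.pi * 780 * t)  # added high-band energy

      for label, sig in [("healthy", healthy), ("degraded", degraded)]:
          print(label, band_rms(sig, fs, 600, 1000))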

  14. Computer-assisted virtual planning and surgical template fabrication for frontoorbital advancement.

    PubMed

    Soleman, Jehuda; Thieringer, Florian; Beinemann, Joerg; Kunz, Christoph; Guzman, Raphael

    2015-05-01

    OBJECT The authors describe a novel technique using computer-assisted design (CAD) and computer-assisted manufacturing (CAM) for the fabrication of individualized 3D printed surgical templates for frontoorbital advancement surgery. METHODS Two patients underwent frontoorbital advancement surgery for unilateral coronal synostosis. Virtual surgical planning (SurgiCase-CMF, version 5.0, Materialise) was done by virtual mirroring techniques and superposition of an age-matched normative 3D pediatric skull model. Based on these measurements, surgical templates were fabricated using a 3D printer. Bifrontal craniotomy and the osteotomies for the orbital bandeau were performed based on the sterilized 3D templates. The remodeling was then done by placing the bone plates within the negative 3D templates and fixing them using absorbable poly-dl-lactic acid plates and screws. RESULTS Both patients exhibited a satisfying head shape postoperatively and at follow-up. No surgery-related complications occurred. The cutting and positioning of the 3D surgical templates proved to be very accurate and easy to use, as well as reproducible and efficient. CONCLUSIONS Computer-assisted virtual planning and 3D template fabrication for frontoorbital advancement surgery leads to reconstructions based on standardized measurements, precludes subjective remodeling, and seems to be overall safe and feasible. A larger series of patients with long-term follow-up is needed for further evaluation of this novel technique.
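
    The geometric core of the virtual mirroring step (deriving a target shape for the affected side by reflecting the unaffected side across the midsagittal plane) can be written in a few lines of Python; this is only a sketch of the transform, not the SurgiCase-CMF workflow, and the plane and point cloud are invented:

      import numpy as np

      def mirror_across_plane(points, normal, p0):
          """Reflect an (N, 3) point cloud across the plane through p0 with the given normal."""
          n = normal / np.linalg.norm(normal)
          d = (points - p0) @ n                  # signed distance of each point
          return points - 2.0 * np.outer(d, n)   # subtract twice the normal component

      skull_pts = np.random.rand(100, 3)  # stand-in for segmented skull vertices
      mirrored = mirror_across_plane(skull_pts,
                                     normal=np.array([1.0, 0.0, 0.0]),  # midsagittal plane
                                     p0=np.zeros(3))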

  15. Recent advances in 3D computed tomography techniques for simulation and navigation in hepatobiliary pancreatic surgery.

    PubMed

    Uchida, Masafumi

    2014-04-01

    A few years ago it could take several hours to complete a 3D image using a 3D workstation. Thanks to advances in computer science, obtaining results of interest now requires only a few minutes. Many recent 3D workstations or multimedia computers are equipped with onboard 3D virtual patient modeling software, which enables patient-specific preoperative assessment and virtual planning, navigation, and tool positioning. Although medical 3D imaging can now be conducted using various modalities, including computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasonography (US) among others, the highest quality images are obtained using CT data, and CT images are now the most commonly used source of data for 3D simulation and navigation imaging. If the 2D source image is bad, no amount of 3D image manipulation in software will produce a quality 3D image. In this exhibition, recent advances in CT imaging techniques and in 3D visualization of hepatobiliary and pancreatic abnormalities are featured, including scan and image reconstruction techniques, contrast-enhanced techniques, new applications of advanced CT scan techniques, and new virtual reality simulation and navigation imaging.

  16. Research Institute for Advanced Computer Science: Annual Report October 1998 through September 1999

    NASA Technical Reports Server (NTRS)

    Leiner, Barry M.; Gross, Anthony R. (Technical Monitor)

    1999-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center (ARC). It currently operates under a multiple year grant/cooperative agreement that began on October 1, 1997 and is up for renewal in the year 2002. ARC has been designated NASA's Center of Excellence in Information Technology. In this capacity, ARC is charged with the responsibility to build an Information Technology Research Program that is preeminent within NASA. RIACS serves as a bridge between NASA ARC and the academic community, and RIACS scientists and visitors work in close collaboration with NASA scientists. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. RIACS research focuses on the three cornerstones of information technology research necessary to meet the future challenges of NASA missions: (1) Automated Reasoning for Autonomous Systems. Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth. (2) Human-Centered Computing. Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities; (3) High Performance Computing and Networking Advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to data analysis of large datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply information technology research to

  17. Internet 2 Distributed Storage Infrastructure.

    ERIC Educational Resources Information Center

    Simco, Greg

    2003-01-01

    The Distributed Storage Infrastructure (DSI) project, a cooperative effort of the University of Tennessee and University of North Carolina, is an example of the Internet 2 (I2) efforts to enable remote collaboration among the research and educational communities. It extends the domain of a distributed high-speed computing environment to enable…

  18. Computer-Assisted Instruction in the Context of the Advanced Instructional System: Authoring Support Software. Final Report.

    ERIC Educational Resources Information Center

    Montgomery, Ann D.; Judd, Wilson A.

    This report details the design, development, and implementation of computer software to support the cost-effective production of computer assisted instruction (CAI) within the context of the Advanced Instructional System (AIS) located at Lowry Air Force Base. The report supplements the computer managed Air Force technical training that is…

  19. 13th International Workshop on Advanced Computing and Analysis Techniques in Physics Research

    NASA Astrophysics Data System (ADS)

    Speer, T.; Boudjema, F.; Lauret, J.; Naumann, A.; Teodorescu, L.; Uwer, P.

    "Beyond the Cutting edge in Computing" Fundamental research is dealing, by definition, with the two extremes: the extremely small and the extremely large. The LHC and Astroparticle physics experiments will soon offer new glimpses beyond the current frontiers. And the computing infrastructure to support such physics research needs to look beyond the cutting edge. Once more it seems that we are on the edge of a computing revolution. But perhaps what we are seeing now is a even more epochal change where not only the pace of the revolution is changing, but also its very nature. Change is not any more an "event" meant to open new possibilities that have to be understood first and exploited then to prepare the ground for a new leap. Change is becoming the very essence of the computing reality, sustained by a continuous flow of technical and paradigmatic innovation. The hardware is definitely moving toward more massive parallelism, in a breathtaking synthesis of all the past techniques of concurrent computation. New many-core machines offer opportunities for all sorts of Single/Multiple Instructions, Single/Multiple Data and Vector computations that in the past required specialised hardware. At the same time, all levels of virtualisation imagined till now seem to be possible via Clouds, and possibly many more. Information Technology has been the working backbone of the Global Village, and now, in more than one sense, it is becoming itself the Global Village. Between these two, the gap between the need for adapting applications to exploit the new hardware possibilities and the push toward virtualisation of resources is widening, creating more challenges as technical and intellectual progress continues. ACAT 2010 proposes to explore and confront the different boundaries of the evolution of computing, and its possible consequences on our scientific activity. What do these new technologies entail for physics research? How will physics research benefit from this revolution in

  20. ADVANCING THE FUNDAMENTAL UNDERSTANDING AND SCALE-UP OF TRISO FUEL COATERS VIA ADVANCED MEASUREMENT AND COMPUTATIONAL TECHNIQUES

    SciTech Connect

    Biswas, Pratim; Al-Dahhan, Muthanna

    2012-11-01

    The objectives of this project are to advance the fundamental understanding of the hydrodynamics by systematically investigating the effect of design and operating variables, to evaluate the reported dimensionless groups as scaling factors, and to establish a reliable scale-up methodology for the TRISO fuel particle spouted bed coaters based on hydrodynamic similarity via advanced measurement and computational techniques. An additional objective is to develop an on-line non-invasive measurement technique based on gamma ray densitometry (i.e. Nuclear Gauge Densitometry) that can be installed and used for coater process monitoring to ensure proper performance and operation and to facilitate the developed scale-up methodology. To achieve the objectives set for the project, the work will use optical probes and gamma ray computed tomography (CT) (for the measurements of solids/voidage holdup cross-sectional distribution and radial profiles along the bed height, spout diameter, and fountain height) and radioactive particle tracking (RPT) (for the measurements of the 3D solids flow field, velocity, turbulent parameters, circulation time, solids Lagrangian trajectories, and many other spouted-bed hydrodynamic parameters). In addition, gas dynamic measurement techniques and pressure transducers will be utilized to complement the obtained information. The measurements obtained by these techniques will be used as benchmark data to evaluate and validate the computational fluid dynamic (CFD) models (two-fluid model or discrete particle model) and their closures. The validated CFD models and closures will be used to facilitate the developed methodology for scale-up, design, and hydrodynamic similarity. Successful execution of this work and the proposed tasks will advance the fundamental understanding of the coater flow field and quantify it for proper and safe design, scale-up, and performance. Such achievements will overcome the barriers to AGR applications and will help assure that the US maintains
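
    For the gamma-ray densitometry component, the underlying relation is the Beer-Lambert attenuation law; with a two-point calibration (empty bed and packed bed), a line-averaged solids holdup follows from the measured transmission. The Python sketch below uses illustrative counts, not project data:

      import numpy as np

      def solids_holdup(I, I_empty, I_full):
          """
          Line-averaged solids holdup from gamma-ray transmission using
          Beer-Lambert attenuation with a two-point calibration:
              eps_s = ln(I_empty / I) / ln(I_empty / I_full)
          """
          return np.log(I_empty / I) / np.log(I_empty / I_full)

      print(solids_holdup(I=8.0e4, I_empty=1.2e5, I_full=3.0e4))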

  1. The National Information Infrastructure: Agenda for Action.

    ERIC Educational Resources Information Center

    Department of Commerce, Washington, DC. Information Infrastructure Task Force.

    The National Information Infrastructure (NII) is planned as a web of communications networks, computers, databases, and consumer electronics that will put vast amounts of information at the users' fingertips. Private sector firms are beginning to develop this infrastructure, but essential roles remain for the Federal Government. The National…

  2. Nuclear hybrid energy infrastructure

    SciTech Connect

    Agarwal, Vivek; Tawfik, Magdy S.

    2015-02-01

    The nuclear hybrid energy concept is becoming a reality for the US energy infrastructure, where combinations of various potential energy sources (nuclear, wind, solar, biomass, and so on) are integrated in a hybrid energy system. This paper focuses on challenges facing a hybrid system with a Small Modular Reactor at its core. The core of the paper discusses the efforts required to develop a supervisory control center that collects data, supports decision-making, and serves as an information hub for the hybrid system. Such a center will also be a model for integrating future technologies and controls. In addition, advanced operations research, thermal cycle analysis, energy conversion analysis, control engineering, and human factors engineering will be part of the supervisory control center. A nuclear hybrid energy infrastructure would allow operators to optimize the cost of energy production by providing appropriate means of integrating different energy sources. The data need to be stored, processed, analyzed, trended, and projected at the right time to the right operator to integrate the different energy sources.

  3. PREFACE: 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT2013)

    NASA Astrophysics Data System (ADS)

    Wang, Jianxiong

    2014-06-01

    This volume of Journal of Physics: Conference Series is dedicated to scientific contributions presented at the 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2013), which took place on 16-21 May 2013 at the Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, China. The workshop series brings together computer science researchers and practitioners, and researchers from particle physics and related fields, to explore and confront the boundaries of computing and of automatic data analysis and theoretical calculation techniques. This year's edition of the workshop brought together over 120 participants from all over the world. 18 invited speakers presented key topics on the universe in the computer, computing in Earth sciences, multivariate data analysis, automated computation in quantum field theory, as well as computing and data analysis challenges in many fields. Over 70 other talks and posters presented state-of-the-art developments in the areas of the workshop's three tracks: Computing Technologies, Data Analysis Algorithms and Tools, and Computational Techniques in Theoretical Physics. The round-table discussions on open source, knowledge sharing, and scientific collaboration stimulated reflection on these issues in the respective areas. ACAT 2013 was generously sponsored by the Chinese Academy of Sciences (CAS), the National Natural Science Foundation of China (NSFC), Brookhaven National Laboratory in the USA (BNL), Peking University (PKU), the Theoretical Physics Center for Science Facilities of CAS (TPCSF-CAS) and Sugon. We would like to thank all the participants for their scientific contributions and for their enthusiastic participation in all of the workshop's activities. Further information on ACAT 2013 can be found at http://acat2013.ihep.ac.cn. Professor Jianxiong Wang Institute of High Energy Physics Chinese Academy of Science Details of committees and sponsors are available in the PDF

  4. PREFACE: 14th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2011)

    NASA Astrophysics Data System (ADS)

    Teodorescu, Liliana; Britton, David; Glover, Nigel; Heinrich, Gudrun; Lauret, Jérôme; Naumann, Axel; Speer, Thomas; Teixeira-Dias, Pedro

    2012-06-01

    This volume of Journal of Physics: Conference Series is dedicated to scientific contributions presented at the 14th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2011) which took place on 5-7 September 2011 at Brunel University, UK. The workshop series, which began in 1990 in Lyon, France, brings together computer science researchers and practitioners, and researchers from particle physics and related fields in order to explore and confront the boundaries of computing and of automatic data analysis and theoretical calculation techniques. It is a forum for the exchange of ideas among the fields, exploring and promoting cutting-edge computing, data analysis and theoretical calculation techniques in fundamental physics research. This year's edition of the workshop brought together over 100 participants from all over the world. 14 invited speakers presented key topics on computing ecosystems, cloud computing, multivariate data analysis, symbolic and automatic theoretical calculations as well as computing and data analysis challenges in astrophysics, bioinformatics and musicology. Over 80 other talks and posters presented state-of-the-art developments in the areas of the workshop's three tracks: Computing Technologies, Data Analysis Algorithms and Tools, and Computational Techniques in Theoretical Physics. Panel and round table discussions on data management and multivariate data analysis uncovered new ideas and collaboration opportunities in the respective areas. This edition of ACAT was generously sponsored by the Science and Technology Facility Council (STFC), the Institute for Particle Physics Phenomenology (IPPP) at Durham University, Brookhaven National Laboratory in the USA and Dell. We would like to thank all the participants of the workshop for the high level of their scientific contributions and for the enthusiastic participation in all its activities which were, ultimately, the key factors in the

  5. Advances and trends in the development of computational models for tires

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Tanner, J. A.

    1985-01-01

    Status and some recent developments of computational models for tires are summarized. Discussion focuses on a number of aspects of tire modeling and analysis including: tire materials and their characterization; evolution of tire models; characteristics of effective finite element models for analyzing tires; analysis needs for tires; and impact of the advances made in finite element technology, computational algorithms, and new computing systems on tire modeling and analysis. An initial set of benchmark problems has been proposed in concert with the U.S. tire industry. Extensive sets of experimental data will be collected for these problems and used for evaluating and validating different tire models. Also, the new Aircraft Landing Dynamics Facility (ALDF) at NASA Langley Research Center is described.

  6. Utilities building NGV infrastructure

    SciTech Connect

    Not Available

    1994-04-01

    Gas utilities across the US are aggressively pursuing the natural gas vehicle market by putting in place the infrastructure needed to ensure the growth of this important market. The first annual P and GJ NGV Marketing Survey has revealed that many utilities plan to build, or to continue building, NGV fueling facilities. The NGV industry in the US is confronting a classic chicken-or-egg quandary. Fleet operators and individual drivers are naturally unwilling to commit to natural gas as a vehicle fuel until sufficient fueling facilities are in place, yet service station operators are reluctant to add NGV refueling capacity until enough CNG vehicles are on the road to create demand. The future of the NGV market is bright, but continued research and product improvements by suppliers as well as LDCs are needed if the potential is to be fulfilled. Advances in refueling facilities must continue if the market is to develop.

  7. IrLaW an OGC compliant infrared thermography measurement system developed on mini PC with real time computing capabilities for long term monitoring of transport infrastructures

    NASA Astrophysics Data System (ADS)

    Dumoulin, J.; Averty, R.

    2012-04-01

    One of the objectives of the ISTIMES project is to evaluate the potential offered by the integration of different electromagnetic techniques able to perform non-invasive diagnostics for surveillance and monitoring of transport infrastructures. Among the EM methods investigated, the uncooled infrared camera is a promising technique thanks to its dissemination potential, given its relatively low cost on the market. Infrared thermography, when used in quantitative mode outside laboratory conditions (rather than in qualitative mode, as in vision-based survey), requires real-time thermal radiative corrections of the raw acquired data to take into account the evolving influence of the natural environment. The camera sensor therefore has to be smart enough to apply the calibration law and radiometric corrections in real time in a varying atmosphere. A complete measurement system was accordingly studied and developed around low-cost infrared cameras available on the market. In the system developed, the infrared camera is coupled with other sensors that feed simplified radiative models running in real time on the GPU of a small PC. The system uses a Fast Ethernet camera, a FLIR A320 [1], coupled with a VAISALA WXT520 weather station [2] and a light GPS unit [3] for positioning and dating. It can also be used with other Ethernet cameras (e.g. visible ones), provided the measured data can be accessed at raw level; in the present study this was made possible through a specific agreement signed with the FLIR Company. The prototype system is implemented on a low-cost small computer that integrates a GPU card to allow real-time parallel computing [4] of a simplified radiometric [5] heat balance using the information measured by the weather station. An HMI was developed under Linux using open-source components and complementary pieces of software developed at IFSTTAR. This new HMI, called "IrLaW", has various functionalities that make it suitable for use in
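
    The radiometric correction at the heart of such a system balances object emission against reflected ambient and atmospheric contributions. The hedged Python sketch below inverts that balance with a broadband Stefan-Boltzmann simplification (real cameras use band-limited calibration curves), and all input values are illustrative; in IrLaW-like operation the emissivity, transmission, and ambient temperatures would come from calibration and the coupled weather station:

      SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

      def object_temperature(W_meas, eps, tau, T_amb, T_atm):
          """
          Invert the single-band radiometric balance
              W_meas = eps*tau*W(T_obj) + (1-eps)*tau*W(T_amb) + (1-tau)*W(T_atm)
          with the broadband simplification W(T) = sigma*T^4.
          """
          W = lambda T: SIGMA * T ** 4
          W_obj = (W_meas - (1 - eps) * tau * W(T_amb) - (1 - tau) * W(T_atm)) / (eps * tau)
          return (W_obj / SIGMA) ** 0.25

      # Illustrative values only.
      print(object_temperature(W_meas=450.0, eps=0.92, tau=0.85, T_amb=293.0, T_atm=288.0))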

  8. Computational fluid dynamic study on obstructive sleep apnea syndrome treated with maxillomandibular advancement.

    PubMed

    Yu, Chung-Chih; Hsiao, Hung-Da; Lee, Lung-Cheng; Yao, Chih-Min; Chen, Ning-Hung; Wang, Chau-Jan; Chen, Yu-Ray

    2009-03-01

    Maxillomandibular advancement is one of the treatments available for obstructive sleep apnea. The influence of this surgery on the upper airway, and its mechanism, are not fully understood. The present research simulates the flow fields in the narrowed upper airways of 2 patients with obstructive sleep apnea treated with maxillomandibular advancement. The geometry of the upper airway was reconstructed from computed tomographic images taken before and after surgery. The resulting three-dimensional surface model was rendered for measurement and computational fluid dynamics simulation. Patients showed clinical improvement 6 months after surgery. The cross-sectional area of the narrowest part of the upper airway was increased in all dimensions. The simulated results showed a less constricted upper airway, with less velocity change and a decreased pressure gradient across the whole conduit during passage of air. Less breathing effort is therefore expected to achieve equivalent ventilation with the postoperative airway. This study demonstrates the potential of computational fluid dynamics to provide information for understanding the pathogenesis of OSA and the effects of its treatment.
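
    The direction of the reported pressure-gradient change can be checked against a first-order estimate: by Bernoulli, widening the narrowest cross-section lowers the local velocity and hence the static pressure drop. The Python sketch below is only that frictionless estimate with invented flow and area values, not the paper's 3-D simulation:

      def constriction_pressure_drop(Q, A1, A2, rho=1.2):
          """Frictionless Bernoulli static pressure drop (Pa) as flow Q (m^3/s)
          passes from cross-section A1 into a narrower cross-section A2 (m^2)."""
          v1, v2 = Q / A1, Q / A2
          return 0.5 * rho * (v2 ** 2 - v1 ** 2)

      # ~0.5 L/s inspiratory flow; airway narrows from 2.0 cm^2 to 0.5 cm^2.
      print(round(constriction_pressure_drop(Q=5e-4, A1=2.0e-4, A2=0.5e-4), 1), "Pa")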

  9. Modeling, Simulation and Analysis of Complex Networked Systems: A Program Plan for DOE Office of Advanced Scientific Computing Research

    SciTech Connect

    Brown, D L

    2009-05-01

    Many complex systems of importance to the U.S. Department of Energy consist of networks of discrete components. Examples are cyber networks, such as the internet and local area networks over which nearly all DOE scientific, technical and administrative data must travel, the electric power grid, social networks whose behavior can drive energy demand, and biological networks such as genetic regulatory networks and metabolic networks. In spite of the importance of these complex networked systems to all aspects of DOE's operations, the scientific basis for understanding these systems lags seriously behind the strong foundations that exist for the 'physically-based' systems usually associated with DOE research programs that focus on such areas as climate modeling, fusion energy, high-energy and nuclear physics, nano-science, combustion, and astrophysics. DOE has a clear opportunity to develop a similarly strong scientific basis for understanding the structure and dynamics of networked systems by supporting a strong basic research program in this area. Such knowledge will provide a broad basis for, e.g., understanding and quantifying the efficacy of new security approaches for computer networks, improving the design of computer or communication networks to be more robust against failures or attacks, detecting potential catastrophic failure on the power grid and preventing or mitigating its effects, understanding how populations will respond to the availability of new energy sources or changes in energy policy, and detecting subtle vulnerabilities in large software systems to intentional attack. This white paper outlines plans for an aggressive new research program designed to accelerate the advancement of the scientific basis for complex networked systems of importance to the DOE. It will focus principally on four research areas: (1) understanding network structure, (2) understanding network dynamics, (3) predictive modeling and simulation for complex networked systems

  10. Interoperability and security in wireless body area network infrastructures.

    PubMed

    Warren, Steve; Lebak, Jeffrey; Yao, Jianchu; Creekmore, Jonathan; Milenkovic, Aleksandar; Jovanov, Emil

    2005-01-01

    Wireless body area networks (WBANs) and their supporting information infrastructures offer unprecedented opportunities to monitor state of health without constraining the activities of a wearer. These mobile point-of-care systems are now realizable due to the convergence of technologies such as low-power wireless communication standards, plug-and-play device buses, off-the-shelf development kits for low-power microcontrollers, handheld computers, electronic medical records, and the Internet. To increase acceptance of personal monitoring technology while lowering equipment cost, advances must be made in interoperability (at both the system and device levels) and security. This paper presents an overview of WBAN infrastructure work in these areas currently underway in the Medical Component Design Laboratory at Kansas State University (KSU) and at the University of Alabama in Huntsville (UAH). KSU efforts include the development of wearable health status monitoring systems that utilize ISO/IEEE 11073, Bluetooth, Health Level 7, and OpenEMed. WBAN efforts at UAH include the development of wearable activity and health monitors that incorporate ZigBee-compliant wireless sensor platforms with hardware-level encryption and the TinyOS development environment. WBAN infrastructures are complex, requiring many functional support elements. To realize these infrastructures through collaborative efforts, organizations such as KSU and UAH must define and utilize standard interfaces, nomenclature, and security approaches.
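
    As a software-level illustration of the security requirement (the UAH platforms cited above use hardware-level encryption on ZigBee-compliant nodes), the Python sketch below symmetrically encrypts a sensor payload before it leaves the body-area network; the payload and key handling are invented for the example:

      from cryptography.fernet import Fernet

      key = Fernet.generate_key()  # in practice, provisioned per device/session
      f = Fernet(key)

      payload = b'{"sensor": "ecg", "bpm": 72}'  # hypothetical WBAN reading
      token = f.encrypt(payload)                 # ciphertext sent over the air
      print(f.decrypt(token))                    # gateway side recovers the reading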

  11. Intelligent systems technology infrastructure for integrated systems

    NASA Technical Reports Server (NTRS)

    Lum, Henry, Jr.

    1991-01-01

    Significant advances have occurred during the last decade in intelligent systems technologies (a.k.a. knowledge-based systems, KBS) including research, feasibility demonstrations, and technology implementations in operational environments. Evaluation and simulation data obtained to date in real-time operational environments suggest that cost-effective utilization of intelligent systems technologies can be realized for Automated Rendezvous and Capture applications. The successful implementation of these technologies involves a complex system infrastructure integrating the requirements of transportation, vehicle checkout and health management, and communication systems without compromise to systems reliability and performance. The resources that must be invoked to accomplish these tasks include remote ground operations and control, built-in system fault management and control, and intelligent robotics. To ensure long-term evolution and integration of new validated technologies over the lifetime of the vehicle, system interfaces must also be addressed and integrated into the overall system interface requirements. An approach for defining and evaluating the system infrastructures, including the testbed currently being used to support the on-going evaluations for the evolutionary Space Station Freedom Data Management System, is presented and discussed. Intelligent system technologies discussed include artificial intelligence (real-time replanning and scheduling), high performance computational elements (parallel processors, photonic processors, and neural networks), real-time fault management and control, and system software development tools for rapid prototyping capabilities.

  12. Advancing global marine biogeography research with open-source GIS software and cloud-computing

    USGS Publications Warehouse

    Fujioka, Ei; Vanden Berghe, Edward; Donnelly, Ben; Castillo, Julio; Cleary, Jesse; Holmes, Chris; McKnight, Sean; Halpin, Patrick

    2012-01-01

    Across many scientific domains, the ability to aggregate disparate datasets enables more meaningful global analyses. Within marine biology, the Census of Marine Life served as the catalyst for such a global data aggregation effort. Under the Census framework, the Ocean Biogeographic Information System was established to coordinate an unprecedented aggregation of global marine biogeography data. The OBIS data system now contains 31.3 million observations, freely accessible through a geospatial portal. The challenges of storing, querying, disseminating, and mapping a global data collection of this complexity and magnitude are significant. In the face of declining performance and expanding feature requests, a redevelopment of the OBIS data system was undertaken. Following an Open Source philosophy, the OBIS technology stack was rebuilt using PostgreSQL, PostGIS, GeoServer and OpenLayers. This approach has markedly improved the performance and online user experience while maintaining a standards-compliant and interoperable framework. Due to the distributed nature of the project and increasing needs for storage, scalability and deployment flexibility, the entire hardware and software stack was built on a Cloud Computing environment. The flexibility of the platform, combined with the power of the application stack, enabled rapid re-development of the OBIS infrastructure, and ensured complete standards-compliance.
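
    To illustrate the kind of query such a PostGIS-backed stack serves, here is a minimal sketch in Python. The database, table, and column names are hypothetical, not the actual OBIS schema:

```python
# Sketch: a spatial query of the kind a PostGIS-backed biogeography system can
# serve. Connection string, table, and column names are placeholders.
import psycopg2

conn = psycopg2.connect("dbname=obis_demo user=reader")
with conn, conn.cursor() as cur:
    # Count observations within 500 km of a point of interest, per species.
    cur.execute(
        """
        SELECT species_name, COUNT(*) AS n_obs
        FROM occurrences
        WHERE ST_DWithin(
            geom::geography,
            ST_SetSRID(ST_MakePoint(%s, %s), 4326)::geography,
            %s)
        GROUP BY species_name
        ORDER BY n_obs DESC
        LIMIT 10;
        """,
        (-70.0, 40.0, 500_000),   # lon, lat, radius in metres
    )
    for species, n in cur.fetchall():
        print(f"{species}: {n}")
```

    The geography cast is what makes the radius a true great-circle distance in metres, one reason PostGIS suits global marine data.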

  13. Advanced computer techniques for inverse modeling of electric current in cardiac tissue

    SciTech Connect

    Hutchinson, S.A.; Romero, L.A.; Diegert, C.F.

    1996-08-01

    For many years, ECGs and vector cardiograms have been the tools of choice for non-invasive diagnosis of cardiac conduction problems, such as those found in reentrant tachycardia or Wolff-Parkinson-White (WPW) syndrome. Through skillful analysis of these skin-surface measurements of cardiac-generated electric currents, a physician can deduce the general location of heart conduction irregularities. Using a combination of high-fidelity geometry modeling, advanced mathematical algorithms, and massively parallel computing, Sandia's approach would provide much more accurate information and thus allow the physician to pinpoint the source of an arrhythmia or abnormal conduction pathway.

  14. NDE of advanced turbine engine components and materials by computed tomography

    NASA Technical Reports Server (NTRS)

    Yancey, R. N.; Baaklini, George Y.; Klima, Stanley J.

    1991-01-01

    Computed tomography (CT) is an X-ray technique that provides quantitative 3D density information of materials and components and can accurately detail spatial distributions of cracks, voids, and density variations. CT scans of ceramic materials, composites, and engine components were taken and the resulting images will be discussed. Scans were taken with two CT systems with different spatial resolution capabilities. The scans showed internal damage, density variations, and geometrical arrangement of various features in the materials and components. It was concluded that CT can play an important role in the characterization of advanced turbine engine materials and components. Future applications of this technology will be outlined.

  15. Advanced Imaging of Athletes: Added Value of Coronary Computed Tomography and Cardiac Magnetic Resonance Imaging.

    PubMed

    Martinez, Matthew W

    2015-07-01

    Cardiac magnetic resonance imaging and cardiac computed tomographic angiography have become important parts of the armamentarium for noninvasive diagnosis of cardiovascular disease. Emerging technologies have produced faster imaging, lower radiation dose, and improved spatial and temporal resolution, as well as a wealth of prognostic data to support their use. Investigating true pathologic disease, and distinguishing normal findings from potentially dangerous ones, is now an increasingly routine task for the practicing cardiologist. This article investigates how advanced imaging technologies can assist the clinician when evaluating athletes for pathologic disease that may put them at risk.

  16. DOE Advanced Scientific Computing Advisory Committee (ASCAC) Subcommittee Report on Scientific and Technical Information

    SciTech Connect

    Hey, Tony; Agarwal, Deborah; Borgman, Christine; Cartaro, Concetta; Crivelli, Silvia; Van Dam, Kerstin Kleese; Luce, Richard; Shankar, Arjun; Trefethen, Anne; Wade, Alex; Williams, Dean

    2015-09-04

    The Advanced Scientific Computing Advisory Committee (ASCAC) was charged to form a standing subcommittee to review the Department of Energy's Office of Scientific and Technical Information (OSTI) and to begin by assessing the quality and effectiveness of OSTI's recent and current products and services and commenting on its mission and future directions in the rapidly changing environment for scientific publication and data. The Committee met with OSTI staff and reviewed available products, services, and other materials. This report summarizes their initial findings and recommendations.

  17. Cardiovascular proteomics in the era of big data: experimental and computational advances.

    PubMed

    Lam, Maggie P Y; Lau, Edward; Ng, Dominic C M; Wang, Ding; Ping, Peipei

    2016-01-01

    Proteomics plays an increasingly important role in our quest to understand cardiovascular biology. Fueled by analytical and computational advances in the past decade, proteomics applications can now go beyond merely inventorying protein species and address sophisticated questions about cardiac physiology. The advent of massive mass spectrometry datasets has in turn led to an increasing intersection between proteomics and big data science. Here we review new frontiers in technological developments and their applications to cardiovascular medicine. The impact of big data science on cardiovascular proteomics investigations and translation to medicine is highlighted.

  18. Computational Models of Exercise on the Advanced Resistance Exercise Device (ARED)

    NASA Technical Reports Server (NTRS)

    Newby, Nate; Caldwell, Erin; Scott-Pandorf, Melissa; Peters, Brian; Fincke, Renita; DeWitt, John; Ploutz-Snyder, Lori

    2011-01-01

    Muscle and bone loss remain a concern for crews returning from space flight. The advanced resistance exercise device (ARED) is used for on-orbit resistance exercise to help mitigate these losses. However, how the ARED loads the body in microgravity has yet to be characterized. Computational models allow us to analyze ARED exercise in both 1G and 0G environments. To this end, biomechanical models of the squat, single-leg squat, and deadlift exercises on the ARED have been developed to further investigate the bone and muscle forces resulting from these exercises.
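
    A one-line force balance captures why the 1G and 0G cases differ. The sketch below is a deliberately simplified quasi-static view (inertia and harness mechanics ignored; all numbers invented), not the project's biomechanical models:

```python
# Sketch: why ARED loading differs between 1G and 0G. In orbit the body weighs
# nothing, so the device must also supply the load that body weight provides on
# the ground. Numbers and the simple force balance are illustrative only.
G_EARTH = 9.81  # m/s^2

def foot_force(device_load_n: float, body_mass_kg: float, gravity: float) -> float:
    """Quasi-static force at the feet during a slow squat (inertia ignored)."""
    return device_load_n + body_mass_kg * gravity

body_mass = 80.0          # kg, illustrative crew member
bar_load = 600.0          # N, illustrative device setting

on_ground = foot_force(bar_load, body_mass, G_EARTH)
in_orbit = foot_force(bar_load, body_mass, 0.0)
print(f"1G foot force: {on_ground:.0f} N, 0G foot force: {in_orbit:.0f} N")
# Matching the 1G training stimulus in orbit requires raising the device load
# by roughly one body weight (~785 N here); the models quantify such effects.
```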

  19. Green Infrastructure Modeling Toolkit

    EPA Pesticide Factsheets

    EPA's Green Infrastructure Modeling Toolkit bundles five EPA green infrastructure models and tools, along with communication materials, and can be used as a teaching tool and a quick-reference resource when making GI implementation decisions.

  20. Aging Water Infrastructure

    EPA Science Inventory

    The Aging Water Infrastructure (AWI) research program is part of EPA’s larger effort called the Sustainable Water Infrastructure (SI) initiative. The SI initiative brings together drinking water and wastewater utility managers; trade associations; local watershed protection organ...

  1. Green Infrastructure Modeling Tools

    EPA Pesticide Factsheets

    Modeling tools support planning and design decisions on a range of scales from setting a green infrastructure target for an entire watershed to designing a green infrastructure practice for a particular site.

  2. Sustainable Water Infrastructure

    EPA Pesticide Factsheets

    Resources for state and local environmental and public health officials, and water, infrastructure and utility professionals to learn about sustainable water infrastructure, sustainable water and energy practices, and their role.

  3. Climate Action Benefits: Infrastructure

    EPA Pesticide Factsheets

    This page provides background on the relationship between infrastructure and climate change and describes what the CIRA Infrastructure analyses cover. It provides links to the subsectors Bridges, Roads, Urban Drainage, and Coastal Property.

  4. Integrated Computational Materials Engineering (ICME) for Third Generation Advanced High-Strength Steel Development

    SciTech Connect

    Savic, Vesna; Hector, Louis G.; Ezzat, Hesham; Sachdev, Anil K.; Quinn, James; Krupitzer, Ronald; Sun, Xin

    2015-06-01

    This paper presents an overview of a four-year project focused on development of an integrated computational materials engineering (ICME) toolset for third-generation advanced high-strength steels (3GAHSS). Following a brief look at ICME as an emerging discipline within the Materials Genome Initiative, the paper discusses the technical tasks in the ICME project. The specific aims of the individual tasks are multi-scale, microstructure-based material model development using state-of-the-art computational and experimental techniques; forming; toolset assembly; design optimization; integration; and technical cost modeling. The integrated approach is initially illustrated using a 980 grade transformation-induced plasticity (TRIP) steel, subjected to a two-step quenching and partitioning (Q&P) heat treatment, as an example.

  5. Advanced Simulation and Computing Fiscal Year 2016 Implementation Plan, Version 0

    SciTech Connect

    McCoy, M.; Archer, B.; Hendrickson, B.

    2015-08-27

    The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance; experimental research, development, and engineering programs; and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support them. The purpose of this Implementation Plan (IP) is to outline key work requirements to be performed and to control individual work activities within the scope of work. Contractors may not deviate from this plan without a revised Work Authorization (WA) or a subsequent IP.

  6. A fission matrix based validation protocol for computed power distributions in the advanced test reactor

    SciTech Connect

    Nielsen, J. W.; Nigg, D. W.; LaPorta, A. W.

    2013-07-01

    The Idaho National Laboratory (INL) has been engaged in a significant multi-year effort to modernize the computational reactor physics tools and validation procedures used to support operations of the Advanced Test Reactor (ATR) and its companion critical facility (ATRC). Several new protocols for validation of computed neutron flux distributions and spectra, as well as for validation of computed fission power distributions, based on new experiments and well-recognized least-squares statistical analysis techniques, have been under development. In the case of power distributions, estimates of the a priori ATR-specific fuel element-to-element fission power correlation and covariance matrices are required for validation analysis. A practical method for generating these matrices using the element-to-element fission matrix is presented, along with a high-order scheme for estimating the underlying fission matrix itself. The proposed methodology is illustrated using the MCNP5 neutron transport code for the required neutronics calculations. The general approach is readily adaptable for implementation using any multidimensional stochastic or deterministic transport code that offers the required level of spatial, angular, and energy resolution in the computed solution for the neutron flux and fission source.
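
    The following sketch illustrates the underlying idea in miniature, not the INL protocol itself: the dominant eigenvector of a fission matrix gives the relative element powers, and sampling perturbed matrices yields an element-to-element covariance and correlation estimate. The toy matrix and uncertainty values are invented:

```python
# Sketch (not the INL protocol): a fission matrix F[i, j] gives the expected
# fission neutrons born in element i per fission neutron born in element j.
# Its dominant eigenvector is the element-wise fission source; sampling
# perturbed matrices yields element-to-element covariance/correlation.
import numpy as np

rng = np.random.default_rng(1)
n = 6                                    # number of fuel elements (toy size)
F = rng.uniform(0.5, 1.5, (n, n))        # toy fission matrix

def power_dist(F):
    """Normalized dominant eigenvector = relative element powers."""
    vals, vecs = np.linalg.eig(F)
    v = np.abs(vecs[:, np.argmax(vals.real)].real)
    return v / v.sum()

# Propagate an assumed 1% matrix uncertainty to the element powers.
samples = np.array([power_dist(F * rng.normal(1.0, 0.01, (n, n)))
                    for _ in range(2000)])
cov = np.cov(samples, rowvar=False)
corr = np.corrcoef(samples, rowvar=False)
print("power covariance:\n", np.round(cov, 8))
print("power correlations:\n", np.round(corr, 2))
```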

  7. Advanced computational sensors technology: testing and evaluation in visible, SWIR, and LWIR imaging

    NASA Astrophysics Data System (ADS)

    Rizk, Charbel G.; Wilson, John P.; Pouliquen, Philippe

    2015-05-01

    The Advanced Computational Sensors Team at the Johns Hopkins University Applied Physics Laboratory and the Johns Hopkins University Department of Electrical and Computer Engineering has been developing advanced readout integrated circuit (ROIC) technology for more than 10 years with a particular focus on the key challenges of dynamic range, sampling rate, system interface and bandwidth, and detector materials or band dependencies. Because the pixel array offers parallel sampling by default, the team successfully demonstrated that adding smarts in the pixel and the chip can increase performance significantly. Each pixel becomes a smart sensor and can operate independently in collecting, processing, and sharing data. In addition, building on the digital circuit revolution, the effective well size can be increased by orders of magnitude within the same pixel pitch over analog designs. This research has yielded an innovative class of a system-on-chip concept: the Flexible Readout and Integration Sensor (FRIS) architecture. All key parameters are programmable and/or can be adjusted dynamically, and this architecture can potentially be sensor and application agnostic. This paper reports on the testing and evaluation of one prototype that can support either detector polarity and includes sample results with visible, short-wavelength infrared (SWIR), and long-wavelength infrared (LWIR) imaging.
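
    A back-of-the-envelope calculation shows why in-pixel digital counting enlarges the effective well: each time the analog integrator crosses a small threshold it is reset and a counter increments, so capacity becomes counter depth times threshold charge. The numbers below are illustrative, not FRIS design values:

```python
# Sketch: effective well size of a digital counting pixel vs. an analog well.
# All values are illustrative assumptions, not the FRIS design parameters.
ANALOG_WELL_E = 50_000          # electrons a conventional analog well might hold
THRESHOLD_E = 10_000            # charge packet per digital count (reset level)
COUNTER_BITS = 16               # in-pixel counter depth

effective_well = THRESHOLD_E * (2 ** COUNTER_BITS)
print(f"analog well: {ANALOG_WELL_E:,} e-, "
      f"digital effective well: {effective_well:,} e- "
      f"({effective_well / ANALOG_WELL_E:.0f}x)")
```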

  8. Parallel digital forensics infrastructure.

    SciTech Connect

    Liebrock, Lorie M.; Duggan, David Patrick

    2009-10-01

    This report documents the architecture and implementation of a Parallel Digital Forensics infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics.
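
    The simplest pattern such an infrastructure must support is embarrassingly parallel processing of a large evidence image. Here is a minimal sketch; the file name is a placeholder, and a real tool would also verify a whole-image hash:

```python
# Sketch: hash a large disk image in fixed-size chunks across worker processes,
# the basic work-division pattern behind parallel forensics tools.
import hashlib
from concurrent.futures import ProcessPoolExecutor

CHUNK = 64 * 1024 * 1024  # 64 MiB per work unit

def hash_chunk(args):
    path, offset = args
    with open(path, "rb") as f:
        f.seek(offset)
        return offset, hashlib.sha256(f.read(CHUNK)).hexdigest()

def hash_image(path, size):
    offsets = [(path, off) for off in range(0, size, CHUNK)]
    with ProcessPoolExecutor() as pool:
        # Each worker opens the file and hashes its own chunk independently.
        return dict(pool.map(hash_chunk, offsets))

if __name__ == "__main__":
    import os
    path = "evidence.img"                # placeholder image path
    digests = hash_image(path, os.path.getsize(path))
    print(f"hashed {len(digests)} chunks")
```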

  9. Advanced Simulation and Computing: A Summary Report to the Director's Review

    SciTech Connect

    McCoy, M G; Peck, T

    2003-06-01

    It has now been three years since the Advanced Simulation and Computing Program (ASCI), as managed by the Defense and Nuclear Technologies (DNT) Directorate, was reviewed by this Director's Review Committee (DRC). Since that time, there has been considerable progress in all components of the ASCI Program, and these developments are highlighted in this document and in the presentations planned for June 9 and 10, 2003. There have also been some name changes. Today, the Program is called 'Advanced Simulation and Computing.' Although it retains the familiar acronym ASCI, the initiative nature of the effort has given way to sustained services as an integral part of the Stockpile Stewardship Program (SSP). All computing efforts at LLNL and the other two Defense Program (DP) laboratories are funded and managed under ASCI. This includes the so-called legacy codes, which remain essential tools in stockpile stewardship. The contract between the Department of Energy (DOE) and the University of California (UC) specifies an independent appraisal of Directorate technical work and programmatic management; that appraisal is the work of this DNT Review Committee. Beginning this year, the Laboratory is implementing a new review system. This process was negotiated between UC, the National Nuclear Security Administration (NNSA), and the Laboratory Directors. Central to this approach are eight performance objectives that focus on key programmatic and administrative goals. Associated with each of these objectives are a number of performance measures that more clearly characterize the attainment of the objectives. Each performance measure has a lead directorate and one or more contributing directorates. Each measure has an evaluation plan and identified documentation expected to be included in the 'Assessment File'.

  10. Recent advances in computational methodology for simulation of mechanical circulatory assist devices

    PubMed Central

    Marsden, Alison L.; Bazilevs, Yuri; Long, Christopher C.; Behr, Marek

    2014-01-01

    Ventricular assist devices (VADs) provide mechanical circulatory support to offload the work of one or both ventricles during heart failure. They are used in the clinical setting as destination therapy, as bridge to transplant, or more recently as bridge to recovery to allow for myocardial remodeling. Recent developments in computational simulation allow for detailed assessment of VAD hemodynamics for device design and optimization for both children and adults. Here, we provide a focused review of the recent literature on finite element methods and optimization for VAD simulations. As VAD designs typically fall into two categories, pulsatile and continuous flow devices, we separately address computational challenges of both types of designs, and the interaction with the circulatory system with three representative case studies. In particular, we focus on recent advancements in finite element methodology that has increased the fidelity of VAD simulations. We outline key challenges, which extend to the incorporation of biological response such as thrombosis and hemolysis, as well as shape optimization methods and challenges in computational methodology. PMID:24449607

  11. SCIENCE BRIEF: ADVANCED CONCEPTS

    EPA Science Inventory

    Research on advanced concepts will evaluate and demonstrate the application of innovative infrastructure designs, management procedures and operational approaches. Advanced concepts go beyond simple asset management. The infusion of these advanced concepts into established wastew...

  12. JINR cloud infrastructure evolution

    NASA Astrophysics Data System (ADS)

    Baranov, A. V.; Balashov, N. A.; Kutovskiy, N. A.; Semenov, R. N.

    2016-09-01

    To fulfill JINR commitments in national and international projects that rely on modern information technologies such as cloud and grid computing, and to provide a modern tool for JINR users in their scientific research, a cloud infrastructure was deployed at the Laboratory of Information Technologies of the Joint Institute for Nuclear Research. OpenNebula software was chosen as the cloud platform. Initially it was set up in a simple configuration with a single front-end host and a few cloud nodes. Some custom development was done to tune the JINR cloud installation to local needs: a web form in the cloud web interface for resource requests, a menu item with cloud utilization statistics, user authentication via Kerberos, and a custom driver for OpenVZ containers. Because of high demand for the cloud service and over-utilization of its resources, it was redesigned to cover users' increasing needs for capacity, availability, and reliability. Recently a new cloud instance was deployed in a high-availability configuration with a distributed network file system and additional computing power.

  13. A modified eco-efficiency framework and methodology for advancing the state of practice of sustainability analysis as applied to green infrastructure.

    PubMed

    Ghimire, Santosh R; Johnston, John M

    2017-03-17

    We propose a modified eco-efficiency (EE) framework and novel sustainability analysis methodology for green infrastructure (GI) practices used in water resource management. GI practices such as rainwater harvesting (RWH), rain gardens, porous pavements, and green roofs are emerging as viable strategies for climate change adaptation. The modified framework includes four economic, 11 environmental, and three social indicators. Using six indicators from the framework, at least one from each dimension of sustainability, we demonstrate the methodology to analyze RWH designs. We use life-cycle assessment and life-cycle cost assessment to calculate the sustainability indicators of 20 design configurations as Decision Management Objectives (DMOs). Five DMOs emerged as relatively more sustainable along the EE analysis Tradeoff Line, and we used Data Envelopment Analysis (DEA), a widely applied statistical approach, to quantify the modified EE measures as DMO sustainability scores. We also addressed the subjectivity and sensitivity analysis requirements of sustainability analysis, and we evaluated the performance of 10 weighting schemes that included classical DEA, equal weights, National Institute of Standards and Technology's stakeholder panel, Eco-Indicator 99, Sustainable Society Foundation's Sustainable Society Index, and five derived schemes. We improved upon classical DEA by applying the weighting schemes to identify sustainability scores that ranged from 0.18 to 1.0, avoiding the non-uniqueness problem and revealing the least to most sustainable DMOs. Our methodology provides a more comprehensive view of water resource management and is generally applicable to GI and industrial, environmental, and engineered systems to explore the sustainability space of alternative design configurations.
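
    For readers unfamiliar with classical DEA, the sketch below solves the standard CCR efficiency model, one linear program per design configuration (DMO). The indicator values are invented; the paper's modified EE measures layer weighting schemes on top of this basic model:

```python
# Sketch: classical CCR Data Envelopment Analysis, one LP per DMO.
# Toy indicator data; not the paper's RWH configurations or modified EE scores.
import numpy as np
from scipy.optimize import linprog

# Rows = DMOs. Two inputs (cost, energy) and two outputs (water saved, benefit).
X = np.array([[10, 5], [8, 7], [12, 4], [9, 9]], dtype=float)   # inputs
Y = np.array([[20, 3], [18, 4], [25, 2], [15, 5]], dtype=float) # outputs

def ccr_score(o):
    """Efficiency of DMO o: max u.y_o  s.t.  v.x_o = 1,  u.y_j - v.x_j <= 0."""
    n_out, n_in = Y.shape[1], X.shape[1]
    c = np.concatenate([-Y[o], np.zeros(n_in)])   # maximize u.y_o
    A_ub = np.hstack([Y, -X])                     # one ratio constraint per DMO
    b_ub = np.zeros(len(X))
    A_eq = np.concatenate([np.zeros(n_out), X[o]])[None, :]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0])
    return -res.fun                               # scores lie in (0, 1]

for o in range(len(X)):
    print(f"DMO {o}: efficiency = {ccr_score(o):.3f}")
```

    Classical DEA lets each DMO choose its own most favorable weights, which is the non-uniqueness the paper addresses with fixed weighting schemes.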

  14. Advanced Decentralized Water/Energy Network Design for Sustainable Infrastructure presentation at the 2012 National Association of Home Builders (NAHB) International Builders' Show

    EPA Science Inventory

    A university/industry panel will report on the progress and findings of a five-year project funded by the US Environmental Protection Agency. The project's end product will be a Web-based, 3D computer-simulated residential environment with a decision support system to assist...

  15. Energy Transmission and Infrastructure

    SciTech Connect

    Mathison, Jane

    2012-12-31

    The objective of Energy Transmission and Infrastructure Northern Ohio (OH) was to lay the conceptual and analytical foundation for an energy economy in northern Ohio that will: • improve the efficiency with which energy is used in the residential, commercial, industrial, agricultural, and transportation sectors for Oberlin, Ohio as a district-wide model for Congressional District OH-09; • identify the potential to deploy wind and solar technologies and the most effective configuration for the regional energy system (i.e., the ratio of distributed or centralized power generation); • analyze the potential within the district to utilize farm wastes to produce biofuels; • enhance long-term energy security by identifying ways to deploy local resources and building Ohio-based enterprises; • identify the policy, regulatory, and financial barriers impeding development of a new energy system; and • improve energy infrastructure within Congressional District OH-09. This objective of laying the foundation for a renewable energy system in Ohio was achieved through four primary areas of activity: 1. district-wide energy infrastructure assessments and alternative-energy transmission studies; 2. energy infrastructure improvement projects undertaken by American Municipal Power (AMP) affiliates in the northern Ohio communities of Elmore, Oak Harbor, and Wellington; 3. Oberlin, OH-area energy assessment initiatives; and 4. a district-wide conference held in September 2011 to disseminate year-one findings. The grant supported 17 research studies by leading energy, policy, and financial specialists, including studies on: current energy use in the district and the Oberlin area; regional potential for energy generation from renewable sources such as solar power, wind, and farm-waste; energy and transportation strategies for transitioning the City of Oberlin entirely to renewable resources and considering pedestrians, bicyclists, and public transportation as well as drivers

  16. Experimental and Computational Study on the Cusp-DEC and TWDEC for Advanced Fueled Fusion

    SciTech Connect

    Tomita, Y.; Yasaka, Y.; Takeno, H.; Ishikawa, M.; Nemoto, T.

    2005-01-15

    Experimental and computational results for direct energy converters (DECs) for advanced fueled fusion, such as D-³He, are presented. The kinetic energy of the thermal component of the end-loss plasma is converted to electricity using the Cusp DEC. Proof-of-principle experiments with a single slanted cusp have been carried out and verified the capability of the configuration. To improve the separation of electrons from ions, numerical simulation shows that a Helmholtz magnetic configuration with a uniform magnetic field is more effective than the Cusp DEC. The fusion-produced high-energy ions, such as the 15 MeV protons of D-³He fueled fusion, can pass through the Cusp DEC without disturbance of their orbits and enter a traveling-wave direct energy converter (TWDEC). Small-scale experiments have shown the effectiveness of the TWDEC, and numerical simulation on optimization of the electrode spacing in the decelerator gives conversion efficiencies of up to 60%.

  17. Development of an Advanced Computational Model for OMCVD of Indium Nitride

    NASA Technical Reports Server (NTRS)

    Cardelino, Carlos A.; Moore, Craig E.; Cardelino, Beatriz H.; Zhou, Ning; Lowry, Sam; Krishnan, Anantha; Frazier, Donald O.; Bachmann, Klaus J.

    1999-01-01

    An advanced computational model is being developed to predict the formation of indium nitride (InN) films from the reaction of trimethylindium (In(CH3)3) with ammonia (NH3). The components are introduced into the reactor in the gas phase within a background of molecular nitrogen (N2). Organometallic chemical vapor deposition occurs on a heated sapphire surface. The model simulates heat and mass transport with gas-phase and surface chemistry under steady-state and pulsed conditions. The development and validation of an accurate model for the interactions between the diffusion of gas-phase species and surface kinetics is essential to enable regulation of the process to produce a low-defect material. Validation of the model will be performed in concert with a NASA-North Carolina State University project.

  18. Advanced computational tools for optimization and uncertainty quantification of carbon capture processes

    SciTech Connect

    Miller, David C.; Ng, Brenda; Eslick, John

    2014-01-01

    Advanced multi-scale modeling and simulation has the potential to dramatically reduce development time, resulting in considerable cost savings. The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry and universities that is developing, demonstrating, and deploying a suite of multi-scale modeling and simulation tools. One significant computational tool is FOQUS, a Framework for Optimization and Quantification of Uncertainty and Sensitivity, which enables basic data submodels, including thermodynamics and kinetics, to be used within detailed process models to rapidly synthesize and optimize a process and determine the level of uncertainty associated with the resulting process. The overall approach of CCSI is described with a more detailed discussion of FOQUS and its application to carbon capture systems.
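
    The flavor of uncertainty quantification described here can be reduced to a toy Monte Carlo loop. The surrogate cost model and parameter distributions below are invented for illustration and are not the FOQUS API:

```python
# Sketch: propagating submodel uncertainty (kinetics, thermodynamics) through a
# process-level quantity of interest. The surrogate and all numbers are invented;
# FOQUS wraps real process models and surrogates rather than this toy function.
import numpy as np

rng = np.random.default_rng(0)

def capture_cost(k_rate, dH):
    """Toy surrogate: cost rises as kinetics slow and heat of reaction grows."""
    return 40.0 + 5.0 / k_rate + 0.3 * dH

# Uncertain basic-data parameters, represented as distributions.
k_rate = rng.lognormal(mean=0.0, sigma=0.2, size=10_000)
dH = rng.normal(loc=80.0, scale=8.0, size=10_000)

cost = capture_cost(k_rate, dH)
print(f"mean cost: {cost.mean():.1f}, 95% interval: "
      f"[{np.percentile(cost, 2.5):.1f}, {np.percentile(cost, 97.5):.1f}]")
```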

  19. Advanced Scientific Computing Research Network Requirements: ASCR Network Requirements Review Final Report

    SciTech Connect

    Bacon, Charles; Bell, Greg; Canon, Shane; Dart, Eli; Dattoria, Vince; Goodwin, Dave; Lee, Jason; Hicks, Susan; Holohan, Ed; Klasky, Scott; Lauzon, Carolyn; Rogers, Jim; Shipman, Galen; Skinner, David; Tierney, Brian

    2013-03-08

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In October 2012, ESnet and the Office of Advanced Scientific Computing Research (ASCR) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the ASCR program office. The requirements identified at the review are summarized in the Findings section, and are described in more detail in the body of the report.

  1. Advanced Computational Modeling of Vapor Deposition in a High-Pressure Reactor

    NASA Technical Reports Server (NTRS)

    Cardelino, Beatriz H.; Moore, Craig E.; McCall, Sonya D.; Cardelino, Carlos A.; Dietz, Nikolaus; Bachmann, Klaus

    2004-01-01

    In search of novel approaches to produce new materials for electro-optic technologies, advances have been achieved in the development of computer models for vapor deposition reactors in space. Numerical simulations are invaluable tools for costly and difficult processes, such as experiments designed for high pressures and microgravity conditions. Indium nitride is a candidate compound for high-speed laser and photodiodes for optical communication systems, as well as for semiconductor lasers operating into the blue and ultraviolet regions. But InN and other nitride compounds exhibit large thermal decomposition at their optimum growth temperatures. In addition, epitaxy at lower temperatures and subatmospheric pressures incorporates indium droplets into the InN films. However, surface stabilization data indicate that InN could be grown at 900 K under high nitrogen pressure, and microgravity could provide laminar flow conditions. Numerical models for chemical vapor deposition have been developed, coupling complex chemical kinetics with fluid dynamic properties.

  2. A Computational Methodology for Simulating Thermal Loss Testing of the Advanced Stirling Convertor

    NASA Technical Reports Server (NTRS)

    Reid, Terry V.; Wilson, Scott D.; Schifer, Nicholas A.; Briggs, Maxwell H.

    2012-01-01

    The U.S. Department of Energy (DOE) and Lockheed Martin Space Systems Company (LMSSC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. This generator would use two high-efficiency Advanced Stirling Convertors (ASCs), developed by Sunpower Inc. and NASA Glenn Research Center (GRC). The ASCs convert thermal energy from a radioisotope heat source into electricity. As part of ground testing of these ASCs, different operating conditions are used to simulate expected mission conditions. These conditions require achieving a particular operating frequency, hot-end and cold-end temperatures, and specified electrical power output for a given net heat input. In an effort to improve net heat input predictions, numerous tasks have been performed to provide a more accurate value for net heat input into the ASCs, including the use of multi-dimensional numerical models. Validation test hardware has also been used to provide a direct comparison of numerical results and to validate the multi-dimensional numerical models used to predict convertor net heat input and efficiency. These validation tests were designed to simulate the temperature profile of an operating Stirling convertor and resulted in a measured net heat input of 244.4 W. The methodology was applied to the multi-dimensional numerical model, which resulted in a net heat input of 240.3 W, a value 1.7 percent less than that measured during laboratory testing. The resulting computational methodology and results are discussed.

  3. Current Advances in the Computational Simulation of the Formation of Low-Mass Stars

    SciTech Connect

    Klein, R I; Inutsuka, S; Padoan, P; Tomisaka, K

    2005-10-24

    Developing a theory of low-mass star formation (approximately 0.1 to 3 M☉) remains one of the most elusive and important goals of theoretical astrophysics. The star-formation process is the outcome of the complex dynamics of interstellar gas involving non-linear interactions of turbulence, gravity, magnetic fields, and radiation. The evolution of protostellar condensations, from the moment they are assembled by turbulent flows to the time they reach stellar densities, spans an enormous range of scales, resulting in a major computational challenge for simulations. Since the previous Protostars and Planets conference, dramatic advances in new numerical algorithmic techniques have been successfully implemented on large-scale parallel supercomputers. Among such techniques, Adaptive Mesh Refinement and Smoothed Particle Hydrodynamics have provided frameworks to simulate the process of low-mass star formation with a very large dynamic range. It is now feasible to explore the turbulent fragmentation of molecular clouds and the gravitational collapse of cores into stars self-consistently within the same calculation. The increased sophistication of these powerful methods comes with substantial caveats associated with the use of the techniques and the interpretation of the numerical results. In this review, we examine what has been accomplished in the field and present a critique of both numerical methods and scientific results. We stress that computational simulations should obey the available observational constraints and demonstrate numerical convergence. Failing this, results of large-scale simulations do not advance our understanding of low-mass star formation.

  4. Identifying human disease genes: advances in molecular genetics and computational approaches.

    PubMed

    Bakhtiar, S M; Ali, A; Baig, S M; Barh, D; Miyoshi, A; Azevedo, V

    2014-07-04

    The human genome project is one of the significant achievements that have provided detailed insight into our genetic legacy. During the last two decades, biomedical investigations have gathered a considerable body of evidence, detecting more than 2000 disease genes. Despite imperative advances in the genetic understanding of various diseases, the pathogenesis of many others remains obscure. With recent advances, the laborious methodologies used to identify DNA variations have been replaced by direct sequencing of genomic DNA to detect genetic changes. The ability to perform such studies depends equally on the development of high-throughput and economical genotyping methods. Currently, for essentially every disease whose origin is still unknown, genetic approaches are available that can be pedigree-dependent or pedigree-independent, with the capacity to elucidate fundamental disease mechanisms. Computer algorithms and programs for linkage analysis have formed the foundation of many disease gene detection projects; similarly, databases of clinical findings have been widely used to support diagnostic decisions in dysmorphology and general human disease. For every disease type, genome sequence variations, particularly single nucleotide polymorphisms, are mapped by comparing the genetic makeup of case and control groups. Methods that predict the effects of polymorphisms on protein stability are useful for the identification of possible disease associations, whereas structural effects can be assessed using methods that predict stability changes in proteins from sequence and/or structural information.

  5. Advanced Computed Tomography Inspection System (ACTIS): An overview of the technology and its application

    NASA Technical Reports Server (NTRS)

    Hediger, Lisa H.

    1991-01-01

    The Advanced Computed Tomography Inspection System (ACTIS) was developed by NASA Marshall to support solid propulsion test programs. ACTIS represents a significant advance in state-of-the-art inspection systems. Its flexibility and superior technical performance have made ACTIS very popular, both within and outside the aerospace community. Through technology utilization efforts, ACTIS has been applied to inspection problems in commercial aerospace, lumber, automotive, and nuclear waste disposal industries. ACTIS has been used to inspect items of historical interest. ACTIS has consistently produced valuable results, providing information which was unattainable through conventional inspection methods. Although many successes have already been shown, the full potential of ACTIS has not yet been realized. It is currently being applied in the commercial aerospace industry by Boeing. Smaller systems, based on ACTIS technology, are becoming increasingly available. This technology has much to offer the small business and industry, especially in identifying design and process problems early in the product development cycle to prevent defects. Several options are available to businesses interested in this technology.

  6. Advanced computed tomography inspection system (ACTIS): an overview of the technology and its applications

    NASA Astrophysics Data System (ADS)

    Beshears, Ronald D.; Hediger, Lisa H.

    1994-10-01

    The Advanced Computed Tomography Inspection System (ACTIS) was developed by the Marshall Space Flight Center to support in-house solid propulsion test programs. ACTIS represents a significant advance in state-of-the-art inspection systems. Its flexibility and superior technical performance have made ACTIS very popular, both within and outside the aerospace community. Through Technology Utilization efforts, ACTIS has been applied to inspection problems in commercial aerospace, lumber, automotive, and nuclear waste disposal industries. ACTIS has even been used to inspect items of historical interest. ACTIS has consistently produced valuable results, providing information which was unattainable through conventional inspection methods. Although many successes have already been demonstrated, the full potential of ACTIS has not yet been realized. It is currently being applied in the commercial aerospace industry by Boeing Aerospace Company. Smaller systems, based on ACTIS technology, are becoming increasingly available. This technology has much to offer small businesses and industry, especially in identifying design and process problems early in the product development cycle to prevent defects. Several options are available to businesses interested in pursuing this technology.

  7. Computational studies of horizontal axis wind turbines in high wind speed condition using advanced turbulence models

    NASA Astrophysics Data System (ADS)

    Benjanirat, Sarun

    Next-generation horizontal-axis wind turbines (HAWTs) will operate at very high wind speeds. Existing engineering approaches for modeling the flow phenomena are based on blade element theory and cannot adequately account for 3-D separated, unsteady flow effects. Therefore, researchers around the world are beginning to model these flows using first-principles-based computational fluid dynamics (CFD) approaches. In this study, an existing first-principles-based Navier-Stokes approach is enhanced to model HAWTs at high wind speeds. The enhancements include improved grid topology, implicit time-marching algorithms, and advanced turbulence models. The advanced turbulence models include the Spalart-Allmaras one-equation model and the k-ε, k-ω, and Shear Stress Transport (k-ω SST) models. These models are also integrated with detached eddy simulation (DES) models. Results are presented for a range of wind speeds for a configuration termed the National Renewable Energy Laboratory Phase VI rotor, tested at NASA Ames Research Center. Grid sensitivity studies are also presented. Additionally, the effects of existing transition models on the predictions are assessed. Data presented include power/torque production, radial distributions of normal and tangential pressure forces, root bending moments, and surface pressure fields. Good agreement was obtained between the predictions and experiments for most of the conditions, particularly with the Spalart-Allmaras DES model.

  8. Spectral computed tomography in advanced gastric cancer: Can iodine concentration non-invasively assess angiogenesis?

    PubMed Central

    Chen, Xiao-Hua; Ren, Ke; Liang, Pan; Chai, Ya-Ru; Chen, Kui-Sheng; Gao, Jian-Bo

    2017-01-01

    AIM To investigate the correlation of the iodine concentration (IC) generated by spectral computed tomography (CT) with micro-vessel density (MVD) and vascular endothelial growth factor (VEGF) expression in patients with advanced gastric carcinoma (GC). METHODS Thirty-four advanced GC patients underwent abdominal enhanced CT in the gemstone spectral imaging mode. The IC of the primary lesion in the arterial phase (AP) and venous phase (VP) was measured and then normalized against that in the aorta to provide the normalized IC (nIC). MVD and VEGF were detected by immunohistochemical assays, using CD34 and VEGF-A antibodies, respectively. Correlations of nIC with MVD, VEGF, and clinical-pathological features were analyzed. RESULTS Both nICs correlated linearly with MVD and were higher in the primary lesion site than in the normal control site, but were not correlated with VEGF expression. After stratification by clinical-pathological subtype, nIC-AP showed a statistically significant correlation with MVD, particularly in the group with tumors at stage T4, without nodal involvement, of a mixed Lauren type, located at the antrum, and occurring in female patients. nIC-VP showed a positive correlation with MVD in the group with tumors at stage T4 and above, with nodal involvement, poor differentiation, location at the pylorus, a mixed or diffuse Lauren subtype, and occurring in male patients. nIC-AP and nIC-VP showed significant differences in terms of histological differentiation and Lauren subtype. CONCLUSION The IC detected by spectral CT correlated with the MVD. nIC-AP and nIC-VP can reflect angiogenesis in different pathological subgroups of advanced GC. PMID:28321168
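
    The normalization step and correlation analysis are simple to state. The sketch below shows the computation with invented values, not the study's data:

```python
# Sketch: normalized iodine concentration (nIC) and its linear correlation with
# micro-vessel density (MVD). All values below are invented for illustration.
import numpy as np

ic_lesion = np.array([2.1, 1.8, 2.5, 1.6, 2.9])    # mg/mL, primary lesion
ic_aorta = np.array([10.5, 9.8, 11.2, 9.1, 12.0])  # mg/mL, aorta, same phase
mvd = np.array([38.0, 30.0, 47.0, 26.0, 55.0])     # vessels per field

nic = ic_lesion / ic_aorta                 # normalization against the aorta
r = np.corrcoef(nic, mvd)[0, 1]            # Pearson (linear) correlation
print(f"nIC values: {np.round(nic, 3)}, r = {r:.2f}")
```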

  9. Influence of setback and advancement osseous genioplasty on facial outcome: A computer-simulated study.

    PubMed

    Möhlhenrich, Stephan Christian; Heussen, Nicole; Kamal, Mohammad; Peters, Florian; Fritz, Ulrike; Hölzle, Frank; Modabber, Ali

    2015-12-01

    The aim of this virtual study was to investigate the influence of angular deviation and displacement distance on the overlying soft tissue during chin genioplasty. Computed tomography data from 21 patients were read using ProPlan CMF software. Twelve simulated genioplasties were performed per patient, with variable osteotomy angles and displacement distances. Soft-tissue deformations and cephalometric analyses were compared. Changes in the anterior and inferior soft tissue of the chin, along with the resultant lower facial third area, were determined. The maximum average soft-tissue changes were observed after a 10-mm advancement: about 4.19 (SD 0.84) mm anteriorly and -1.55 (SD 0.96) mm inferiorly. After a 10-mm setback, the deviations were -4.63 (SD 0.56) mm anteriorly and 0.75 (SD 1.16) mm inferiorly. The anterior soft tissue showed a statistically significant change with bony displacement in both directions, independent of osteotomy angle (p < 0.001); significant differences in the inferior soft tissue were noted only after a 10-mm advancement with an angle of -5° (p = 0.0055). The average area of the total lower facial third was 24,807.80 (SD 4,091.72) mm², of which up to 62.75% was affected. Advancement genioplasty leads to greater changes in the overlying soft tissue, whereas the affected area is larger after setback displacement. The ratio between soft- and hard-tissue movements depends largely on the displacement distance.

  10. An information infrastructure for earthquake science

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.; SCEC/ITR Collaboration

    2003-04-01

    The Southern California Earthquake Center (SCEC), in collaboration with the San Diego Supercomputer Center, the USC Information Sciences Institute, IRIS, and the USGS, has received a large five-year grant from the NSF's ITR Program and its Geosciences Directorate to build a new information infrastructure for earthquake science. In many respects, the SCEC/ITR Project presents a microcosm of the IT efforts now being organized across the geoscience community, including the EarthScope initiative. The purpose of this presentation is to discuss the experience gained by the project thus far and lay out the challenges that lie ahead; our hope is to encourage cross-discipline collaboration in future IT advancements. Project goals have been formulated in terms of four "computational pathways" related to seismic hazard analysis (SHA). For example, Pathway 1 involves the construction of an open-source, object-oriented, and web-enabled framework for SHA computations that can incorporate a variety of earthquake forecast models, intensity-measure relationships, and site-response models, while Pathway 2 aims to utilize the predictive power of wavefield simulation in modeling time-dependent ground motion for scenario earthquakes and constructing intensity-measure relationships. The overall goal is to create a SCEC "community modeling environment" or collaboratory that will comprise the curated (on-line, documented, maintained) resources needed by researchers to develop and use these four computational pathways. Current activities include (1) the development and verification of the computational modules, (2) the standardization of data structures and interfaces needed for syntactic interoperability, (3) the development of knowledge representation and management tools, (4) the construction of SCEC computational and data grid testbeds, and (5) the creation of user interfaces for knowledge-acquisition, code execution, and visualization. I will emphasize the increasing role of standardized

  11. Advanced display object selection methods for enhancing user-computer productivity

    NASA Technical Reports Server (NTRS)

    Osga, Glenn A.

    1993-01-01

    The User-Interface Technology Branch at NCCOSC RDT&E Division has been conducting a series of studies to address the suitability of commercial off-the-shelf (COTS) graphic user-interface (GUI) methods for efficiency and performance in critical naval combat systems. This paper presents an advanced selection algorithm and method developed to increase user performance when making selections on tactical displays. The method has also been applied with considerable success to a variety of cursor and pointing tasks. Typical GUIs allow user selection by: (1) moving a cursor with a pointing device such as a mouse, trackball, joystick, or touchscreen; and (2) placing the cursor on the object. Examples of GUI objects are the buttons, icons, folders, scroll bars, etc. used in many personal computer and workstation applications. This paper presents an improved method of selection and the theoretical basis for the significant performance gains achieved with the various input devices tested. The method is applicable to all GUI styles and display sizes, and is particularly useful for selections on small screens such as notebook computers. Considering the amount of work-hours spent pointing and clicking across all styles of available graphic user-interfaces, the cost/benefit of applying this method to graphic user-interfaces is substantial, with the potential for increasing productivity across thousands of users and applications.
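
    One common form of such an algorithm is a nearest-object capture rule, which lets a click succeed without landing exactly on a small target. The sketch below is illustrative only; the capture radius and scoring are not the branch's actual method:

```python
# Sketch: nearest-object capture selection. A click selects the closest display
# object within a capture radius, rather than requiring a direct hit.
# Radius and data are illustrative assumptions.
import math

def select(cursor, objects, capture_radius=24.0):
    """Return the object nearest the cursor, if any lies within the radius."""
    best, best_d = None, capture_radius
    for obj_id, (x, y) in objects.items():
        d = math.hypot(cursor[0] - x, cursor[1] - y)
        if d <= best_d:
            best, best_d = obj_id, d
    return best

tracks = {"track-1": (100.0, 80.0), "track-2": (112.0, 85.0)}
print(select((108.0, 82.0), tracks))   # -> "track-2" (the closer of the two)
```

    Enlarging the effective target in this way reduces pointing time for small, dense targets, consistent with Fitts'-law-style accounts of cursor movement.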

  12. Building highly available control system applications with Advanced Telecom Computing Architecture and open standards

    NASA Astrophysics Data System (ADS)

    Kazakov, Artem; Furukawa, Kazuro

    2010-11-01

    Requirements for modern and future control systems for large projects like the International Linear Collider demand high availability for control system components. Recently the telecom industry produced an open hardware specification, the Advanced Telecom Computing Architecture (ATCA), aimed at better reliability, availability, and serviceability. Since its first market appearance in 2004, the ATCA platform has shown tremendous growth, has proved stable, and is well represented by a number of vendors. ATCA is an industry standard for highly available systems. Complementing the hardware, the Service Availability Forum (SAF), a consortium of leading communications and computing companies, describes the interaction between hardware and software. SAF defines a set of specifications, such as the Hardware Platform Interface and the Application Interface Specification, that provide an extensive description of highly available systems, services, and their interfaces. Originally aimed at telecom applications, these specifications can be used for accelerator controls software as well. This study describes the benefits of using these specifications and their possible adoption for accelerator control systems. It demonstrates how the EPICS Redundant IOC was extended using the Hardware Platform Interface specification, which made it possible to utilize the benefits of the ATCA platform.

  13. Development of Experimental and Computational Aeroacoustic Tools for Advanced Liner Evaluation

    NASA Technical Reports Server (NTRS)

    Jones, Michael G.; Watson, Willie R.; Nark, Douglas N.; Parrott, Tony L.; Gerhold, Carl H.; Brown, Martha C.

    2006-01-01

    Acoustic liners in aircraft engine nacelles suppress radiated noise. Therefore, as air travel increases, increasingly sophisticated tools are needed to maximize noise suppression. During the last 30 years, NASA has invested significant effort in the development of experimental and computational acoustic liner evaluation tools. The Curved Duct Test Rig is a 152-mm by 381-mm curved duct that supports liner evaluation at Mach numbers up to 0.3 and source SPLs up to 140 dB, in the presence of user-selected modes. The Grazing Flow Impedance Tube is a 51-mm by 63-mm duct currently being fabricated to operate at Mach numbers up to 0.6 with source SPLs up to at least 140 dB, and will replace the existing 51-mm by 51-mm duct. Together, these test rigs allow evaluation of advanced acoustic liners over a range of conditions representative of those observed in aircraft engine nacelles. Data acquired with these test ducts are processed using three aeroacoustic propagation codes. Two are based on finite element solutions to the convected Helmholtz and linearized Euler equations. The third is based on a parabolic approximation to the convected Helmholtz equation. The current status of these computational tools and their associated usage with the Langley test rigs is provided.
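
    For reference, the convected Helmholtz equation that two of these codes solve can be written, for a uniform mean flow of Mach number M in the x direction (a standard textbook form, not a quotation from the codes' documentation), as

$$(1 - M^2)\,\frac{\partial^2 p}{\partial x^2} + \frac{\partial^2 p}{\partial y^2} - 2ikM\,\frac{\partial p}{\partial x} + k^2 p = 0,$$

    where p is the complex acoustic pressure amplitude and k = ω/c is the free-space wavenumber; the liner impedance enters through the boundary condition imposed on the duct walls.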

  14. Computational fluid dynamics in the design and analysis of thermal processes: a review of recent advances.

    PubMed

    Norton, Tomás; Tiwari, Brijesh; Sun, Da Wen

    2013-01-01

    The design of thermal processes in the food industry has undergone great development in the last two decades due to the availability of cheap computing power alongside advanced modelling techniques such as computational fluid dynamics (CFD). CFD uses numerical algorithms to solve the non-linear partial differential equations of fluid mechanics and heat transfer so that the complex mechanisms that govern many food-processing systems can be resolved. In thermal processing applications, CFD can be used to build three-dimensional models that are both spatially and temporally representative of a physical system, producing solutions with high levels of physical realism without the heavy costs associated with experimental analyses. Therefore, CFD is playing an ever-growing role in the optimization of conventional thermal processes as well as the development of new ones in the food industry. This paper discusses the fundamental aspects involved in developing CFD solutions and forms a state-of-the-art review of various CFD applications in conventional as well as novel thermal processes. The challenges facing CFD modellers of thermal processes are also discussed. From this review it is evident that present-day CFD software, with its rich tapestries of mathematical physics, numerical methods, and visualization techniques, is currently recognized as a formidable and pervasive technology which permits comprehensive analyses of thermal processing.

  15. Guest Editorial Introduction to the Special Issue on 'Advanced Signal Processing Techniques and Telecommunications Network Infrastructures for Smart Grid Analysis, Monitoring, and Management'

    SciTech Connect

    Bracale, Antonio; Barros, Julio; Cacciapuoti, Angela Sara; Chang, Gary; Dall'Anese, Emiliano

    2015-06-10

    Electrical power systems are undergoing a radical change in structure, components, and operational paradigms, and are progressively approaching the new concept of smart grids (SGs). Future power distribution systems will be characterized by the simultaneous presence of various distributed resources, such as renewable energy systems (e.g., photovoltaic power plants and wind farms), storage systems, and controllable/non-controllable loads. Control and optimization architectures will enable network-wide coordination of these grid components in order to improve system efficiency and reliability and to limit greenhouse gas emissions. In this context, energy flows will be bidirectional, from large power plants to end users and vice versa; producers and consumers will continuously interact at different voltage levels to determine in advance the requests of loads and to adapt the production and demand of electricity flexibly and efficiently, while also taking into account the presence of storage systems.

  16. Guest Editorial Introduction to the Special Issue on 'Advanced Signal Processing Techniques and Telecommunications Network Infrastructures for Smart Grid Analysis, Monitoring, and Management'

    DOE PAGES

    Bracale, Antonio; Barros, Julio; Cacciapuoti, Angela Sara; ...

    2015-06-10

    Electrical power systems are undergoing a radical change in structure, components, and operational paradigms, and are progressively approaching the new concept of smart grids (SGs). Future power distribution systems will be characterized by the simultaneous presence of various distributed resources, such as renewable energy systems (e.g., photovoltaic power plants and wind farms), storage systems, and controllable/non-controllable loads. Control and optimization architectures will enable network-wide coordination of these grid components in order to improve system efficiency and reliability and to limit greenhouse gas emissions. In this context, energy flows will be bidirectional, from large power plants to end users and vice versa; producers and consumers will continuously interact at different voltage levels to determine in advance the requests of loads and to adapt the production and demand of electricity flexibly and efficiently, while also taking into account the presence of storage systems.

  17. Exploratory Mixed-Method Study of End-User Computing within an Information Technology Infrastructure Library U.S. Army Service Delivery Environment

    ERIC Educational Resources Information Center

    Manzano, Sancho J., Jr.

    2012-01-01

    Empirical studies of what is known as end-user computing have been conducted from as early as the 1980s through to present-day IT employees. Many studies have used quantitative instruments, including those by Cotterman and Kumar (1989) and Rockart and Flannery (1983). Qualitative studies on end-user computing classifications have been conducted by…

  18. Current advances in molecular, biochemical, and computational modeling analysis of microalgal triacylglycerol biosynthesis.

    PubMed

    Lenka, Sangram K; Carbonaro, Nicole; Park, Rudolph; Miller, Stephen M; Thorpe, Ian; Li, Yantao

    2016-01-01

    Triacylglycerols (TAGs) are highly reduced energy storage molecules ideal for biodiesel production. Microalgal TAG biosynthesis has been studied extensively in recent years, both at the molecular level and systems level through experimental studies and computational modeling. However, discussions of the strategies and products of the experimental and modeling approaches are rarely integrated and summarized together in a way that promotes collaboration among modelers and biologists in this field. In this review, we outline advances toward understanding the cellular and molecular factors regulating TAG biosynthesis in unicellular microalgae with an emphasis on recent studies on rate-limiting steps in fatty acid and TAG synthesis, while also highlighting new insights obtained from the integration of multi-omics datasets with mathematical models. Computational methodologies such as kinetic modeling, metabolic flux analysis, and new variants of flux balance analysis are explained in detail. We discuss how these methods have been used to simulate algae growth and lipid metabolism in response to changing culture conditions and how they have been used in conjunction with experimental validations. Since emerging evidence indicates that TAG synthesis in microalgae operates through coordinated crosstalk between multiple pathways in diverse subcellular destinations including the endoplasmic reticulum and plastids, we discuss new experimental studies and models that incorporate these findings for discovering key regulatory checkpoints. Finally, we describe tools for genetic manipulation of microalgae and their potential for future rational algal strain design. This comprehensive review explores the potential synergistic impact of pathway analysis, computational approaches, and molecular genetic manipulation strategies on improving TAG production in microalgae.
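
    To make the flux balance analysis (FBA) variants mentioned above concrete, here is a minimal sketch on an invented toy network (not a real algal model): FBA maximizes a target flux subject to the steady-state constraint S v = 0 and bounds on each flux.

    ```python
    # Minimal flux balance analysis sketch on a toy network (illustrative only).
    # Rows of S are metabolites, columns are reactions; steady state means S v = 0.
    import numpy as np
    from scipy.optimize import linprog

    S = np.array([
        [1, -1,  0,  0],   # metabolite A: produced by r0 (uptake), consumed by r1
        [0,  1, -1, -1],   # metabolite B: produced by r1, consumed by r2 and r3
    ])
    bounds = [(0, 10), (0, 10), (0, 10), (0, 2)]  # lower/upper bound per flux

    c = np.zeros(S.shape[1])
    c[2] = -1.0  # linprog minimizes, so negate to maximize the "TAG synthesis" flux r2

    res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds)
    print("optimal fluxes:", res.x, "-> max TAG flux:", -res.fun)
    ```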

  19. Distributed Data Integration Infrastructure

    SciTech Connect

    Critchlow, T; Ludaescher, B; Vouk, M; Pu, C

    2003-02-24

    The Internet is becoming the preferred method for disseminating scientific data from a variety of disciplines. This can result in information overload on the part of the scientists, who are unable to query all of the relevant sources, even if they knew where to find them, what they contained, how to interact with them, and how to interpret the results. A related issue is that keeping up with current trends in information technology often taxes the end-user's expertise and time. Thus, instead of benefiting from this information-rich environment, scientists become experts on a small number of sources and technologies, use them almost exclusively, and develop a resistance to innovations that could enhance their productivity. Enabling information-based scientific advances, in domains such as functional genomics, requires fully utilizing all available information and the latest technologies. In order to address this problem we are developing an end-user-centric, domain-sensitive, workflow-based infrastructure, shown in Figure 1, that will allow scientists to design complex scientific workflows that reflect the data manipulation required to perform their research without an undue burden. We are taking a three-tiered approach to designing this infrastructure, utilizing (1) abstract workflow definition, construction, and automatic deployment, (2) complex agent-based workflow execution, and (3) automatic wrapper generation. In order to construct a workflow, the scientist defines an abstract workflow (AWF) in terminology (semantics and context) that is familiar to him/her. This AWF includes all of the data transformations, selections, and analyses required by the scientist, but does not necessarily specify particular data sources. This abstract workflow is then compiled into an executable workflow (EWF, in our case XPDL) that is then evaluated and executed by the workflow engine. This EWF contains references to specific data sources and interfaces capable of performing the desired
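
    The abstract-to-executable compilation step described above can be illustrated with a small sketch; every name below (registry entries, wrapper names, endpoints) is invented for illustration, and the project's actual executable format is XPDL rather than Python dictionaries.

    ```python
    # Toy illustration of compiling an abstract workflow (AWF) into an executable
    # one (EWF): each abstract step names a concept, and compilation binds it to
    # a concrete data source and wrapper from a registry.
    ABSTRACT_WORKFLOW = [
        {"step": "fetch_sequences", "concept": "gene_sequence"},
        {"step": "align", "concept": "sequence_alignment"},
    ]

    # Concept-to-source registry (in the real system this would be discovered,
    # not hard-coded; these entries are hypothetical).
    REGISTRY = {
        "gene_sequence": {"wrapper": "genbank_wrapper", "endpoint": "https://example.org/genbank"},
        "sequence_alignment": {"wrapper": "clustal_wrapper", "endpoint": "https://example.org/clustal"},
    }

    def compile_workflow(awf):
        """Bind each abstract step to a concrete source, yielding an executable plan."""
        return [{**step, **REGISTRY[step["concept"]]} for step in awf]

    for step in compile_workflow(ABSTRACT_WORKFLOW):
        print(step)
    ```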

  20. A Scalable Tools Communication Infrastructure

    SciTech Connect

    Buntinas, Darius; Bosilca, George; Graham, Richard L; Vallee, Geoffroy R; Watson, Gregory R.

    2008-01-01

    The Scalable Tools Communication Infrastructure (STCI) is an open source collaborative effort intended to provide high-performance, scalable, resilient, and portable communications and process control services for a wide variety of user and system tools. STCI is aimed specifically at tools for ultrascale computing and uses a component architecture to simplify tailoring the infrastructure to a wide range of scenarios. This paper describes STCI's design philosophy and the various components that will be used to provide an STCI implementation for a range of ultrascale platforms and tool types. These include tools supporting parallel run-time environments, such as MPI, parallel application correctness tools and performance analysis tools, as well as system monitoring and management tools.

  1. Infrastructure Survey 2011

    ERIC Educational Resources Information Center

    Group of Eight (NJ1), 2012

    2012-01-01

    In 2011, the Group of Eight (Go8) conducted a survey on the state of its buildings and infrastructure. The survey is the third Go8 Infrastructure survey, with previous surveys being conducted in 2007 and 2009. The current survey updated some of the information collected in the previous surveys. It also collated data related to aspects of the…

  2. Smart Valley Infrastructure.

    ERIC Educational Resources Information Center

    Maule, R. William

    1994-01-01

    Discusses prototype information infrastructure projects in northern California's Silicon Valley. The strategies of the public and private telecommunications carriers vying for backbone services and industries developing end-user infrastructure technologies via office networks, set-top box networks, Internet multimedia, and "smart homes"…

  3. A Computational framework for telemedicine.

    SciTech Connect

    Foster, I.; von Laszewski, G.; Thiruvathukal, G. K.; Toonen, B.; Mathematics and Computer Science

    1998-07-01

    Emerging telemedicine applications require the ability to exploit diverse and geographically distributed resources. High-speed networks are used to integrate advanced visualization devices, sophisticated instruments, large databases, archival storage devices, PCs, workstations, and supercomputers. This form of telemedical environment is similar to networked virtual supercomputers, also known as metacomputers. Metacomputers are already being used in many scientific application areas. In this article, we analyze the requirements for a telemedical computing infrastructure and compare them with the requirements found in a typical metacomputing environment. We will show that metacomputing environments can be used to enable a more powerful and unified computational infrastructure for telemedicine. The Globus metacomputing toolkit can provide the necessary low-level mechanisms to enable a large-scale telemedical infrastructure. The Globus toolkit components are designed in a modular fashion and can be extended to support the specific requirements of telemedicine.

  4. Advanced Cyberinfrastructure Investments Addressing Earth Science Challenges

    NASA Astrophysics Data System (ADS)

    Walton, A. L.; Spengler, S. J.; Zanzerkia, E. E.

    2014-12-01

    The National Science Foundation supports infrastructure development and research into Big Data challenges as part of its long-term cyberinfrastructure strategy. This strategy highlights the critical need to leverage and partner with other agencies, resources and service providers to the U.S. research community. The current cyberinfrastructure and research activities within NSF support advanced technology development, pilot demonstrations of new capabilities for the scientific community in general, and integration and interoperability of data resources across the Geoscience community. These activities include the Data Infrastructure Building Blocks, Big Data and EarthCube programs, among others. Investments are competitively solicited; the resulting portfolio of high performance computing, advanced information systems, new software capabilities, analytics and modeling supports a range of science disciplines. This presentation provides an overview of these research programs, highlighting some of the key investments in advanced analytics, coupled modeling, and seamless collaboration. Examples related to the geosciences, computer-aided discovery and hypothesis generation are highlighted.

  5. COOPEUS - connecting research infrastructures in environmental sciences

    NASA Astrophysics Data System (ADS)

    Koop-Jakobsen, Ketil; Waldmann, Christoph; Huber, Robert

    2015-04-01

    The COOPEUS project was initiated in 2012, bringing together 10 research infrastructures (RIs) in environmental sciences from the EU and US in order to improve the discovery, access, and use of environmental information and data across scientific disciplines and across geographical borders. The COOPEUS mission is to facilitate readily accessible research infrastructure data to advance our understanding of Earth systems through an international community-driven effort, by: bringing together both user communities and top-down directives to address evolving societal and scientific needs; removing technical, scientific, cultural, and geopolitical barriers to data use; and coordinating the flow, integrity, and preservation of information. A survey of data availability was conducted among the COOPEUS research infrastructures for the purpose of discovering impediments to open international and cross-disciplinary sharing of environmental data. The survey showed that the majority of the data offered by the COOPEUS research infrastructures are available via the internet (>90%), but accessibility to these data differs significantly among research infrastructures; only 45% offer open access to their data, whereas the remaining infrastructures offer restricted access, e.g., they do not release raw or sensitive data, demand user registration, or require permission prior to the release of data. These rules and regulations are often installed as a form of standard practice, whereas formal data policies are lacking in 40% of the infrastructures, primarily in the EU. In order to improve this situation, COOPEUS has installed a common data-sharing policy, agreed upon by all the COOPEUS research infrastructures. To investigate the existing opportunities for improving interoperability among environmental research infrastructures, COOPEUS explored the opportunities of the GEOSS common infrastructure (GCI) by holding a hands-on workshop. Through exercises directly registering resources

  6. TRANS_MU computer code for computation of transmutant formation kinetics in advanced structural materials for fusion reactors

    NASA Astrophysics Data System (ADS)

    Markina, Natalya V.; Shimansky, Gregory A.

    A method of controlling the systematic error in transmutation computations is described for a class of problems in which strictly one parent and one residual nucleus are considered in each nuclear transformation channel. A discrete-logical algorithm is presented that reduces the matrix of the differential equation system to block-triangular form. A computing procedure is developed that determines a strict estimate of the computational error for each value of the computed results, for the above-named class of transmutation problems with some additional restrictions on the complexity of the nuclear transformation scheme. The TRANS_MU computer code implementing this procedure has a number of advantages over analogous approaches. Besides the quantitative control of systematic and computational errors, an important feature of TRANS_MU is the calculation of the contribution of each considered reaction to transmutant accumulation and gas production. The application of the TRANS_MU computer code is illustrated using copper alloys as an example, both when planning irradiation experiments on fusion reactor material specimens in fission reactors and when processing the experimental results.
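
    The class of problems described corresponds to the standard linear transmutation (Bateman-type) system; a hedged sketch of the form, with the block-triangular structure arising when nuclides are ordered so that no transformation chain leads back to an earlier nuclide, is:

    ```latex
    % N_i = concentration of nuclide i; lambda = decay constants, sigma =
    % reaction cross sections, phi = neutron flux.  Off-diagonal A_ij are
    % production rates of i from j; diagonal terms collect all losses of i.
    \[
    \frac{d\mathbf{N}}{dt} = A\,\mathbf{N},
    \qquad
    A_{ij} = \lambda_{j \to i} + \sigma_{j \to i}\,\phi \;\;(i \neq j),
    \qquad
    A_{ii} = -\left(\lambda_{i} + \sigma_{i}\,\phi\right)
    \]
    ```

    Ordering the nuclides so that each nucleus produces only nuclides later in the list makes A block-triangular (exactly triangular when no cycles exist), which is what permits the strict per-value error estimation described above.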

  7. Utilizing Computer and Multimedia Technology in Generating Choreography for the Advanced Dance Student at the High School Level.

    ERIC Educational Resources Information Center

    Griffin, Irma Amado

    This study describes a pilot program utilizing various multimedia computer programs on a MacQuadra 840 AV. The target group consisted of six advanced dance students who participated in the pilot program within the dance curriculum by creating a database of dance movement using video and still photography. The students combined desktop publishing,…

  8. Using an Advanced Computational Laboratory Experiment to Extend and Deepen Physical Chemistry Students' Understanding of Atomic Structure

    ERIC Educational Resources Information Center

    Hoffman, Gary G.

    2015-01-01

    A computational laboratory experiment is described, which involves the advanced study of an atomic system. The students use concepts and techniques typically covered in a physical chemistry course but extend those concepts and techniques to more complex situations. The students get a chance to explore the study of atomic states and perform…

  9. Computer experiments on periodic systems identification using rotor blade transient flapping-torsion responses at high advance ratio

    NASA Technical Reports Server (NTRS)

    Hohenemser, K. H.; Prelewicz, D. A.

    1974-01-01

    Systems identification methods have recently been applied to rotorcraft to estimate stability derivatives from transient flight control response data. While these applications assumed a linear constant-coefficient representation of the rotorcraft, the computer experiments described in this paper used transient responses in flap-bending and torsion of a rotor blade at high advance ratio, which is a rapidly time-varying periodic system.
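
    As a sketch of what "rapidly time-varying periodic system" means here, the blade's flap-torsion dynamics can be written in state-space form with coefficients periodic in the rotor azimuth, so constant-coefficient identification methods do not apply directly:

    ```latex
    % Periodic-coefficient model: x is the state (flap and torsion deflections
    % and rates), psi the rotor azimuth; coefficients repeat every revolution.
    \[
    \dot{\mathbf{x}} = A(\psi)\,\mathbf{x} + B(\psi)\,u,
    \qquad A(\psi + 2\pi) = A(\psi), \quad B(\psi + 2\pi) = B(\psi)
    \]
    ```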

  10. [Attributes of forest infrastructure].

    PubMed

    Gao, Jun-kai; Jin, Ying-shan

    2007-06-01

    This paper discussed the origin and evolution of the concept of ecological infrastructure, the understanding among international communities of the functions of forests, the important roles of forests in China's economic development and ecological security, and the situation of and challenges to the ongoing forestry ecological restoration programs. It was suggested that, in a modern society, forests should be defined as an essential infrastructure for national economic and social development. The critical functions that forest infrastructure plays in the transition to forestry ecological development were emphasized. Based on a synthesis of forest ecosystem features, it was considered that the attributes of forest infrastructure are distinctive, because it is constructed from living biological material and is diversified in ownership. Forestry ecological restoration programs should not only follow the basic principles of infrastructure construction, but also take the special characteristics of forests into consideration in studying the managerial system of the programs. Some suggestions for the ongoing programs were put forward: 1) developing a modern ecosystem concept in which harmony between man and nature is the core; 2) formulating long-term, stable investments for forestry ecological restoration programs; 3) implementing forestry ecological restoration programs based on infrastructure construction principles; and 4) managing forests according to the principles of infrastructure construction management.

  11. Research infrastructure support to address ecosystem dynamics

    NASA Astrophysics Data System (ADS)

    Los, Wouter

    2014-05-01

    Predicting the evolution of ecosystems in response to climate change or human pressures is a challenge. Even understanding past or current processes is complicated as a result of the many interactions and feedbacks that occur within and between components of the system. This talk will present an example of current research on changes in landscape evolution, hydrology, soil biogeochemical processes, zoological food webs, and plant community succession, and how these affect feedbacks to components of the systems, including the climate system. Multiple observations, experiments, and simulations provide a wealth of data, but not necessarily understanding. Model development for the coupled processes on different spatial and temporal scales is sensitive to variations in data and to parameter changes. Fast high-performance computing may help to visualize the effect of these changes and the potential stability (and reliability) of the models. This may then allow for iteration between data production and models, towards stable models that reduce uncertainty and improve the prediction of change. The role of research infrastructures becomes crucial in overcoming barriers to such research. Environmental infrastructures cover physical site facilities, dedicated instrumentation, and e-infrastructure. The LifeWatch infrastructure for biodiversity and ecosystem research will provide services for data integration, analysis, and modeling. But it has to cooperate intensively with the other kinds of infrastructures in order to support the iteration between data production and model computation. The cooperation in the ENVRI project (Common Operations of Environmental Research Infrastructures) is one of the initiatives to foster such multidisciplinary research.

  12. Benchmarking infrastructure for mutation text mining

    PubMed Central

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data, and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600
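
    A hedged sketch of the "no programming needed" evaluation idea follows, using rdflib: annotations live in an RDF graph and a SPARQL query counts true positives. The namespace and predicate names are invented for illustration and differ from the benchmark's actual OWL ontology.

    ```python
    # Toy RDF graph with gold and predicted mutation annotations for one document;
    # a SPARQL query counts agreements, from which precision/recall follow.
    from rdflib import Graph, Namespace, URIRef, Literal

    EX = Namespace("http://example.org/mutation#")  # illustrative namespace
    g = Graph()
    doc = URIRef("http://example.org/doc1")
    for m in ("p.R273H", "p.V600E"):
        g.add((doc, EX.goldMutation, Literal(m)))
    for m in ("p.V600E", "p.G12D"):
        g.add((doc, EX.predictedMutation, Literal(m)))

    row = next(iter(g.query(
        """SELECT (COUNT(*) AS ?tp) WHERE {
               ?d ex:goldMutation ?m .
               ?d ex:predictedMutation ?m .
           }""",
        initNs={"ex": EX})))
    tp = row[0].toPython()

    gold = len(set(g.objects(doc, EX.goldMutation)))
    pred = len(set(g.objects(doc, EX.predictedMutation)))
    print(f"precision={tp/pred:.2f} recall={tp/gold:.2f}")
    ```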

  13. [Biobanks European infrastructure].

    PubMed

    Kinkorová, Judita; Topolčan, Ondřej

    2016-01-01

    Biobanks are structured repositories of human tissue samples connected with specific information. They became an integral part of personalized medicine in the new millennium. In the European research area, biobanks have been isolated, not well coordinated, and not connected into a network. The European Commission supports the European infrastructure BBMRI-ERIC (Biobanks and Biomolecular Resources Research Infrastructure, European Research Infrastructure Consortium), a consortium of 54 members with more than 225 associated organizations, largely biobanks, from over 30 countries. The aim is to support biomedical research using stored samples. The Czech Republic is a member of the consortium through its national node, BBMRI_CZ, consisting of five partners.

  14. Evaluation of Computed Tomography of Mock Uranium Fuel Rods at the Advanced Photon Source

    DOE PAGES

    Hunter, James F.; Brown, Donald William; Okuniewski, Maria

    2015-06-01

    This study discusses a multi-year effort to evaluate the utility of computed tomography at the Advanced Photon Source (APS) as a tool for non-destructive evaluation of uranium-based fuel rods. The majority of the data presented is on mock material made with depleted uranium, which mimics the x-ray attenuation characteristics of fuel rods while allowing for simpler handling. A range of data is presented, including full-thickness (5 mm diameter) fuel rodlets, reduced-thickness (1.8 mm) sintering test samples, and pre/post-irradiation samples (< 1 mm thick). These data were taken on both a white-beam (bending magnet) beamline and a high-energy, monochromatic beamline. These data show the utility of a synchrotron-type source in the evaluation of manufacturing defects (pre-irradiation) and lay out the case for in situ CT of fuel pellet sintering. Finally, data are also shown from small post-irradiation samples, and a case is made for post-irradiation CT of larger samples.

  15. Effect of surgical mandibular advancement on pharyngeal airway dimensions: a three-dimensional computed tomography study.

    PubMed

    Kochar, G D; Chakranarayan, A; Kohli, S; Kohli, V S; Khanna, V; Jayan, B; Chopra, S S; Verma, M

    2016-05-01

    The aim of this study was to quantify the changes in pharyngeal airway space (PAS) in patients with a skeletal class II malocclusion managed by bilateral sagittal split ramus osteotomy for mandibular advancement, using three-dimensional (3D) registration. The sample comprised 16 patients (mean age 21.69±2.80 years). Preoperative (T0) and postoperative (T1) computed tomography scans were recorded. Linear, cross-sectional area (CSA), and volumetric parameters of the velopharynx, oropharynx, and hypopharynx were evaluated. Parameters were compared with paired samples t-tests. Highly significant changes in dimension were measured in both sagittal and transverse planes (P<0.001). CSA measurements increased significantly between T0 and T1 (P<0.001). A significant increase in PAS volume was found at T1 compared with T0 (P<0.001). The changes in PAS were quantified using 3D reconstruction. Along the sagittal and transverse planes, the greatest increase was seen in the oropharynx (12.16% and 11.50%, respectively), followed by hypopharynx (11.00% and 9.07%) and velopharynx (8.97% and 6.73%). CSA increased by 41.69%, 34.56%, and 28.81% in the oropharynx, hypopharynx, and velopharynx, respectively. The volumetric increase was greatest in the oropharynx (49.79%) and least in the velopharynx (38.92%). These established quantifications may act as a useful guide for clinicians in the field of dental sleep medicine.

  16. Recent advances in computational fluid dynamics relevant to the modelling of pesticide flow on leaf surfaces.

    PubMed

    Glass, C Richard; Walters, Keith F A; Gaskell, Philip H; Lee, Yeaw C; Thompson, Harvey M; Emerson, David R; Gu, Xiao-Jun

    2010-01-01

    Increasing societal and governmental concern about the worldwide use of chemical pesticides is now providing strong drivers towards maximising the efficiency of pesticide utilisation and the development of alternative control techniques. There is growing recognition that the ultimate goal of achieving efficient and sustainable pesticide usage will require greater understanding of the fluid mechanical mechanisms governing the delivery to, and spreading of, pesticide droplets on target surfaces such as leaves. This has led to increasing use of computational fluid dynamics (CFD) as an important component of efficient process design with regard to pesticide delivery to the leaf surface. This perspective highlights recent advances in CFD methods for droplet spreading and film flows, which have the potential to provide accurate, predictive models for pesticide flow on leaf surfaces, and which can take account of each of the key influences of surface topography and chemistry, initial spray deposition conditions, evaporation and multiple droplet spreading interactions. The mathematical framework of these CFD methods is described briefly, and a series of new flow simulation results relevant to pesticide flows over foliage is provided. The potential benefits of employing CFD for practical process design are also discussed briefly.
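
    As an illustration of the droplet-spreading film models discussed, a representative lubrication-theory evolution equation for the film thickness h(x, y, t) under gravity and surface tension (a minimal sketch omitting the evaporation, topography, and multi-droplet effects the article covers) is:

    ```latex
    % Thin-film (lubrication) equation: h film thickness, mu viscosity,
    % sigma surface tension, rho density, g gravity.
    \[
    \frac{\partial h}{\partial t}
      = \nabla \cdot \left[ \frac{h^{3}}{3\mu}\,
          \nabla\!\left( \rho g h - \sigma \nabla^{2} h \right) \right]
    \]
    ```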

  17. Evaluation of Computed Tomography of Mock Uranium Fuel Rods at the Advanced Photon Source

    SciTech Connect

    Hunter, James F.; Brown, Donald William; Okuniewski, Maria

    2015-06-01

    This study discusses a multi-year effort to evaluate the utility of computed tomography at the Advanced Photon Source (APS) as a tool for non-destructive evaluation of uranium-based fuel rods. The majority of the data presented is on mock material made with depleted uranium, which mimics the x-ray attenuation characteristics of fuel rods while allowing for simpler handling. A range of data is presented, including full-thickness (5 mm diameter) fuel rodlets, reduced-thickness (1.8 mm) sintering test samples, and pre/post-irradiation samples (< 1 mm thick). These data were taken on both a white-beam (bending magnet) beamline and a high-energy, monochromatic beamline. These data show the utility of a synchrotron-type source in the evaluation of manufacturing defects (pre-irradiation) and lay out the case for in situ CT of fuel pellet sintering. Finally, data are also shown from small post-irradiation samples, and a case is made for post-irradiation CT of larger samples.

  18. Advanced flight computing technologies for validation by NASA's new millennium program

    NASA Astrophysics Data System (ADS)

    Alkalai, Leon

    1996-11-01

    The New Millennium Program (NMP) consists of a series of Deep-Space and Earth Orbiting missions that are technology-driven, in contrast to the more traditional science-driven space exploration missions of the past. These flights are designed to validate technologies that will enable a new era of low-cost, highly miniaturized, and highly capable spaceborne applications in the new millennium. In addition to the series of flight projects managed by separate flight teams, the NMP technology initiatives are managed by the following six focused technology programs: Microelectronics Systems, Autonomy, Telecommunications, Instrument Technologies and Architectures, In-Situ Instruments and Micro-electromechanical Systems, and Modular and Multifunctional Systems. Each technology program is managed as an Integrated Product Development Team (IPDT) of government, academic, and industry partners. In this paper, we will describe elements of the technology roadmap proposed by the NMP Microelectronics IPDT. Moreover, we will relate the proposed technology roadmap to existing NASA technology development programs, such as the Advanced Flight Computing (AFC) program and the Remote Exploration and Experimentation (REE) program, which constitute part of the ongoing NASA technology development pipeline. We will also describe the Microelectronics Systems technologies that have been accepted as part of the first New Millennium Deep-Space One spacecraft, an asteroid fly-by mission scheduled for launch in July 1998.

  19. IMPROVED COMPUTATIONAL NEUTRONICS METHODS AND VALIDATION PROTOCOLS FOR THE ADVANCED TEST REACTOR

    SciTech Connect

    David W. Nigg; Joseph W. Nielsen; Benjamin M. Chase; Ronnie K. Murray; Kevin A. Steuhm

    2012-04-01

    The Idaho National Laboratory (INL) is in the process of modernizing the various reactor physics modeling and simulation tools used to support operation and safety assurance of the Advanced Test Reactor (ATR). Key accomplishments so far have encompassed both computational as well as experimental work. A new suite of stochastic and deterministic transport theory based reactor physics codes and their supporting nuclear data libraries (HELIOS, KENO6/SCALE, NEWT/SCALE, ATTILA, and an extended implementation of MCNP5) has been installed at the INL. Corresponding models of the ATR and ATRC are now operational with all five codes, demonstrating the basic feasibility of the new code packages for their intended purpose. Of particular importance, a set of as-run core depletion HELIOS calculations for all ATR cycles since August 2009 was successfully completed during 2011. This demonstration supported a decision late in the year to proceed with the phased incorporation of the HELIOS methodology into the ATR fuel cycle management process beginning in 2012. On the experimental side of the project, new hardware was fabricated, measurement protocols were finalized, and the first four of six planned physics code validation experiments based on neutron activation spectrometry were conducted at the ATRC facility. Data analysis for the first three experiments, focused on characterization of the neutron spectrum in one of the ATR flux traps, has been completed. The six experiments will ultimately form the basis for a flexible, easily-repeatable ATR physics code validation protocol that is consistent with applicable ASTM standards.

  20. Advanced imaging findings and computer-assisted surgery of suspected synovial chondromatosis in the temporomandibular joint.

    PubMed

    Hohlweg-Majert, Bettina; Metzger, Marc C; Böhm, Joachim; Muecke, Thomas; Schulze, Dirk

    2008-11-01

    Synovial chondromatosis of the joint occurs mainly in teenagers and young adults. Only 3% of these neoplasms are located in the head and neck region; synovial chondromatosis of the temporomandibular joint is therefore a very rare disorder, and histological confirmation is required for the differential diagnosis. In this case series, the outcomes of histological investigation and imaging techniques are compared. Based on clinical symptoms, five cases of suspected synovial chondromatosis of the temporomandibular joint are presented. In each of the subjects, the diagnosis was confirmed by histology. Specific imaging features for each case are described, and the tomography images were compared with the histological findings. All patients demonstrated preauricular swelling, dental midline deviation, and limited mouth opening. Computer-assisted surgery was performed. Histology disclosed synovial chondromatosis of the temporomandibular joint in four cases; the other case was found to be a developmental disorder of the tympanic bone. The diagnosis of synovial chondromatosis of the temporomandibular joint can only be based on histology: clinical symptoms are too general, and the available imaging techniques show only nonspecific tumorous destruction, infiltration, and/or residual calcified bodies, which appear only in advanced cases. A rare developmental disorder of the tympanic bone, persistence of the foramen of Huschke, has to be differentiated.

  1. Advances in automated deception detection in text-based computer-mediated communication

    NASA Astrophysics Data System (ADS)

    Adkins, Mark; Twitchell, Douglas P.; Burgoon, Judee K.; Nunamaker, Jay F., Jr.

    2004-08-01

    The Internet has provided criminals, terrorists, spies, and other threats to national security a means of communication. At the same time it also provides for the possibility of detecting and tracking their deceptive communication. Recent advances in natural language processing, machine learning and deception research have created an environment where automated and semi-automated deception detection of text-based computer-mediated communication (CMC, e.g. email, chat, instant messaging) is a reachable goal. This paper reviews two methods for discriminating between deceptive and non-deceptive messages in CMC. First, Document Feature Mining uses document features or cues in CMC messages combined with machine learning techniques to classify messages according to their deceptive potential. The method, which is most useful in asynchronous applications, also allows for the visualization of potential deception cues in CMC messages. Second, Speech Act Profiling, a method for quantifying and visualizing synchronous CMC, has shown promise in aiding deception detection. The methods may be combined and are intended to be a part of a suite of tools for automating deception detection.
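
    The document-feature-mining method pairs message cues with a standard classifier; the sketch below uses invented toy messages and labels, and generic n-gram features rather than the paper's actual cue set.

    ```python
    # Toy text-classification pipeline in the spirit of document feature mining:
    # surface features of messages feed a standard classifier that scores
    # deceptive potential.  Data and labels here are fabricated for illustration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    messages = ["i was at home all evening, honestly",
                "meet me at noon, bring the report",
                "someone else must have sent that, not me",
                "lunch at the usual place?"]
    labels = [1, 0, 1, 0]  # 1 = deceptive, 0 = truthful (toy labels)

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(messages, labels)
    print(clf.predict(["honestly, i never saw the email"]))
    ```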

  2. Advanced practice registered nurse usability testing of a tailored computer-mediated health communication program.

    PubMed

    Lin, Carolyn A; Neafsey, Patricia J; Anderson, Elizabeth

    2010-01-01

    This study tested the usability of a touch-screen-enabled Personal Education Program with advanced practice RNs. The Personal Education Program is designed to enhance medication adherence and reduce adverse self-medication behaviors in older adults with hypertension. An iterative research process was used, which involved the use of (1) pretrial focus groups to guide the design of system information architecture, (2) two different cycles of think-aloud trials to test the software interface, and (3) post-trial focus groups to gather feedback on the think-aloud studies. Results from this iterative usability-testing process were used to systematically modify and improve the three Personal Education Program prototype versions: the pilot, prototype 1, and prototype 2. Findings contrasting the two separate think-aloud trials showed that APRN users rated the Personal Education Program system usability, system information, and system-use satisfaction at a moderately high level between trials. In addition, errors using the interface were reduced by 76%, and the interface time was reduced by 18.5% between the two trials. The usability-testing processes used in this study ensured an interface design adapted to APRNs' needs and preferences to allow them to effectively use the computer-mediated health-communication technology in a clinical setting.

  3. Study of flutter related computational procedures for minimum weight structural sizing of advanced aircraft

    NASA Technical Reports Server (NTRS)

    Oconnell, R. F.; Hassig, H. J.; Radovcich, N. A.

    1976-01-01

    Results of a study of the development of flutter modules applicable to automated structural design of advanced aircraft configurations, such as a supersonic transport, are presented. Automated structural design is restricted to automated sizing of the elements of a given structural model. It includes a flutter optimization procedure; i.e., a procedure for arriving at a structure with minimum mass for satisfying flutter constraints. Methods of solving the flutter equation and computing the generalized aerodynamic force coefficients in the repetitive analysis environment of a flutter optimization procedure are studied, and recommended approaches are presented. Five approaches to flutter optimization are explained in detail and compared. An approach to flutter optimization incorporating some of the methods discussed is presented. Problems related to flutter optimization in a realistic design environment are discussed and an integrated approach to the entire flutter task is presented. Recommendations for further investigations are made. Results of numerical evaluations, applying the five methods of flutter optimization to the same design task, are presented.
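
    For orientation, a hedged sketch of one common statement of the flutter equation that must be solved repeatedly inside such an optimization loop (the k-method form; the study itself compares several solution methods):

    ```latex
    % k-method flutter equation: M, K generalized mass and stiffness, g an
    % artificial structural damping, Q(ik) the generalized aerodynamic force
    % matrix at reduced frequency k; flutter occurs where the required g
    % crosses the actual structural damping.
    \[
    \left[ -\omega^{2} M + (1 + i g)\,K
       - \tfrac{1}{2}\rho V^{2} Q(ik) \right] \bar{q} = 0
    \]
    ```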

  4. Development of the regional EPR and PACS sharing system on the infrastructure of cloud computing technology controlled by patient identifier cross reference manager.

    PubMed

    Kondoh, Hiroshi; Teramoto, Kei; Kawai, Tatsurou; Mochida, Maki; Nishimura, Motohiro

    2013-01-01

    The newly developed Oshidori-Net2 provides medical professionals with remote access to the electronic patient record systems (EPRs) and PACSs of four hospitals, from different vendors, using cloud computing technology and a patient identifier cross-reference manager. Operation started in April 2012. The system was applied to patients who moved between the hospitals. The objective is to show the merits and demerits of the new system.

  5. Plan to mitigate infrastructure development released

    NASA Astrophysics Data System (ADS)

    Wendel, JoAnna

    2014-04-01

    Secretary of the Interior Sally Jewell released a new strategy on 10 April 2014 to advance landscape-scale, science-based management of America's public lands and wildlife. The strategy will implement mitigation policies and practices at the Department of the Interior that can encourage infrastructure development while protecting natural and cultural resources.

  6. An authentication infrastructure for today and tomorrow

    SciTech Connect

    Engert, D.E.

    1996-06-01

    The Open Software Foundation's Distributed Computing Environment (OSF/DCE) was originally designed to provide a secure environment for distributed applications. By combining it with Kerberos Version 5 from MIT, it can be extended to provide network security as well. This combination can be used to build both an inter- and intra-organizational infrastructure while providing single sign-on for the user with overall improved security. The ESnet community of the Department of Energy is building just such an infrastructure. ESnet has modified these systems to improve their interoperability, while encouraging the developers to incorporate these changes and work more closely together to continue to improve the interoperability. The success of this infrastructure depends on its flexibility to meet the needs of many applications and network security requirements. The open nature of Kerberos, combined with the vendor support of OSF/DCE, provides the infrastructure for today and tomorrow.

  7. Observations on computational methodologies for use in large-scale, gradient-based, multidisciplinary design incorporating advanced CFD codes

    NASA Technical Reports Server (NTRS)

    Newman, P. A.; Hou, G. J.-W.; Jones, H. E.; Taylor, A. C., III; Korivi, V. M.

    1992-01-01

    This paper briefly outlines how a combination of various computational methodologies could reduce the enormous computational costs envisioned in using advanced CFD codes in gradient-based, optimized multidisciplinary design (MdD) procedures. The implications of these MdD requirements for advanced CFD codes are somewhat different from those imposed by single-discipline design. A means of satisfying these MdD requirements for gradient information is presented which appears to permit: (1) some leeway in the CFD solution algorithms that can be used; (2) an extension to 3-D problems; and (3) straightforward use of other computational methodologies. Many of these observations have previously been discussed as possibilities for doing parts of the problem more efficiently; the contribution here is observing how they fit together in a mutually beneficial way.

  8. IPHE Infrastructure Workshop Proceedings

    SciTech Connect

    2010-02-01

    These proceedings contain information from the IPHE Infrastructure Workshop, a two-day interactive workshop held on February 25-26, 2010, to explore the market implementation needs for hydrogen fueling station development.

  9. Critical Infrastructure Modeling System

    SciTech Connect

    2004-10-01

    The Critical Infrastructure Modeling System (CIMS) is a 3D modeling and simulation environment designed to assist users in the analysis of dependencies within individual infrastructures and interdependencies between multiple infrastructures. Through visual cuing and textual displays, a user can evaluate the effect of system perturbation and identify the emergent patterns that evolve. These patterns include possible outage areas from a loss of power, denial of service or access, and disruption of operations. Method of Solution: CIMS allows the user to model a system, create an overlay of information, and create 3D representative images to illustrate key infrastructure elements. A geo-referenced scene, satellite or aerial images, or technical drawings can be incorporated into the scene. Scenarios of events can be scripted, and the user can also interact during run time to alter system characteristics. CIMS operates as a discrete event simulation engine feeding a 3D visualization.

  10. Green Infrastructure Modeling Toolkit

    EPA Science Inventory

    Green infrastructure, such as rain gardens, green roofs, porous pavement, cisterns, and constructed wetlands, is becoming an increasingly attractive way to recharge aquifers and reduce the amount of stormwater runoff that flows into wastewater treatment plants or into waterbodies...

  11. Large-scale virtual high-throughput screening for the identification of new battery electrolyte solvents: computing infrastructure and collective properties.

    PubMed

    Husch, Tamara; Yilmazer, Nusret Duygu; Balducci, Andrea; Korth, Martin

    2015-02-07

    A volunteer computing approach is presented for the purpose of screening a large number of molecular structures with respect to their suitability as new battery electrolyte solvents. Collective properties like melting, boiling and flash points are evaluated using COSMOtherm and quantitative structure-property relationship (QSPR) based methods, while electronic structure theory methods are used for the computation of electrochemical stability window estimators. Two application examples are presented: first, the results of a previous large-scale screening test (PCCP, 2014, 16, 7919) are re-evaluated with respect to the mentioned collective properties. As a second application example, all reasonable nitrile solvents up to 12 heavy atoms are generated and used to illustrate a suitable filter protocol for picking Pareto-optimal candidates.
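
    The Pareto-optimal filtering step mentioned above can be sketched in a few lines; the candidate names and property values below are invented, and real screenings score many more properties per molecule.

    ```python
    # Minimal Pareto filter: keep candidates not dominated by any other.  Here
    # higher is better for every property; invert signs for properties to be
    # minimized (e.g., melting point).  Strict-domination tie handling omitted.
    def pareto_front(candidates):
        """candidates: list of (name, tuple_of_scores); higher scores are better."""
        front = []
        for name, s in candidates:
            dominated = any(all(o >= x for o, x in zip(t, s)) and t != s
                            for _, t in candidates)
            if not dominated:
                front.append(name)
        return front

    solvents = [("nitrile_A", (4.2, 5.5)),   # (stability window, flash point score)
                ("nitrile_B", (4.0, 6.1)),
                ("nitrile_C", (3.9, 5.0))]   # dominated by nitrile_A
    print(pareto_front(solvents))  # -> ['nitrile_A', 'nitrile_B']
    ```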

  12. Clarkesville Green Infrastructure Implementation Strategy

    EPA Pesticide Factsheets

    The report outlines the 2012 technical assistance for Clarkesville, GA to develop a Green Infrastructure Implementation Strategy, which provides the basic building blocks for a green infrastructure plan:

  13. Fervor with Infrastructure: Making the Most of the Mentoring Movement.

    ERIC Educational Resources Information Center

    Freedman, Marc

    1993-01-01

    Explores mentoring, and examines the gap between claims advanced for mentoring and actual experience. Mentoring at the present often suffers from fervor without infrastructure. Principles are given for building supportive infrastructure and creating environments rich in mentors that relieve a single person from the weight of super-mentoring. (SLD)

  14. MFC Communications Infrastructure Study

    SciTech Connect

    Michael Cannon; Terry Barney; Gary Cook; George Danklefsen, Jr.; Paul Fairbourn; Susan Gihring; Lisa Stearns

    2012-01-01

    Unprecedented growth in required telecommunications services and applications is changing the way the INL does business today. High-speed connectivity, combined with high demand for telephony and network services, requires a robust communications infrastructure. The current state of the MFC communications infrastructure limits growth opportunities for current and future communication services. This limitation is largely due to equipment capacity issues, an aging cabling infrastructure (external/internal fiber and copper cable), and inadequate space for telecommunications equipment. While some communications infrastructure improvements have been implemented over time, these projects were completed without a clear overall plan and technology standard. This document identifies critical deficiencies in the current state of the communications infrastructure in operation at the MFC facilities and provides an analysis of the needs and deficiencies to be addressed in order to achieve the target architectural standards defined in STD-170. The intent of STD-170 is to provide a robust, flexible, long-term solution that aligns communications capabilities with the INL mission and fits the various programmatic growth and expansion needs.

  15. Building safeguards infrastructure

    SciTech Connect

    Stevens, Rebecca S; Mcclelland - Kerr, John

    2009-01-01

    Much has been written in recent years about the nuclear renaissance, the rebirth of nuclear power as a clean and safe source of electricity around the world. Those who question the nuclear renaissance often cite the risk of proliferation, accidents, or an attack on a facility as concerns, all of which merit serious consideration. The integration of these three areas, sometimes referred to as 3S for safety, security, and safeguards, is essential to supporting the growth of nuclear power, and the infrastructure that supports them should be strengthened. The focus of this paper is the role safeguards plays in the 3S concept and how to support the development of the infrastructure necessary to support safeguards. The objective of this paper has been to provide a working definition of safeguards infrastructure and to discuss examples of how building safeguards infrastructure is presented in several models. The guidelines outlined in the milestones document provide a clear path for establishing both the safeguards and the related infrastructures needed to support the development of nuclear power. The model employed by the INSEP program, engaging with partner states on safeguards-related topics that are of current interest given the level of nuclear development in that state, provides another way of approaching the concept of building safeguards infrastructure. The Next Generation Safeguards Initiative is yet another approach; it underscored five principal areas for growth and the United States' commitment to working with partners to promote this growth both at home and abroad.

  16. Recent Advances and Issues in Computers. Oryx Frontiers of Science Series.

    ERIC Educational Resources Information Center

    Gay, Martin K.

    Discussing recent issues in computer science, this book contains 11 chapters covering: (1) developments that have the potential for changing the way computers operate, including microprocessors, mass storage systems, and computing environments; (2) the national computational grid for high-bandwidth, high-speed collaboration among scientists, and…

  17. ENHANCING THE ATOMIC-LEVEL UNDERSTANDING OF CO2 MINERAL SEQUESTRATION MECHANISMS VIA ADVANCED COMPUTATIONAL MODELING

    SciTech Connect

    A.V.G. Chizmeshya; M.J. McKelvy; G.H. Wolf; R.W. Carpenter; D.A. Gormley; J.R. Diefenbacher; R. Marzke

    2006-03-01

    Through the DOE/NETL-managed National Mineral Sequestration Working Group, we have already significantly improved our understanding of mineral carbonation. Group members at the Albany Research Center have recently shown that carbonation of olivine and serpentine, which naturally occurs over geological time (i.e., 100,000s of years), can be accelerated to near completion in hours. Further process refinement will require a synergetic science/engineering approach that emphasizes simultaneous investigation of both thermodynamic processes and the detailed microscopic, atomic-level mechanisms that govern carbonation kinetics. Our previously funded Phase I Innovative Concepts project demonstrated the value of advanced quantum-mechanical modeling as a complementary tool in bridging important gaps in our understanding of the atomic/molecular structure and reaction mechanisms that govern CO2 mineral sequestration reaction processes for the model Mg-rich lamellar hydroxide feedstock material Mg(OH)2. In the present simulation project, improved techniques and more efficient computational schemes have allowed us to expand and augment these capabilities and explore more complex Mg-rich, lamellar hydroxide-based feedstock materials, including the serpentine-based minerals. These feedstock materials are being actively investigated due to their wide availability and low-cost CO2 mineral sequestration potential. The cutting-edge first-principles quantum chemical, computational solid-state, and materials simulation methodology studies proposed herein have been strategically integrated with our new DOE-supported (ASU-Argonne National Laboratory) project to investigate the mechanisms that govern mineral feedstock heat-treatment and aqueous/fluid-phase serpentine mineral carbonation in situ. This unified, synergetic theoretical and experimental approach has provided a deeper understanding of the key reaction mechanisms than either individual approach can alone. We used ab initio techniques to significantly advance our understanding of atomic-level processes at the solid/solution interface by

  18. ENHANCING THE ATOMIC-LEVEL UNDERSTANDING OF CO2 MINERAL SEQUESTRATION MECHANISMS VIA ADVANCED COMPUTATIONAL MODELING

    SciTech Connect

    A.V.G. Chizmeshya

    2003-12-19

    Through the DOE/NETL-managed National Mineral Sequestration Working Group, we have already significantly improved our understanding of mineral carbonation. Group members at the Albany Research Center have recently shown that carbonation of olivine and serpentine, which naturally occurs over geological time (i.e., 100,000s of years), can be accelerated to near completion in hours. Further process refinement will require a synergetic science/engineering approach that emphasizes simultaneous investigation of both thermodynamic processes and the detailed microscopic, atomic-level mechanisms that govern carbonation kinetics. Our previously funded Phase I Innovative Concepts project demonstrated the value of advanced quantum-mechanical modeling as a complementary tool in bridging important gaps in our understanding of the atomic/molecular structure and reaction mechanisms that govern CO2 mineral sequestration reaction processes for the model Mg-rich lamellar hydroxide feedstock material Mg(OH)2. In the present simulation project, improved techniques and more efficient computational schemes have allowed us to expand and augment these capabilities and explore more complex Mg-rich, lamellar hydroxide-based feedstock materials, including the serpentine-based minerals. These feedstock materials are being actively investigated due to their wide availability and low-cost CO2 mineral sequestration potential. The cutting-edge first-principles quantum chemical, computational solid-state, and materials simulation methodology studies proposed herein have been strategically integrated with our new DOE-supported (ASU-Argonne National Laboratory) project to investigate the mechanisms that govern mineral feedstock heat-treatment and aqueous/fluid-phase serpentine mineral carbonation in situ. This unified, synergetic theoretical and experimental approach will provide a deeper understanding of the key reaction mechanisms than either individual approach can alone. Ab initio techniques will also

  19. ENHANCING THE ATOMIC-LEVEL UNDERSTANDING OF CO2 MINERAL SEQUESTRATION MECHANISMS VIA ADVANCED COMPUTATIONAL MODELING

    SciTech Connect

    A.V.G. Chizmeshya

    2002-12-19

    Through the DOE/NETL-managed National Mineral Sequestration Working Group, we have already significantly improved our understanding of mineral carbonation. Group members at the Albany Research Center have recently shown that carbonation of olivine and serpentine, which naturally occurs over geological time (i.e., 100,000s of years), can be accelerated to near completion in hours. Further process refinement will require a synergetic science/engineering approach that emphasizes simultaneous investigation of both thermodynamic processes and the detailed microscopic, atomic-level mechanisms that govern carbonation kinetics. Our previously funded Phase I Innovative Concepts project demonstrated the value of advanced quantum-mechanical modeling as a complementary tool in bridging important gaps in our understanding of the atomic/molecular structure and reaction mechanisms that govern CO2 mineral sequestration reaction processes for the model Mg-rich lamellar hydroxide feedstock material Mg(OH)2. In the present simulation project, improved techniques and more efficient computational schemes have allowed us to expand and augment these capabilities and explore more complex Mg-rich, lamellar hydroxide-based feedstock materials, including the serpentine-based minerals. These feedstock materials are being actively investigated due to their wide availability and low-cost CO2 mineral sequestration potential. The cutting-edge first-principles quantum chemical, computational solid-state, and materials simulation methodology studies proposed herein have been strategically integrated with our new DOE-supported (ASU-Argonne National Laboratory) project to investigate the mechanisms that govern mineral feedstock heat-treatment and aqueous/fluid-phase serpentine mineral carbonation in situ. This unified, synergetic theoretical and experimental approach will provide a deeper understanding of the key reaction mechanisms than either individual approach can alone. Ab initio techniques will also

  20. Infrastructure for distributed enterprise simulation

    SciTech Connect

    Johnson, M.M.; Yoshimura, A.S.; Goldsby, M.E.

    1998-01-01

    Traditional discrete-event simulations employ an inherently sequential algorithm and are run on a single computer. However, the demands of many real-world problems exceed the capabilities of sequential simulation systems. Often the capacity of a computer's primary memory limits the size of the models that can be handled, and in some cases parallel execution on multiple processors could significantly reduce the simulation time. This paper describes the development of an Infrastructure for Distributed Enterprise Simulation (IDES) - a large-scale portable parallel simulation framework developed to support Sandia National Laboratories' mission in stockpile stewardship. IDES is based on the Breathing-Time-Buckets synchronization protocol, and maps a message-based model of distributed computing onto an object-oriented programming model. IDES is portable across heterogeneous computing architectures, including single-processor systems, networks of workstations and multi-processor computers with shared or distributed memory. The system provides a simple and sufficient application programming interface that can be used by scientists to quickly model large-scale, complex enterprise systems. In the background and without involving the user, IDES is capable of making dynamic use of idle processing power available throughout the enterprise network. 16 refs., 14 figs.
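
    For contrast with the distributed design described above, the inherently sequential core of a traditional discrete-event simulation is just a loop over a time-ordered event queue, as in this minimal sketch:

    ```python
    # Bare-bones sequential discrete-event loop: events are processed strictly
    # in timestamp order from a single priority queue, which is the serial
    # bottleneck that frameworks like IDES parallelize.
    import heapq
    import itertools

    _counter = itertools.count()   # tie-breaker for events with equal timestamps
    events = []                    # the future event list

    def schedule(t, action, *args):
        heapq.heappush(events, (t, next(_counter), action, args))

    def tick(t):
        print(f"[t={t}] tick")
        if t < 3:
            schedule(t + 1, tick)  # schedule the next event

    schedule(0, tick)
    while events:
        t, _, action, args = heapq.heappop(events)
        action(t, *args)
    ```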

  1. Integrated Graphics Operations and Analysis Lab Development of Advanced Computer Graphics Algorithms

    NASA Technical Reports Server (NTRS)

    Wheaton, Ira M.

    2011-01-01

    The focus of this project is to aid the IGOAL in researching and implementing algorithms for advanced computer graphics. First, this project focused on porting the current International Space Station (ISS) Xbox experience to the web. Previously, the ISS interior fly-around education and outreach experience only ran on an Xbox 360. One of the desires was to take this experience and make it into something that can be put on NASA's educational site for anyone to be able to access. The current code works in the Unity game engine which does have cross-platform capability but is not 100% compatible. The tasks for an intern to complete this portion consisted of gaining familiarity with Unity and the current ISS Xbox code, porting the Xbox code to the web as is, and modifying the code to work well as a web application. In addition, a procedurally generated cloud algorithm will be developed. Currently, the clouds used in AGEA animations and the Xbox experiences are a texture map. The desire is to create a procedurally generated cloud algorithm to provide dynamically generated clouds for both AGEA animations and the Xbox experiences. This task consists of gaining familiarity with AGEA and the plug-in interface, developing the algorithm, creating an AGEA plug-in to implement the algorithm inside AGEA, and creating a Unity script to implement the algorithm for the Xbox. This portion of the project was unable to be completed in the time frame of the internship; however, the IGOAL will continue to work on it in the future.
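
    The abstract does not name the cloud algorithm chosen, but fractal value noise is a standard starting point for procedurally generated cloud textures. The Python sketch below is an assumption for illustration, not IGOAL code; it sums bilinearly upsampled random lattices over several octaves, then carves cloud shapes with a cover threshold and exponential softening.

        import numpy as np
        from scipy.ndimage import zoom

        def cloud_texture(size=256, octaves=5, persistence=0.5, cover=0.45, seed=0):
            """Return a [0, 1] grayscale cloud layer built from fractal value noise."""
            rng = np.random.default_rng(seed)
            noise = np.zeros((size, size))
            amp, norm = 1.0, 0.0
            for octave in range(octaves):
                cells = 2 ** (octave + 2)        # coarse random lattice per octave
                lattice = rng.random((cells, cells))
                noise += amp * zoom(lattice, size / cells, order=1)  # bilinear upsample
                norm += amp
                amp *= persistence               # higher octaves add finer detail
            noise /= norm
            # Keep density above the cover threshold and soften the cloud edges.
            density = np.clip(noise - cover, 0.0, None)
            return 1.0 - np.exp(-8.0 * density)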

  2. Development and Validation of a Fast, Accurate and Cost-Effective Aeroservoelastic Method on Advanced Parallel Computing Systems

    NASA Technical Reports Server (NTRS)

    Goodwin, Sabine A.; Raj, P.

    1999-01-01

    Progress to date towards the development and validation of a fast, accurate and cost-effective aeroelastic method for advanced parallel computing platforms such as the IBM SP2 and the SGI Origin 2000 is presented in this paper. The ENSAERO code, developed at the NASA-Ames Research Center has been selected for this effort. The code allows for the computation of aeroelastic responses by simultaneously integrating the Euler or Navier-Stokes equations and the modal structural equations of motion. To assess the computational performance and accuracy of the ENSAERO code, this paper reports the results of the Navier-Stokes simulations of the transonic flow over a flexible aeroelastic wing body configuration. In addition, a forced harmonic oscillation analysis in the frequency domain and an analysis in the time domain are done on a wing undergoing a rigid pitch and plunge motion. Finally, to demonstrate the ENSAERO flutter-analysis capability, aeroelastic Euler and Navier-Stokes computations on an L-1011 wind tunnel model including pylon, nacelle and empennage are underway. All computational solutions are compared with experimental data to assess the level of accuracy of ENSAERO. As the computations described above are performed, a meticulous log of computational performance in terms of wall clock time, execution speed, memory and disk storage is kept. Code scalability is also demonstrated by studying the impact of varying the number of processors on computational performance on the IBM SP2 and the Origin 2000 systems.
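
    As a rough illustration of the structural half of such a coupled scheme, the Python sketch below time-marches the modal equations of motion, q'' + 2*zeta*omega*q' + omega^2*q = Q(t), with a stand-in generalized aerodynamic force; in a real aeroelastic solver that force comes from the Euler or Navier-Stokes solution at each step. All numbers here are illustrative, not ENSAERO data.

        import numpy as np

        omega = np.array([12.0, 35.0])   # modal frequencies (rad/s), illustrative
        zeta = np.array([0.01, 0.01])    # modal damping ratios
        q = np.zeros(2)                  # generalized displacements
        qdot = np.zeros(2)               # generalized velocities
        dt = 1.0e-3

        def aero_force(t, q, qdot):
            # Placeholder: a coupled solver would advance the flow one step and
            # project surface pressures onto the mode shapes to get this vector.
            return np.array([np.sin(10.0 * t), 0.0]) - 0.5 * qdot

        for step in range(5000):
            t = step * dt
            Q = aero_force(t, q, qdot)
            qddot = Q - 2.0 * zeta * omega * qdot - omega**2 * q
            qdot += dt * qddot           # semi-implicit Euler: stable for oscillators
            q += dt * qdot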

  3. The Computation Directorate at Lawrence Livermore National Laboratory

    SciTech Connect

    Cook, L

    2006-09-07

    The Computation Directorate at Lawrence Livermore National Laboratory has four major areas of work: (1) Programmatic Support -- Programs are areas which receive funding to develop solutions to problems or advance basic science in their areas (Stockpile Stewardship, Homeland Security, the Human Genome project). Computer scientists are 'matrixed' to these programs to provide computer science support. (2) Livermore Computer Center (LCC) -- Development, support and advanced planning for the large, massively parallel computers, networks and storage facilities used throughout the laboratory. (3) Research -- Computer scientists research advanced solutions for programmatic work and for external contracts and research new HPC hardware solutions. (4) Infrastructure -- Support for thousands of desktop computers and numerous LANs, labwide unclassified networks, computer security, computer-use policy.

  4. Infrastructure Retrofit Design via Composite Mechanics

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Gotsis, Pascal K.

    1998-01-01

    Select applications are described to illustrate the concept for retrofitting reinforced concrete infrastructure with fiber reinforced plastic laminates. The concept is first illustrated by using an axially loaded reinforced concrete column. A reinforced concrete arch and a dome are then used to illustrate the versatility of the concept. Advanced methods such as finite element structural analysis and progressive structural fracture are then used to evaluate the retrofitting laminate adequacy. Results obtained show that retrofits can be designed to double and even triple the as-designed load of the select reinforced concrete infrastructures.

  5. NASA World Wind: Infrastructure for Spatial Data

    NASA Technical Reports Server (NTRS)

    Hogan, Patrick

    2011-01-01

    The world has great need for analysis of Earth observation data, be it climate change, carbon monitoring, disaster response, national defense or simply local resource management. To best provide for spatial and time-dependent information analysis, the world benefits from an open-standards and open-source infrastructure for spatial data. In the spirit of NASA's motto "for the benefit of all," NASA invites the world community to collaboratively advance this core technology. The World Wind infrastructure for spatial data both unites and challenges the world to produce innovative solutions for analyzing spatial data, while also allowing absolute command and control over any respective information exchange medium.

  6. Development of 3D multimedia with advanced computer animation tools for outreach activities related to Meteor Science and Meteoritics

    NASA Astrophysics Data System (ADS)

    Madiedo, J. M.

    2012-09-01

    Documentaries related to Astronomy and Planetary Sciences are a common and very attractive way to promote the interest of the public in these areas. These educational tools can benefit from new advanced computer animation software and 3D technologies, as these can make documentaries even more attractive. However, special care must be taken to guarantee that the information contained in them is serious and objective. In this sense, additional value is gained when the footage is produced by the researchers themselves. With this aim, a new documentary produced and directed by Prof. Madiedo has been developed. The documentary, which has been created entirely by means of advanced computer animation tools, is dedicated to several aspects of Meteor Science and Meteoritics. The main features of this outreach and education initiative are presented here.

  7. Grand Challenges of Advanced Computing for Energy Innovation Report from the Workshop Held July 31-August 2, 2012

    SciTech Connect

    Larzelere, Alex R.; Ashby, Steven F.; Christensen, Dana C.; Crawford, Dona L.; Khaleel, Mohammad A.; Grosh, John; Stults, B. Ray; Lee, Steven L.; Hammond, Steven W.; Grover, Benjamin T.; Neely, Rob; Dudney, Lee Ann; Goldstein, Noah C.; Wells, Jack; Peltz, Jim

    2013-03-06

    On July 31-August 2 of 2012, the U.S. Department of Energy (DOE) held a workshop entitled Grand Challenges of Advanced Computing for Energy Innovation. This workshop built on three earlier workshops that clearly identified the potential for the Department and its national laboratories to enable energy innovation. The specific goal of the workshop was to identify the key challenges that the nation must overcome to apply the full benefit of taxpayer-funded advanced computing technologies to U.S. energy innovation in the ways that the country produces, moves, stores, and uses energy. Perhaps more importantly, the workshop also developed a set of recommendations to help the Department overcome those challenges. These recommendations provide an action plan for what the Department can do in the coming years to improve the nation’s energy future.

  8. Prediction of helicopter rotor discrete frequency noise: A computer program incorporating realistic blade motions and advanced acoustic formulation

    NASA Technical Reports Server (NTRS)

    Brentner, K. S.

    1986-01-01

    A computer program has been developed at the Langley Research Center to predict the discrete frequency noise of conventional and advanced helicopter rotors. The program, called WOPWOP, uses the most advanced subsonic formulation of Farassat that is less sensitive to errors and is valid for nearly all helicopter rotor geometries and flight conditions. A brief derivation of the acoustic formulation is presented along with a discussion of the numerical implementation of the formulation. The computer program uses realistic helicopter blade motion and aerodynamic loadings, input by the user, for noise calculation in the time domain. A detailed definition of all the input variables, default values, and output data is included. A comparison with experimental data shows good agreement between prediction and experiment; however, accurate aerodynamic loading is needed.

  9. Privacy and the National Information Infrastructure.

    ERIC Educational Resources Information Center

    Rotenberg, Marc

    1994-01-01

    Explains the work of Computer Professionals for Social Responsibility regarding privacy issues in the use of electronic networks; recommends principles that should be adopted for a National Information Infrastructure privacy code; discusses the need for public education; and suggests pertinent legislative proposals. (LRW)

  10. Proceedings of the topical meeting on advances in human factors research on man/computer interactions

    SciTech Connect

    Not Available

    1990-01-01

    This book discusses the following topics: expert systems and knowledge engineering-I; verification and validation of software; methods for modeling MAN/computer performance; MAN/computer interaction problems in producing procedures-1-2; progress and problems with automation-1-2; experience with electronic presentation of procedures-2; intelligent displays and monitors; modeling the user/computer interface; and computer-based human decision-making aids.

  11. A computer program for estimating the power-density spectrum of advanced continuous simulation language generated time histories

    NASA Technical Reports Server (NTRS)

    Dunn, H. J.

    1981-01-01

    A computer program for performing frequency analysis of time history data is presented. The program uses circular convolution and the fast Fourier transform to calculate the power density spectrum (PDS) of time history data. The program interfaces with the advanced continuous simulation language (ACSL) so that a frequency analysis may be performed on ACSL-generated simulation variables. An example of the calculation of the PDS of a Van der Pol oscillator is presented.
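
    The same kind of estimate is easy to reproduce with current tools. The Python sketch below is a hedged stand-in (the ACSL interface is not reproduced): it integrates a Van der Pol oscillator and forms a one-sided periodogram from the FFT of the resulting time history.

        import numpy as np
        from scipy.integrate import solve_ivp

        def van_der_pol(t, y, mu=1.0):
            x, v = y
            return [v, mu * (1.0 - x**2) * v - x]

        dt = 0.01
        t = np.arange(0.0, 200.0, dt)
        sol = solve_ivp(van_der_pol, (t[0], t[-1]), [2.0, 0.0], t_eval=t)
        x = sol.y[0] - sol.y[0].mean()          # remove the DC component

        X = np.fft.rfft(x)
        freq = np.fft.rfftfreq(len(x), d=dt)
        psd = (np.abs(X) ** 2) * dt / len(x)    # one-sided power-density estimate
        peak = freq[np.argmax(psd)]             # limit-cycle frequency (~0.15 Hz)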

  12. Encouraging Advanced Second Language Speakers to Recognise Their Language Difficulties: A Personalised Computer-Based Approach

    ERIC Educational Resources Information Center

    Xu, Jing; Bull, Susan

    2010-01-01

    Despite holding advanced language qualifications, many overseas students studying at English-speaking universities still have difficulties in formulating grammatically correct sentences. This article introduces an "independent open learner model" for advanced second language speakers of English, which confronts students with the state of their…

  13. ITER Cryoplant Infrastructures

    NASA Astrophysics Data System (ADS)

    Fauve, E.; Monneret, E.; Voigt, T.; Vincent, G.; Forgeas, A.; Simon, M.

    2017-02-01

    The ITER Tokamak requires an average 75 kW of refrigeration power at 4.5 K and 600 kW of refrigeration power at 80 K to maintain the nominal operating condition of the ITER thermal shields, superconducting magnets and cryopumps. This is produced by the ITER Cryoplant, a complex cluster of refrigeration systems including in particular three identical Liquid Helium Plants and two identical Liquid Nitrogen Plants. Beyond the equipment directly part of the Cryoplant, colossal infrastructures are required. These infrastructures account for a large part of the Cryoplant's layout, budget and engineering efforts. It is the ITER Organization's responsibility to ensure that all infrastructures are adequately sized and designed to interface with the Cryoplant. This proceeding presents the overall architecture of the cryoplant and provides orders of magnitude for the cryoplant building and utilities: electricity, cooling water, heating, ventilation and air conditioning (HVAC).

  14. Application of advanced computational procedures for modeling solar-wind interactions with Venus: Theory and computer code

    NASA Technical Reports Server (NTRS)

    Stahara, S. S.; Klenke, D.; Trudinger, B. C.; Spreiter, J. R.

    1980-01-01

    Computational procedures are developed and applied to the prediction of solar wind interaction with nonmagnetic terrestrial planet atmospheres, with particular emphasis to Venus. The theoretical method is based on a single fluid, steady, dissipationless, magnetohydrodynamic continuum model, and is appropriate for the calculation of axisymmetric, supersonic, super-Alfvenic solar wind flow past terrestrial planets. The procedures, which consist of finite difference codes to determine the gasdynamic properties and a variety of special purpose codes to determine the frozen magnetic field, streamlines, contours, plots, etc. of the flow, are organized into one computational program. Theoretical results based upon these procedures are reported for a wide variety of solar wind conditions and ionopause obstacle shapes. Plasma and magnetic field comparisons in the ionosheath are also provided with actual spacecraft data obtained by the Pioneer Venus Orbiter.

  15. Infrastructure: A technology battlefield in the 21st century

    SciTech Connect

    Drucker, H.

    1997-12-31

    A major part of technological advancement has involved the development of complex infrastructure systems, including electric power generation, transmission, and distribution networks; oil and gas pipeline systems; highway and rail networks; and telecommunication networks. Dependence on these infrastructure systems renders them attractive targets for conflict in the twenty-first century. Hostile governments, domestic and international terrorists, criminals, and mentally distressed individuals will inevitably find some part of the infrastructure an easy target for theft, for making political statements, for disruption of strategic activities, or for making a nuisance. The current situation regarding the vulnerability of the infrastructure can be summarized in three major points: (1) our dependence on technology has made our infrastructure more important and vital to our everyday lives, this in turn, makes us much more vulnerable to disruption in any infrastructure system; (2) technologies available for attacking infrastructure systems have changed substantially and have become much easier to obtain and use, easy accessibility to information on how to disrupt or destroy various infrastructure components means that almost anyone can be involved in this destructive process; (3) technologies for defending infrastructure systems and preventing damage have not kept pace with the capability for destroying such systems. A brief review of these points will illustrate the significance of infrastructure and the growing dangers to its various elements.

  16. The Moral Dimensions of Infrastructure.

    PubMed

    Epting, Shane

    2016-04-01

    Moral issues in urban planning involving technology, residents, marginalized groups, ecosystems, and future generations are complex cases, requiring solutions that go beyond the limits of contemporary moral theory. Aside from typical planning problems, there is incongruence between moral theory and some of the subjects that require moral assessment, such as urban infrastructure. Despite this incongruence, there is not a need to develop another moral theory. Instead, a supplemental measure that is compatible with existing moral positions will suffice. My primary goal in this paper is to explain the need for this supplemental measure, describe what one looks like, and show how it works with existing moral systems. The secondary goal is to show that creating a supplemental measure that provides congruency between moral systems that are designed to assess human action and non-human subjects advances the study of moral theory.

  17. Advances in Navy pharmacy information technology: accessing Micromedex via the Composite Healthcare Computer System and local area networks.

    PubMed

    Koerner, S D; Becker, F

    1999-07-01

    The pharmacy profession has long used technology to more effectively bring health care to the patient. Navy pharmacy has embraced technology advances in its daily operations, from computers to dispensing robots. Evolving from the traditional role of compounding and dispensing specialists, pharmacists are establishing themselves as vital team members in direct patient care: on the ward, in ambulatory clinics, in specialty clinics, and in other specialty patient care programs (e.g., smoking cessation). An important part of the evolution is the timely access to the most up-to-date information available. Micromedex, Inc. (Denver, Colorado), has developed a number of computer CD-ROM-based full-text pharmacy, toxicology, emergency medicine, and patient education products. Micromedex is a recognized leader with regard to total pharmaceutical information availability. This article discusses the implementation of Micromedex products within the established Composite Healthcare Computer System and the subsequent use by and effect on the international Navy pharmacy community.

  18. Recent Advances in Photonic Devices for Optical Computing and the Role of Nonlinear Optics-Part II

    NASA Technical Reports Server (NTRS)

    Abdeldayem, Hossin; Frazier, Donald O.; Witherow, William K.; Banks, Curtis E.; Paley, Mark S.

    2007-01-01

    The twentieth century has been the era of semiconductor materials and electronic technology while this millennium is expected to be the age of photonic materials and all-optical technology. Optical technology has led to countless optical devices that have become indispensable in our daily lives in storage area networks, parallel processing, optical switches, all-optical data networks, holographic storage devices, and biometric devices at airports. This chapters intends to bring some awareness to the state-of-the-art of optical technologies, which have potential for optical computing and demonstrate the role of nonlinear optics in many of these components. Our intent, in this Chapter, is to present an overview of the current status of optical computing, and a brief evaluation of the recent advances and performance of the following key components necessary to build an optical computing system: all-optical logic gates, adders, optical processors, optical storage, holographic storage, optical interconnects, spatial light modulators and optical materials.

  19. Parallel-META 2.0: enhanced metagenomic data analysis with functional annotation, high performance computing and advanced visualization.

    PubMed

    Su, Xiaoquan; Pan, Weihua; Song, Baoxing; Xu, Jian; Ning, Kang

    2014-01-01

    The metagenomic method directly sequences and analyses genome information from microbial communities. The main computational tasks for metagenomic analyses include taxonomical and functional structure analysis for all genomes in a microbial community (also referred to as a metagenomic sample). With the advancement of Next Generation Sequencing (NGS) techniques, the number of metagenomic samples and the data size for each sample are increasing rapidly. Current metagenomic analysis is both data- and computation-intensive, especially when there are many species in a metagenomic sample, each with a large number of sequences. As such, metagenomic analyses require extensive computational power. The increasing analytical requirements further augment the challenges for computational analysis. In this work, we have proposed Parallel-META 2.0, a metagenomic analysis software package, to cope with such needs for efficient and fast analyses of taxonomical and functional structures for microbial communities. Parallel-META 2.0 is an extended and improved version of Parallel-META 1.0, which enhances the taxonomical analysis using multiple databases, improves computation efficiency by optimized parallel computing, and supports interactive visualization of results in multiple views. Furthermore, it enables functional analysis for metagenomic samples including short-read assembly, gene prediction and functional annotation. Therefore, it can provide accurate taxonomical and functional analyses of metagenomic samples in a high-throughput manner and on a large scale.

  20. Kwajalein Infrastructure Prioritization Methodology

    DTIC Science & Technology

    2012-07-01

    [The indexed excerpt for this record consists of bibliography fragments from the report, including: David, Leonard. "SpaceX Private Rocket Shifts to Island Launch." TechMedia Network, 12 Aug. 2005. <http://www.space.com/1422-spacex-private-rocket-shifts-island-launch.html>.]

  1. An Infrastructure Roadmap.

    ERIC Educational Resources Information Center

    Furgeson, Steven P.

    2002-01-01

    Describes how a master infrastructure plan for electrical and mechanical systems can help determine annual maintenance budgets, form annual capital-improvement budgets, take a snapshot of existing conditions, and lead to better energy management. Discusses important elements in such plans. (EV)

  2. Infrastructure Survey 2009

    ERIC Educational Resources Information Center

    Group of Eight (NJ1), 2010

    2010-01-01

    In 2008 the Group of Eight (Go8) released a first report on the state of its buildings and infrastructure, based on a survey undertaken in 2007. A further survey was undertaken in 2009, updating some information about the assessed quality, value and condition of buildings and use of space. It also collated data related to aspects of the estate not…

  3. An Infrastructure Museum

    ERIC Educational Resources Information Center

    Roman, Harry T.

    2013-01-01

    This article invites teachers to let their students' imaginations soar as they become part of a team that will design a whole new kind of living technological museum, a facility that celebrates the world of infrastructure. In this activity, a new two-story building will be built, occupying a vacant corner parcel of land, approximately 150…

  4. Surface Computer System Architecture for the Advanced Unmanned Search System (AUSS)

    DTIC Science & Technology

    1992-12-01

    called the data docker, is planned, which is basically a single-board, 80286-based machine from Ampro Computers, Inc., intended to receive the vehicle on...the data docker computer, which is networked into the control van system. The drive is packaged in a carrier that makes the drive act like a plug-in...and data docker computers. The FS and data docker machines require only infrequent use of a keyboard and monitor, and therefore, the switchbox is an

  5. An Overview of the Advanced CompuTational Software (ACTS) Collection

    SciTech Connect

    Drummond, Leroy A.; Marques, Osni A.

    2005-02-02

    The ACTS Collection brings together a number of general-purpose computational tools that were developed by independent research projects mostly funded and supported by the U.S. Department of Energy. These tools tackle a number of common computational issues found in many applications, mainly implementation of numerical algorithms, and support for code development, execution and optimization. In this article, we introduce the numerical tools in the collection and their functionalities, present a model for developing more complex computational applications on top of ACTS tools, and summarize applications that use these tools. Lastly, we present a vision of the ACTS project for deployment of the ACTS Collection by the computational sciences community.

  6. 78 FR 59927 - Next Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-30

    ... Systems Biology [External Review Draft] AGENCY: Environmental Protection Agency (EPA). ACTION: Notice of... Molecular, Computational, and Systems Biology" (EPA/600/R-13/214A). EPA is also announcing that Eastern..., Computational, and Systems Biology" is available primarily via the Internet on the NCEA home page under...

  7. BOOK REVIEW: Advanced Topics in Computational Partial Differential Equations: Numerical Methods and Diffpack Programming

    NASA Astrophysics Data System (ADS)

    Katsaounis, T. D.

    2005-02-01

    The scope of this book is to present well known simple and advanced numerical methods for solving partial differential equations (PDEs) and how to implement these methods using the programming environment of the software package Diffpack. A basic background in PDEs and numerical methods is required by the potential reader. Further, a basic knowledge of the finite element method and its implementation in one and two space dimensions is required. The authors claim that no prior knowledge of the package Diffpack is required, which is true, but the reader should be at least familiar with an object oriented programming language like C++ in order to better comprehend the programming environment of Diffpack. Certainly, a prior knowledge or usage of Diffpack would be a great advantage to the reader. The book consists of 15 chapters, each one written by one or more authors. Each chapter is basically divided into two parts: the first part is about mathematical models described by PDEs and numerical methods to solve these models and the second part describes how to implement the numerical methods using the programming environment of Diffpack. Each chapter closes with a list of references on its subject. The first nine chapters cover well known numerical methods for solving the basic types of PDEs. Further, programming techniques on the serial as well as on the parallel implementation of numerical methods are also included in these chapters. The last five chapters are dedicated to applications, modelled by PDEs, in a variety of fields. The first chapter is an introduction to parallel processing. It covers fundamentals of parallel processing in a simple and concrete way and no prior knowledge of the subject is required. Examples of parallel implementation of basic linear algebra operations are presented using the Message Passing Interface (MPI) programming environment. Here, some knowledge of MPI routines is required by the reader. Examples solving in parallel simple PDEs using
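
    For readers who want a feel for the style of parallel linear-algebra example the review describes, here is a minimal analogue in Python with mpi4py; it is not code from the book (the book's examples use C++/Diffpack with raw MPI calls). Each rank owns a slice of two vectors, and the partial dot products are combined with an allreduce.

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n = 1_000_000                      # global vector length
        local_n = n // size                # assumes size divides n evenly
        rng = np.random.default_rng(rank)  # each rank fills its own slice
        x, y = rng.random(local_n), rng.random(local_n)

        local_dot = float(x @ y)
        global_dot = comm.allreduce(local_dot, op=MPI.SUM)  # combine partial sums
        if rank == 0:
            print("dot =", global_dot)

    Run with, for example, mpirun -np 4 python dot.py (the file name is arbitrary); the same pattern underlies distributed matrix-vector products and Krylov solvers.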

  8. Critical Infrastructure for Ocean Research and Societal Needs in 2030

    SciTech Connect

    National Research Council

    2011-04-22

    coordinated national plan for making future strategic investments becomes an imperative to address societal needs. Such a plan should be based upon known priorities and should be reviewed every 5-10 years to optimize the federal investment. The committee examined the past 20 years of technological advances and ocean infrastructure investments (such as the rise in use of self-propelled, uncrewed, underwater autonomous vehicles), assessed infrastructure that would be required to address future ocean research questions, and characterized ocean infrastructure trends for 2030. One conclusion was that ships will continue to be essential, especially because they provide a platform for enabling other infrastructure: autonomous and remotely operated vehicles; samplers and sensors; moorings and cabled systems; and perhaps most importantly, the human assets of scientists, technical staff, and students. A comprehensive, long-term research fleet plan should be implemented in order to retain access to the sea. The current report also calls for continuing U.S. capability to access fully and partially ice-covered seas; supporting innovation, particularly the development of biogeochemical sensors; enhancing computing and modeling capacity and capability; establishing broadly accessible data management facilities; and increasing interdisciplinary education and promoting a technically-skilled workforce. The committee also provided a framework for prioritizing future investment in ocean infrastructure. They recommend that development, maintenance, or replacement of ocean research infrastructure assets should be prioritized in terms of societal benefit, with particular consideration given to usefulness for addressing important science questions; affordability, efficiency, and longevity; and ability to contribute to other missions or applications. These criteria are the foundation for prioritizing ocean research infrastructure investments by estimating the economic costs and benefits of each

  9. Infrastructure for Training and Partnershipes: California Water and Coastal Ocean Resources

    NASA Technical Reports Server (NTRS)

    Siegel, David A.; Dozier, Jeffrey; Gautier, Catherine; Davis, Frank; Dickey, Tommy; Dunne, Thomas; Frew, James; Keller, Arturo; MacIntyre, Sally; Melack, John

    2000-01-01

    The purpose of this project was to advance the existing ICESS/Bren School computing infrastructure to allow scientists, students, and research trainees the opportunity to interact with environmental data and simulations in near-real time. Improvements made with the funding from this project have helped to strengthen the research efforts within both units, fostered graduate research training, and helped fortify partnerships with government and industry. With this funding, we were able to expand our computational environment in which computer resources, software, and data sets are shared by ICESS/Bren School faculty researchers in all areas of Earth system science. All of the graduate and undergraduate students associated with the Donald Bren School of Environmental Science and Management and the Institute for Computational Earth System Science have benefited from the infrastructure upgrades accomplished by this project. Additionally, the upgrades fostered a significant number of research projects (attached is a list of the projects that benefited from the upgrades). As originally proposed, funding for this project provided the following infrastructure upgrades: 1) a modern file management system capable of interoperating UNIX and NT file systems that can scale to 6.7 TB, 2) a Qualstar 40-slot tape library with two AIT tape drives and Legato Networker backup/archive software, 3) previously unavailable import/export capability for data sets on Zip, Jaz, DAT, 8mm, CD, and DLT media in addition to a 622Mb/s Internet 2 connection, 4) network switches capable of 100 Mbps to 128 desktop workstations, 5) Portable Batch System (PBS) computational task scheduler, and 6) two Compaq/Digital Alpha XP1000 compute servers each with 1.5 GB of RAM along with an SGI Origin 2000 (purchased partially using funds from this project along with funding from various other sources) to be used for very large computations, as required for simulation of mesoscale meteorology or climate.

  10. Effecting IT infrastructure culture change: management by processes and metrics

    NASA Technical Reports Server (NTRS)

    Miller, R. L.

    2001-01-01

    This talk describes the processes and metrics used by Jet Propulsion Laboratory to bring about the required IT infrastructure culture change to update and certify, as Y2K compliant, thousands of computers and millions of lines of code.

  11. Computation Directorate Annual Report 2003

    SciTech Connect

    Crawford, D L; McGraw, J R; Ashby, S F; McCoy, M G; Michels, T C; Eltgroth, P G

    2004-03-12

    Big computers are icons: symbols of the culture, and of the larger computing infrastructure that exists at Lawrence Livermore. Through the collective effort of Laboratory personnel, they enable scientific discovery and engineering development on an unprecedented scale. For more than three decades, the Computation Directorate has supplied the big computers that enable the science necessary for Laboratory missions and programs. Livermore supercomputing is uniquely mission driven. The high-fidelity weapon simulation capabilities essential to the Stockpile Stewardship Program compel major advances in weapons codes and science, compute power, and computational infrastructure. Computation's activities align with this vital mission of the Department of Energy. Increasingly, non-weapons Laboratory programs also rely on computer simulation. World-class achievements have been accomplished by LLNL specialists working in multi-disciplinary research and development teams. In these teams, Computation personnel employ a wide array of skills, from desktop support expertise, to complex applications development, to advanced research. Computation's skilled professionals make the Directorate the success that it has become. These individuals know the importance of the work they do and the many ways it contributes to Laboratory missions. They make appropriate and timely decisions that move the entire organization forward. They make Computation a leader in helping LLNL achieve its programmatic milestones. I dedicate this inaugural Annual Report to the people of Computation in recognition of their continuing contributions. I am proud that we perform our work securely and safely. Despite increased cyber attacks on our computing infrastructure from the Internet, advanced cyber security practices ensure that our computing environment remains secure. Through Integrated Safety Management (ISM) and diligent oversight, we address safety issues promptly and aggressively. The safety of our

  12. Early, computer-Aided Design/Computer-Aided Modeling Planned, Le Fort I Advancement With Internal Distractors to Treat Severe Maxillary Hypoplasia in Cleft Lip and Palate.

    PubMed

    Chang, Catherine S; Swanson, Jordan; Yu, Jason; Taylor, Jesse A

    2017-04-11

    Traditionally, maxillary hypoplasia in the setting of cleft lip and palate is treated via orthognathic surgery at skeletal maturity, which condemns these patients to abnormal facial proportions during adolescence. The authors sought to determine the safety profile of computer-aided design/computer-aided modeling (CAD/CAM)-planned Le Fort I distraction osteogenesis with internal distractors in select patients presenting at a young age with severe maxillary retrusion. The authors retrospectively reviewed their "early" Le Fort I distraction osteogenesis experience, i.e., patients treated for severe maxillary retrusion (≥12 mm underjet) after canine eruption but prior to skeletal maturity, at a single institution. Patient demographics, cleft characteristics, CAD/CAM operative plans, surgical complications, postoperative imaging, and outcomes were analyzed. Four patients were reviewed, with a median age of 12.8 years at surgery (range 8.6-16.1 years). Overall mean advancement was 17.95 ± 2.9 mm (range 13.7-19.9 mm), with mean SNA improved 18.4° to 87.4 ± 5.7°. Similarly, ANB improved 17.7° to a postoperative mean of 2.4 ± 3.1°. Mean follow-up was 100.7 weeks, with 3 of 4 patients in a Class I occlusion at moderate-term follow-up; 1 of 4 will need an additional maxillary advancement due to pseudo-relapse. In conclusion, Le Fort I distraction osteogenesis with internal distractors is a safe procedure to treat severe maxillary hypoplasia after canine eruption but before skeletal maturity. Short-term follow-up demonstrates safety of the procedure and relative stability of the advancement. Pseudo-relapse is a risk of the procedure that must be discussed at length with patients and families.

  13. Structural health monitoring of civil infrastructure.

    PubMed

    Brownjohn, J M W

    2007-02-15

    Structural health monitoring (SHM) is a term increasingly used in the last decade to describe a range of systems implemented on full-scale civil infrastructures and whose purposes are to assist and inform operators about continued 'fitness for purpose' of structures under gradual or sudden changes to their state, to learn about either or both of the load and response mechanisms. Arguably, various forms of SHM have been employed in civil infrastructure for at least half a century, but it is only in the last decade or two that computer-based systems are being designed for the purpose of assisting owners/operators of ageing infrastructure with timely information for their continued safe and economic operation. This paper describes the motivations for and recent history of SHM applications to various forms of civil infrastructure and provides case studies on specific types of structure. It ends with a discussion of the present state-of-the-art and future developments in terms of instrumentation, data acquisition, communication systems and data mining and presentation procedures for diagnosis of infrastructural 'health'.
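
    To make the monitoring idea concrete, one widely used SHM diagnostic is tracking a structure's first natural frequency from ambient vibration records and flagging sustained downward drift as possible deterioration. The Python sketch below is illustrative only, not drawn from the paper; the band limit and alarm threshold are invented.

        import numpy as np

        def first_natural_frequency(accel, fs):
            """Peak-picking estimate of the dominant modal frequency (Hz)."""
            accel = accel - accel.mean()
            spectrum = np.abs(np.fft.rfft(accel * np.hanning(len(accel))))
            freqs = np.fft.rfftfreq(len(accel), 1.0 / fs)
            band = freqs > 0.1               # ignore the quasi-static band
            return freqs[band][np.argmax(spectrum[band])]

        def drift_alarm(daily_freqs, baseline, threshold=0.03):
            # Alarm when the 7-day mean frequency drops >3% below baseline.
            return np.mean(daily_freqs[-7:]) < (1.0 - threshold) * baseline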

  14. Computer architecture for efficient algorithmic executions in real-time systems: New technology for avionics systems and advanced space vehicles

    NASA Technical Reports Server (NTRS)

    Carroll, Chester C.; Youngblood, John N.; Saha, Aindam

    1987-01-01

    Improvements and advances in the development of computer architecture now provide innovative technology for the recasting of traditional sequential solutions into high-performance, low-cost parallel systems to increase system performance. Research conducted in the development of a specialized computer architecture for the real-time algorithmic execution of an avionics guidance and control problem is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.
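
    The critical-path idea behind such an allocation can be shown in a few lines. In this hypothetical Python sketch (task names and costs are invented, not taken from the study), each task's critical-path length through the task graph is computed recursively, and tasks are ordered by it, the priority rule used by most list-scheduling allocators.

        from functools import lru_cache

        # Task graph: task -> (compute cost, successor tasks); purely illustrative.
        graph = {
            "read":     (2, ["filter"]),
            "filter":   (5, ["estimate", "log"]),
            "estimate": (7, ["command"]),
            "log":      (1, []),
            "command":  (3, []),
        }

        @lru_cache(maxsize=None)
        def critical_length(task):
            """Longest-cost path from this task to any sink of the DAG."""
            cost, successors = graph[task]
            return cost + max((critical_length(s) for s in successors), default=0)

        # Allocate in decreasing critical-path length: the longest chain
        # (read -> filter -> estimate -> command) gets processors first.
        priority = sorted(graph, key=critical_length, reverse=True)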

  15. A Computational Future for Preventing HIV in Minority Communities: How Advanced Technology Can Improve Implementation of Effective Programs

    PubMed Central

    Brown, C Hendricks; Mohr, David C.; Gallo, Carlos G.; Mader, Christopher; Palinkas, Lawrence; Wingood, Gina; Prado, Guillermo; Kellam, Sheppard G.; Pantin, Hilda; Poduska, Jeanne; Gibbons, Robert; McManus, John; Ogihara, Mitsunori; Valente, Thomas; Wulczyn, Fred; Czaja, Sara; Sutcliffe, Geoff; Villamar, Juan; Jacobs, Christopher

    2013-01-01

    African Americans and Hispanics in the U.S. have much higher rates of HIV than non-minorities. There is now strong evidence that a range of behavioral interventions are efficacious in reducing sexual risk behavior in these populations. While a handful of these programs are just beginning to be disseminated widely, we still have not implemented effective programs to a level that would reduce the population incidence of HIV for minorities. We propose that innovative approaches involving computational technologies be explored for their use in both developing new interventions as well as in supporting wide-scale implementation of effective behavioral interventions. Mobile technologies have a place in both of these activities. First, mobile technologies can be used in sensing contexts and interacting to the unique preferences and needs of individuals at times where intervention to reduce risk would be most impactful. Secondly, mobile technologies can be used to improve the delivery of interventions by facilitators and their agencies. Systems science methods, including social network analysis, agent based models, computational linguistics, intelligent data analysis, and systems and software engineering all have strategic roles that can bring about advances in HIV prevention in minority communities. Using an existing mobile technology for depression and three effective HIV prevention programs, we illustrate how eight areas in the intervention/implementation process can use innovative computational approaches to advance intervention adoption, fidelity, and sustainability. PMID:23673892

  16. A computational future for preventing HIV in minority communities: how advanced technology can improve implementation of effective programs.

    PubMed

    Brown, C Hendricks; Mohr, David C; Gallo, Carlos G; Mader, Christopher; Palinkas, Lawrence; Wingood, Gina; Prado, Guillermo; Kellam, Sheppard G; Pantin, Hilda; Poduska, Jeanne; Gibbons, Robert; McManus, John; Ogihara, Mitsunori; Valente, Thomas; Wulczyn, Fred; Czaja, Sara; Sutcliffe, Geoff; Villamar, Juan; Jacobs, Christopher

    2013-06-01

    African Americans and Hispanics in the United States have much higher rates of HIV than non-minorities. There is now strong evidence that a range of behavioral interventions are efficacious in reducing sexual risk behavior in these populations. Although a handful of these programs are just beginning to be disseminated widely, we still have not implemented effective programs to a level that would reduce the population incidence of HIV for minorities. We proposed that innovative approaches involving computational technologies be explored for their use in both developing new interventions and in supporting wide-scale implementation of effective behavioral interventions. Mobile technologies have a place in both of these activities. First, mobile technologies can be used in sensing contexts and interacting to the unique preferences and needs of individuals at times where intervention to reduce risk would be most impactful. Second, mobile technologies can be used to improve the delivery of interventions by facilitators and their agencies. Systems science methods including social network analysis, agent-based models, computational linguistics, intelligent data analysis, and systems and software engineering all have strategic roles that can bring about advances in HIV prevention in minority communities. Using an existing mobile technology for depression and 3 effective HIV prevention programs, we illustrated how 8 areas in the intervention/implementation process can use innovative computational approaches to advance intervention adoption, fidelity, and sustainability.

  17. Verification, Validation and Credibility Assessment of a Computational Model of the Advanced Resistive Exercise Device (ARED)

    NASA Technical Reports Server (NTRS)

    Werner, C. R.; Humphreys, B. T.; Mulugeta, L.

    2014-01-01

    The Advanced Resistive Exercise Device (ARED) is the resistive exercise device used by astronauts on the International Space Station (ISS) to mitigate bone loss and muscle atrophy due to extended exposure to microgravity (micro g). The Digital Astronaut Project (DAP) has developed a multi-body dynamics model of the ARED for use in spaceflight exercise physiology research and operations. In an effort to advance the maturity and credibility of the ARED model, the DAP performed a verification, validation and credibility (VV&C) assessment of the model in accordance with NASA-STD-7009, 'Standards for Models and Simulations'.

  18. DRIHM: Distributed Research Infrastructure for HydroMeteorology

    NASA Astrophysics Data System (ADS)

    Parodi, A.

    2012-04-01

    Predicting weather and climate and its impacts on the environment, including hazards such as floods and landslides, is still one of the main challenges of the 21st century, with significant societal and economic implications. At the heart of this challenge lies the ability to have easy access to hydrometeorological data and models, and to facilitate the collaboration between meteorologists, hydrologists, and Earth science experts for accelerated scientific advances in hydrometeorological research (HMR). To face these problems, the DRIHM (Distributed Research Infrastructure for Hydro-Meteorology) project intends to develop a prototype e-Science environment to facilitate this collaboration and provide end-to-end HMR services (models, datasets and post-processing tools) at the European level, with the ability to expand to the global scale. The objectives of DRIHM are to lead the definition of a common long-term strategy, to foster the development of new HMR models and observational archives for the study of severe hydrometeorological events, to promote the execution and analysis of high-end simulations, and to support the dissemination of predictive models as decision analysis tools. DRIHM combines the European expertise in HMR, in Grid and High Performance Computing (HPC). Joint research activities will improve the efficient use of the European e-Infrastructures, notably Grid and HPC, for HMR modelling and observational databases, model evaluation tool sets and access to HMR model results. Networking activities will disseminate DRIHM results at the European and global levels in order to increase the cohesion of European and possibly worldwide HMR communities and increase the awareness of ICT potential for HMR. Service activities will deploy the end-to-end DRIHM services and tools in support of HMR networks and virtual organizations on top of the existing European e-Infrastructures.

  19. Advanced computational methods for nodal diffusion, Monte Carlo, and S{sub n} problems. Final Report

    SciTech Connect

    1994-12-31

    The work addresses basic computational difficulties that arise in the numerical simulation of neutral particle radiation transport: discretized radiation transport problems, iterative methods, selection of parameters, and extension of current algorithms.

  20. Study of flutter related computational procedures for minimum weight structural sizing of advanced aircraft, supplemental data

    NASA Technical Reports Server (NTRS)

    Oconnell, R. F.; Hassig, H. J.; Radovcich, N. A.

    1975-01-01

    Computational aspects of (1) flutter optimization (minimization of structural mass subject to specified flutter requirements), (2) methods for solving the flutter equation, and (3) efficient methods for computing generalized aerodynamic force coefficients in the repetitive analysis environment of computer-aided structural design are discussed. Specific areas included: a two-dimensional Regula Falsi approach to solving the generalized flutter equation; method of incremented flutter analysis and its applications; the use of velocity potential influence coefficients in a five-matrix product formulation of the generalized aerodynamic force coefficients; options for computational operations required to generate generalized aerodynamic force coefficients; theoretical considerations related to optimization with one or more flutter constraints; and expressions for derivatives of flutter-related quantities with respect to design variables.
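
    As a reminder of the numerical idea involved, the false-position (Regula Falsi) method keeps a bracketing pair and replaces one endpoint with the secant-line root at each iteration. The Python sketch below is a plain one-dimensional version applied to an invented damping-versus-velocity curve; the two-dimensional variant mentioned above, which tracks frequency as well as damping, is more elaborate.

        def regula_falsi(f, a, b, tol=1.0e-6, max_iter=50):
            """Find a root of f bracketed by [a, b] via false position."""
            fa, fb = f(a), f(b)
            assert fa * fb < 0.0, "root must be bracketed"
            c = a
            for _ in range(max_iter):
                c = b - fb * (b - a) / (fb - fa)   # secant through the endpoints
                fc = f(c)
                if abs(fc) < tol:
                    break
                if fa * fc < 0.0:
                    b, fb = c, fc                  # root lies in [a, c]
                else:
                    a, fa = c, fc                  # root lies in [c, b]
            return c

        # Invented modal-damping curve crossing zero near V = 320 (flutter onset).
        flutter_speed = regula_falsi(lambda V: 0.02 - 0.02 * (V / 320.0) ** 2,
                                     100.0, 500.0)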

  1. Geophysical outlook. Part IV. New vector super computers promote seismic advancements

    SciTech Connect

    Nelson, H.R. Jr.

    1982-01-01

    Some major oil companies are beginning to test the use of vector computers to process the huge volumes of seismic data acquired by modern prospecting techniques. To take advantage of the parallel-processing techniques offered by the vector mode of analysis, users must completely restructure the seismic-data processing packages. The most important application of vector computers, to date, has been in numerical reservoir modeling.
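
    The restructuring cost mentioned above is easy to see in miniature. The NumPy illustration below is not from the article; it contrasts a sample-at-a-time automatic gain control, the kind of scalar loop that had to be rewritten, with a whole-gather formulation of the kind vector hardware rewards.

        import numpy as np

        rng = np.random.default_rng(0)
        traces = rng.normal(size=(1000, 2048))    # hypothetical seismic gather

        def agc_scalar(trace, win=64):
            """Scalar style: one output sample per loop iteration."""
            out = np.empty_like(trace)
            for i in range(len(trace)):
                lo, hi = max(0, i - win), min(len(trace), i + win)
                out[i] = trace[i] / (np.abs(trace[lo:hi]).mean() + 1e-12)
            return out

        def agc_vector(traces, win=64):
            """Vector style: the whole gather is processed as one operation."""
            kernel = np.ones(2 * win) / (2 * win)
            env = np.apply_along_axis(
                lambda t: np.convolve(np.abs(t), kernel, mode="same"), 1, traces)
            return traces / (env + 1e-12)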

  2. Advanced Methods for the Computer-Aided Diagnosis of Lesions in Digital Mammograms

    DTIC Science & Technology

    2000-07-01

    classification of mammographic mass lesions. Radiology 213: 200, 1999. Nishikawa R, Giger ML, Yarusso L, Kupinski M, Baehr A, Venta L: Computer-aided detection of mass lesions in digital mammography using radial gradient index filtering. Radiology 213: 229, 1999. Maloney M, Huo Z, Giger ML, Venta L...Nishikawa R, Huo Z, Jiang Y, Venta L, Doi K: Computer-aided diagnosis (CAD) in breast imaging. Radiology 213: 507, 1999. (Final Report DAMD 17-96-1-6058)

  3. Development of Advanced Computational Aeroelasticity Tools at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Bartels, R. E.

    2008-01-01

    NASA Langley Research Center has continued to develop its long standing computational tools to address new challenges in aircraft and launch vehicle design. This paper discusses the application and development of those computational aeroelastic tools. Four topic areas will be discussed: 1) Modeling structural and flow field nonlinearities; 2) Integrated and modular approaches to nonlinear multidisciplinary analysis; 3) Simulating flight dynamics of flexible vehicles; and 4) Applications that support both aeronautics and space exploration.

  4. Next Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational, and Systems Biology (Final Report)

    EPA Science Inventory

    This final report, "Next Generation Risk Assessment: Recent Advances in Molec...

  5. In Situ Nuclear Characterization Infrastructure

    SciTech Connect

    James A. Smith; J. Rory Kennedy

    2011-11-01

    To be able to evolve microstructure with a prescribed in situ process, an effective measurement infrastructure must exist. This interdisciplinary infrastructure needs to be developed in parallel with in situ sensor technology. This paper discusses the essential elements in an effective infrastructure.

  6. Investigation of advanced counterrotation blade configuration concepts for high speed turboprop systems. Task 4: Advanced fan section aerodynamic analysis computer program user's manual

    NASA Technical Reports Server (NTRS)

    Crook, Andrew J.; Delaney, Robert A.

    1992-01-01

    The computer program user's manual for the ADPACAPES (Advanced Ducted Propfan Analysis Code-Average Passage Engine Simulation) program is included. The objective of the computer program is development of a three-dimensional Euler/Navier-Stokes flow analysis for fan section/engine geometries containing multiple blade rows and multiple spanwise flow splitters. An existing procedure developed by Dr. J. J. Adamczyk and associates at the NASA Lewis Research Center was modified to accept multiple spanwise splitter geometries and simulate engine core conditions. The numerical solution is based upon a finite volume technique with a four stage Runge-Kutta time marching procedure. Multiple blade row solutions are based upon the average-passage system of equations. The numerical solutions are performed on an H-type grid system, with meshes meeting the requirement of maintaining a common axisymmetric mesh for each blade row grid. The analysis was run on several geometry configurations ranging from one to five blade rows and from one to four radial flow splitters. The efficiency of the solution procedure was shown to be the same as the original analysis.
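
    The four-stage Runge-Kutta time marching mentioned above can be sketched on a toy problem. The Python code below applies the low-storage multistage scheme common in finite-volume flow solvers to 1-D linear advection with an upwind flux; the coefficients and test case are illustrative, and the actual multi-blade-row residual is far more elaborate.

        import numpy as np

        def residual(u, dx, a=1.0):
            """First-order upwind residual for du/dt = -a du/dx, periodic domain."""
            flux = a * u
            return -(flux - np.roll(flux, 1)) / dx

        def multistage_march(u, dt, dx, steps=100):
            # Low-storage 4-stage scheme: each stage restarts from u0 and
            # re-evaluates the residual at the latest state.
            alphas = (1.0 / 4.0, 1.0 / 3.0, 1.0 / 2.0, 1.0)
            for _ in range(steps):
                u0 = u.copy()
                for alpha in alphas:
                    u = u0 + alpha * dt * residual(u, dx)
            return u

        x = np.linspace(0.0, 1.0, 200, endpoint=False)
        u = np.exp(-200.0 * (x - 0.3) ** 2)        # Gaussian pulse advected right
        u = multistage_march(u, dt=0.4 * (x[1] - x[0]), dx=x[1] - x[0])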

  7. Advanced computational simulation for design and manufacturing of lightweight material components for automotive applications

    SciTech Connect

    Simunovic, S.; Aramayo, G.A.; Zacharia, T.; Toridis, T.G.; Bandak, F.; Ragland, C.L.

    1997-04-01

    Computational vehicle models for the analysis of lightweight material performance in automobiles have been developed through collaboration between Oak Ridge National Laboratory, the National Highway Traffic Safety Administration, and George Washington University. The vehicle models have been verified against experimental data obtained from vehicle collisions. The crashed vehicles were analyzed, and the main impact energy dissipation mechanisms were identified and characterized. Important structural parts were extracted and digitized and directly compared with simulation results. High-performance computing played a key role in the model development because it allowed for rapid computational simulations and model modifications. The deformation of the computational model shows a very good agreement with the experiments. This report documents the modifications made to the computational model and relates them to the observations and findings on the test vehicle. Procedural guidelines are also provided that the authors believe need to be followed to create realistic models of passenger vehicles that could be used to evaluate the performance of lightweight materials in automotive structural components.

  8. Using Computer-Assisted Argumentation Mapping to develop effective argumentation skills in high school advanced placement physics

    NASA Astrophysics Data System (ADS)

    Heglund, Brian

    Educators recognize the importance of reasoning ability for the development of critical thinking skills, conceptual change, metacognition, and participation in 21st century society. There is a recognized need for students to improve their skills of argumentation; however, argumentation is not explicitly taught outside logic and philosophy, subjects that are not part of the K-12 curriculum. One potential way of supporting the development of argumentation skills in the K-12 context is through incorporating Computer-Assisted Argument Mapping to evaluate arguments. This quasi-experimental study tested the effects of such argument mapping software and was informed by the following two research questions: 1. To what extent does the collaborative use of Computer-Assisted Argumentation Mapping to evaluate competing theories influence the critical thinking skill of argument evaluation, metacognitive awareness, and conceptual knowledge acquisition in high school Advanced Placement physics, compared to the more traditional method of text tables that does not employ Computer-Assisted Argumentation Mapping? 2. What are the student perceptions of the pros and cons of argument evaluation in the high school Advanced Placement physics environment? This study examined changes in critical thinking skills, including argumentation evaluation skills, as well as metacognitive awareness and conceptual knowledge, in two groups: a treatment group using Computer-Assisted Argumentation Mapping to evaluate physics arguments, and a comparison group using text tables to evaluate physics arguments. Quantitative and qualitative methods for collecting and analyzing data were used to answer the research questions. Quantitative data indicated no significant difference between the experimental groups, and qualitative data suggested students perceived pros and cons of argument evaluation in the high school Advanced Placement physics environment, such as a self-reported sense of improvement in argument

  9. Advances in I/O, Speedup, and Universality on Colossus, an Unconventional Computer

    NASA Astrophysics Data System (ADS)

    Wells, Benjamin

    Colossus, the first electronic digital (and very unconventional) computer, was not a stored-program general purpose computer in the modern sense, although there are printed claims to the contrary. At least one of these asserts Colossus was a Turing machine. Certainly, an appropriate Turing machine can simulate the operation of Colossus. That is hardly an argument for generality of computation. But this is: a universal Turing machine could have been implemented on a clustering of the ten Colossus machines installed at Bletchley Park, England, by the end of WWII in 1945. Along with the presentation of this result, several improvements in input, output, and speed, within the hardware capability and specification of Colossus are discussed.

  10. Inspection of advanced computational lithography logic reticles using a 193-nm inspection system

    NASA Astrophysics Data System (ADS)

    Yu, Ching-Fang; Lin, Mei-Chun; Lai, Mei-Tsu; Hsu, Luke T. H.; Chin, Angus; Lee, S. C.; Yen, Anthony; Wang, Jim; Chen, Ellison; Wu, David; Broadbent, William H.; Huang, William; Zhu, Zinggang

    2010-09-01

    We report inspection results for early 22-nm logic reticles designed with both conventional and computational lithography methods. Inspection is performed using a state-of-the-art 193-nm reticle inspection system in the reticle-plane inspection (RPI) mode, where both rule-based sensitivity control (RSC) and a newer model-based sensitivity control (MSC) method are tested. The evaluation includes defect detection performance on several special test reticles designed with both conventional and computational lithography methods; the reticles contain a variety of programmed critical defects whose severity is measured by wafer print impact. Also included are inspection results from several full-field product reticles designed with both conventional and computational lithography methods, used to determine whether low nuisance-defect counts can be achieved. These early reticles are largely single-die, and all inspections are performed in the die-to-database inspection mode only.
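
    The die-to-database mode and the two sensitivity-control schemes can be pictured with a toy difference-image detector; the sketch below is a conceptual illustration only (the arrays, thresholds, and function are invented for the example and are not the inspection system's algorithm).

    ```python
    # Toy illustration of die-to-database defect detection: the reticle image
    # is compared with an image rendered from the design database, and pixels
    # whose difference exceeds a sensitivity threshold are flagged. A uniform
    # threshold is akin to rule-based sensitivity control; a spatially varying
    # threshold (tighter where print impact is high) is akin to model-based
    # control. Conceptual sketch only.
    import numpy as np

    def detect_defects(reticle_img, database_img, threshold):
        """Boolean defect map; `threshold` may be a scalar (rule-based style)
        or an array of per-pixel thresholds (model-based style)."""
        return np.abs(reticle_img - database_img) > threshold

    rng = np.random.default_rng(0)
    database = rng.random((64, 64))
    reticle = database + 0.01 * rng.standard_normal((64, 64))
    reticle[10, 20] += 0.3                      # injected "programmed defect"

    rule_map = detect_defects(reticle, database, 0.1)   # uniform threshold
    weights = np.full((64, 64), 0.15)                   # relaxed by default
    weights[:32, :] = 0.05                              # tighter in critical region
    model_map = detect_defects(reticle, database, weights)
    print(rule_map.sum(), model_map.sum())              # flagged pixel counts
    ```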

  11. Designing a single-board computer for space using the most advanced processor and mitigation technologies

    NASA Astrophysics Data System (ADS)

    Longden, L.; Thibodeau, C.; Hillman, R.; Layton, P.; Dowd, M.

    2002-12-01

    As high-end computing becomes more of a necessity in space, a large gap currently exists between what is available to satellite manufacturers and the state of the commercial processor industry. As a result, Maxwell Technologies has developed a Super Computer for Space that utilizes the latest commercial silicon-on-insulator PowerPC processors and state-of-the-art memory modules to achieve space-qualified performance 10 to 1000 times that of current technology. Maxwell's Super Computer for Space (SCS750) SBC is capable of executing more than 1800 million instructions per second (MIPS) while guaranteeing an upset rate for the entire board of less than one every 1000 years. A brief synopsis is presented of Maxwell's design approach, the radiation mitigation techniques employed on this next-generation SBC, and radiation test results.
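
    The abstract does not spell out the mitigation scheme behind the quoted upset rate; a standard technique for masking single-event upsets in boards of this class is triple modular redundancy (TMR), sketched minimally below with an invented voting helper.

    ```python
    # Minimal sketch of triple modular redundancy (TMR), a common single-event
    # upset mitigation: three copies of a computation run in parallel and a
    # voter selects the majority result, masking one upset copy. Illustrative
    # only; not Maxwell's implementation.
    from collections import Counter

    def tmr_vote(a, b, c):
        """Majority vote over three redundant results; raises if all disagree."""
        (value, count), = Counter([a, b, c]).most_common(1)
        if count < 2:
            raise RuntimeError("no majority -- unrecoverable triple disagreement")
        return value

    # One replica suffers a bit flip; the voter masks it.
    print(tmr_vote(0x2A, 0x2A, 0x2B))  # -> 42
    ```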

  12. An Infrastructure for Continuous Intake Individualized Education [and] Infrastructure for Learning Management.

    ERIC Educational Resources Information Center

    Matthews, Don

    At Humber College (HC) in Toronto, Ontario, Canada, the Digital Electronics (DE) program utilizes a computerized learning infrastructure called Computer Managed Learning (CML). The program, which has been under development for several years, is flexible enough to build a unique program of studies for each individual student and allows for the…

  13. High performance computing and communications: Advancing the frontiers of information technology

    SciTech Connect

    1997-12-31

    This report, which supplements the President's Fiscal Year 1997 Budget, describes the interagency High Performance Computing and Communications (HPCC) Program. The HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of accomplishments to its credit. Over its five-year history, the HPCC Program has focused on developing high performance computing and communications technologies that can be applied to computation-intensive applications. Major highlights for FY 1996: (1) High performance computing systems enable practical solutions to complex problems with accuracies not possible five years ago; (2) HPCC-funded research in very large scale networking techniques has been instrumental in the evolution of the Internet, which continues exponential growth in size, speed, and availability of information; (3) The combination of hardware capability measured in gigaflop/s, networking technology measured in gigabit/s, and new computational science techniques for modeling phenomena has demonstrated that very large scale accurate scientific calculations can be executed across heterogeneous parallel processing systems located thousands of miles apart; (4) Federal investments in HPCC software R&D support researchers who pioneered the development of parallel languages and compilers, high performance mathematical, engineering, and scientific libraries, and software tools--technologies that allow scientists to use powerful parallel systems to focus on Federal agency mission applications; and (5) HPCC support for virtual environments has enabled the development of immersive technologies, where researchers can explore and manipulate multi-dimensional scientific and engineering problems. Educational programs fostered by the HPCC Program have brought into classrooms new science and engineering curricula designed to teach computational science. This document contains a small sample of the significant HPCC Program accomplishments in FY 1996.

  14. Development of Computational Approaches for Simulation and Advanced Controls for Hybrid Combustion-Gasification Chemical Looping

    SciTech Connect

    Joshi, Abhinaya; Lou, Xinsheng; Neuschaefer, Carl; Chaudry, Majid; Quinn, Joseph

    2012-07-31

    This document provides the results of the project through September 2009. The Phase I project has recently been extended from September 2009 to March 2011. The project extension will begin work on Chemical Looping (CL) prototype modeling and advanced control design exploration in preparation for a scale-up phase. The results to date include: successful development of dual-loop chemical looping process models and dynamic simulation software tools; development and testing of several advanced control concepts and applications for chemical looping transport control; and investigation of several sensor concepts, with two feasible sensor candidates established and recommended for further prototype development and controls integration. This summary and conclusions document contains three sections: Section 1 presents the project scope and objectives, Section 2 highlights the detailed accomplishments by project task area, and Section 3 provides conclusions to date and recommendations for future work.
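
    The report's control designs are not detailed in this summary; as a purely illustrative stand-in, the sketch below runs a discrete PI loop on a hypothetical first-order solids-transport plant, with all gains and constants made up for the example.

    ```python
    # Illustrative discrete PI control loop regulating a hypothetical
    # first-order solids-transport process (flow responds to the aeration
    # command with time constant tau). All parameter values are invented.
    def simulate_pi(setpoint=1.0, kp=2.0, ki=0.5, tau=5.0, dt=0.1, steps=300):
        flow, integral = 0.0, 0.0
        for _ in range(steps):
            error = setpoint - flow
            integral += error * dt
            u = kp * error + ki * integral      # aeration command
            flow += dt * (u - flow) / tau       # first-order plant response
        return flow

    print(f"final flow: {simulate_pi():.3f}")   # settles near the setpoint
    ```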

  15. Detecting Nano-Scale Vibrations in Rotating Devices by Using Advanced Computational Methods

    PubMed Central

    del Toro, Raúl M.; Haber, Rodolfo E.; Schmittdiel, Michael C.

    2010-01-01

    This paper presents a computational method for detecting vibrations related to eccentricity in ultra-precision rotation devices used for nano-scale manufacturing. The vibration is indirectly measured via a frequency-domain analysis of the signal from a piezoelectric sensor attached to the stationary component of the rotating device. The algorithm searches for particular harmonic sequences associated with the eccentricity of the device rotation axis. The detected sequence is quantified and serves as input to a regression model that estimates the eccentricity. A case study presents the application of the computational algorithm during precision manufacturing processes. PMID:22399918
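
    The detection pipeline described above (spectrum, harmonic-sequence quantification, regression) can be sketched in a few lines; the synthetic signal, frequencies, and regression coefficients below are hypothetical stand-ins, not the paper's data or fitted model.

    ```python
    # Sketch of the pipeline: take the spectrum of the piezoelectric signal,
    # quantify the harmonic sequence at multiples of the rotation frequency,
    # and feed that feature to a regression model estimating eccentricity.
    import numpy as np

    fs, f_rot = 10_000.0, 50.0                 # sample rate, rotation freq (Hz)
    t = np.arange(0, 1.0, 1 / fs)
    # Synthetic sensor signal: decaying harmonics of f_rot plus noise.
    signal = sum((0.5 / k) * np.sin(2 * np.pi * k * f_rot * t) for k in range(1, 6))
    signal += 0.05 * np.random.default_rng(1).standard_normal(t.size)

    spectrum = np.abs(np.fft.rfft(signal)) / t.size
    freqs = np.fft.rfftfreq(t.size, 1 / fs)

    # Quantify the harmonic sequence: sum spectral magnitude at k * f_rot.
    harmonic_energy = sum(spectrum[np.argmin(np.abs(freqs - k * f_rot))]
                          for k in range(1, 6))

    # Hypothetical pre-fitted linear regression mapping the feature to microns.
    a, b = 1.8, 0.02
    print(f"estimated eccentricity: {a * harmonic_energy + b:.3f} um")
    ```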

  16. Advanced reliability modeling of fault-tolerant computer-based systems

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.

    1982-01-01

    Two methodologies for the reliability assessment of fault-tolerant digital computer-based systems are discussed. Computer-Aided Reliability Estimation 3 (CARE 3) and Gate Logic Software Simulation (GLOSS) are assessment technologies that were developed to mitigate a serious weakness in the design and evaluation process of ultrareliable digital systems. The weak link is the unavailability of a sufficiently powerful modeling technique for comparing the stochastic attributes of one system against others. Some of the more interesting attributes are reliability, system survival, safety, and mission success.
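
    While the CARE 3 formulation itself is not reproduced here, the flavor of such stochastic assessment can be illustrated with a toy Markov reliability model of a triad with imperfect fault coverage; the rates and coverage below are invented for the example.

    ```python
    # Toy Markov reliability model in the spirit of tools like CARE 3 (a
    # simplified illustration, not the CARE 3 method itself). A triad degrades
    # from 3 to 2 working units; a fault is "covered" (masked and handled)
    # with probability c, otherwise the system fails immediately.
    lam, c = 1e-4, 0.99          # per-unit failure rate (1/h), fault coverage
    dt, hours = 0.1, 10_000.0
    p3, p2, pf = 1.0, 0.0, 0.0   # state probabilities: 3 up, 2 up, failed

    for _ in range(int(hours / dt)):
        d3 = -3 * lam * p3
        d2 = 3 * lam * c * p3 - 2 * lam * p2
        df = 3 * lam * (1 - c) * p3 + 2 * lam * p2
        p3 += d3 * dt; p2 += d2 * dt; pf += df * dt

    print(f"mission reliability at {hours:.0f} h: {p3 + p2:.6f}")
    ```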

  17. Structural analysis of advanced polymeric foams by means of high resolution X-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Nacucchi, M.; De Pascalis, F.; Scatto, M.; Capodieci, L.; Albertoni, R.

    2016-06-01

    Advanced polymeric foams with enhanced thermal insulation and mechanical properties are used in a wide range of industrial applications. The properties of a foam strongly depend upon its cell structure. Traditionally, foam microstructure has been studied using 2D imaging systems based on optical or electron microscopy, with the obvious disadvantage that only the surface of the sample can be analysed. To overcome this shortcoming, X-ray micro-tomography imaging is suggested here to allow for a complete 3D, non-destructive analysis of advanced polymeric foams. Unlike metallic foams, the resolution of the reconstructed structural features is hampered by the low contrast in the images due to weak X-ray absorption in the polymer. In this work an advanced methodology based on high-resolution and low-contrast techniques is used to perform quantitative analyses on both closed- and open-cell foams. Local structural features of individual cells, such as equivalent diameter, sphericity, anisotropy, and orientation, are statistically evaluated. In addition, the thickness and length of the struts are determined, underlining the key role played by the achieved resolution. Looking ahead, the quantitative description of these structural features will be used to evaluate the results of in situ mechanical and thermal tests on foam samples.
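
    The kind of per-cell quantitative analysis described above can be sketched with standard labeling tools; the sketch below labels a tiny synthetic binary volume and computes each cell's equivalent spherical diameter (the voxel size and volume are invented stand-ins for real CT data).

    ```python
    # Sketch of per-cell metric extraction from a binarized CT volume
    # (True = void/cell): cells are labeled by connectivity and an equivalent
    # spherical diameter is computed from each cell's voxel volume.
    import numpy as np
    from scipy import ndimage

    voxel_mm = 0.01                                  # hypothetical voxel size
    volume = np.zeros((40, 40, 40), dtype=bool)
    volume[5:15, 5:15, 5:15] = True                  # synthetic "cell" 1
    volume[22:30, 20:32, 18:28] = True               # synthetic "cell" 2

    labels, n_cells = ndimage.label(volume)
    for cell_id in range(1, n_cells + 1):
        v = np.count_nonzero(labels == cell_id) * voxel_mm ** 3  # volume, mm^3
        d_eq = (6 * v / np.pi) ** (1 / 3)                        # equivalent diameter
        print(f"cell {cell_id}: V = {v:.4f} mm^3, d_eq = {d_eq:.3f} mm")
    ```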

  18. Methodological Advances in Political Gaming: The One-Person Computer Interactive, Quasi-Rigid Rule Game.

    ERIC Educational Resources Information Center

    Shubik, Martin

    The main problem in computer gaming research is the initial decision of which type of gaming method to use. Free-form games lead to exciting, open-ended confrontations that generate much information. However, they do not easily lend themselves to analysis because they generate far too much information, and their results are seldom…

  19. CNC Turning Center Advanced Operations. Computer Numerical Control Operator/Programmer. 444-332.

    ERIC Educational Resources Information Center

    Skowronski, Steven D.; Tatum, Kenneth

    This student guide provides materials for a course designed to introduce the student to the operations and functions of a two-axis computer numerical control (CNC) turning center. The course consists of seven units. Unit 1 presents course expectations and syllabus, covers safety precautions, and describes the CNC turning center components, CNC…

  20. Collaborative Learning: Cognitive and Computational Approaches. Advances in Learning and Instruction Series.

    ERIC Educational Resources Information Center

    Dillenbourg, Pierre, Ed.

    Intended to illustrate the benefits of collaboration between scientists from psychology and computer science, namely machine learning, this book contains the following chapters, most of which are co-authored by scholars from both sides: (1) "Introduction: What Do You Mean by 'Collaborative Learning'?" (Pierre Dillenbourg); (2)…