ERIC Educational Resources Information Center
Linn, Marcia C.
1995-01-01
Describes a framework called scaffolded knowledge integration and illustrates how it guided the design of two successful course enhancements in the field of computer science and engineering: the LISP Knowledge Integration Environment and the spatial reasoning environment. (101 references) (Author/MKR)
Execution environment for intelligent real-time control systems
NASA Technical Reports Server (NTRS)
Sztipanovits, Janos
1987-01-01
Modern telerobot control technology requires the integration of symbolic and non-symbolic programming techniques, different models of parallel computation, and various programming paradigms. The Multigraph Architecture, which has been developed for the implementation of intelligent real-time control systems, is described. The layered architecture includes specific computational models, an integrated execution environment and various high-level tools. A special feature of the architecture is the tight coupling between the symbolic and non-symbolic computations. It supports not only a data interface, but also the integration of the control structures in a parallel computing environment.
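The abstract gives no implementation detail, so the following Python sketch is a hedged illustration only (all names invented, not the Multigraph code) of the kind of tight coupling it describes: a symbolic rule layer that selects which numeric control structure runs, rather than merely exchanging data with it.

    # Illustrative sketch only -- not the Multigraph Architecture itself.
    def pid_controller(error, state):
        """Non-symbolic (numeric) computation: one PID-like step."""
        state["integral"] += error
        return 0.8 * error + 0.1 * state["integral"]

    def bang_bang_controller(error, state):
        """Alternative numeric controller for large errors."""
        return 1.0 if error > 0 else -1.0

    RULES = [
        # Symbolic layer: condition on observations -> controller to activate.
        (lambda obs: abs(obs["error"]) > 5.0, bang_bang_controller),
        (lambda obs: True, pid_controller),  # default rule
    ]

    def control_step(obs, state):
        # Tight coupling: the symbolic rules choose which numeric control
        # structure executes, instead of only passing data across an interface.
        for condition, controller in RULES:
            if condition(obs):
                return controller(obs["error"], state)

    state = {"integral": 0.0}
    for error in [7.0, 3.0, -1.0]:
        print(control_step({"error": error}, state))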
DOT National Transportation Integrated Search
1997-04-01
The Land Use, Air Quality, and Transportation Integrated Modeling Environment (LATIME) represents an integrated approach to computer modeling and simulation of land use allocation, travel demand, and mobile source emissions for the Albuquerque, New Mexico...
Integrating Computers into the Problem-Solving Process.
ERIC Educational Resources Information Center
Lowther, Deborah L.; Morrison, Gary R.
2003-01-01
Asserts that within the context of problem-based learning environments, professors can encourage students to use computers as problem-solving tools. The ten-step Integrating Technology for InQuiry (NteQ) model guides professors through the process of integrating computers into problem-based learning activities. (SWM)
Principled design for an integrated computational environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Disessa, A.A.
Boxer is a computer language designed to be the base of an integrated computational environment providing a broad array of functionality -- from text editing to programming -- for naive and novice users. It stands in the line of Lisp-inspired languages (Lisp, Logo, Scheme), but differs from these in achieving much of its understandability from pervasive use of a spatial metaphor reinforced through suitable graphics. This paper first describes a set of learnability and understandability issues and then uses them to motivate design decisions concerning Boxer and the environment in which it is embedded.
NASA Astrophysics Data System (ADS)
Linn, Marcia C.
1995-06-01
Designing effective curricula for complex topics and incorporating technological tools is an evolving process. One important way to foster effective design is to synthesize successful practices. This paper describes a framework called scaffolded knowledge integration and illustrates how it guided the design of two successful course enhancements in the field of computer science and engineering. One course enhancement, the LISP Knowledge Integration Environment, improved learning and resulted in more gender-equitable outcomes. The second course enhancement, the spatial reasoning environment, addressed spatial reasoning in an introductory engineering course. This enhancement minimized the importance of prior knowledge of spatial reasoning and helped students develop a more comprehensive repertoire of spatial reasoning strategies. Taken together, the instructional research programs reinforce the value of the scaffolded knowledge integration framework and suggest directions for future curriculum reformers.
Design and implementation of space physics multi-model application integration based on web
NASA Astrophysics Data System (ADS)
Jiang, Wenping; Zou, Ziming
With the development of research on the space environment and space science, building a networked online computing environment for space weather, space environment and space physics models has become more and more important for the Chinese scientific community in recent years. Currently, there are two software modes for a space physics multi-model application integrated system (SPMAIS): C/S and B/S. The C/S mode, which is traditional and stand-alone, demands that a team or workshop drawn from many disciplines and specialties build its own multi-model application integrated system, and it requires the client to be deployed in different physical regions when users visit the integrated system. This requirement brings two shortcomings: it reduces the efficiency of researchers who use the models to compute, and it makes access to the data inconvenient. Therefore, it is necessary to create a shared network resource access environment that helps users reach the computing resources of space physics models quickly from a terminal, both for conducting space science research and for forecasting the space environment. The SPMAIS is developed in the B/S mode around high-performance, first-principles computational models of the space environment, and it uses these models to predict "space weather", to understand space mission data and to further our understanding of the solar system. The main goal of the SPMAIS is to provide an easy and convenient user-driven online model-operating environment. Up to now, the SPMAIS contains dozens of space environment models, including the international AP8/AE8, IGRF and T96 models, as well as a solar proton prediction model, a geomagnetic transmission model, etc., developed by Chinese scientists. Another function of the SPMAIS is to integrate space observation data sets, which offer input data for online high-speed model computing. In this paper, the service-oriented architecture (SOA) concept, which divides a system into independent modules according to different business needs, is applied to solve the problem of the physical independence of the multiple models. The classic MVC (Model View Controller) software design pattern is used to build the architecture of the SPMAIS, and JSP+servlet+javabean technology is used to integrate the web application programs of the space physics multi-model. This solves the problem of multiple users requesting the same model-computing job and effectively balances the computing tasks of each server. In addition, we also complete the following tasks: establishing a standard graphical user interface based on a Java Applet application program; designing the interface between model computing and the visualization of model-computing results; realizing three-dimensional network visualization without plug-ins; using Java3D technology to achieve interaction with a three-dimensional network scene; and improving the ability to interact with web pages and dynamic execution capabilities, including the rendering of three-dimensional graphics and the control of fonts and color. Through the design and implementation of the web-based SPMAIS, we provide an online computing and application runtime environment for space physics multi-models. Practical application shows that researchers can benefit from our system in space physics research and engineering applications.
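The abstract does not show the dispatching logic, but the two server-side problems it names (several users requesting the same model-computing job, and balancing tasks across servers) can be sketched as follows. This is an illustrative Python toy with invented names, not the SPMAIS implementation (which is JSP/servlet based).

    # Illustrative sketch only -- not SPMAIS code.
    class ModelJobDispatcher:
        def __init__(self, servers):
            self.load = {s: 0 for s in servers}  # running jobs per server
            self.cache = {}                      # (model, params) -> result

        def submit(self, model, params, run):
            key = (model, tuple(sorted(params.items())))
            if key in self.cache:          # several users, same job:
                return self.cache[key]     # reuse the finished computation
            server = min(self.load, key=self.load.get)  # least-loaded server
            self.load[server] += 1
            try:
                result = run(server, model, params)
            finally:
                self.load[server] -= 1
            self.cache[key] = result
            return result

    dispatcher = ModelJobDispatcher(["node-1", "node-2"])
    fake_run = lambda server, model, params: f"{model} computed on {server}"
    print(dispatcher.submit("IGRF", {"year": 2010}, fake_run))
    print(dispatcher.submit("IGRF", {"year": 2010}, fake_run))  # cache hit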
NASA Technical Reports Server (NTRS)
Eckhardt, Dave E., Jr.; Jipping, Michael J.; Wild, Chris J.; Zeil, Steven J.; Roberts, Cathy C.
1993-01-01
A study of computer engineering tool integration using the Portable Common Tool Environment (PCTE) Public Interface Standard is presented. Over a 10-week time frame, three existing software products were encapsulated to work in the Emeraude environment, an implementation of the PCTE version 1.5 standard. The software products used were a computer-aided software engineering (CASE) design tool, a software reuse tool, and a computer architecture design and analysis tool. The tool set was then demonstrated to work in a coordinated design process in the Emeraude environment. The project and the features of PCTE used are described, experience with the use of the Emeraude environment over the project time frame is summarized, and several related areas for future research are identified.
An Approach to Integrate a Space-Time GIS Data Model with High Performance Computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Dali; Zhao, Ziliang; Shaw, Shih-Lung
2011-01-01
In this paper, we describe an approach to integrating a Space-Time GIS data model on a high-performance computing platform. The Space-Time GIS data model was developed in a desktop computing environment. We use it to generate a GIS module, which organizes a series of remote sensing data. We are in the process of porting the GIS module into an HPC environment, in which the GIS modules handle large datasets directly via a parallel file system. Although this is an ongoing project, the authors hope this effort can inspire further discussion on the integration of GIS on high-performance computing platforms.
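Because the port is ongoing, the following is only a generic sketch of the pattern the abstract describes (each rank reading its own share of data straight off a parallel file system), written with mpi4py; the paths and file layout are hypothetical.

    # Generic pattern sketch, not the authors' code; paths are hypothetical.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # On a parallel file system each rank opens its own files directly,
    # without staging the whole dataset through a single node.
    time_slices = [f"/lustre/gis/slice_{t:04d}.tif" for t in range(365)]
    my_slices = time_slices[rank::size]  # round-robin partition of time slices

    local_count = len(my_slices)         # stand-in for the real processing
    total = comm.reduce(local_count, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"{total} slices processed across {size} ranks")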
ERIC Educational Resources Information Center
Zillesen, P. G. van Schaick; And Others
Instructional feedback given to the learners during computer simulation sessions may be greatly improved by integrating educational computer simulation programs with hypermedia-based computer-assisted learning (CAL) materials. A prototype of a learning environment of this type called BRINE PURIFICATION was developed for use in corporate training…
Grethe, Jeffrey S; Ross, Edward; Little, David; Sanders, Brian; Gupta, Amarnath; Astakhov, Vadim
2009-01-01
This paper presents current progress in the development of a semantic data integration environment, which is part of the Biomedical Informatics Research Network (BIRN; http://www.nbirn.net) project. BIRN is sponsored by the National Center for Research Resources (NCRR), a component of the National Institutes of Health (NIH). A goal is the development of a cyberinfrastructure for biomedical research that supports advanced data acquisition, data storage, data management, data integration, data mining, data visualization, and other computing and information processing services over the Internet. Each participating institution maintains storage of its experimental or computationally derived data. A mediator-based data integration system performs semantic integration over the databases to enable researchers to perform analyses based on larger and broader datasets than would be available from any single institution's data. This paper describes a recent revision of the system architecture, implementation, and capabilities of the semantically based data integration environment for BIRN.
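As a hedged illustration of what "mediator-based" integration means here (invented toy sources and field mappings, not the BIRN mediator), a query phrased in a shared vocabulary can be rewritten per source and the results normalized back:

    # Toy mediator sketch -- not the BIRN implementation.
    SOURCES = {
        "site_a": {"data": [{"subj": "s1", "dx": "MCI"}],
                   "mapping": {"subject_id": "subj", "diagnosis": "dx"}},
        "site_b": {"data": [{"id": "s2", "diag": "AD"}],
                   "mapping": {"subject_id": "id", "diagnosis": "diag"}},
    }

    def mediated_query(field, value=None):
        """Query every source in the shared vocabulary; the mediator
        translates field names in both directions."""
        results = []
        for name, src in SOURCES.items():
            local_field = src["mapping"][field]
            for record in src["data"]:
                if value is None or record.get(local_field) == value:
                    row = {shared: record[local]
                           for shared, local in src["mapping"].items()}
                    row["source"] = name
                    results.append(row)
        return results

    print(mediated_query("diagnosis"))  # records from both sites, one schema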
ERIC Educational Resources Information Center
Shaqour, Ali Zuhdi H.
2005-01-01
This study introduces a "Technology Integration Model" for a learning environment utilizing constructivist learning principles and integrating new technologies, namely computers and the Internet, into pre-service teacher training programs. The technology integrated programs and learning environments may assist learners to gain experiences…
Multidisciplinary High-Fidelity Analysis and Optimization of Aerospace Vehicles. Part 1; Formulation
NASA Technical Reports Server (NTRS)
Walsh, J. L.; Townsend, J. C.; Salas, A. O.; Samareh, J. A.; Mukhopadhyay, V.; Barthelemy, J.-F.
2000-01-01
An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity, finite element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes the engineering aspects of formulating the optimization by integrating these analysis codes and associated interface codes into the system. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture (CORBA) compliant software product. A companion paper presents currently available results.
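As a rough sketch of the integration pattern (plain Python callables standing in for the CORBA-wrapped discipline codes; the coefficients are invented), a driver can iterate the coupled analyses to consistency before an optimizer ever sees them:

    # Toy stand-ins for the discipline codes; not the NASA system.
    def aerodynamics(shape, deflection):
        """Stand-in for the CFD code: load depends on shape and deflection."""
        return 1.0 * shape - 0.5 * deflection

    def structures(load):
        """Stand-in for the finite element code: deflection under load."""
        return 0.2 * load

    def coupled_analysis(shape, tol=1e-10):
        """Fixed-point iteration until the two disciplines agree."""
        deflection = 0.0
        while True:
            load = aerodynamics(shape, deflection)
            new_deflection = structures(load)
            if abs(new_deflection - deflection) < tol:
                return load, new_deflection
            deflection = new_deflection

    print(coupled_analysis(shape=2.0))  # converged load and deflection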
Applications integration in a hybrid cloud computing environment: modelling and platform
NASA Astrophysics Data System (ADS)
Li, Qing; Wang, Ze-yuan; Li, Wei-hua; Li, Jun; Wang, Cheng; Du, Rui-yang
2013-08-01
With the development of application service providers and cloud computing, more and more small- and medium-sized business enterprises use software services and even infrastructure services provided by professional information service companies to replace all or part of their information systems (ISs). These information service companies provide applications, such as data storage, computing processes, document sharing and even management information system services, as public resources to support the business process management of their customers. However, no cloud computing service vendor can satisfy the full functional IS requirements of an enterprise. As a result, enterprises often have to simultaneously use systems distributed in different clouds as well as their intra-enterprise ISs. Thus, this article presents a framework to integrate applications deployed in public clouds and intra-enterprise ISs. A run-time platform is developed, and a cross-computing-environment process modelling technique is also developed to improve the feasibility of ISs under hybrid cloud computing environments.
Integrated Engineering Information Technology, FY93 accomplishments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, R.N.; Miller, D.K.; Neugebauer, G.L.
1994-03-01
The Integrated Engineering Information Technology (IEIT) project is providing a comprehensive, easy-to-use computer network solution for communicating with coworkers both inside and outside Sandia National Laboratories. IEIT capabilities include computer networking, electronic mail, mechanical design, and data management. These network-based tools have one fundamental purpose: to help create a concurrent engineering environment that will enable Sandia organizations to excel in today's increasingly competitive business environment.
Simonyan, Vahan; Chumakov, Konstantin; Dingerdissen, Hayley; Faison, William; Goldweber, Scott; Golikov, Anton; Gulzar, Naila; Karagiannis, Konstantinos; Vinh Nguyen Lam, Phuc; Maudru, Thomas; Muravitskaja, Olesja; Osipova, Ekaterina; Pan, Yang; Pschenichnov, Alexey; Rostovtsev, Alexandre; Santana-Quintero, Luis; Smith, Krista; Thompson, Elaine E.; Tkachenko, Valery; Torcivia-Rodriguez, John; Wan, Quan; Wang, Jing; Wu, Tsung-Jung; Wilson, Carolyn; Mazumder, Raja
2016-01-01
The High-performance Integrated Virtual Environment (HIVE) is a distributed storage and compute environment designed primarily to handle next-generation sequencing (NGS) data. This multicomponent cloud infrastructure provides secure web access for authorized users to deposit, retrieve, annotate and compute on NGS data, and to analyse the outcomes using web interface visual environments appropriately built in collaboration with research and regulatory scientists and other end users. Unlike many massively parallel computing environments, HIVE uses a cloud control server which virtualizes services, not processes. It is both very robust and flexible due to the abstraction layer introduced between computational requests and operating system processes. The novel paradigm of moving computations to the data, instead of moving data to computational nodes, has proven to be significantly less taxing for both hardware and network infrastructure. The honeycomb data model developed for HIVE integrates metadata into an object-oriented model. Its distinction from other object-oriented databases is in the additional implementation of a unified application program interface to search, view and manipulate data of all types. This model simplifies the introduction of new data types, thereby minimizing the need for database restructuring and streamlining the development of new integrated information systems. The honeycomb model employs a highly secure hierarchical access control and permission system, allowing determination of data access privileges in a finely granular manner without flooding the security subsystem with a multiplicity of rules. HIVE infrastructure will allow engineers and scientists to perform NGS analysis in a manner that is both efficient and secure. HIVE is actively supported in public and private domains, and project collaborations are welcomed. Database URL: https://hive.biochemistry.gwu.edu PMID:26989153
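The hierarchical permission idea described above (a rule on a path prefix covering everything beneath it, so the security subsystem is not flooded with per-object rules) can be sketched as follows; this toy is our gloss, not HIVE's honeycomb model.

    # Toy hierarchical-permission sketch -- not HIVE's implementation.
    RULES = {
        ("lab",): {"alice": "read"},                  # covers the whole subtree
        ("lab", "ngs", "run42"): {"alice": "write"},  # deeper override
    }

    def permission(user, path):
        """The most specific matching prefix wins."""
        best = None
        for prefix, grants in RULES.items():
            if path[:len(prefix)] == prefix and user in grants:
                if best is None or len(prefix) > len(best[0]):
                    best = (prefix, grants[user])
        return best[1] if best else "none"

    print(permission("alice", ("lab", "ngs", "run42", "reads.fastq")))  # write
    print(permission("alice", ("lab", "imaging", "scan1")))             # read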
One approach for evaluating the Distributed Computing Design System (DCDS)
NASA Technical Reports Server (NTRS)
Ellis, J. T.
1985-01-01
The Distributed Computer Design System (DCDS) provides an integrated environment to support the life cycle of developing real-time distributed computing systems. The primary focus of DCDS is to significantly increase system reliability and software development productivity, and to minimize schedule and cost risk. DCDS consists of integrated methodologies, languages, and tools to support the life cycle of developing distributed software and systems. Smooth and well-defined transitions from phase to phase, language to language, and tool to tool provide a unique and unified environment. An approach to evaluating DCDS highlights its benefits.
BioVLAB-MMIA: a cloud environment for microRNA and mRNA integrated analysis (MMIA) on Amazon EC2.
Lee, Hyungro; Yang, Youngik; Chae, Heejoon; Nam, Seungyoon; Choi, Donghoon; Tangchaisin, Patanachai; Herath, Chathura; Marru, Suresh; Nephew, Kenneth P; Kim, Sun
2012-09-01
MicroRNAs, by regulating the expression of hundreds of target genes, play critical roles in developmental biology and the etiology of numerous diseases, including cancer. As a vast amount of microRNA expression profile data is now publicly available, the integration of microRNA expression data sets with gene expression profiles is a key research problem in life science research. However, conducting genome-wide microRNA-mRNA (gene) integration currently requires sophisticated, high-end informatics tools and significant expertise in bioinformatics and computer science to carry out the complex integration analysis. In addition, increased computing infrastructure capabilities are essential in order to accommodate large data sets. In this study, we have extended the BioVLAB cloud workbench to develop an environment for the integrated analysis of microRNA and mRNA expression data, named BioVLAB-MMIA. The workbench facilitates computations on Amazon EC2 and S3 resources orchestrated by the XBaya Workflow Suite. The advantages of BioVLAB-MMIA over the web-based MMIA system include: 1) it is readily expanded as new computational tools become available; 2) it is easily modified by re-configuring graphic icons in the workflow; 3) on-demand cloud computing resources can be used on an "as needed" basis; and 4) distributed orchestration supports complex and long-running workflows asynchronously. We believe that BioVLAB-MMIA will be an easy-to-use computing environment for researchers who plan to perform genome-wide microRNA-mRNA (gene) integrated analysis tasks.
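The workflow style described (steps wired into a graph and run once their inputs are ready) can be sketched generically; the step names are invented and this is not XBaya or BioVLAB code.

    # Generic dependency-driven workflow sketch -- not XBaya/BioVLAB code.
    from graphlib import TopologicalSorter

    def fetch_mirna():   return "mirna-profiles"
    def fetch_mrna():    return "mrna-profiles"
    def integrate(a, b): return f"integrated({a}, {b})"

    WORKFLOW = {                 # step -> steps it depends on
        "fetch_mirna": [],
        "fetch_mrna": [],
        "integrate": ["fetch_mirna", "fetch_mrna"],
    }
    results = {}
    STEPS = {
        "fetch_mirna": fetch_mirna,
        "fetch_mrna": fetch_mrna,
        "integrate": lambda: integrate(results["fetch_mirna"],
                                       results["fetch_mrna"]),
    }

    for step in TopologicalSorter(WORKFLOW).static_order():
        results[step] = STEPS[step]()   # each step runs after its inputs
    print(results["integrate"])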
Principles of Faithful Execution in the implementation of trusted objects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tarman, Thomas David; Campbell, Philip LaRoche; Pierson, Lyndon George
2003-09-01
We begin with the following definitions: Definition: A trusted volume is the computing machinery (including communication lines) within which data is assumed to be physically protected from an adversary. A trusted volume provides both integrity and privacy. Definition: Program integrity consists of the protection necessary to enable the detection of changes in the bits comprising a program as specified by the developer, for the entire time that the program is outside a trusted volume. For ease of discussion we consider program integrity to be the aggregation of two elements: instruction integrity (detection of changes in the bits within an instruction or block of instructions), and sequence integrity (detection of changes in the locations of instructions within a program). Definition: Faithful Execution (FE) is a type of software protection that begins when the software leaves the control of the developer and ends within the trusted volume of a target processor. That is, FE provides program integrity, even while the program is in execution. (As we will show below, FE schemes are a function of trusted volume size.) FE is a necessary quality for computing. Without it we cannot trust computations. In the early days of computing FE came for free since the software never left a trusted volume. At that time the execution environment was the same as the development environment. In some circles that environment was referred to as a ''closed shop:'' all of the software that was used there was developed there. When an organization bought a large computer from a vendor the organization would run its own operating system on that computer, use only its own editors, only its own compilers, only its own debuggers, and so on. However, with the continuing maturity of computing technology, FE becomes increasingly difficult to achieve.
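A minimal sketch of the two integrity elements defined above (not Sandia's scheme; the key handling is deliberately naive): instruction integrity via a MAC over each block, and sequence integrity by binding each MAC to the block's location in the program.

    # Minimal Faithful-Execution-style sketch; illustrative only.
    import hmac, hashlib

    KEY = b"key-provisioned-inside-the-trusted-volume"  # illustrative key

    def protect(program):
        """Developer side: tag each block with MAC(key, address || block)."""
        return [(addr, block,
                 hmac.new(KEY, bytes([addr]) + block, hashlib.sha256).digest())
                for addr, block in enumerate(program)]

    def faithful_fetch(addr, block, tag):
        """Target side: verify the bits AND the location before executing."""
        expected = hmac.new(KEY, bytes([addr]) + block,
                            hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise RuntimeError(f"integrity violation at address {addr}")
        return block

    tagged = protect([b"LOAD r1", b"ADD r1,r2", b"STORE r1"])
    tagged[1], tagged[2] = tagged[2], tagged[1]   # adversary reorders blocks
    try:
        for addr, (_, block, tag) in enumerate(tagged):
            faithful_fetch(addr, block, tag)
    except RuntimeError as e:
        print(e)   # sequence integrity violation detected at address 1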
Global Estimates of Errors in Quantum Computation by the Feynman-Vernon Formalism
NASA Astrophysics Data System (ADS)
Aurell, Erik
2018-06-01
The operation of a quantum computer is considered as a general quantum operation on a mixed state on many qubits followed by a measurement. The general quantum operation is further represented as a Feynman-Vernon double path integral over the histories of the qubits and of an environment, and afterward tracing out the environment. The qubit histories are taken to be paths on the two-sphere S^2 as in Klauder's coherent-state path integral of spin, and the environment is assumed to consist of harmonic oscillators initially in thermal equilibrium, and linearly coupled to qubit operators \hat{S}_z. The environment can then be integrated out to give a Feynman-Vernon influence action coupling the forward and backward histories of the qubits. This representation allows one to derive, in a simple way, estimates showing that the total error of operation of a quantum computer without error correction scales linearly with the number of qubits and the time of operation. It also allows one to discuss Kitaev's toric code interacting with an environment in the same manner.
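In rough schematic form (our notation and gloss, not the paper's), the claimed scaling follows from an exponential damping factor with an effective per-qubit coupling rate \gamma:

    F(T) \sim \exp(-\gamma N T) \approx 1 - \gamma N T
    \epsilon_{tot} = 1 - F(T) \propto N T    (for \gamma N T \ll 1)

so without error correction the total error grows linearly in both the number of qubits N and the operating time T, as stated.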
Sainath, Kamalesh; Teixeira, Fernando L; Donderici, Burkay
2014-01-01
We develop a general-purpose formulation, based on two-dimensional spectral integrals, for computing electromagnetic fields produced by arbitrarily oriented dipoles in planar-stratified environments, where each layer may exhibit arbitrary and independent anisotropy in both its (complex) permittivity and permeability tensors. Among the salient features of our formulation are (i) computation of eigenmodes (characteristic plane waves) supported in arbitrarily anisotropic media in a numerically robust fashion, (ii) implementation of an hp-adaptive refinement for the numerical integration to evaluate the radiation and weakly evanescent spectra contributions, and (iii) development of an adaptive extension of an integral convergence acceleration technique to compute the strongly evanescent spectrum contribution. While other semianalytic techniques exist to solve this problem, none have full applicability to media exhibiting arbitrary double anisotropies in each layer, where one must account for the whole range of possible phenomena (e.g., mode coupling at interfaces and nonreciprocal mode propagation). Brute-force numerical methods can tackle this problem but only at a much higher computational cost. The present formulation provides an efficient and robust technique for field computation in arbitrary planar-stratified environments. We demonstrate the formulation for a number of problems related to geophysical exploration.
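The authors' hp-adaptive scheme is specialized to these spectral integrals, but the underlying refinement idea can be illustrated with a generic adaptive Simpson rule applied to a mildly oscillatory, decaying integrand (our toy example, not their formulation):

    # Generic adaptive-refinement sketch; not the authors' hp-adaptive scheme.
    import math

    def simpson(f, a, b):
        return (b - a) / 6.0 * (f(a) + 4.0 * f((a + b) / 2.0) + f(b))

    def adaptive(f, a, b, tol=1e-9):
        m = (a + b) / 2.0
        whole = simpson(f, a, b)
        halves = simpson(f, a, m) + simpson(f, m, b)
        if abs(halves - whole) < 15.0 * tol:       # standard error estimate
            return halves + (halves - whole) / 15.0
        return adaptive(f, a, m, tol / 2.0) + adaptive(f, m, b, tol / 2.0)

    # Oscillatory, decaying stand-in for a spectral integrand.
    f = lambda k: math.cos(40.0 * k) * math.exp(-k)
    print(adaptive(f, 0.0, 10.0))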
Computers and Individualized Instruction: Moving to Alternative Learning Environments.
ERIC Educational Resources Information Center
Robbat, Richard J.
The overall focus of this booklet is on planning for change that allows for integration of computers into articulated learning environments that will enhance the learning goals of students. The first chapter presents four major themes to increase the likelihood of combining computers and individualized instruction in schools: (1) a revitalized form…
Heterogeneity in Health Care Computing Environments
Sengupta, Soumitra
1989-01-01
This paper discusses issues of heterogeneity in computer systems, networks, databases, and presentation techniques, and the problems heterogeneity creates in developing integrated medical information systems. The need for institutional, comprehensive goals is emphasized. Using the Columbia-Presbyterian Medical Center's computing environment as the case study, various steps to solve the heterogeneity problem are presented.
Using an architectural approach to integrate heterogeneous, distributed software components
NASA Technical Reports Server (NTRS)
Callahan, John R.; Purtilo, James M.
1995-01-01
Many computer programs cannot be easily integrated because their components are distributed and heterogeneous, i.e., they are implemented in diverse programming languages, use different data representation formats, or their runtime environments are incompatible. In many cases, programs are integrated by modifying their components or interposing mechanisms that handle communication and conversion tasks. For example, remote procedure call (RPC) helps integrate heterogeneous, distributed programs. When configuring such programs, however, mechanisms like RPC must be used explicitly by software developers in order to integrate collections of diverse components. Each collection may require a unique integration solution. This paper describes improvements to the concepts of software packaging and some of our experiences in constructing complex software systems from a wide variety of components in different execution environments. Software packaging is a process that automatically determines how to integrate a diverse collection of computer programs based on the types of components involved and the capabilities of available translators and adapters in an environment. Software packaging provides a context that relates such mechanisms to software integration processes and reduces the cost of configuring applications whose components are distributed or implemented in different programming languages. Our software packaging tool subsumes traditional integration tools like UNIX make by providing a rule-based approach to software integration that is independent of execution environments.
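The rule-based selection the packager performs can be pictured as a search over adapter edges between interface types; the following is an invented toy, not the authors' tool.

    # Toy type-driven packaging sketch -- not the authors' packager.
    from collections import deque

    ADAPTERS = {                   # (from_type, to_type): adapter name
        ("c_struct", "xdr"): "xdr_encode",
        ("xdr", "json"): "xdr2json",
        ("json", "rpc_msg"): "json_rpc_wrap",
    }

    def integration_plan(src, dst):
        """Breadth-first search for a chain of adapters from src to dst."""
        queue, seen = deque([(src, [])]), {src}
        while queue:
            t, plan = queue.popleft()
            if t == dst:
                return plan
            for (a, b), name in ADAPTERS.items():
                if a == t and b not in seen:
                    seen.add(b)
                    queue.append((b, plan + [name]))
        return None   # no integration path in this environment

    print(integration_plan("c_struct", "rpc_msg"))
    # -> ['xdr_encode', 'xdr2json', 'json_rpc_wrap']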
Integration of High-Performance Computing into Cloud Computing Services
NASA Astrophysics Data System (ADS)
Vouk, Mladen A.; Sills, Eric; Dreher, Patrick
High-Performance Computing (HPC) projects span a spectrum of computer hardware implementations ranging from peta-flop supercomputers, high-end tera-flop facilities running a variety of operating systems and applications, to mid-range and smaller computational clusters used for HPC application development, pilot runs and prototype staging clusters. What they all have in common is that they operate as a stand-alone system rather than a scalable and shared user re-configurable resource. The advent of cloud computing has changed the traditional HPC implementation. In this article, we will discuss a very successful production-level architecture and policy framework for supporting HPC services within a more general cloud computing infrastructure. This integrated environment, called Virtual Computing Lab (VCL), has been operating at NC State since fall 2004. Nearly 8,500,000 HPC CPU-Hrs were delivered by this environment to NC State faculty and students during 2009. In addition, we present and discuss operational data that show that integration of HPC and non-HPC (or general VCL) services in a cloud can substantially reduce the cost of delivering cloud services (down to cents per CPU hour).
Adopting Cloud Computing in the Pakistan Navy
2015-06-01
administrative aspect is required to operate optimally, provide synchronized delivery of cloud services, and integrate multi-provider cloud environment... also adopted cloud computing as an integral component of military operations conducted either locally or remotely. With the use of cloud services...
Enabling Earth Science Through Cloud Computing
NASA Technical Reports Server (NTRS)
Hardman, Sean; Riofrio, Andres; Shams, Khawaja; Freeborn, Dana; Springer, Paul; Chafin, Brian
2012-01-01
Cloud Computing holds tremendous potential for missions across the National Aeronautics and Space Administration. Several flight missions are already benefiting from an investment in cloud computing for mission critical pipelines and services through faster processing time, higher availability, and drastically lower costs available on cloud systems. However, these processes do not currently extend to general scientific algorithms relevant to earth science missions. The members of the Airborne Cloud Computing Environment task at the Jet Propulsion Laboratory have worked closely with the Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE) mission to integrate cloud computing into their science data processing pipeline. This paper details the efforts involved in deploying a science data system for the CARVE mission, evaluating and integrating cloud computing solutions with the system and porting their science algorithms for execution in a cloud environment.
Integrating Grid Services into the Cray XT4 Environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
NERSC; Cholia, Shreyas; Lin, Hwa-Chun Wendy
2009-05-01
The 38,640-core Cray XT4 "Franklin" system at the National Energy Research Scientific Computing Center (NERSC) is a massively parallel resource available to Department of Energy researchers that also provides on-demand grid computing to the Open Science Grid. The integration of grid services on Franklin presented various challenges, including fundamental differences between the interactive and compute nodes, a stripped-down compute-node operating system without dynamic library support, a shared-root environment and idiosyncratic application launching. In our work, we describe how we resolved these challenges on a running, general-purpose production system to provide on-demand compute, storage, accounting and monitoring services through generic grid interfaces that mask the underlying system-specific details for the end user.
Goscinski, Wojtek J.; McIntosh, Paul; Felzmann, Ulrich; Maksimenko, Anton; Hall, Christopher J.; Gureyev, Timur; Thompson, Darren; Janke, Andrew; Galloway, Graham; Killeen, Neil E. B.; Raniga, Parnesh; Kaluza, Owen; Ng, Amanda; Poudel, Govinda; Barnes, David G.; Nguyen, Toan; Bonnington, Paul; Egan, Gary F.
2014-01-01
The Multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) is a national imaging and visualization facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software, and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computer tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research. PMID:24734019
National electronic medical records integration on cloud computing system.
Mirza, Hebah; El-Masri, Samir
2013-01-01
Few healthcare providers have an advanced level of Electronic Medical Record (EMR) adoption; others have a low level, and most have no EMR at all. Cloud computing is a new emerging technology that has been used in other industries with great success. Despite its great features, cloud computing has not yet been fairly utilized in the healthcare industry. This study presents an innovative healthcare cloud computing system for integrating the Electronic Health Record (EHR). The proposed system applies cloud computing technology to the EHR system to present a comprehensive, integrated EHR environment.
The Computing and Data Grid Approach: Infrastructure for Distributed Science Applications
NASA Technical Reports Server (NTRS)
Johnston, William E.
2002-01-01
With the advent of Grids - infrastructure for using and managing widely distributed computing and data resources in the science environment - there is now an opportunity to provide a standard, large-scale, computing, data, instrument, and collaboration environment for science that spans many different projects and provides the required infrastructure and services in a relatively uniform and supportable way. Grid technology has evolved over the past several years to provide the services and infrastructure needed for building 'virtual' systems and organizations. We argue that Grid technology provides an excellent basis for the creation of the integrated environments that can combine the resources needed to support the large-scale science projects located at multiple laboratories and universities. We present some science case studies that indicate that a paradigm shift in the process of science will come about as a result of Grids providing transparent and secure access to advanced and integrated information and technologies infrastructure: powerful computing systems, large-scale data archives, scientific instruments, and collaboration tools. These changes will be in the form of services that can be integrated with the user's work environment, and that enable uniform and highly capable access to these computers, data, and instruments, regardless of the location or exact nature of these resources. These services will integrate transient-use resources like computing systems, scientific instruments, and data caches (e.g., as they are needed to perform a simulation or analyze data from a single experiment); persistent-use resources, such as databases, data catalogues, and archives; and collaborators, whose involvement will continue for the lifetime of a project or longer. While we largely address large-scale science in this paper, Grids, particularly when combined with Web Services, will address a broad spectrum of science scenarios, both large and small scale.
Color graphics, interactive processing, and the supercomputer
NASA Technical Reports Server (NTRS)
Smith-Taylor, Rudeen
1987-01-01
The development of a common graphics environment for the NASA Langley Research Center user community and the integration of a supercomputer into this environment is examined. The initial computer hardware, the software graphics packages, and their configurations are described. The addition of improved computer graphics capability to the supercomputer, and the utilization of the graphic software and hardware are discussed. Consideration is given to the interactive processing system which supports the computer in an interactive debugging, processing, and graphics environment.
An Integrated Development Environment for Adiabatic Quantum Programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humble, Travis S; McCaskey, Alex; Bennink, Ryan S
2014-01-01
Adiabatic quantum computing is a promising route to the computational power afforded by quantum information processing. The recent availability of adiabatic hardware raises the question of how well quantum programs perform. Benchmarking behavior is challenging since the multiple steps to synthesize an adiabatic quantum program are highly tunable. We present an adiabatic quantum programming environment called JADE that provides control over all the steps taken during program development. JADE captures the workflow needed to rigorously benchmark performance while also allowing a variety of problem types, programming techniques, and processor configurations. We have also integrated JADE with a quantum simulation engine that enables program profiling using numerical calculation. The computational engine supports plug-ins for simulation methodologies tailored to various metrics and computing resources. We present the design, integration, and deployment of JADE and discuss its use for benchmarking adiabatic quantum programs.
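The kind of numerical profiling such an engine performs can be sketched for a toy two-qubit instance (our example, not JADE's plug-in API): interpolate H(s) = (1 - s) H0 + s H1 and measure the final overlap with the problem Hamiltonian's ground state.

    # Toy adiabatic-evolution profile; not JADE code.
    import numpy as np

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    I2 = np.eye(2, dtype=complex)

    H0 = -(np.kron(X, I2) + np.kron(I2, X))      # driver: transverse field
    H1 = -np.kron(Z, Z) + 0.5 * np.kron(Z, I2)   # problem: tiny Ising instance

    def success_probability(T=30.0, steps=3000):
        psi = np.linalg.eigh(H0)[1][:, 0]        # start in driver ground state
        dt = T / steps
        for n in range(steps):
            s = (n + 0.5) / steps
            w, v = np.linalg.eigh((1 - s) * H0 + s * H1)
            psi = v @ (np.exp(-1j * w * dt) * (v.conj().T @ psi))  # exact step
        gs = np.linalg.eigh(H1)[1][:, 0]         # target ground state
        return abs(gs.conj() @ psi) ** 2

    print(f"ground-state overlap: {success_probability():.4f}")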
NASA Astrophysics Data System (ADS)
Evans, B. J. K.; Foster, C.; Minchin, S. A.; Pugh, T.; Lewis, A.; Wyborn, L. A.; Evans, B. J.; Uhlherr, A.
2014-12-01
The National Computational Infrastructure (NCI) has established a powerful in-situ computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods that support data analysis, and 3) the progress in addressing harmonisation of the underlying data collections for future transdisciplinary research that enables accurate climate projections. NCI makes available 10+ PB of major data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale and high-bandwidth Lustre filesystems. This computational environment supports a catalogue of integrated reusable software and workflows from earth system and ecosystem modelling, weather research, and satellite and other observational data processing and analysis. To enable transdisciplinary research on this scale, data needs to be harmonised so that researchers can readily apply techniques and software across the corpus of data available and not be constrained to work within artificial disciplinary boundaries. Future challenges will involve the further integration and analysis of this data across the social sciences to facilitate impact studies in the societal domain, including timely analysis to more accurately predict and forecast future climate and environmental state.
ERIC Educational Resources Information Center
Beale, Ivan L.
2005-01-01
Computer assisted learning (CAL) can involve a computerised intelligent learning environment, defined as an environment capable of automatically, dynamically and continuously adapting to the learning context. One aspect of this adaptive capability involves automatic adjustment of instructional procedures in response to each learner's performance,…
An Implemented Strategy for Campus Connectivity and Cooperative Computing.
ERIC Educational Resources Information Center
Halaris, Antony S.; Sloan, Lynda W.
1989-01-01
ConnectPac, a software package developed at Iona College to allow a computer user to access all services from a single personal computer, is described. ConnectPac uses mainframe computing to support a campus computing network, integrating personal and centralized computing into a menu-driven user environment. (Author/MLW)
NASA Technical Reports Server (NTRS)
1980-01-01
The requirements implementation strategy for first level development of the Integrated Programs for Aerospace Vehicle Design (IPAD) computing system is presented. The capabilities of first level IPAD are sufficient to demonstrate management of engineering data on two computers (CDC CYBER 170/720 and DEC VAX 11/780 computers) using the IPAD system in a distributed network environment.
jAMVLE, a New Integrated Molecular Visualization Learning Environment
ERIC Educational Resources Information Center
Bottomley, Steven; Chandler, David; Morgan, Eleanor; Helmerhorst, Erik
2006-01-01
A new computer-based molecular visualization tool has been developed for teaching, and learning, molecular structure. This java-based jmol Amalgamated Molecular Visualization Learning Environment (jAMVLE) is platform-independent, integrated, and interactive. It has an overall graphical user interface that is intuitive and easy to use. The…
Methods for design and evaluation of integrated hardware-software systems for concurrent computation
NASA Technical Reports Server (NTRS)
Pratt, T. W.
1985-01-01
Research activities and publications are briefly summarized. The major tasks reviewed are: (1) VAX implementation of the PISCES parallel programming environment; (2) Apollo workstation network implementation of the PISCES environment; (3) FLEX implementation of the PISCES environment; (4) a sparse matrix iterative solver in PISCES Fortran; (5) an image processing application of PISCES; and (6) a formal model of concurrent computation being developed.
ERIC Educational Resources Information Center
Colker, Larry
Viewing computers in various forms as developmentally appropriate objects for children, this discussion provides a framework for integrating conceptions of computers and conceptions of play. Several instances are cited from the literature in which explicit analogies have been made between computers and playthings or play environments.…
Singularity: Scientific containers for mobility of compute.
Kurtzer, Gregory M; Sochat, Vanessa; Bauer, Michael W
2017-01-01
Here we present Singularity, software developed to bring containers and reproducibility to scientific computing. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms. Singularity is an open source initiative that harnesses the expertise of system and software engineers and researchers alike, and integrates seamlessly into common workflows for both of these groups. As its primary use case, Singularity brings mobility of computing to both users and HPC centers, providing a secure means to capture and distribute software and compute environments. This ability to create and deploy reproducible environments across these centers, a previously unmet need, makes Singularity a game changing development for computational science.
Code of Federal Regulations, 2014 CFR
2014-10-01
... PROGRAMS ENVIRONMENT, ENERGY AND WATER EFFICIENCY, RENEWABLE ENERGY TECHNOLOGIES, OCCUPATIONAL SAFETY, AND... an integrated computer display and be capable of operation off of an integrated battery or other...
NASA Technical Reports Server (NTRS)
Walsh, J. L.; Weston, R. P.; Samareh, J. A.; Mason, B. H.; Green, L. L.; Biedron, R. T.
2000-01-01
An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity finite-element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes both the preliminary results from implementing and validating the multidisciplinary analysis and the results from an aerodynamic optimization. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture compliant software product. A companion paper describes the formulation of the multidisciplinary analysis and optimization system.
NASA Technical Reports Server (NTRS)
McGalliard, James
2008-01-01
A viewgraph describing the use of multiple frameworks by NASA, GSA, and U.S. Government agencies is presented. The contents include: 1) Federal Systems Integration and Management Center (FEDSIM) and NASA Center for Computational Sciences (NCCS) Environment; 2) Ruling Frameworks; 3) Implications; and 4) Reconciling Multiple Frameworks.
ERIC Educational Resources Information Center
Wood, Eileen; Specht, Jacqueline; Willoughby, Teena; Mueller, Julie
2008-01-01
The purpose of this study was to assess the educators' perspectives on the introduction of computer technology in the early childhood education environment. Fifty early childhood educators completed a survey and participated in focus groups. Parallels existed between the individually completed survey data and the focus group discussions. The…
Scheduling multimedia services in cloud computing environment
NASA Astrophysics Data System (ADS)
Liu, Yunchang; Li, Chunlin; Luo, Youlong; Shao, Yanling; Zhang, Jing
2018-02-01
Currently, security is a critical factor for multimedia services running in the cloud computing environment. As an effective mechanism, trust can improve the security level and mitigate attacks within cloud computing environments. Unfortunately, existing scheduling strategies for multimedia services in the cloud computing environment do not integrate a trust mechanism when making scheduling decisions. In this paper, we propose a scheduling scheme for multimedia services in multiple clouds. First, a novel scheduling architecture is presented. Then, we build a trust model, including both subjective trust and objective trust, to evaluate the trust degree of multimedia service providers. By employing Bayesian theory, the subjective trust degree between multimedia service providers and users is obtained. According to the attributes of QoS, the objective trust degree of multimedia service providers is calculated. Finally, a scheduling algorithm integrating the trust of entities is proposed by considering the deadline, cost and trust requirements of multimedia services. The scheduling algorithm heuristically searches for reasonable resource allocations that satisfy the trust requirements and meet the deadlines of the multimedia services. Detailed simulation experiments demonstrate the effectiveness and feasibility of the proposed trust scheduling scheme.
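As a hedged sketch of the subjective-trust component (a Beta-posterior simplification of the Bayesian model; the providers and numbers are invented), trust can be updated from interaction outcomes and combined with deadline and cost checks at scheduling time:

    # Simplified trust-aware scheduling sketch; not the paper's algorithm.
    providers = {
        # name: (successes, failures, completion_time, cost)
        "cloud_a": (40, 2, 8.0, 5.0),
        "cloud_b": (10, 5, 4.0, 3.0),
    }

    def trust(successes, failures):
        """Posterior mean of Beta(1 + successes, 1 + failures)."""
        return (successes + 1) / (successes + failures + 2)

    def schedule(deadline, budget):
        """Most trusted provider that meets the deadline and cost limits."""
        feasible = [(name, trust(s, f))
                    for name, (s, f, t, c) in providers.items()
                    if t <= deadline and c <= budget]
        return max(feasible, key=lambda x: x[1], default=None)

    print(schedule(deadline=10.0, budget=6.0))  # ('cloud_a', ...): most trusted
    print(schedule(deadline=5.0, budget=6.0))   # only 'cloud_b' is feasible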
The Effects of Integrating Social Learning Environment with Online Learning
ERIC Educational Resources Information Center
Raspopovic, Miroslava; Cvetanovic, Svetlana; Medan, Ivana; Ljubojevic, Danijela
2017-01-01
The aim of this paper is to present the learning and teaching styles using the Social Learning Environment (SLE), which was developed based on the computer supported collaborative learning approach. To avoid burdening learners with multiple platforms and tools, SLE was designed and developed in order to integrate existing systems, institutional…
Integration of Wireless Technologies in Smart University Campus Environment: Framework Architecture
ERIC Educational Resources Information Center
Khamayseh, Yaser; Mardini, Wail; Aljawarneh, Shadi; Yassein, Muneer Bani
2015-01-01
In this paper, the authors are particularly interested in enhancing the education process by integrating new tools to the teaching environments. This enhancement is part of an emerging concept, called smart campus. Smart University Campus will come up with a new ubiquitous computing and communication field and change people's lives radically by…
NASA Astrophysics Data System (ADS)
Mullen, Katharine M.
Human-technology integration is the replacement of human parts and extension of human capabilities with engineered devices and substrates. Its result is hybrid biological-artificial systems. We discuss here four categories of products furthering human-technology integration: wearable computers, pervasive computing environments, engineered tissues and organs, and prosthetics, and introduce examples of currently realized systems in each category. We then note that realization of a completely artificial system via the path of human-technology integration presents the prospect of empirical confirmation of an aware artificially embodied system.
Project #OA-FY14-0126, January 15, 2014. The EPA OIG is starting fieldwork on the Council of the Inspectors General on Integrity and Efficiency (CIGIE) Cloud Computing Initiative – Status of Cloud-Computing Environments Within the Federal Government.
Computational System For Rapid CFD Analysis In Engineering
NASA Technical Reports Server (NTRS)
Barson, Steven L.; Ascoli, Edward P.; Decroix, Michelle E.; Sindir, Munir M.
1995-01-01
Computational system comprising modular hardware and software sub-systems developed to accelerate and facilitate use of techniques of computational fluid dynamics (CFD) in engineering environment. Addresses integration of all aspects of CFD analysis process, including definition of hardware surfaces, generation of computational grids, CFD flow solution, and postprocessing. Incorporates interfaces for integration of all hardware and software tools needed to perform complete CFD analysis. Includes tools for efficient definition of flow geometry, generation of computational grids, computation of flows on grids, and postprocessing of flow data. System accepts geometric input from any of three basic sources: computer-aided design (CAD), computer-aided engineering (CAE), or definition by user.
Numerical methods for engine-airframe integration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murthy, S.N.B.; Paynter, G.C.
1986-01-01
Various papers on numerical methods for engine-airframe integration are presented. The individual topics considered include: scientific computing environment for the 1980s, overview of prediction of complex turbulent flows, numerical solutions of the compressible Navier-Stokes equations, elements of computational engine/airframe integrations, computational requirements for efficient engine installation, application of CAE and CFD techniques to complete tactical missile design, CFD applications to engine/airframe integration, and application of a second-generation low-order panel methods to powerplant installation studies. Also addressed are: three-dimensional flow analysis of turboprop inlet and nacelle configurations, application of computational methods to the design of large turbofan engine nacelles, comparison of full potential and Euler solution algorithms for aeropropulsive flow field computations, subsonic/transonic, supersonic nozzle flows and nozzle integration, subsonic/transonic prediction capabilities for nozzle/afterbody configurations, three-dimensional viscous design methodology of supersonic inlet systems for advanced technology aircraft, and a user's technology assessment.
NASA Technical Reports Server (NTRS)
Hwang, James; Campbell, Perry; Ross, Mike; Price, Charles R.; Barron, Don
1989-01-01
An integrated operating environment was designed to incorporate three general-purpose robots, sensors, and end effectors, including force/torque sensors, tactile array sensors, tactile force sensors, and force-sensing grippers. The design and implementation of: (1) the teleoperation of a general-purpose PUMA robot; (2) an integrated sensor hardware/software system; (3) the force-sensing gripper control; (4) the host computer system for dual Robotic Research arms; and (5) the Ethernet integration are described.
An overview of computer viruses in a research environment
NASA Technical Reports Server (NTRS)
Bishop, Matt
1991-01-01
The threat of attack by computer viruses is in reality a very small part of a much more general threat, specifically threats aimed at subverting computer security. Here, computer viruses are examined as a form of malicious logic in a research and development environment. A relation is drawn between viruses and various models of security and integrity. Current research techniques aimed at controlling the threats posed to computer systems by computer viruses in particular, and by malicious logic in general, are examined. Finally, a brief examination of the vulnerabilities of research and development systems that malicious logic and computer viruses may exploit is undertaken.
Mobility in hospital work: towards a pervasive computing hospital environment.
Morán, Elisa B; Tentori, Monica; González, Víctor M; Favela, Jesus; Martínez-Garcia, Ana I
2007-01-01
Handheld computers are increasingly being used by hospital workers. With the integration of wireless networks into hospital information systems, handheld computers can provide the basis for a pervasive computing hospital environment; to develop this, designers need empirical information about how hospital workers interact with information while moving around. To characterise these phenomena, we report the results of a workplace study conducted in a hospital. We found that individuals spend about half of their time at their base location, where most of their interactions occur. On average, our informants spent 23% of their time performing information management tasks, followed by coordination (17.08%), clinical case assessment (15.35%) and direct patient care (12.6%). We discuss how our results offer insights for the design of pervasive computing technology and directions for further research and development in this field, such as transferring information between heterogeneous devices and integrating the physical and digital domains.
IMAGE: A Design Integration Framework Applied to the High Speed Civil Transport
NASA Technical Reports Server (NTRS)
Hale, Mark A.; Craig, James I.
1993-01-01
Effective design of the High Speed Civil Transport requires the systematic application of design resources throughout a product's life-cycle. Information obtained from the use of these resources is used for the decision-making processes of Concurrent Engineering. Integrated computing environments facilitate the acquisition, organization, and use of required information. State-of-the-art computing technologies provide the basis for the Intelligent Multi-disciplinary Aircraft Generation Environment (IMAGE) described in this paper. IMAGE builds upon existing agent technologies by adding a new component called a model. With the addition of a model, the agent can provide accountable resource utilization in the presence of increasing design fidelity. The development of a zeroth-order agent is used to illustrate agent fundamentals. Using a CATIA(TM)-based agent from previous work, a High Speed Civil Transport visualization system linking CATIA, FLOPS, and ASTROS will be shown. These examples illustrate the important role of the agent technologies used to implement IMAGE, and together they demonstrate that IMAGE can provide an integrated computing environment for the design of the High Speed Civil Transport.
ERIC Educational Resources Information Center
Denda, Kayo; Smulewitz, Gracemary
2004-01-01
In the contemporary library environment, the presence of the Internet and the infrastructure of the integrated library system suggest an integrated internal organization. The article describes the example of Douglass Rationalization, a team-based collaborative project to refocus the collection of Rutgers' Douglass Library, taking advantage of the…
ERIC Educational Resources Information Center
Masson, Steve; Vazquez-Abad, Jesus
2006-01-01
This paper proposes a new way to integrate history of science in science education to promote conceptual change by introducing the notion of historical microworld, which is a computer-based interactive learning environment respecting historic conceptions. In this definition, "interactive" means that the user can act upon the virtual environment by…
Micro-video display with ocular tracking and interactive voice control
NASA Technical Reports Server (NTRS)
Miller, James E.
1993-01-01
In certain space-restricted environments, many of the benefits resulting from computer technology have been foregone because of the size, weight, inconvenience, and lack of mobility associated with existing computer interface devices. Accordingly, an effort to develop a highly miniaturized and 'wearable' computer display and control interface device, referred to as the Sensory Integrated Data Interface (SIDI), is underway. The system incorporates a micro-video display that provides data display and ocular tracking on a lightweight headset. Software commands are implemented by conjunctive eye movement and voice commands of the operator. In this initial prototyping effort, various 'off-the-shelf' components have been integrated into a desktop computer and with a customized menu-tree software application to demonstrate feasibility and conceptual capabilities. When fully developed as a customized system, the interface device will allow mobile, 'hands-free' operation of portable computer equipment. It will thus allow integration of information technology applications into those restrictive environments, both military and industrial, that have not yet taken advantage of the computer revolution. This effort is Phase 1 of Small Business Innovative Research (SBIR) Topic number N90-331 sponsored by the Naval Undersea Warfare Center Division, Newport. The prime contractor is Foster-Miller, Inc. of Waltham, MA.
Welter, Petra; Riesmeier, Jörg; Fischer, Benedikt; Grouls, Christoph; Kuhl, Christiane; Deserno, Thomas M
2011-01-01
It is widely accepted that content-based image retrieval (CBIR) can be extremely useful for computer-aided diagnosis (CAD). However, CBIR has not been established in clinical practice yet. As a widely unattended gap of integration, a unified data concept for CBIR-based CAD results and reporting is lacking. Picture archiving and communication systems and the workflow of radiologists must be considered for successful data integration to be achieved. We suggest that CBIR systems applied to CAD should integrate their results in a picture archiving and communication systems environment such as Digital Imaging and Communications in Medicine (DICOM) structured reporting documents. A sample DICOM structured reporting template adaptable to CBIR and an appropriate integration scheme is presented. The proposed CBIR data concept may foster the promulgation of CBIR systems in clinical environments and, thereby, improve the diagnostic process.
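To make the reporting concept concrete, here is a minimal sketch (ours, not the authors') of wrapping a CBIR result in a DICOM Basic Text SR object with the pydicom library (2.x); the code value, text, and file name are illustrative assumptions, not the paper's template:

```python
# Minimal sketch: wrapping a hypothetical CBIR-based CAD result in a
# DICOM Basic Text SR document using pydicom 2.x. Attribute choices
# are illustrative, not a validated SR template.
from pydicom.dataset import Dataset, FileMetaDataset, FileDataset
from pydicom.uid import generate_uid, ExplicitVRLittleEndian

BASIC_TEXT_SR = "1.2.840.10008.5.1.4.1.1.88.11"  # Basic Text SR Storage

meta = FileMetaDataset()
meta.MediaStorageSOPClassUID = BASIC_TEXT_SR
meta.MediaStorageSOPInstanceUID = generate_uid()
meta.TransferSyntaxUID = ExplicitVRLittleEndian

sr = FileDataset("cbir_report.dcm", {}, file_meta=meta, preamble=b"\0" * 128)
sr.is_little_endian = True
sr.is_implicit_VR = False
sr.SOPClassUID = BASIC_TEXT_SR
sr.SOPInstanceUID = meta.MediaStorageSOPInstanceUID
sr.Modality = "SR"

# One TEXT content item holding the retrieval result (placeholder text).
item = Dataset()
item.RelationshipType = "CONTAINS"
item.ValueType = "TEXT"
concept = Dataset()
concept.CodeValue = "121071"            # DCM 121071: "Finding"
concept.CodingSchemeDesignator = "DCM"
concept.CodeMeaning = "Finding"
item.ConceptNameCodeSequence = [concept]
item.TextValue = "CBIR: 3 most similar cases suggest benign lesion (score 0.91)"
sr.ContentSequence = [item]

sr.save_as("cbir_report.dcm", write_like_original=False)
```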
Computational Tools and Facilities for the Next-Generation Analysis and Design Environment
NASA Technical Reports Server (NTRS)
Noor, Ahmed K. (Compiler); Malone, John B. (Compiler)
1997-01-01
This document contains presentations from the joint UVA/NASA Workshop on Computational Tools and Facilities for the Next-Generation Analysis and Design Environment held at the Virginia Consortium of Engineering and Science Universities in Hampton, Virginia on September 17-18, 1996. The presentations focused on the computational tools and facilities for analysis and design of engineering systems, including real-time simulations, immersive systems, collaborative engineering environments, Web-based tools, and interactive media for technical training. Workshop attendees represented NASA, commercial software developers, the aerospace industry, government labs, and academia. The workshop objectives were to assess the level of maturity of a number of computational tools and facilities and their potential for application to the next-generation integrated design environment.
Forman, Bruce H.; Eccles, Randy; Piggins, Judith; Raila, Wayne; Estey, Greg; Barnett, G. Octo
1990-01-01
We have developed a visually oriented, computer-controlled learning environment designed for use by students of gross anatomy. The goals of this module are to reinforce the concepts of organ relationships and topography by using computed axial tomographic (CAT) images accessed from a videodisc integrated with color graphics and to introduce students to cross-sectional radiographic anatomy. We chose to build the program around CAT scan images because they not only provide excellent structural detail but also offer an anatomic orientation (transverse) that complements that used in the dissection laboratory (basically a layer-by-layer, anterior-to-posterior, or coronal approach). Our system, built using a Microsoft Windows-386 based authoring environment which we designed and implemented, integrates text, video images, and graphics into a single screen display. The program allows both user browsing of information, facilitated by hypertext links, and didactic sessions including mini-quizzes for self-assessment.
Using Learning Analytics for Preserving Academic Integrity
ERIC Educational Resources Information Center
Amigud, Alexander; Arnedo-Moreno, Joan; Daradoumis, Thanasis; Guerrero-Roldan, Ana-Elena
2017-01-01
This paper presents the results of integrating learning analytics into the assessment process to enhance academic integrity in the e-learning environment. The goal of this research is to evaluate the computational-based approach to academic integrity. The machine-learning based framework learns students' patterns of language use from data,…
Integrating LabVIEW into a distributed computing environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kasemir, K. U.; Pieck, M.; Dalesio, L. R.
2001-01-01
Being easy to learn and well suited for a self-contained desktop laboratory setup, the National Instruments LabVIEW environment is the tool many casual programmers prefer for developing their logic. An ActiveX interface is presented that allows integration into a plant-wide distributed environment based on the Experimental Physics and Industrial Control System (EPICS). This paper discusses the design decisions and provides performance information, especially considering requirements for the Spallation Neutron Source (SNS) diagnostics system.
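As a rough illustration of driving LabVIEW through ActiveX from another environment (not the authors' EPICS interface), the sketch below uses pywin32 on Windows; the VI path and connector names are hypothetical, and the GetVIReference/Call methods follow LabVIEW's documented automation interface:

```python
# Sketch: calling a LabVIEW VI through its ActiveX automation server.
# Requires Windows, LabVIEW, and the pywin32 package. The VI path and
# connector-pane names below are hypothetical placeholders.
import win32com.client

lv = win32com.client.Dispatch("LabVIEW.Application")
vi = lv.GetVIReference(r"C:\controls\ReadDiagnostics.vi")

# Pass inputs and read outputs through the VI connector pane.
names = ["ChannelName", "Reading"]
values = ["BPM:01", 0.0]
results = vi.Call(names, values)   # returns the updated parameter values
print("diagnostic reading:", results)
```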
Current Issues for Higher Education Information Resources Management.
ERIC Educational Resources Information Center
CAUSE/EFFECT, 1996
1996-01-01
Issues identified as important to the future of information resources management and use in higher education include information policy in a networked environment, distributed computing, integrating information resources and college planning, benchmarking information technology, integrated digital libraries, technology integration in teaching,…
Web-Based Integrated Research Environment for Aerodynamic Analyses and Design
NASA Astrophysics Data System (ADS)
Ahn, Jae Wan; Kim, Jin-Ho; Kim, Chongam; Cho, Jung-Hyun; Hur, Cinyoung; Kim, Yoonhee; Kang, Sang-Hyun; Kim, Byungsoo; Moon, Jong Bae; Cho, Kum Won
e-AIRS [1,2], an abbreviation of ‘e-Science Aerospace Integrated Research System,' is a virtual organization designed to support aerodynamic flow analyses in aerospace engineering using the e-Science environment. As the first step toward a virtual aerospace engineering organization, e-AIRS intends to give full support to the aerodynamic research process. Currently, e-AIRS can handle both computational and experimental aerodynamic research on the e-Science infrastructure. In detail, users can conduct a full CFD (Computational Fluid Dynamics) research process, request wind tunnel experiments, perform comparative analysis between computational predictions and experimental measurements, and finally collaborate with other researchers using the web portal. The present paper describes those services and the internal architecture of the e-AIRS system.
ERIC Educational Resources Information Center
Boyd, Sally
This booklet outlines the information gained from five case studies in New Zealand primary schools on how the use of computers was integrated into the school environment and the curriculum. Principals, teachers, and information technology (IT) coordinators were interviewed about students' use of computers. Information on the equipment available…
Hemispherical reflectance model for passive images in an outdoor environment.
Kim, Charles C; Thai, Bea; Yamaoka, Neil; Aboutalib, Omar
2015-05-01
We present a hemispherical reflectance model for simulating passive images in an outdoor environment where illumination is provided by natural sources such as the sun and the clouds. While the bidirectional reflectance distribution function (BRDF) accurately produces the radiance from any object under this illumination, using the BRDF to calculate radiance requires a double integration. Replacing the BRDF with a hemispherical reflectance under the natural sources transforms the double integration into a multiplication. This reduces both storage space and computation time. We present the formalism for the radiance of the scene using hemispherical reflectance instead of the BRDF. This enables us to generate passive images in an outdoor environment while taking advantage of the computational and storage efficiencies. We show some examples for illustration.
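In symbols (our paraphrase, with notation defined here rather than taken from the paper), the simplification amounts to replacing the integral over incident directions with a single product:

```latex
% Sketch of the simplification; the notation is ours, not the paper's.
L_o(\omega_o) = \int_{\Omega} f_r(\omega_i,\omega_o)\, L_i(\omega_i)\,
                \cos\theta_i \, d\omega_i
\quad\longrightarrow\quad
L_o \approx \frac{\rho_h\, E}{\pi}
% E: irradiance from the natural sources (sun and clouds);
% \rho_h: hemispherical reflectance precomputed for that source
% distribution, replacing the per-direction BRDF integration.
```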
NASA Technical Reports Server (NTRS)
Mckay, C. W.; Bown, R. L.
1985-01-01
The space station data management system involves networks of computing resources that must work cooperatively and reliably over an indefinite life span. This program requires a long schedule of modular growth and an even longer period of maintenance and operation. The development and operation of space station computing resources will involve a spectrum of systems and software life cycle activities distributed across a variety of hosts; an integration, verification, and validation host with test bed; and distributed targets. The requirement for the early establishment and use of an appropriate Computer Systems and Software Engineering Support Environment is identified. This environment will support the research and development productivity challenges presented by the space station computing system.
On the generalized VIP time integral methodology for transient thermal problems
NASA Technical Reports Server (NTRS)
Mei, Youping; Chen, Xiaoqin; Tamma, Kumar K.; Sha, Desong
1993-01-01
The paper describes the development and applicability of a generalized VIrtual-Pulse (VIP) time integral method of computation for thermal problems. With the advent of high-speed computing technology and the importance of parallel computation for efficient use of computing environments, and unlike past approaches to general heat transfer computation, a major motivation for the developments described in this paper is the need for explicit computational procedures with improved accuracy and stability characteristics. As a consequence, a new and effective VIP methodology is described which inherits these improved characteristics. Numerical illustrative examples are provided to demonstrate the developments and validate the results obtained for thermal problems.
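For contrast, the baseline explicit procedure such a method improves upon can be sketched as plain forward-Euler diffusion; this is not the VIP method, only the textbook scheme whose conditional stability motivates improved explicit procedures:

```python
# Generic explicit (forward-Euler) step for 1-D heat conduction.
# This is NOT the VIP method; it illustrates the baseline explicit
# procedure whose stability limit (r <= 0.5) motivates better schemes.
import numpy as np

def explicit_heat_step(T, alpha, dx, dt):
    r = alpha * dt / dx**2          # stability requires r <= 0.5
    T_new = T.copy()
    T_new[1:-1] = T[1:-1] + r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return T_new                    # fixed (Dirichlet) end temperatures

T = np.linspace(0.0, 100.0, 51)     # initial temperature profile
alpha, dx = 1e-4, 0.01
dt = 0.4 * dx**2 / alpha            # r = 0.4, inside the stable range
for _ in range(1000):
    T = explicit_heat_step(T, alpha, dx, dt)
```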
A computational- And storage-cloud for integration of biodiversity collections
Matsunaga, A.; Thompson, A.; Figueiredo, R. J.; Germain-Aubrey, C.C; Collins, M.; Beeman, R.S; Macfadden, B.J.; Riccardi, G.; Soltis, P.S; Page, L. M.; Fortes, J.A.B
2013-01-01
A core mission of the Integrated Digitized Biocollections (iDigBio) project is the building and deployment of a cloud computing environment customized to support the digitization workflow and integration of data from all U.S. nonfederal biocollections. iDigBio chose to use cloud computing technologies to deliver a cyberinfrastructure that is flexible, agile, resilient, and scalable to meet the needs of the biodiversity community. In this context, this paper describes the integration of open source cloud middleware, applications, and third party services using standard formats, protocols, and services. In addition, this paper demonstrates the value of the digitized information from collections in a broader scenario involving multiple disciplines.
Web-Based Learning in the Computer-Aided Design Curriculum.
ERIC Educational Resources Information Center
Sung, Wen-Tsai; Ou, S. C.
2002-01-01
Applies principles of constructivism and virtual reality (VR) to computer-aided design (CAD) curriculum, particularly engineering, by integrating network, VR and CAD technologies into a Web-based learning environment that expands traditional two-dimensional computer graphics into a three-dimensional real-time simulation that enhances user…
A collaborative molecular modeling environment using a virtual tunneling service.
Lee, Jun; Kim, Jee-In; Kang, Lin-Woo
2012-01-01
Collaborative researches of three-dimensional molecular modeling can be limited by different time zones and locations. A networked virtual environment can be utilized to overcome the problem caused by the temporal and spatial differences. However, traditional approaches did not sufficiently consider integration of different computing environments, which were characterized by types of applications, roles of users, and so on. We propose a collaborative molecular modeling environment to integrate different molecule modeling systems using a virtual tunneling service. We integrated Co-Coot, which is a collaborative crystallographic object-oriented toolkit, with VRMMS, which is a virtual reality molecular modeling system, through a collaborative tunneling system. The proposed system showed reliable quantitative and qualitative results through pilot experiments.
Toolpack mathematical software development environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osterweil, L.
1982-07-21
The purpose of this research project was to produce a well integrated set of tools for the support of numerical computation. The project entailed the specification, design and implementation of both a diversity of tools and an innovative tool integration mechanism. This large configuration of tightly integrated tools comprises an environment for numerical software development, and has been named Toolpack/IST (Integrated System of Tools). Following the creation of this environment in prototype form, the environment software was readied for widespread distribution by transitioning it to a development organization for systematization, documentation and distribution. It is expected that public release of Toolpack/IST will begin imminently and will provide a basis for evaluation of the innovative software approaches taken as well as a uniform set of development tools for the numerical software community.
The Direct Lighting Computation in Global Illumination Methods
NASA Astrophysics Data System (ADS)
Wang, Changyaw Allen
1994-01-01
Creating realistic images is a computationally expensive process, but it is very important for applications such as interior design, product design, education, virtual reality, and movie special effects. To generate realistic images, state-of-the-art rendering techniques are employed to simulate global illumination, which accounts for the interreflection of light among objects. In this document, we formalize the global illumination problem as an eight-dimensional integral and discuss various methods that can accelerate the process of approximating this integral. We focus on the direct lighting computation, which accounts for the light reaching the viewer from the emitting sources after exactly one reflection, on Monte Carlo sampling methods, and on light source simplification. Results include a new sample generation method, a framework for the prediction of the total number of samples used in a solution, and a generalized Monte Carlo approach for computing the direct lighting from an environment, which for the first time makes ray tracing feasible for highly complex environments.
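A generic area-light Monte Carlo estimator (a textbook sketch, not the dissertation's sample-generation method) illustrates what the direct lighting computation looks like; the light orientation and scene values are assumptions:

```python
# Monte Carlo estimate of reflected radiance at a Lambertian surface
# point due to a parallelogram area light, ignoring occlusion.
# Generic textbook estimator, not the thesis's specific scheme.
import random, math

def sub(a, b):   return tuple(x - y for x, y in zip(a, b))
def dot(a, b):   return sum(x * y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)

def direct_lighting(p, n, corner, eu, ev, Le, albedo, samples=256):
    """Light surface is corner + s*eu + t*ev, s,t in [0,1]; eu ⟂ ev."""
    area = math.sqrt(dot(eu, eu)) * math.sqrt(dot(ev, ev))
    light_n = (0.0, -1.0, 0.0)          # light faces downward (assumption)
    total = 0.0
    for _ in range(samples):
        s, t = random.random(), random.random()
        q = tuple(c + s * u + t * v for c, u, v in zip(corner, eu, ev))
        w = sub(q, p)                   # point -> light sample
        d2 = dot(w, w)
        w = scale(w, 1.0 / math.sqrt(d2))
        cos_p = max(dot(n, w), 0.0)
        cos_q = max(dot(light_n, scale(w, -1.0)), 0.0)
        total += Le * cos_p * cos_q / d2          # geometry term
    # Standard area-sampling estimator: (rho/pi) * A * mean(G * Le)
    return (albedo / math.pi) * (total / samples) * area

# Example: unit square light 2 units above a floor point.
L = direct_lighting(p=(0, 0, 0), n=(0, 1, 0),
                    corner=(-0.5, 2.0, -0.5), eu=(1, 0, 0), ev=(0, 0, 1),
                    Le=10.0, albedo=0.7)
print(L)
```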
Computer aided design environment for the analysis and design of multi-body flexible structures
NASA Technical Reports Server (NTRS)
Ramakrishnan, Jayant V.; Singh, Ramen P.
1989-01-01
A computer aided design environment consisting of the programs NASTRAN, TREETOPS and MATLAB is presented in this paper. With links for data transfer between these programs, the integrated design of multi-body flexible structures is significantly enhanced. The CAD environment is used to model the Space Shuttle/Pinhole Occulater Facility. Then a controller is designed and evaluated in the nonlinear time history sense. Recent enhancements and ongoing research to add more capabilities are also described.
ERIC Educational Resources Information Center
Van Laere, Evelien; Rosiers, Kirsten; Van Avermaet, Piet; Slembrouck, Stef; van Braak, Johan
2017-01-01
Computer-based learning environments (CBLEs) have the potential to integrate the linguistic diversity present in classrooms as a resourceful tool in pupils' learning process. Particularly for pupils who speak a language at home other than the language which is used at school, more understanding is needed on how CBLEs offering multilingual content…
Computational toxicology using the OpenTox application programming interface and Bioclipse
2011-01-01
Background: Toxicity is a complex phenomenon involving the potential adverse effect on a range of biological functions. Predicting toxicity involves using a combination of experimental data (endpoints) and computational methods to generate a set of predictive models. Such models rely strongly on being able to integrate information from many sources. The required integration of biological and chemical information sources requires, however, a common language to express our knowledge ontologically, and interoperating services to build reliable predictive toxicology applications.

Findings: This article describes progress in extending the integrative bio- and cheminformatics platform Bioclipse to interoperate with OpenTox, a semantic web framework which supports open data exchange and toxicology model building. The Bioclipse workbench environment enables functionality from OpenTox web services and easy access to OpenTox resources for evaluating toxicity properties of query molecules. Relevant cases and interfaces based on ten neurotoxins are described to demonstrate the capabilities provided to the user. The integration takes advantage of semantic web technologies, thereby providing an open and simplifying communication standard. Additionally, the use of ontologies ensures proper interoperation and reliable integration of toxicity information from both experimental and computational sources.

Conclusions: A novel computational toxicity assessment platform was generated from integration of two open science platforms related to toxicology: Bioclipse, that combines a rich scriptable and graphical workbench environment for integration of diverse sets of information sources, and OpenTox, a platform for interoperable toxicology data and computational services. The combination provides improved reliability and operability for handling large data sets by the use of the Open Standards from the OpenTox Application Programming Interface. This enables simultaneous access to a variety of distributed predictive toxicology databases, and algorithm and model resources, taking advantage of the Bioclipse workbench handling the technical layers. PMID:22075173
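The service interaction might look roughly like the following (the base URL, path, and JSON fields are placeholders of ours, not the actual OpenTox API definition):

```python
# Hypothetical sketch of applying a remote predictive-toxicology model
# via a REST call. The base URL, path, and fields are placeholders,
# not the real OpenTox API resource definitions.
import requests

BASE = "https://opentox.example.org"      # placeholder service
smiles = "CCO"                            # query molecule (ethanol)

resp = requests.post(f"{BASE}/model/ld50/predict",
                     json={"compound": smiles},
                     timeout=30)
resp.raise_for_status()
print(resp.json())   # e.g. {"endpoint": "LD50", "value": ..., "units": ...}
```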
Using E-Learning and ICT Courses in Educational Environment: A Review
ERIC Educational Resources Information Center
Salehi, Hadi; Shojaee, Mohammad; Sattar, Susan
2015-01-01
With the quick emergence of computers and related technology, Electronic-learning (E-learning) and Information Communication and Technology (ICT) have been extensively utilized in the education and training field. Miscellaneous methods of integrating computer technology and the context in which computers are used have affected student learning in…
2007-09-01
example, an application developed in Sun's NetBeans [2007] integrated development environment (IDE) uses Swing class objects for graphical user... NetBeans Version 5.5.1 [Computer Software]. Santa Clara, CA: Sun Microsystems. Process Modeler Version 7.0 [Computer Software]. Santa Clara, CA
Parallel processing for scientific computations
NASA Technical Reports Server (NTRS)
Alkhatib, Hasan S.
1995-01-01
The scope of this project dealt with the investigation of the requirements to support distributed computing of scientific computations over a cluster of cooperative workstations. Various experiments on computations for the solution of simultaneous linear equations were performed in the early phase of the project to gain experience in the general nature and requirements of scientific applications. A specification of a distributed integrated computing environment, DICE, based on a distributed shared memory communication paradigm, has been developed and evaluated. The distributed shared memory model facilitates porting existing parallel algorithms that have been designed for shared memory multiprocessor systems to the new environment. The potential of this new environment is to provide supercomputing capability through the utilization of the aggregate power of workstations cooperating in a cluster interconnected via a local area network. Workstations generally do not have the computing power to tackle complex scientific applications, making them primarily useful for visualization, data reduction, and filtering as far as complex scientific applications are concerned. There is a tremendous amount of computing power that is left unused in a network of workstations; very often a workstation is simply sitting idle on a desk. A set of tools can be developed to take advantage of this potential computing power to create a platform suitable for large scientific computations. The integration of several workstations into a logical cluster of distributed, cooperative computing stations presents an alternative to shared memory multiprocessor systems. In this project we designed and evaluated such a system.
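The work division such a cluster performs can be mimicked in miniature (a row-partitioned Jacobi solve with a local process pool standing in for networked workstations; this does not reproduce DICE's distributed shared memory):

```python
# Toy illustration: row-partitioned Jacobi iteration for Ax = b, with a
# process pool standing in for cooperating workstations. It mimics the
# work division only, not DICE's distributed-shared-memory model.
import numpy as np
from multiprocessing import Pool

def jacobi_rows(args):
    A_rows, b_rows, x, idx = args
    # x_i <- (b_i - sum_{j != i} A_ij x_j) / A_ii for this worker's rows
    out = np.empty(len(idx))
    for k, i in enumerate(idx):
        s = A_rows[k] @ x - A_rows[k, i] * x[i]
        out[k] = (b_rows[k] - s) / A_rows[k, i]
    return idx, out

if __name__ == "__main__":
    n, workers = 200, 4
    rng = np.random.default_rng(0)
    A = rng.random((n, n)) + n * np.eye(n)   # diagonally dominant -> converges
    b = rng.random(n)
    x = np.zeros(n)
    chunks = np.array_split(np.arange(n), workers)
    with Pool(workers) as pool:
        for _ in range(50):
            tasks = [(A[c], b[c], x, c) for c in chunks]
            for idx, vals in pool.map(jacobi_rows, tasks):
                x[idx] = vals
    print("residual:", np.linalg.norm(A @ x - b))
```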
Compression-based integral curve data reuse framework for flow visualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Fan; Bi, Chongke; Guo, Hanqi
Currently, by default, integral curves are repeatedly re-computed in different flow visualization applications, such as FTLE field computation, source-destination queries, etc., leading to unnecessary resource cost. We present a compression-based data reuse framework for integral curves, to greatly reduce their retrieval cost, especially in a resource-limited environment. In our design, a hierarchical and hybrid compression scheme is proposed to balance three objectives: high compression ratio, controllable error, and low decompression cost. Specifically, we use and combine digitized curve sparse representation, floating-point data compression, and octree space partitioning to adaptively achieve the objectives. Results have shown that our data reuse framework achieves tens-of-times acceleration in the resource-limited environment compared to on-the-fly particle tracing, while keeping information loss controllable. Moreover, our method provides fast integral curve retrieval for more complex data, such as unstructured mesh data.
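A toy version of the compression trade-off (uniform quantization plus delta encoding of one curve; the paper's hierarchical hybrid scheme is not reproduced here):

```python
# Toy lossy compression of an integral curve: quantize points to a grid
# and store first-order deltas. Illustrates the ratio-vs-error trade-off
# only; not the paper's hierarchical hybrid scheme.
import numpy as np

def compress(curve, cell=0.01):
    q = np.round(curve / cell).astype(np.int32)   # error bounded by cell/2
    deltas = np.diff(q, axis=0)                   # small ints compress well
    return q[0], deltas, cell

def decompress(first, deltas, cell):
    q = np.vstack([first, first + np.cumsum(deltas, axis=0)])
    return q * cell

t = np.linspace(0, 4 * np.pi, 1000)
curve = np.column_stack([np.cos(t), np.sin(t), 0.1 * t])  # helix "streamline"
first, deltas, cell = compress(curve, cell=0.01)
err = np.abs(decompress(first, deltas, cell) - curve).max()
print("max reconstruction error:", err)   # stays below cell/2
```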
Real-Time Hardware-in-the-Loop Simulation of Ares I Launch Vehicle
NASA Technical Reports Server (NTRS)
Tobbe, Patrick; Matras, Alex; Walker, David; Wilson, Heath; Fulton, Chris; Alday, Nathan; Betts, Kevin; Hughes, Ryan; Turbe, Michael
2009-01-01
The Ares Real-Time Environment for Modeling, Integration, and Simulation (ARTEMIS) has been developed for use by the Ares I launch vehicle System Integration Laboratory at the Marshall Space Flight Center. The primary purpose of the Ares System Integration Laboratory is to test the vehicle avionics hardware and software in a hardware-in-the-loop environment to certify that the integrated system is prepared for flight. ARTEMIS has been designed to be the real-time simulation backbone to stimulate all required Ares components for verification testing. ARTEMIS provides high-fidelity dynamics, actuator, and sensor models to simulate an accurate flight trajectory in order to ensure realistic test conditions. ARTEMIS has been designed to take advantage of the advances in underlying computational power now available to support hardware-in-the-loop testing and to achieve real-time simulation with unprecedented model fidelity. A modular real-time design relying on a fully distributed computing architecture has been implemented.
NASA Technical Reports Server (NTRS)
Belcastro, Celeste M.; Fischl, Robert; Kam, Moshe
1992-01-01
This paper presents a strategy for dynamically monitoring digital controllers in the laboratory for susceptibility to electromagnetic disturbances that compromise control integrity. The integrity of digital control systems operating in harsh electromagnetic environments can be compromised by upsets caused by induced transient electrical signals. Digital system upset is a functional error mode that involves no component damage, can occur simultaneously in all channels of a redundant control computer, and is software dependent. The motivation for this work is the need to develop tools and techniques that can be used in the laboratory to validate and/or certify critical aircraft controllers operating in electromagnetically adverse environments that result from lightning, high-intensity radiated fields (HIRF), and nuclear electromagnetic pulses (NEMP). The detection strategy presented in this paper provides dynamic monitoring of a given control computer for degraded functional integrity resulting from redundancy management errors, control calculation errors, and control correctness/effectiveness errors. In particular, this paper discusses the use of Kalman filtering, data fusion, and statistical decision theory in monitoring a given digital controller for control calculation errors.
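The innovation-monitoring idea can be reduced to a scalar sketch (ours; the paper's monitor additionally fuses multiple signals and decision rules, which this omits):

```python
# Sketch: detecting computation upsets by monitoring Kalman-filter
# innovations. A too-large normalized innovation flags a possible
# control-calculation error. Scalar toy model only.
import random

def monitor(measurements, a=0.95, q=0.01, r=0.04, gate=9.0):
    x, p = 0.0, 1.0                 # state estimate and variance
    alarms = []
    for k, z in enumerate(measurements):
        x, p = a * x, a * a * p + q           # predict
        s = p + r                             # innovation variance
        nu = z - x                            # innovation
        if nu * nu / s > gate:                # roughly a 3-sigma gate
            alarms.append(k)
        g = p / s                             # Kalman gain
        x, p = x + g * nu, (1.0 - g) * p      # update
    return alarms

random.seed(1)
z = [random.gauss(0, 0.2) for _ in range(200)]
z[120] += 3.0                       # injected upset (bad control output)
print("alarm at samples:", monitor(z))
```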
Noel, Jean-Paul; Blanke, Olaf; Serino, Andrea
2018-06-06
Integrating information across sensory systems is a critical step toward building a cohesive representation of the environment and one's body, and, as illustrated by numerous illusions, it scaffolds subjective experience of the world and self. In the last years, classic principles of multisensory integration elucidated in the subcortex have been translated into the language of statistical inference understood by the neocortical mantle. Most importantly, a mechanistic systems-level description of multisensory computations via probabilistic population coding and divisive normalization is actively being put forward. In parallel, by describing and understanding bodily illusions, researchers have suggested multisensory integration of bodily inputs within the peripersonal space as a key mechanism in bodily self-consciousness. Importantly, certain aspects of bodily self-consciousness, although still very much a minority, have recently been cast in the light of modern computational understandings of multisensory integration. In doing so, we argue, the field of bodily self-consciousness may borrow mechanistic descriptions regarding the neural implementation of inference computations outlined by the multisensory field. This computational approach, leveraged on the understanding of multisensory processes generally, promises to advance scientific comprehension regarding one of the most mysterious questions puzzling humankind, that is, how our brain creates the experience of a self in interaction with the environment.
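The divisive-normalization computation mentioned is conventionally written as follows (the canonical form from the literature, supplied by us; the abstract itself gives no equation):

```latex
% Canonical divisive normalization: the response of unit i is its
% driving input E_i normalized by the summed activity of the population.
R_i \;=\; \frac{\gamma\, E_i^{\,n}}{\sigma^{\,n} + \sum_{j} E_j^{\,n}}
% \gamma: gain; n: exponent; \sigma: semi-saturation constant.
```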
ERIC Educational Resources Information Center
Hassani, Kaveh; Nahvi, Ali; Ahmadi, Ali
2016-01-01
In this paper, we present an intelligent architecture, called intelligent virtual environment for language learning, with embedded pedagogical agents for improving listening and speaking skills of non-native English language learners. The proposed architecture integrates virtual environments into the Intelligent Computer-Assisted Language…
Neuroimaging Study Designs, Computational Analyses and Data Provenance Using the LONI Pipeline
Dinov, Ivo; Lozev, Kamen; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Pierce, Jonathan; Zamanyan, Alen; Chakrapani, Shruthi; Van Horn, John; Parker, D. Stott; Magsipoc, Rico; Leung, Kelvin; Gutman, Boris; Woods, Roger; Toga, Arthur
2010-01-01
Modern computational neuroscience employs diverse software tools and multidisciplinary expertise to analyze heterogeneous brain data. The classical problems of gathering meaningful data, fitting specific models, and discovering appropriate analysis and visualization tools give way to a new class of computational challenges—management of large and incongruous data, integration and interoperability of computational resources, and data provenance. We designed, implemented and validated a new paradigm for addressing these challenges in the neuroimaging field. Our solution is based on the LONI Pipeline environment [3], [4], a graphical workflow environment for constructing and executing complex data processing protocols. We developed study-design, database and visual language programming functionalities within the LONI Pipeline that enable the construction of complete, elaborate and robust graphical workflows for analyzing neuroimaging and other data. These workflows facilitate open sharing and communication of data and metadata, concrete processing protocols, result validation, and study replication among different investigators and research groups. The LONI Pipeline features include distributed grid-enabled infrastructure, virtualized execution environment, efficient integration, data provenance, validation and distribution of new computational tools, automated data format conversion, and an intuitive graphical user interface. We demonstrate the new LONI Pipeline features using large scale neuroimaging studies based on data from the International Consortium for Brain Mapping [5] and the Alzheimer's Disease Neuroimaging Initiative [6]. User guides, forums, instructions and downloads of the LONI Pipeline environment are available at http://pipeline.loni.ucla.edu. PMID:20927408
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trujillo, Angelina Michelle
Strategy, Planning, Acquiring: very large-scale computing platforms come and go, and planning for immensely scalable machines often precedes actual procurement by three years; procurement can take another year or more. Integration: after acquisition, machines must be integrated into the computing environments at LANL, connected to scalable storage via large-scale storage networking, and verified for correct and secure operation. Management and Utilization: ongoing operation, maintenance, and troubleshooting of the hardware and systems software at massive scale is required.
Ubiquitous computing in the military environment
NASA Astrophysics Data System (ADS)
Scholtz, Jean
2001-08-01
Increasingly, people work and live on the move. To support this mobile lifestyle, especially as our work becomes more intensely information-based, companies are producing various portable and embedded information devices. The late Mark Weiser coined the term 'ubiquitous computing' to describe an environment where computers have disappeared and are integrated into physical objects. Much industry research today is concerned with ubiquitous computing in the work and home environments. A ubiquitous computing environment would facilitate mobility by allowing information users to easily access and use information anytime, anywhere. As war fighters are inherently mobile, the question is what effect a ubiquitous computing environment would have on current military operations and doctrine. And, if ubiquitous computing is viewed as beneficial for the military, what research would be necessary to achieve a military ubiquitous computing environment? What is a vision for the use of mobile information access in a battle space? Are there different requirements for civilian and military users of this technology? What are those differences? Are there opportunities for research that will support both worlds? What type of research has been supported by the military, and what areas need to be investigated? Although we don't yet have all the answers to these questions, this paper discusses the issues and presents the work we are doing to address them.
Human interaction with wearable computer systems: a look at glasses-mounted displays
NASA Astrophysics Data System (ADS)
Revels, Allen R.; Quill, Laurie L.; Kancler, David E.; Masquelier, Barbara L.
1998-09-01
With the advancement of technology and the information explosion, integration of the two into performance aiding systems can have a significant impact on operational and maintenance environments. The Department of Defense and commercial industry have made great strides in digitizing and automating technical manuals and data to be presented on performance aiding systems. These performance aides are computerized interactive systems that provide procedures on how to operate and maintain fielded systems. The idea is to provide the end-user with a system which is compatible with their work environment. The purpose of this paper is to show, historically, the progression of wearable computer aiding systems for maintenance environments, and then highlight the work accomplished in the design and development of glasses-mounted displays (GMD). The paper reviews work performed over the last seven years, then highlights, through review of a usability study, the advances made with GMDs. The use of portable computing systems, such as laptop and notebook computers, does not necessarily increase the accessibility of the displayed information while accomplishing a given task in a hands-busy, mobile work environment. The use of a GMD increases accessibility of the information by placing it in eye sight of the user without obstructing the surrounding environment. Although the potential utility for this type of display is great, hardware and human integration must be refined. Results from the usability study show the usefulness and usability of the GMD in a mobile, hands-free environment.
NASA Technical Reports Server (NTRS)
Brown, Robert L.; Doyle, Dee; Haines, Richard F.; Slocum, Michael
1989-01-01
As part of the Telescience Testbed Pilot Program, the Universities Space Research Association/Research Institute for Advanced Computer Science (USRA/RIACS) proposed to support remote communication by providing a network of human/machine interfaces, computer resources, and experimental equipment which allows: remote science, collaboration, technical exchange, and multimedia communication. The telescience workstation is intended to provide a local computing environment for telescience. The purposes of the program are as follows: (1) to provide a suitable environment to integrate existing and new software for a telescience workstation; (2) to provide a suitable environment to develop new software in support of telescience activities; (3) to provide an interoperable environment so that a wide variety of workstations may be used in the telescience program; (4) to provide a supportive infrastructure and a common software base; and (5) to advance, apply, and evaluate the telescience technology base. A prototype telescience computing environment designed to bring practicing scientists in domains other than computer science into a modern style of doing their computing was created and deployed. This environment, the Telescience Windowing Environment, Phase 1 (TeleWEn-1), met some, but not all, of the goals stated above. The TeleWEn-1 provided a window-based workstation environment and a set of tools for text editing, document preparation, electronic mail, multimedia mail, raster manipulation, and system management.
Integrated environmental modeling: A vision and roadmap for the future
Integrated environmental modeling (IEM) is inspired by modern environmental problems, decisions, and policies and enabled by transdisciplinary science and computer capabilities that allow the environment to be considered in a holistic way. The problems are characterized by the ex...
Integrating Computers into the Accounting Curriculum Using an IBM PC Network. Final Report.
ERIC Educational Resources Information Center
Shaoul, Jean
Noting the increased use of microcomputers in commerce and the accounting profession, the Department of Accounting and Finance at the University of Manchester recognized the importance of integrating microcomputers into the accounting curriculum and requested and received a grant to develop an integrated study environment in which students would…
The SIETTE Automatic Assessment Environment
ERIC Educational Resources Information Center
Conejo, Ricardo; Guzmán, Eduardo; Trella, Monica
2016-01-01
This article describes the evolution and current state of the domain-independent Siette assessment environment. Siette supports different assessment methods--including classical test theory, item response theory, and computer adaptive testing--and integrates them with multidimensional student models used by intelligent educational systems.…
ERIC Educational Resources Information Center
Kinnebrew, John S.; Segedy, James R.; Biswas, Gautam
2017-01-01
Research in computer-based learning environments has long recognized the vital role of adaptivity in promoting effective, individualized learning among students. Adaptive scaffolding capabilities are particularly important in open-ended learning environments, which provide students with opportunities for solving authentic and complex problems, and…
The Effects of Participation, Performance, and Interest in a Game-Based Writing Environment
ERIC Educational Resources Information Center
Liao, Calvin C. Y.; Chang, Wan-Chen; Chan, Tak-Wai
2018-01-01
We have observed that many computer-supported writing environments based on pedagogical strategies have only been designed to incorporate the cognitive aspects, but motivational aspects should also be included. Hence, we theorize that integrating game-based learning into the writing environment may be a practical approach that can facilitate…
Does the Medium Dictate the Message? Cultivating E-Communication in an Asynchronous Environment.
ERIC Educational Resources Information Center
Kiernan, Mary; Thomas, Pete; Woodroffe, Mark
Virtual learning environments (VLEs) are often perceived by education establishments as an opportunity to widen access without traditional overheads. An integral part of most VLEs is asynchronous computer conferencing and on-line moderators must help students migrate quickly to the new virtual environment to minimize learning disruption. This…
Construction of integrated case environments.
Losavio, Francisca; Matteo, Alfredo; Pérez, María
2003-01-01
The main goal of Computer-Aided Software Engineering (CASE) technology is to improve the entire software system development process. The CASE approach is not merely a technology; it involves a fundamental change in the process of software development. The tendency of the CASE approach, technically speaking, is the integration of tools that assist in the application of specific methods. In this sense, the environment architecture, which includes the platform and the system's hardware and software, constitutes the base of the CASE environment. The problem of tool integration has been studied for two decades. Current integration efforts emphasize the interoperability of tools, especially in distributed environments. In this work we use the Brown approach. The environment resulting from the application of this model is called a federative environment, reflecting the fact that this architecture pays special attention to the connections among the components of the environment. This approach is now being used in component-based design. This paper describes a concrete experience in the civil engineering and architecture fields in the construction of an integrated CASE environment. A generic architectural framework based on an intermediary architectural pattern is applied to achieve the integration of the different tools. This intermediary represents the control perspective of the PAC (Presentation-Abstraction-Control) style, which has been implemented as a Mediator pattern and has been used in the interactive systems domain. In addition, a process is given to construct the integrated CASE environment.
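A minimal Mediator (a generic sketch of the pattern, not the paper's environment) shows the intermediary's routing role:

```python
# Minimal Mediator: CASE tools talk only to the mediator, which routes
# events between them, decoupling the environment's components.
class Mediator:
    def __init__(self):
        self._tools = {}

    def register(self, name, tool):
        self._tools[name] = tool
        tool.mediator = self

    def notify(self, sender, event, payload):
        # Route the event to every registered tool except the sender.
        for tool in self._tools.values():
            if tool is not sender:
                tool.on_event(event, payload)

class Tool:
    def __init__(self, name):
        self.name, self.mediator = name, None

    def publish(self, event, payload):
        self.mediator.notify(self, event, payload)

    def on_event(self, event, payload):
        print(f"{self.name} received {event}: {payload}")

m = Mediator()
editor, checker = Tool("diagram-editor"), Tool("consistency-checker")
m.register("editor", editor)
m.register("checker", checker)
editor.publish("model-changed", {"element": "Beam-12"})
```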
Ubiquitous Wireless Smart Sensing and Control
NASA Technical Reports Server (NTRS)
Wagner, Raymond
2013-01-01
Need new technologies to reliably and safely have humans interact within sensored environments (integrated user interfaces, physical and cognitive augmentation, training, and human-systems integration tools). Areas of focus include: radio frequency identification (RFID), motion tracking, wireless communication, wearable computing, adaptive training and decision support systems, and tele-operations. The challenge is developing effective, low cost/mass/volume/power integrated monitoring systems to assess and control system, environmental, and operator health; and accurately determining and controlling the physical, chemical, and biological environments of the areas and associated environmental control systems.
Ubiquitous Wireless Smart Sensing and Control. Pumps and Pipes JSC: Uniquely Houston
NASA Technical Reports Server (NTRS)
Wagner, Raymond
2013-01-01
Need new technologies to reliably and safely have humans interact within sensored environments (integrated user interfaces, physical and cognitive augmentation, training, and human-systems integration tools). Areas of focus include: radio frequency identification (RFID), motion tracking, wireless communication, wearable computing, adaptive training and decision support systems, and tele-operations. The challenge is developing effective, low cost/mass/volume/power integrated monitoring systems to assess and control system, environmental, and operator health; and accurately determining and controlling the physical, chemical, and biological environments of the areas and associated environmental control systems.
Comprehensive Solar-Terrestrial Environment Model (COSTEM) for Space Weather Predictions
2007-07-01
research in data assimilation methodologies applicable to the space environment, as well as "threat adaptive" grid computing technologies, where we… The Space Weather Modeling Framework (SWMF) [29, 43] was designed in 2001 and has been developed to integrate and couple several… of its components. The SWMF is tested by exercising multiple coupled system tests… nightly on several computer/compiler platforms. The main design goals of the SWMF were to minimize… documented.
The Specification of an Integrated Computer-Aided Ship Design Process in an Academic Environment.
1984-06-01
complicated. The intuition and experience of a good designer are qualities that cannot yet be programmed into even the most capable computer. Computers…between themselves. These application routines, while very capable in their own right, lack the qualities which would make them more usable in the academic environment. These qualities include thorough documentation, both substantive derivations and descriptive user's guides, user friendliness and
Applications of the pipeline environment for visual informatics and genomics computations
2011-01-01
Background: Contemporary informatics and genomics research require efficient, flexible and robust management of large heterogeneous data, advanced computational tools, powerful visualization, reliable hardware infrastructure, interoperability of computational resources, and detailed data and analysis-protocol provenance. The Pipeline is a client-server distributed computational environment that facilitates the visual graphical construction, execution, monitoring, validation and dissemination of advanced data analysis protocols.

Results: This paper reports on the applications of the LONI Pipeline environment to address two informatics challenges - graphical management of diverse genomics tools, and the interoperability of informatics software. Specifically, this manuscript presents the concrete details of deploying general informatics suites and individual software tools to new hardware infrastructures, the design, validation and execution of new visual analysis protocols via the Pipeline graphical interface, and integration of diverse informatics tools via the Pipeline eXtensible Markup Language syntax. We demonstrate each of these processes using several established informatics packages (e.g., miBLAST, EMBOSS, mrFAST, GWASS, MAQ, SAMtools, Bowtie) for basic local sequence alignment and search, molecular biology data analysis, and genome-wide association studies. These examples demonstrate the power of the Pipeline graphical workflow environment to enable integration of bioinformatics resources which provide a well-defined syntax for dynamic specification of the input/output parameters and the run-time execution controls.

Conclusions: The LONI Pipeline environment http://pipeline.loni.ucla.edu provides a flexible graphical infrastructure for efficient biomedical computing and distributed informatics research. The interactive Pipeline resource manager enables the utilization and interoperability of diverse types of informatics resources. The Pipeline client-server model provides computational power to a broad spectrum of informatics investigators - experienced developers and novice users, users with or without access to advanced computational resources (e.g., Grid, data), as well as basic and translational scientists. The open development, validation and dissemination of computational networks (pipeline workflows) facilitates the sharing of knowledge, tools, protocols and best practices, and enables the unbiased validation and replication of scientific findings by the entire community. PMID:21791102
Rapid Prototyping of Computer-Based Presentations Using NEAT, Version 1.1.
ERIC Educational Resources Information Center
Muldner, Tomasz
NEAT (iNtegrated Environment for Authoring in ToolBook) provides templates and various facilities for the rapid prototyping of computer-based presentations, a capability that is lacking in current authoring systems. NEAT is a specialized authoring system that can be used by authors who have a limited knowledge of computer systems and no…
Development and Assessment of a Chemistry-Based Computer Video Game as a Learning Tool
ERIC Educational Resources Information Center
Martinez-Hernandez, Kermin Joel
2010-01-01
The chemistry-based computer video game is a multidisciplinary collaboration between chemistry and computer graphics and technology fields developed to explore the use of video games as a possible learning tool. This innovative approach aims to integrate elements of commercial video game and authentic chemistry context environments into a learning…
ERIC Educational Resources Information Center
Swan, Karen; Kratcoski, Annette; Mazzer, Pat; Schenker, Jason
2005-01-01
This article describes an ongoing situated professional development program in which teachers bring their intact classes for an extended stay in a ubiquitous computing environment equipped with a variety of state-of-the-art computing devices. The experience is unique in that it not only situates teacher learning about technology integration in…
Formulation of a strategy for monitoring control integrity in critical digital control systems
NASA Technical Reports Server (NTRS)
Belcastro, Celeste M.; Fischl, Robert; Kam, Moshe
1991-01-01
Advanced aircraft will require flight-critical computer systems for stability augmentation as well as guidance and control that must perform reliably in adverse, as well as nominal, operating environments. Digital system upset is a functional error mode that can occur in electromagnetically harsh environments, involves no component damage, can occur simultaneously in all channels of a redundant control computer, and is software dependent. A strategy is presented for dynamic upset detection to be used in the evaluation of critical digital controllers during the design and/or validation phases of development. Critical controllers must remain usable in adverse environments that result from disturbances caused by electromagnetic sources such as lightning, high-intensity radiated fields (HIRF), and nuclear electromagnetic pulses (NEMP). The upset detection strategy presented provides dynamic monitoring of a given control computer for degraded functional integrity that can result from redundancy management errors and control calculation errors that could occur in an electromagnetically harsh operating environment. The use of Kalman filtering, data fusion, and decision theory in monitoring a given digital controller for control calculation errors, redundancy management errors, and control effectiveness is discussed.
DEVELOPMENT OF DNA MICROARRAYS FOR ECOLOGICAL EXPOSURE ASSESSMENT
EPA/ORD is moving forward with a computational toxicology initiative in FY 04 which aims to integrate genomics and computational methods to provide a mechanistic basis for prediction of exposure and effects of chemical stressors in the environment.
The goal of the presen...
Tool Integration Framework for Bio-Informatics
2007-04-01
Java NetBeans [11] based Integrated Development Environment (IDE) for developing modules and packaging computational tools. The framework is extremely…integrate an Eclipse front-end for desktop integration. Eclipse was chosen over NetBeans owing to higher acceptance and better infrastructure…5.0. This version of Dashboard ran with NetBeans IDE 3.6, requiring Java Runtime 1.4 on a machine with Windows XP. The toolchain is executed by
ENFIN--A European network for integrative systems biology.
Kahlem, Pascal; Clegg, Andrew; Reisinger, Florian; Xenarios, Ioannis; Hermjakob, Henning; Orengo, Christine; Birney, Ewan
2009-11-01
Integration of biological data of various types and the development of adapted bioinformatics tools represent critical objectives to enable research at the systems level. The European Network of Excellence ENFIN is engaged in developing an adapted infrastructure to connect databases and platforms, enabling both the generation of new bioinformatics tools and the experimental validation of computational predictions. With the aim of bridging the gap between standard wet laboratories and bioinformatics, the ENFIN Network runs integrative research projects to bring the latest computational techniques to bear directly on questions dedicated to systems biology in the wet laboratory environment. The Network maintains close internal collaboration between experimental and computational research, enabling a permanent cycle of experimental validation and improvement of computational prediction methods. The computational work includes the development of a database infrastructure (EnCORE), bioinformatics analysis methods, and FuncNet, a novel platform for protein function analysis.
Shared direct memory access on the Explorer 2-LX
NASA Technical Reports Server (NTRS)
Musgrave, Jeffrey L.
1990-01-01
Advances in expert system technology and artificial intelligence have provided a framework for applying automated intelligence to the solution of problems which were generally perceived as intractable using more classical approaches. As a result, hybrid architectures and parallel processing capability have become more common in computing environments. The Texas Instruments Explorer II-LX is an example of a machine which combines a symbolic processing environment and a computationally oriented environment in a single chassis for integrated problem solutions. This user's manual is an attempt to make these capabilities more accessible to a wider range of engineers and programmers with problems well suited to solution in such an environment.
An Interactive, Web-based High Performance Modeling Environment for Computational Epidemiology.
Deodhar, Suruchi; Bisset, Keith R; Chen, Jiangzhuo; Ma, Yifei; Marathe, Madhav V
2014-07-01
We present an integrated interactive modeling environment to support public health epidemiology. The environment combines a high resolution individual-based model with a user-friendly web-based interface that allows analysts to access the models and the analytics back-end remotely from a desktop or a mobile device. The environment is based on a loosely-coupled service-oriented-architecture that allows analysts to explore various counter factual scenarios. As the modeling tools for public health epidemiology are getting more sophisticated, it is becoming increasingly hard for non-computational scientists to effectively use the systems that incorporate such models. Thus an important design consideration for an integrated modeling environment is to improve ease of use such that experimental simulations can be driven by the users. This is achieved by designing intuitive and user-friendly interfaces that allow users to design and analyze a computational experiment and steer the experiment based on the state of the system. A key feature of a system that supports this design goal is the ability to start, stop, pause and roll-back the disease propagation and intervention application process interactively. An analyst can access the state of the system at any point in time and formulate dynamic interventions based on additional information obtained through state assessment. In addition, the environment provides automated services for experiment set-up and management, thus reducing the overall time for conducting end-to-end experimental studies. We illustrate the applicability of the system by describing computational experiments based on realistic pandemic planning scenarios. The experiments are designed to demonstrate the system's capability and enhanced user productivity.
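The start, stop, pause, and roll-back capability the abstract highlights can be pictured as checkpointed stepping. A minimal sketch, with a toy infection counter standing in for the individual-based model:

```python
# Sketch of an interactive simulation controller with checkpoint-based
# roll-back; the step function is a stand-in, not the paper's model.
import copy

class SimulationController:
    def __init__(self, initial_state, step_fn, checkpoint_every=10):
        self.state = initial_state
        self.step_fn = step_fn
        self.checkpoint_every = checkpoint_every
        self.checkpoints = {0: copy.deepcopy(initial_state)}
        self.t = 0
        self.paused = False

    def run(self, steps):
        for _ in range(steps):
            if self.paused:
                break
            self.state = self.step_fn(self.state)
            self.t += 1
            if self.t % self.checkpoint_every == 0:
                self.checkpoints[self.t] = copy.deepcopy(self.state)

    def pause(self):
        self.paused = True

    def rollback(self, t):
        """Restore the latest checkpoint at or before time t."""
        key = max(k for k in self.checkpoints if k <= t)
        self.state, self.t = copy.deepcopy(self.checkpoints[key]), key

# Inspect the state mid-run, roll back, apply a dynamic intervention, resume.
sim = SimulationController({'infected': 10},
                           lambda s: {'infected': int(s['infected'] * 1.2)})
sim.run(30)
sim.rollback(20)
sim.state['infected'] = int(sim.state['infected'] * 0.5)  # intervention
sim.run(10)
print(sim.t, sim.state)
```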
Design requirements for ubiquitous computing environments for healthcare professionals.
Bång, Magnus; Larsson, Anders; Eriksson, Henrik
2004-01-01
Ubiquitous computing environments can support clinical administrative routines in new ways. The aim of such computing approaches is to enhance routine physical work, thus it is important to identify specific design requirements. We studied healthcare professionals in an emergency room and developed the computer-augmented environment NOSTOS to support teamwork in that setting. NOSTOS uses digital pens and paper-based media as the primary input interface for data capture and as a means of controlling the system. NOSTOS also includes a digital desk, walk-up displays, and sensor technology that allow the system to track documents and activities in the workplace. We propose a set of requirements and discuss the value of tangible user interfaces for healthcare personnel. Our results suggest that the key requirements are flexibility in terms of system usage and seamless integration between digital and physical components. We also discuss how ubiquitous computing approaches like NOSTOS can be beneficial in the medical workplace.
Design and implementation of spatial knowledge grid for integrated spatial analysis
NASA Astrophysics Data System (ADS)
Liu, Xiangnan; Guan, Li; Wang, Ping
2006-10-01
Supported by the spatial information grid (SIG), the spatial knowledge grid (SKG) for integrated spatial analysis uses middleware technology to construct the spatial-information-grid computation environment and spatial information service system, develops spatial-entity-oriented data organization technology, and carries out deep computation of spatial structure and spatial process patterns on the basis of the Grid GIS infrastructure, the spatial data grid, and the spatial information grid (in its specialized definition). At the same time, it realizes complex spatial pattern expression and spatial function process simulation by taking spatial intelligent agents as the core of proactive spatial computation. Moreover, through the establishment of an interactive, immersive virtual geographical environment, complex spatial modeling, networked cooperative work, and knowledge-driven spatial community decision-making are achieved. The framework of the SKG is discussed systematically in this paper, and its implementation flow and key technologies are presented with examples of overlay analysis.
About Distributed Simulation-based Optimization of Forming Processes using a Grid Architecture
NASA Astrophysics Data System (ADS)
Grauer, Manfred; Barth, Thomas
2004-06-01
Permanently increasing complexity of products and their manufacturing processes, combined with a shorter "time-to-market," leads to more and more use of simulation and optimization software systems for product design. Finding a "good" design of a product implies the solution of computationally expensive optimization problems based on the results of simulation. Due to the computational load caused by the solution of these problems, the requirements on the Information & Telecommunication (IT) infrastructure of an enterprise or research facility are shifting from stand-alone resources towards the integration of software and hardware resources in a distributed environment for high-performance computing. Resources can comprise software systems, hardware systems, or communication networks. An appropriate IT infrastructure must provide the means to integrate all these resources and enable their use even across a network, to cope with requirements from geographically distributed scenarios, e.g., in computational and/or collaborative engineering. Integrating experts' knowledge into the optimization process is inevitable in order to reduce the complexity caused by the number of design variables and the high dimensionality of the design space. Hence, utilization of knowledge-based systems must be supported by providing data management facilities as a basis for knowledge extraction from product data. In this paper, the focus is put on a distributed problem solving environment (PSE) capable of providing access to a variety of necessary resources and services. A distributed approach integrating simulation and optimization on a network of workstations and cluster systems is presented. For geometry generation the CAD system CATIA is used, coupled with the FEM simulation system INDEED for simulation of sheet-metal forming processes and the problem solving environment OpTiX for distributed optimization.
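To make the distributed-evaluation idea concrete, here is a minimal sketch of a master process farming out candidate designs to a pool of workers; the quadratic objective is a stand-in for an expensive coupled CAD/FEM run (such as CATIA/INDEED), and all names and parameter ranges are illustrative.

```python
# Sketch: master/worker evaluation of simulation-based design candidates.
from multiprocessing import Pool
import random

def simulate_design(design):
    """Stand-in for an expensive forming simulation; returns a cost."""
    thickness, radius = design
    return (thickness - 1.2) ** 2 + (radius - 5.0) ** 2

def distributed_search(n_candidates=64, n_workers=4, seed=0):
    rng = random.Random(seed)
    candidates = [(rng.uniform(0.5, 2.0), rng.uniform(2.0, 8.0))
                  for _ in range(n_candidates)]
    with Pool(n_workers) as pool:
        costs = pool.map(simulate_design, candidates)  # parallel evaluation
    return min(zip(costs, candidates))                 # best (cost, design)

if __name__ == '__main__':
    print(distributed_search())
```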
Naver: a PC-cluster-based VR system
NASA Astrophysics Data System (ADS)
Park, ChangHoon; Ko, HeeDong; Kim, TaiYun
2003-04-01
In this paper, we present NAVER, a new framework for virtual reality applications. NAVER is based on a cluster of low-cost personal computers. Its goal is to provide a flexible, extensible, scalable, and re-configurable framework for virtual environments, defined as the integration of 3D virtual space and external modules; external modules are various input or output devices and applications on remote hosts. From the system's point of view, the personal computers are divided into three servers according to their specific functions: the Render Server, the Device Server, and the Control Server. While the Device Server contains external modules requiring event-based communication for their integration, the Control Server contains external modules requiring synchronous communication every frame. The Render Server consists of five managers: the Scenario Manager, Event Manager, Command Manager, Interaction Manager, and Sync Manager. These managers support the declaration and operation of the virtual environment and the integration with external modules on remote servers.
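The split between event-based (Device Server) and per-frame synchronous (Control Server) integration can be sketched as follows; the module names and data are illustrative, not NAVER's actual interfaces.

```python
# Sketch of the two integration styles: devices post asynchronous events
# to a queue, while control modules are polled synchronously each frame.
import queue

event_queue = queue.Queue()   # Device Server modules post here (event-based)
control_modules = []          # Control Server modules polled every frame

def head_tracker(frame):
    # Hypothetical per-frame module returning a pose sample.
    return ('head_pose', frame * 0.01)

control_modules.append(head_tracker)

def render_frame(frame):
    # Synchronous integration: sample every control module once per frame.
    state = {'controls': [fn(frame) for fn in control_modules], 'events': []}
    # Event-based integration: drain whatever devices posted since last frame.
    while not event_queue.empty():
        state['events'].append(event_queue.get())
    return state

event_queue.put(('wand_button', 'pressed'))
print(render_frame(0))
```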
Program For Generating Interactive Displays
NASA Technical Reports Server (NTRS)
Costenbader, Jay; Moleski, Walt; Szczur, Martha; Howell, David; Engelberg, Norm; Li, Tin P.; Misra, Dharitri; Miller, Philip; Neve, Leif; Wolf, Karl;
1991-01-01
Sun/Unix version of Transportable Applications Environment Plus (TAE+) computer program provides integrated, portable software environment for developing and running interactive window, text, and graphical-object-based application software systems. Enables programmer or nonprogrammer to construct easily custom software interface between user and application program and to move resulting interface program and its application program to different computers. TAE+ viewed as productivity tool for application developers and application end users, who benefit from resultant consistent and well-designed user interface sheltering them from intricacies of computer. Available in form suitable for following six different groups of computers: DEC VAX station and other VMS VAX computers, Macintosh II computers running AUX, Apollo Domain Series 3000, DEC VAX and reduced-instruction-set-computer workstations running Ultrix, Sun 3- and 4-series workstations running Sun OS, and IBM RT/PC and PS/2 computers running AIX.
Another Program For Generating Interactive Graphics
NASA Technical Reports Server (NTRS)
Costenbader, Jay; Moleski, Walt; Szczur, Martha; Howell, David; Engelberg, Norm; Li, Tin P.; Misra, Dharitri; Miller, Philip; Neve, Leif; Wolf, Karl;
1991-01-01
VAX/Ultrix version of Transportable Applications Environment Plus (TAE+) computer program provides integrated, portable software environment for developing and running interactive window, text, and graphical-object-based application software systems. Enables programmer or nonprogrammer to construct easily custom software interface between user and application program and to move resulting interface program and its application program to different computers. When used throughout company for wide range of applications, makes both application program and computer seem transparent, with noticeable improvements in learning curve. Available in form suitable for following six different groups of computers: DEC VAX station and other VMS VAX computers, Macintosh II computers running AUX, Apollo Domain Series 3000, DEC VAX and reduced-instruction-set-computer workstations running Ultrix, Sun 3- and 4-series workstations running Sun OS and IBM RT/PC's and PS/2 computers running AIX, and HP 9000 S
Analysis of methods. [information systems evolution environment
NASA Technical Reports Server (NTRS)
Mayer, Richard J. (Editor); Ackley, Keith A.; Wells, M. Sue; Mayer, Paula S. D.; Blinn, Thomas M.; Decker, Louis P.; Toland, Joel A.; Crump, J. Wesley; Menzel, Christopher P.; Bodenmiller, Charles A.
1991-01-01
Information is one of an organization's most important assets. For this reason the development and maintenance of an integrated information system environment is one of the most important functions within a large organization. The Integrated Information Systems Evolution Environment (IISEE) project has as one of its primary goals a computerized solution to the difficulties involved in the development of integrated information systems. To develop such an environment a thorough understanding of the enterprise's information needs and requirements is of paramount importance. This document is the current release of the research performed by the Integrated Development Support Environment (IDSE) Research Team in support of the IISEE project. Research indicates that an integral part of any information system environment would be multiple modeling methods to support the management of the organization's information. Automated tool support for these methods is necessary to facilitate their use in an integrated environment. An integrated environment makes it necessary to maintain an integrated database which contains the different kinds of models developed under the various methodologies. In addition, to speed the process of development of models, a procedure or technique is needed to allow automatic translation from one methodology's representation to another while maintaining the integrity of both. The purpose for the analysis of the modeling methods included in this document is to examine these methods with the goal being to include them in an integrated development support environment. To accomplish this and to develop a method for allowing intra-methodology and inter-methodology model element reuse, a thorough understanding of multiple modeling methodologies is necessary. Currently the IDSE Research Team is investigating the family of Integrated Computer Aided Manufacturing (ICAM) DEFinition (IDEF) languages IDEF(0), IDEF(1), and IDEF(1x), as well as ENALIM, Entity Relationship, Data Flow Diagrams, and Structure Charts, for inclusion in an integrated development support environment.
Logic integration of mRNA signals by an RNAi-based molecular computer.
Xie, Zhen; Liu, Siyuan John; Bleris, Leonidas; Benenson, Yaakov
2010-05-01
Synthetic in vivo molecular 'computers' could rewire biological processes by establishing programmable, non-native pathways between molecular signals and biological responses. Multiple molecular computer prototypes have been shown to work in simple buffered solutions. Many of those prototypes were made of DNA strands and performed computations using cycles of annealing-digestion or strand displacement. We have previously introduced RNA interference (RNAi)-based computing as a way of implementing complex molecular logic in vivo. Because it also relies on nucleic acids for its operation, RNAi computing could benefit from the tools developed for DNA systems. However, these tools must be harnessed to produce bioactive components and be adapted for harsh operating environments that reflect in vivo conditions. In a step toward this goal, we report the construction and implementation of biosensors that 'transduce' mRNA levels into bioactive, small interfering RNA molecules via RNA strand exchange in a cell-free Drosophila embryo lysate, a step beyond simple buffered environments. We further integrate the sensors with our RNAi 'computational' module to evaluate two-input logic functions on mRNA concentrations. Our results show how RNA strand exchange can expand the utility of RNAi computing and point toward the possibility of using strand exchange in a native biological setting.
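Reduced to arithmetic, two-input logic evaluation on mRNA concentrations amounts to thresholding each sensed input and combining the results; the thresholds and gate wiring below are illustrative, not the paper's molecular design.

```python
# Sketch: threshold-based two-input logic on mRNA levels.
HIGH = 100.0   # arbitrary units above which an mRNA input counts as '1'

def transduce(mrna_level, threshold=HIGH):
    """Sensor: mRNA level -> logical siRNA signal (True/False)."""
    return mrna_level > threshold

def logic_AND(mrna_a, mrna_b):
    return transduce(mrna_a) and transduce(mrna_b)

def logic_OR(mrna_a, mrna_b):
    return transduce(mrna_a) or transduce(mrna_b)

# The output gene would be silenced unless the gate evaluates True.
for a, b in [(10, 10), (10, 200), (200, 200)]:
    print(a, b, logic_AND(a, b), logic_OR(a, b))
```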
NASA Technical Reports Server (NTRS)
Logan, Terry G.
1994-01-01
The purpose of this study is to investigate the performance of integral equation computations using a numerical source field-panel method in a massively parallel processing (MPP) environment. A comparative study of the computational performance of the MPP CM-5 computer and a conventional Cray Y-MP supercomputer for a three-dimensional flow problem is made. A serial FORTRAN code is converted into a parallel CM-FORTRAN code. Performance results are obtained on the CM-5 with 32, 64, and 128 nodes, along with those on the Cray Y-MP with a single processor. The comparison indicates that the parallel CM-FORTRAN code matches or outperforms the equivalent serial FORTRAN code in some cases.
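Comparisons of this kind reduce to parallel speedup and efficiency; a worked sketch with invented timings (not the paper's measurements):

```python
# Sketch: speedup and efficiency arithmetic for a node-count sweep.
def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, nodes):
    return speedup(t_serial, t_parallel) / nodes

t_cray = 120.0                       # hypothetical seconds on one Y-MP CPU
for nodes, t_cm5 in [(32, 90.0), (64, 50.0), (128, 30.0)]:
    print(nodes, round(speedup(t_cray, t_cm5), 2),
          round(efficiency(t_cray, t_cm5, nodes), 3))
```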
NASA Technical Reports Server (NTRS)
Benyo, Theresa L.
2002-01-01
Integration of a supersonic inlet simulation with a computer aided design (CAD) system is demonstrated. The integration is performed using the Project Integration Architecture (PIA). PIA provides a common environment for wrapping many types of applications. Accessing geometry data from CAD files is accomplished by incorporating appropriate function calls from the Computational Analysis Programming Interface (CAPRI). CAPRI is a CAD-vendor-neutral programming interface that aids in acquiring geometry data directly from CAD files. The benefits of wrapping a supersonic inlet simulation into PIA using CAPRI are: direct access to geometry data, accurate capture of geometry data, automatic conversion of data units, CAD-vendor-neutral operation, and on-line interactive history capture. This paper describes the PIA and the CAPRI wrapper and details the supersonic inlet simulation demonstration.
ERIC Educational Resources Information Center
Akgün, Ergün; Akkoyunlu, Buket
2013-01-01
Along with the integration of network and communication innovations into education, those technology enriched learning environments gained importance both qualitatively and operationally. Using network and communication innovations in the education field, provides diffusion of information and global accessibility, and also allows physically…
Integrating Computing Resources: A Shared Distributed Architecture for Academics and Administrators.
ERIC Educational Resources Information Center
Beltrametti, Monica; English, Will
1994-01-01
Development and implementation of a shared distributed computing architecture at the University of Alberta (Canada) are described. Aspects discussed include design of the architecture, users' views of the electronic environment, technical and managerial challenges, and the campuswide human infrastructures needed to manage such an integrated…
Integration of Computers into an EFL Reading Classroom
ERIC Educational Resources Information Center
Lim, Kang-Mi; Shen, Hui Zhong
2006-01-01
This study examined the impact of Computer Assisted Language Learning (CALL) on Korean TAFE (Technical and Further Education) college students in an English as a Foreign Language (EFL) reading classroom in terms of their perceptions of learning environment and their reading performance. The study compared CALL and traditional reading classes over…
Education of Engineering Students within a Multimedia/Hypermedia Environment--A Review.
ERIC Educational Resources Information Center
Anderl, R.; Vogel, U. R.
This paper summarizes the activities of the Darmstadt University Department of Computer Integrated Design (Germany) related to: (1) distributed lectures (i.e., lectures distributed online through computer networks), including equipment used and ensuring sound and video quality; (2) lectures on demand, including providing access through the World…
Learning about Locomotion Patterns from Visualizations: Effects of Presentation Format and Realism
ERIC Educational Resources Information Center
Imhof, Birgit; Scheiter, Katharina; Gerjets, Peter
2011-01-01
The rapid development of computer graphics technology has made possible an easy integration of dynamic visualizations into computer-based learning environments. This study examines the relative effectiveness of dynamic visualizations, compared either to sequentially or simultaneously presented static visualizations. Moreover, the degree of realism…
Research into software executives for space operations support
NASA Technical Reports Server (NTRS)
Collier, Mark D.
1990-01-01
Research concepts pertaining to a software (workstation) executive which will support a distributed processing command and control system characterized by high-performance graphics workstations used as computing nodes are presented. Although a workstation-based distributed processing environment offers many advantages, it also introduces a number of new concerns. In order to solve these problems, allow the environment to function as an integrated system, and present a functional development environment to application programmers, it is necessary to develop an additional layer of software. This 'executive' software integrates the system, provides real-time capabilities, and provides the tools necessary to support the application requirements.
Toward an automated parallel computing environment for geosciences
NASA Astrophysics Data System (ADS)
Zhang, Huai; Liu, Mian; Shi, Yaolin; Yuen, David A.; Yan, Zhenzhen; Liang, Guoping
2007-08-01
Software for geodynamic modeling has not kept up with the fast growing computing hardware and network resources. In the past decade supercomputing power has become available to most researchers in the form of affordable Beowulf clusters and other parallel computer platforms. However, to take full advantage of such computing power requires developing parallel algorithms and associated software, a task that is often too daunting for geoscience modelers whose main expertise is in geosciences. We introduce here an automated parallel computing environment built on open-source algorithms and libraries. Users interact with this computing environment by specifying the partial differential equations, solvers, and model-specific properties using an English-like modeling language in the input files. The system then automatically generates the finite element codes that can be run on distributed or shared memory parallel machines. This system is dynamic and flexible, allowing users to address different problems in geosciences. It is capable of providing web-based services, enabling users to generate source codes online. This unique feature will facilitate high-performance computing to be integrated with distributed data grids in the emerging cyber-infrastructures for geosciences. In this paper we discuss the principles of this automated modeling environment and provide examples to demonstrate its versatility.
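The workflow the abstract describes, an English-like input file compiled into finite element code, can be sketched as a tiny parser plus code emitter; the keyword grammar below is invented for illustration and is not the system's actual modeling language.

```python
# Sketch: parse a declarative model description, emit a solver driver stub.
SPEC = """
equation: laplace(u) = 0
solver: conjugate_gradient
mesh: unit_square elements=1024
"""

def parse_spec(text):
    model = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(':')
        model[key.strip()] = value.strip()
    return model

def generate_stub(model):
    """Emit a skeleton finite-element driver from the parsed description."""
    return (f"# auto-generated driver\n"
            f"# solve {model['equation']} on {model['mesh']}\n"
            f"solve(assemble('{model['equation']}'), "
            f"method='{model['solver']}')\n")

print(generate_stub(parse_spec(SPEC)))
```

A real system of this kind would emit compilable parallel finite element code rather than a stub, but the input-file-to-code pipeline has the same shape.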
Multidimensional Environmental Data Resource Brokering on Computational Grids and Scientific Clouds
NASA Astrophysics Data System (ADS)
Montella, Raffaele; Giunta, Giulio; Laccetti, Giuliano
Grid computing has evolved widely over the past years, and its capabilities have found their way even into business products and are no longer relegated to scientific applications. Today, grid computing technology is not restricted to a set of specific grid open-source or industrial products; rather, it comprises a set of capabilities virtually within any kind of software, used to create shared and highly collaborative production environments. These environments are focused on computational (workload) capabilities and the integration of information (data) into those computational capabilities. An active grid computing application field is the full virtualization of scientific instruments, in order to increase their availability and decrease operating and maintenance costs. Computational and information grids make it possible to manage real-world objects in a service-oriented way using widespread industry standards.
Information Power Grid Posters
NASA Technical Reports Server (NTRS)
Vaziri, Arsi
2003-01-01
This document is a summary of the accomplishments of the Information Power Grid (IPG). Grids are an emerging technology that provide seamless and uniform access to the geographically dispersed computational, data storage, networking, instrument, and software resources needed for solving large-scale scientific and engineering problems. The goal of the NASA IPG is to use NASA's remotely located computing and data system resources to build distributed systems that can address problems that are too large or complex for a single site. The accomplishments outlined in this poster presentation are: access to distributed data, IPG heterogeneous computing, integration of a large-scale computing node into a distributed environment, remote access to high-data-rate instruments, and an exploratory grid environment.
21st century environmental problems are wicked and require holistic systems thinking and solutions that integrate social and economic knowledge with knowledge of the environment. Computer-based technologies are fundamental to our ability to research and understand the relevant sy...
Scientific Inquiry, Digital Literacy, and Mobile Computing in Informal Learning Environments
ERIC Educational Resources Information Center
Marty, Paul F.; Alemanne, Nicole D.; Mendenhall, Anne; Maurya, Manisha; Southerland, Sherry A.; Sampson, Victor; Douglas, Ian; Kazmer, Michelle M.; Clark, Amanda; Schellinger, Jennifer
2013-01-01
Understanding the connections between scientific inquiry and digital literacy in informal learning environments is essential to furthering students' critical thinking and technology skills. The Habitat Tracker project combines a standards-based curriculum focused on the nature of science with an integrated system of online and mobile computing…
Computer Assisted Exercise Environment for Terrorist Attack Consequence Management
2006-09-01
security and crisis management. Fig. 3. Third Generation SSR for Integrated Security Sector MoD MoI MoEM MoFA Special Services Integrated Security...Ministries: MoEM , MoI, MoD, MH, MoEW, MoE, MoAF National Media, NGOs, other agencies District EOC / LEMA MHS/IDS/WIS Field EOC MHS/IDS/WIS MHS IDS...SRA EDA DG Environment ... EU SR Fund MoD MoI MoE&S MoEM ... National USA Great Britain Netherlands Germany ... Bilateral BSEC Stab. Pact ... Regional
Corridor One: An Integrated Distance Visualization Environment for SSI+ASCI Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christopher R. Johnson, Charles D. Hansen
2001-10-29
The goal of Corridor One: An Integrated Distance Visualization Environment for ASCI and SSI Applications was to combine the forces of six leading-edge laboratories working in the areas of visualization, distributed computing, and high-performance networking (Argonne National Laboratory, Lawrence Berkeley National Laboratory, Los Alamos National Laboratory, University of Illinois, University of Utah, and Princeton University) to develop and deploy the most advanced integrated distance visualization environment for large-scale scientific visualization and demonstrate it on applications relevant to the DOE SSI and ASCI programs. The Corridor One team brought world-class expertise in parallel rendering, deep image-based rendering, immersive environment technology, large-format multi-projector wall-based displays, volume and surface visualization algorithms, collaboration tools and streaming media technology, network protocols for image transmission, high-performance networking, quality-of-service technology, and distributed computing middleware. Our strategy was to build on the very successful teams that produced the I-WAY, "Computational Grids," and CAVE technology and to add these to the teams that had developed the fastest parallel visualization systems and the most widely used networking infrastructure for multicast and distributed media. Unfortunately, just as we were getting going on the Corridor One project, DOE cut the program after the first year. As such, our final report consists of our progress during year one of the grant.
Implementing Internet of Things in a military command and control environment
NASA Astrophysics Data System (ADS)
Raglin, Adrienne; Metu, Somiya; Russell, Stephen; Budulas, Peter
2017-05-01
While the term Internet of Things (IoT) has been coined relatively recently, it has deep roots in multiple other areas of research including cyber-physical systems, pervasive and ubiquitous computing, embedded systems, mobile ad-hoc networks, wireless sensor networks, cellular networks, wearable computing, cloud computing, big data analytics, and intelligent agents. As the Internet of Things, these technologies have created a landscape of diverse heterogeneous capabilities and protocols that will require adaptive controls to effect linkages and changes that are useful to end users. In the context of military applications, it will be necessary to integrate disparate IoT devices into a common platform that necessarily must interoperate with proprietary military protocols, data structures, and systems. In this environment, IoT devices and data will not be homogeneous and provenance-controlled (i.e. single vendor/source/supplier owned). This paper presents a discussion of the challenges of integrating varied IoT devices and related software in a military environment. A review of contemporary commercial IoT protocols is given and as a practical example, a middleware implementation is proffered that provides transparent interoperability through a proactive message dissemination system. The implementation is described as a framework through which military applications can integrate and utilize commercial IoT in conjunction with existing military sensor networks and command and control (C2) systems.
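A proactive message dissemination layer of the kind described is commonly built on topic-based publish/subscribe. A minimal sketch, with illustrative topic names rather than any actual military protocol:

```python
# Sketch: a topic-based pub/sub broker bridging IoT device messages
# to command-and-control consumers; all names are hypothetical.
from collections import defaultdict

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Proactive push: deliver to every registered consumer at once.
        for callback in self.subscribers[topic]:
            callback(topic, message)

broker = Broker()
broker.subscribe('sensors/acoustic',
                 lambda t, m: print('C2 display got', t, m))
broker.publish('sensors/acoustic', {'bearing_deg': 42, 'db': 71})
```

In practice such a middleware would also translate between commercial IoT protocols and proprietary C2 data structures, which this sketch omits.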
Integrated instrumentation & computation environment for GRACE
NASA Astrophysics Data System (ADS)
Dhekne, P. S.
2002-03-01
The project GRACE (Gamma Ray Astrophysics with Coordinated Experiments) aims at setting up a state-of-the-art gamma-ray observatory at Mt. Abu, Rajasthan, for undertaking comprehensive scientific exploration over a wide spectral window (tens of keV to hundreds of TeV) from a single location through four coordinated experiments. The cumulative data collection rate of all the telescopes is expected to be about 1 GB/hr, necessitating innovations in the data management environment. The real-time data acquisition and control as well as the off-line data processing, analysis, and visualization environments of these systems are based on cutting-edge and affordable technologies in the fields of computers, communications, and the Internet. We propose to provide a single, unified environment through seamless integration of instrumentation and computation, taking advantage of recent advances in Web-based technologies. This new environment will allow researchers better access to facilities, improve resource utilization, and enhance collaborations by providing identical environments for online as well as offline usage of the facility from any location. We present here a proposed implementation strategy for a platform-independent, Web-based system that supplements automated functions with video-guided interactive and collaborative remote viewing, remote control through a virtual instrumentation console, remote acquisition of telescope data, data analysis, data visualization, and an active imaging system. This end-to-end Web-based solution will enhance collaboration among researchers at the national and international level for undertaking scientific studies using the telescope systems of the GRACE project.
Data Movement Dominates: Advanced Memory Technology to Address the Real Exascale Power Problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bergman, Keren
Energy is the fundamental barrier to Exascale supercomputing, and it is dominated by the cost of moving data from one point to another, not by computation. Similarly, performance is dominated by data movement, not computation. The solution to this problem requires three critical technologies: 3D integration, optical chip-to-chip communication, and a new communication model. The central goal of the Sandia-led "Data Movement Dominates" project was to develop memory systems and new architectures based on these technologies that have the potential to lower the cost of local memory accesses by orders of magnitude and provide substantially more bandwidth. Only through these transformational advances can future systems reach the goals of Exascale computing within a manageable power budget. The Sandia-led team included co-PIs from Columbia University, Lawrence Berkeley Lab, and the University of Maryland. The Columbia effort of Data Movement Dominates focused on developing a physically accurate simulation environment, and experimental verification, for optically-connected memory (OCM) systems that can enable continued performance scaling through high-bandwidth capacity, energy-efficient bit-rate transparency, and time-of-flight latency. With OCM, memory device parallelism and total capacity can scale to match future high-performance computing requirements without sacrificing data-movement efficiency. When we consider systems with integrated photonics, links to memory can be seamlessly integrated with the interconnection network; in a sense, memory becomes a primary aspect of the interconnection network. At the core of the Columbia effort, toward expanding our understanding of OCM-enabled computing, we have created an integrated modeling and simulation environment that uniquely incorporates the physical behavior of the optical layer. The PhoenixSim suite of design and software tools developed under this effort has enabled the co-design and performance evaluation of photonics-enabled OCM architectures on Exascale computing systems.
A synthetic design environment for ship design
NASA Technical Reports Server (NTRS)
Chipman, Richard R.
1995-01-01
Rapid advances in computer science and information system technology have made possible the creation of synthetic design environments (SDE) which use virtual prototypes to increase the efficiency and agility of the design process. This next generation of computer-based design tools will rely heavily on simulation and advanced visualization techniques to enable integrated product and process teams to concurrently conceptualize, design, and test a product and its fabrication processes. This paper summarizes a successful demonstration of the feasibility of using a simulation based design environment in the shipbuilding industry. As computer science and information science technologies have evolved, there have been many attempts to apply and integrate the new capabilities into systems for the improvement of the process of design. We see the benefits of those efforts in the abundance of highly reliable, technologically complex products and services in the modern marketplace. Furthermore, the computer-based technologies have been so cost effective that the improvements embodied in modern products have been accompanied by lowered costs. Today the state-of-the-art in computerized design has advanced so dramatically that the focus is no longer on merely improving design methodology; rather the goal is to revolutionize the entire process by which complex products are conceived, designed, fabricated, tested, deployed, operated, maintained, refurbished and eventually decommissioned. By concurrently addressing all life-cycle issues, the basic decision making process within an enterprise will be improved dramatically, leading to new levels of quality, innovation, efficiency, and customer responsiveness. By integrating functions and people with an enterprise, such systems will change the fundamental way American industries are organized, creating companies that are more competitive, creative, and productive.
Multi-tasking computer control of video related equipment
NASA Technical Reports Server (NTRS)
Molina, Rod; Gilbert, Bob
1989-01-01
The flexibility, cost-effectiveness, and widespread availability of personal computers now make it possible to completely integrate the previously separate elements of video post-production into a single device. Specifically, a personal computer such as the Commodore Amiga can perform multiple and simultaneous tasks from an individual unit. Relatively low cost, minimal space requirements, and user-friendliness provide the most favorable environment for the many phases of video post-production. Computers are well known for their basic abilities to process numbers, text, and graphics and to reliably perform repetitive and tedious functions efficiently. These capabilities can now apply as either additions or alternatives to existing video post-production methods. A present example of computer-based video post-production technology is the RGB CVC (Computer and Video Creations) WorkSystem. A wide variety of integrated functions are made possible with an Amiga computer at the heart of the system.
Leveraging Computer-Mediated Communication Technologies to Enhance Interactions in Online Learning
ERIC Educational Resources Information Center
Wright, Linda J.
2011-01-01
Computer-mediated communication (CMC) technologies have been an integral part of distance education for many years. They are found in both synchronous and asynchronous platforms and are intended to enhance the learning experience for students. CMC technologies add an interactive element to the online learning environment. The findings from this…
ERIC Educational Resources Information Center
Fessakis, G.; Gouli, E.; Mavroudi, E.
2013-01-01
Computer programming is considered an important competence for the development of higher-order thinking in addition to algorithmic problem solving skills. Its horizontal integration throughout all educational levels is considered worthwhile and attracts the attention of researchers. Towards this direction, an exploratory case study is presented…
ERIC Educational Resources Information Center
Wilkerson-Jerde, Michelle; Wagh, Aditi; Wilensky, Uri
2015-01-01
To successfully integrate simulation and computational methods into K-12 STEM education, learning environments should be designed to help educators maintain balance between (a) addressing curricular content and practices and (b) attending to student knowledge and interests. We describe DeltaTick, a graphical simulation construction interface for…
YASS: A System Simulator for Operating System and Computer Architecture Teaching and Learning
ERIC Educational Resources Information Center
Mustafa, Besim
2013-01-01
A highly interactive, integrated and multi-level simulator has been developed specifically to support both the teachers and the learners of modern computer technologies at undergraduate level. The simulator provides a highly visual and user configurable environment with many pedagogical features aimed at facilitating deep understanding of concepts…
Academic Achievement Enhanced by Personal Digital Assistant Use
ERIC Educational Resources Information Center
Bick, Alexander
2005-01-01
Research during the past decade suggests that integrating computing technology in general, and mobile computers in particular, into the educational environment has positive effects. This is the first long-term study of high school Personal Digital Assistant use. It involved three-parts, 146 students during four years. Part one found that PDA use…
Integrating Computer- and Teacher-Based Scaffolds in Science Inquiry
ERIC Educational Resources Information Center
Wu, Hui-Ling; Pedersen, Susan
2011-01-01
Because scaffolding is a crucial form of support for students engaging in complex learning environments, it is important that researchers determine which of the numerous kinds of scaffolding will allow them to educate students most effectively. The existing literature tends to focus on computer-based scaffolding by itself rather than integrating…
Integration of a CAD System Into an MDO Framework
NASA Technical Reports Server (NTRS)
Townsend, J. C.; Samareh, J. A.; Weston, R. P.; Zorumski, W. E.
1998-01-01
NASA Langley has developed a heterogeneous distributed computing environment, called the Framework for Inter-disciplinary Design Optimization, or FIDO. Its purpose has been to demonstrate framework technical feasibility and usefulness for optimizing the preliminary design of complex systems and to provide a working environment for testing optimization schemes. Its initial implementation has been for a simplified model of preliminary design of a high-speed civil transport. Upgrades being considered for the FIDO system include a more complete geometry description, required by high-fidelity aerodynamics and structures codes and based on a commercial Computer Aided Design (CAD) system. This report presents the philosophy behind some of the decisions that have shaped the FIDO system and gives a brief case study of the problems and successes encountered in integrating a CAD system into the FIDO framework.
Boxes with Fires: Wisely Integrating Learning Technologies into the Art Classroom
ERIC Educational Resources Information Center
Gregory, Diane C.
2009-01-01
By integrating and infusing computer learning technologies wisely into student-centered or social constructivist art learning environments, art educators can improve student learning and at the same time provide a creative, substantive model for how schools can and should be reformed. By doing this, art educators have an opportunity to demonstrate…
ERIC Educational Resources Information Center
Yankelevich, Eleonora
2017-01-01
A variety of computing devices are available in today's classrooms, but they have not guaranteed the effective integration of technology. Nationally, teachers have ample devices, applications, productivity software, and digital audio and video tools. Despite all this, the literature suggests these tools are not employed to enhance student learning…
Computer simulation of space station computer steered high gain antenna
NASA Technical Reports Server (NTRS)
Beach, S. W.
1973-01-01
The mathematical modeling and programming of a complete simulation program for a space station computer-steered high gain antenna are described. The program provides for reading input data cards, numerically integrating up to 50 first order differential equations, and monitoring up to 48 variables on printed output and on plots. The program system consists of a high gain antenna, an antenna gimbal control system, an on board computer, and the environment in which all are to operate.
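Numerically integrating a system of first-order differential equations is the computational core described here; a minimal fixed-step Runge-Kutta sketch, with a toy damped-oscillator gimbal model standing in for the actual antenna dynamics:

```python
# Sketch: classic RK4 integration of a system of first-order ODEs.
import numpy as np

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def gimbal(t, y):
    # Hypothetical dynamics: a damped oscillator, not the NASA model.
    angle, rate = y
    return np.array([rate, -4.0 * angle - 0.5 * rate])

t, y, h = 0.0, np.array([0.1, 0.0]), 0.01
for _ in range(1000):       # a real program would also log/plot variables
    y = rk4_step(gimbal, t, y, h)
    t += h
print(t, y)
```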
Computer vision and augmented reality in gastrointestinal endoscopy
Mahmud, Nadim; Cohen, Jonah; Tsourides, Kleovoulos; Berzin, Tyler M.
2015-01-01
Augmented reality (AR) is an environment-enhancing technology, widely applied in the computer sciences, which has only recently begun to permeate the medical field. Gastrointestinal endoscopy—which relies on the integration of high-definition video data with pathologic correlates—requires endoscopists to assimilate and process a tremendous amount of data in real time. We believe that AR is well positioned to provide computer-guided assistance with a wide variety of endoscopic applications, beginning with polyp detection. In this article, we review the principles of AR, describe its potential integration into an endoscopy set-up, and envisage a series of novel uses. With close collaboration between physicians and computer scientists, AR promises to contribute significant improvements to the field of endoscopy. PMID:26133175
HeNCE: A Heterogeneous Network Computing Environment
Beguelin, Adam; Dongarra, Jack J.; Geist, George Al; ...
1994-01-01
Network computing seeks to utilize the aggregate resources of many networked computers to solve a single problem. In so doing it is often possible to obtain supercomputer performance from an inexpensive local area network. The drawback is that network computing is complicated and error prone when done by hand, especially if the computers have different operating systems and data formats and are thus heterogeneous. The heterogeneous network computing environment (HeNCE) is an integrated graphical environment for creating and running parallel programs over a heterogeneous collection of computers. It is built on a lower level package called parallel virtual machine (PVM). The HeNCE philosophy of parallel programming is to have the programmer graphically specify the parallelism of a computation and to automate, as much as possible, the tasks of writing, compiling, executing, debugging, and tracing the network computation. Key to HeNCE is a graphical language based on directed graphs that describe the parallelism and data dependencies of an application. Nodes in the graphs represent conventional Fortran or C subroutines and the arcs represent data and control flow. This article describes the present state of HeNCE, its capabilities, limitations, and areas of future research.
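The graph model, nodes as conventional subroutines and arcs as data dependencies, can be miniaturized as a topological scheduler; this local sketch omits the remote execution and heterogeneity that HeNCE actually handles.

```python
# Sketch: run a dependency graph of "subroutines" in topological order.
from graphlib import TopologicalSorter  # Python 3.9+

def load(_):     return [3, 1, 2]
def sort_(deps): return sorted(deps['load'])
def stats(deps): return {'max': max(deps['sort_'])}

graph = {'load': set(), 'sort_': {'load'}, 'stats': {'sort_'}}  # arcs = data flow
funcs = {'load': load, 'sort_': sort_, 'stats': stats}

results = {}
for node in TopologicalSorter(graph).static_order():
    deps = {d: results[d] for d in graph[node]}
    results[node] = funcs[node](deps)   # a real system would dispatch ready
                                        # nodes in parallel to remote hosts
print(results['stats'])
```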
NASA Technical Reports Server (NTRS)
Banda, Carolyn; Bushnell, David; Chen, Scott; Chiu, Alex; Constantine, Betsy; Murray, Jerry; Neukom, Christian; Prevost, Michael; Shankar, Renuka; Staveland, Lowell
1991-01-01
The Man-Machine Integration Design and Analysis System (MIDAS) is an integrated suite of software components that constitutes a prototype workstation to aid designers in applying human factors principles to the design of complex human-machine systems. MIDAS is intended to be used at the very early stages of conceptual design to provide an environment wherein designers can use computational representations of the crew station and operator, instead of hardware simulators and man-in-the-loop studies, to discover problems and ask 'what if' questions regarding the projected mission, equipment, and environment. This document is the Software Product Specification for MIDAS. Introductory descriptions of the processing requirements, hardware/software environment, structure, I/O, and control are given in the main body of the document for the overall MIDAS system, with detailed discussion of the individual modules included in Annexes A-J.
NASA Technical Reports Server (NTRS)
Mayer, Richard J.; Blinn, Thomas M.; Dewitte, Paul S.; Crump, John W.; Ackley, Keith A.
1992-01-01
The Framework Programmable Software Development Platform (FPP) is a project aimed at effectively combining tool and data integration mechanisms with a model of the software development process to provide an intelligent integrated software development environment. Guided by the model, this system development framework will take advantage of an integrated operating environment to automate effectively the management of the software development process so that costly mistakes during the development phase can be eliminated. The Advanced Software Development Workstation (ASDW) program is conducting research into development of advanced technologies for Computer Aided Software Engineering (CASE).
ERIC Educational Resources Information Center
Xu, Yan; Park, Hyungsung; Baek, Youngkyun
2011-01-01
Recently, computer technology and multimedia elements have been developed and integrated into teaching and learning. Entertainment-based learning environments can make learning contents more attractive, and thus can lead to learners' active participation and facilitate learning. A significant amount of research examines using video editing…
Collaborative Spaces for GIS-Based Multimedia Cartography in Blended Environments
ERIC Educational Resources Information Center
Balram, Shivanand; Dragicevic, Suzana
2008-01-01
The interaction spaces between instructors and learners in the traditional face-to-face classroom environment are being changed by the diffusion and adoption of many forms of computer-based pedagogy. An integrated understanding of these evolving interaction spaces together with how they interconnect and leverage learning are needed to develop…
A Virtual Environment for Process Management. A Step by Step Implementation
ERIC Educational Resources Information Center
Mayer, Sergio Valenzuela
2003-01-01
This paper presents a virtual organizational environment conceived as the integration of three computer programs: a manufacturing simulation package, a business-process automation (workflow) tool, and business intelligence (Balanced Scorecard) software. It was created as a supporting tool for teaching IE; its purpose is to give…
Pre-Service Teachers Designing Virtual World Learning Environments
ERIC Educational Resources Information Center
Jacka, Lisa; Booth, Kate
2012-01-01
Integrating Information Technology Communications in the classroom has been an important part of pre-service teacher education for over a decade. The advent of virtual worlds provides the pre-service teacher with an opportunity to study teaching and learning in a highly immersive 3D computer-based environment. Virtual worlds also provide a place…
A facility for training Space Station astronauts
NASA Technical Reports Server (NTRS)
Hajare, Ankur R.; Schmidt, James R.
1992-01-01
The Space Station Training Facility (SSTF) will be the primary facility for training the Space Station Freedom astronauts and the Space Station Control Center ground support personnel. Conceptually, the SSTF will consist of two parts: a Student Environment and an Author Environment. The Student Environment will contain trainers, instructor stations, computers and other equipment necessary for training. The Author Environment will contain the systems that will be used to manage, develop, integrate, test and verify, operate and maintain the equipment and software in the Student Environment.
Virtual hand: a 3D tactile interface to virtual environments
NASA Astrophysics Data System (ADS)
Rogowitz, Bernice E.; Borrel, Paul
2008-02-01
We introduce a novel system that allows users to experience the sensation of touch in a computer graphics environment. In this system, the user places his/her hand on an array of pins, which is moved about space on a 6 degree-of-freedom robot arm. The surface of the pins defines a surface in the virtual world. This "virtual hand" can move about the virtual world. When the virtual hand encounters an object in the virtual world, the heights of the pins are adjusted so that they represent the object's shape, surface, and texture. A control system integrates pin and robot arm motions to transmit information about objects in the computer graphics world to the user. It also allows the user to edit, change and move the virtual objects, shapes and textures. This system provides a general framework for touching, manipulating, and modifying objects in a 3-D computer graphics environment, which may be useful in a wide range of applications, including computer games, computer aided design systems, and immersive virtual worlds.
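The pin-height mapping, sampling the virtual surface under each pin as the hand moves, can be sketched directly; the array size, travel, and surface function below are illustrative placeholders.

```python
# Sketch: compute pin extensions from a virtual object's height field.
import numpy as np

PINS_X, PINS_Y = 8, 8          # hypothetical pin array resolution
PIN_TRAVEL = 10.0              # hypothetical maximum pin extension (mm)

def surface_height(x, y):
    """Virtual object: a gentle bump centered in the workspace."""
    return np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.1)

def pin_heights(hand_x, hand_y, scale=0.2):
    """Pin extensions when the virtual hand's patch sits at (hand_x, hand_y)."""
    xs = hand_x + scale * np.linspace(0, 1, PINS_X)
    ys = hand_y + scale * np.linspace(0, 1, PINS_Y)
    grid = surface_height(xs[:, None], ys[None, :])   # sample under each pin
    return np.clip(grid, 0.0, 1.0) * PIN_TRAVEL

print(pin_heights(0.4, 0.4).round(2))
```

In the real system this mapping would be driven by the robot arm's pose and the full 3D scene, not a 2D height field.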
NASA Astrophysics Data System (ADS)
Johnston, Michael A.; Farrell, Damien; Nielsen, Jens Erik
2012-04-01
The exchange of information between experimentalists and theoreticians is crucial to improving the predictive ability of theoretical methods and hence our understanding of the related biology. However many barriers exist which prevent the flow of information between the two disciplines. Enabling effective collaboration requires that experimentalists can easily apply computational tools to their data, share their data with theoreticians, and that both the experimental data and computational results are accessible to the wider community. We present a prototype collaborative environment for developing and validating predictive tools for protein biophysical characteristics. The environment is built on two central components; a new python-based integration module which allows theoreticians to provide and manage remote access to their programs; and PEATDB, a program for storing and sharing experimental data from protein biophysical characterisation studies. We demonstrate our approach by integrating PEATSA, a web-based service for predicting changes in protein biophysical characteristics, into PEATDB. Furthermore, we illustrate how the resulting environment aids method development using the Potapov dataset of experimentally measured ΔΔGfold values, previously employed to validate and train protein stability prediction algorithms.
ERIC Educational Resources Information Center
Psycharis, Sarantos; Botsari, Evanthia; Chatzarakis, George
2014-01-01
Learning styles are increasingly being integrated into computationally enhanced learning environments, and a great deal of recent research is taking place in this area. The purpose of this study was to examine the impact of the computational experiment approach, learning styles, epistemic beliefs, and engagement with the inquiry process on the…
CAD/CAE Integration Enhanced by New CAD Services Standard
NASA Technical Reports Server (NTRS)
Claus, Russell W.
2002-01-01
A Government-industry team led by the NASA Glenn Research Center has developed a computer interface standard for accessing data from computer-aided design (CAD) systems. The Object Management Group, an international computer standards organization, has adopted this CAD services standard. The new standard allows software (e.g., computer-aided engineering (CAE) and computer-aided manufacturing software to access multiple CAD systems through one programming interface. The interface is built on top of a distributed computing system called the Common Object Request Broker Architecture (CORBA). CORBA allows the CAD services software to operate in a distributed, heterogeneous computing environment.
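The value of a vendor-neutral interface is that CAE code programs against one abstraction while per-vendor adapters hide each CAD system. A minimal sketch in Python (the standard itself is CORBA-based; these method names are illustrative, not the OMG interface):

```python
# Sketch: one abstract CAD-access API, multiple vendor adapters.
from abc import ABC, abstractmethod

class CadSession(ABC):
    """Uniform facade that CAE/CAM tools program against."""
    @abstractmethod
    def open_model(self, path): ...
    @abstractmethod
    def tessellate(self, tolerance): ...

class VendorACadSession(CadSession):
    # Hypothetical adapter; a real one would wrap a vendor's CORBA servant.
    def open_model(self, path):
        print('vendor A opening', path)
    def tessellate(self, tolerance):
        return [(0, 0, 0), (1, 0, 0), (0, 1, 0)]   # stub facet data

def mesh_for_analysis(session: CadSession, path):
    # CAE code never sees which CAD system is behind the interface.
    session.open_model(path)
    return session.tessellate(tolerance=1e-3)

print(mesh_for_analysis(VendorACadSession(), 'inlet.model'))
```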
NASA Astrophysics Data System (ADS)
Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley
2015-04-01
The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and Data-intensive Science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress so far to harmonise the underlying data collections for future interdisciplinary research across these large volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially increasing data volumes at NCI. Traditional HPC and data environments are still made available in a way that flexibly provides the tools, services and supporting software systems on these new petascale infrastructures. But to enable the research to take place at this scale, the data, metadata and software now need to evolve together - creating a new integrated high performance infrastructure. The new infrastructure at NCI currently supports a catalogue of integrated, reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. One of the challenges for NCI has been to support existing techniques and methods, while carefully preparing the underlying infrastructure for the transition needed for the next class of Data-intensive Science. In doing so, a flexible range of techniques and software can be made available for application across the corpus of data collections available, and to provide a new infrastructure for future interdisciplinary research.
Information Infrastructures for Integrated Enterprises
1993-05-01
Companies might consider franchising some facets of indirect labor, such as selected functions of administration, finance, and human resources... [Remaining text is multi-column extraction residue, chiefly an acronym glossary: CAFE, Corporate Average Fuel Economy; CAD, Computer-Aided Design; CAE, Computer-Aided Engineering; CAIS.]
2007-06-01
CFD in the AFSEO production environment: requirements include rapid turnaround. The SDB, designated GBU-39/B, contains a 250-pound warhead and measures 6 feet... "...Take Off (RATO) Separation." ITEA Conference, Apr 2006. [Recovered figure captions: Figure 1, GBU-39/B small diameter bomb computational model; Figure 3, B-52/MOP; Figure 4, MOP.]
ERIC Educational Resources Information Center
Geigel, Joan; And Others
A self-paced program designed to integrate the use of computers and physics courseware into the regular classroom environment is offered for high school physics teachers in this module on projectile and circular motion. A diversity of instructional strategies, including lectures, demonstrations, videotapes, computer simulations, laboratories, and…
The Impact of Computer Supported Collaborative Learning on Internship Outcomes of Pharmacy Students
ERIC Educational Resources Information Center
Timmers, S.; Valcke, M.; de Mil, K.; Baeyens, W. R. G.
2008-01-01
This article focuses on an evaluation of the impact of an innovative instructional design of internships in view of a new integrated pharmaceutical curriculum. A key innovative element was the implementation of a computer-supported collaborative learning environment. Students were, as part of their formal curriculum, expected to work in a…
Robotics in Hazardous Environments - Real Deployments by the Savannah River National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kriikku, E.; Tibrea, S.; Nance, T.
The Research & Development Engineering (R&DE) section in the Savannah River National Laboratory (SRNL) engineers, integrates, tests, and supports deployment of custom robotics, systems, and tools for use in radioactive, hazardous, or inaccessible environments. Mechanical and electrical engineers, computer control professionals, specialists, machinists, welders, electricians, and mechanics adapt and integrate commercially available technology with in-house designs, to meet the needs of Savannah River Site (SRS), Department of Energy (DOE), and other governmental agency customers. This paper discusses five R&DE robotic and remote system projects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lingerfelt, Eric J; Endeve, Eirik; Hui, Yawei
Improvements in scientific instrumentation allow imaging at mesoscopic to atomic length scales in many spectroscopic modes, and now--with the rise of multimodal acquisition systems and the associated processing capability--the era of multidimensional, informationally dense data sets has arrived. Technical issues in these combinatorial scientific fields are exacerbated by computational challenges best summarized as a necessity for drastic improvement in the capability to transfer, store, and analyze large volumes of data. The Bellerophon Environment for Analysis of Materials (BEAM) platform provides material scientists the capability to directly leverage the integrated computational and analytical power of High Performance Computing (HPC) to perform scalable data analysis and simulation and manage uploaded data files via an intuitive, cross-platform client user interface. This framework delivers authenticated, "push-button" execution of complex user workflows that deploy data analysis algorithms and computational simulations utilizing compute-and-data cloud infrastructures and HPC environments like Titan at the Oak Ridge Leadership Computing Facility (OLCF).
Integration of the Chinese HPC Grid in ATLAS Distributed Computing
NASA Astrophysics Data System (ADS)
Filipčič, A.;
2017-10-01
Fifteen Chinese High-Performance Computing sites, many of them on the TOP500 list of the most powerful supercomputers, are integrated into a common infrastructure providing coherent access to users through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC-CE) forms the bridge, using an extended batch system interface to allow job submission to SCEAPI. The ARC-CE was set up at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC-CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-1A and ERA. These two centers have been the pilots for ATLAS Monte Carlo Simulation in SCEAPI and have been providing CPU power since fall 2015.
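A minimal sketch of the bridging idea, assuming a generic RESTful gateway: a Grid-style job description is translated into an authenticated HTTP submission. The endpoint paths, field names, and token scheme below are hypothetical; the actual ARC-CE batch interface and SCEAPI protocol differ in detail.

```python
# Sketch of translating a batch-style job spec into a REST submission
# against an HPC gateway. All URL paths and JSON keys are assumptions.
import requests

def submit_to_hpc_gateway(base_url: str, token: str, job: dict) -> str:
    """Map a (executable, cores, walltime) job spec onto a RESTful
    submission and return the remote job identifier."""
    payload = {
        "app": job["executable"],
        "args": job.get("arguments", []),
        "resources": {"cores": job["cores"], "walltime": job["walltime"]},
    }
    resp = requests.post(f"{base_url}/v1/jobs", json=payload,
                         headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()
    return resp.json()["id"]

# Example use with an invented gateway URL and token:
# job_id = submit_to_hpc_gateway("https://sceapi.example.cn", "TOKEN",
#                                {"executable": "atlas_sim.sh",
#                                 "cores": 24, "walltime": "12:00:00"})
```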
ERIC Educational Resources Information Center
Urbina, Angela; Polly, Drew
2017-01-01
Purpose: The purpose of this paper is to examine how elementary school teachers integrated technology into their mathematics teaching in classroom settings that were one-to-one computer environments for most of the day. Following a series of classroom observations and interviews, inductive qualitative analyses of data indicated that teachers felt…
ERIC Educational Resources Information Center
Depradine, Colin; Gay, Glenda
2004-01-01
With the strong link between programming and the underlying technology, the incorporation of computer technology into the teaching of a programming language course should be a natural progression. However, the abstract nature of programming can make such integration a difficult prospect to achieve. As a result, the main development tool, the…
ERIC Educational Resources Information Center
Steiner, Dasi; Mendelovitch, Miriam
2017-01-01
The communications revolution reaches all sectors of the population and makes information accessible to all. This development presents complex challenges which require changes in the education system, teaching methods and learning environment. The integration of ICT (Information and Communications Technology) and science teaching requires…
NASA Technical Reports Server (NTRS)
Hancock, Thomas
1993-01-01
This experiment investigated the integrity of static computer memory (floppy disk media) when exposed to the environment of low earth orbit. The experiment attempted to record soft-event upsets (bit-flips) in static computer memory. Typical conditions that exist in low earth orbit that may cause soft-event upsets include: cosmic rays, low level background radiation, charged fields, static charges, and the earth's magnetic field. Over the years several spacecraft have been affected by soft-event upsets (bit-flips), and these events have caused a loss of data or affected spacecraft guidance and control. This paper describes a commercial spin-off that is being developed from the experiment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simunovic, Srdjan
2015-02-16
CASL's modeling and simulation technology, the Virtual Environment for Reactor Applications (VERA), incorporates coupled physics and science-based models, state-of-the-art numerical methods, modern computational science, integrated uncertainty quantification (UQ), and validation against data from operating pressurized water reactors (PWRs), single-effect experiments, and integral tests. The computational simulation component of VERA is the VERA Core Simulator (VERA-CS). The core simulator is the specific collection of multi-physics computer codes used to model and deplete an LWR core over multiple cycles. The core simulator has a single common input file that drives all of the different physics codes. The parser code, VERAIn, converts VERA input into an XML file that is used as input to different VERA codes.
Integrating Computer Architectures into the Design of High-Performance Controllers
NASA Technical Reports Server (NTRS)
Jacklin, Stephen A.; Leyland, Jane A.; Warmbrodt, William
1986-01-01
Modern control systems must typically perform real-time identification and control, as well as coordinate a host of other activities related to user interaction, on-line graphics, and file management. This paper discusses five global design considerations that are useful to integrate array processor, multimicroprocessor, and host computer system architecture into versatile, high-speed controllers. Such controllers are capable of very high control throughput, and can maintain constant interaction with the non-real-time or user environment. As an application example, the architecture of a high-speed, closed-loop controller used to actively control helicopter vibration will be briefly discussed. Although this system has been designed for use as the controller for real-time rotorcraft dynamics and control studies in a wind-tunnel environment, the control architecture can generally be applied to a wide range of automatic control applications.
ERIC Educational Resources Information Center
Uysal, Murat Pasa
2014-01-01
Different methods, strategies, or tools have been proposed for teaching Object Oriented Programming (OOP). However, it is still difficult to introduce OOP to novice learners. The problem may be not only adopting a method or language, but also use of an appropriate integrated development environment (IDE). Therefore, the focus should be on the…
Visual Analytics in Public Safety: Example Capabilities for Example Government Agencies
2011-10-01
This includes, but is not limited to: the Police Records Information Management Environment for British Columbia (PRIME-BC), the Police Reporting and Occurrence System... ...and filtering for rapid identification of relevant documents; a graphical environment for visual evidence marshaling; interactive linking and... ...analytical reasoning facilitated by interactive visual interfaces and integration with computational analytics. Indeed, a wide variety of technologies
A Computational framework for telemedicine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foster, I.; von Laszewski, G.; Thiruvathukal, G. K.
1998-07-01
Emerging telemedicine applications require the ability to exploit diverse and geographically distributed resources. High-speed networks are used to integrate advanced visualization devices, sophisticated instruments, large databases, archival storage devices, PCs, workstations, and supercomputers. This form of telemedical environment is similar to networked virtual supercomputers, also known as metacomputers. Metacomputers are already being used in many scientific application areas. In this article, we analyze the requirements of a telemedical computing infrastructure and compare them with requirements found in a typical metacomputing environment. We show that metacomputing environments can be used to enable a more powerful and unified computational infrastructure for telemedicine. The Globus metacomputing toolkit can provide the necessary low-level mechanisms to enable a large-scale telemedical infrastructure. The Globus toolkit components are designed in a modular fashion and can be extended to support the specific requirements of telemedicine.
A Dynamic Bayesian Observer Model Reveals Origins of Bias in Visual Path Integration.
Lakshminarasimhan, Kaushik J; Petsalis, Marina; Park, Hyeshin; DeAngelis, Gregory C; Pitkow, Xaq; Angelaki, Dora E
2018-06-20
Path integration is a strategy by which animals track their position by integrating their self-motion velocity. To identify the computational origins of bias in visual path integration, we asked human subjects to navigate in a virtual environment using optic flow and found that they generally traveled beyond the goal location. Such a behavior could stem from leaky integration of unbiased self-motion velocity estimates or from a prior expectation favoring slower speeds that causes velocity underestimation. Testing both alternatives using a probabilistic framework that maximizes expected reward, we found that subjects' biases were better explained by a slow-speed prior than imperfect integration. When subjects integrate paths over long periods, this framework intriguingly predicts a distance-dependent bias reversal due to buildup of uncertainty, which we also confirmed experimentally. These results suggest that visual path integration in noisy environments is limited largely by biases in processing optic flow rather than by leaky integration. Copyright © 2018 Elsevier Inc. All rights reserved.
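The distinction between the two candidate mechanisms can be made concrete with a toy simulation: a subject moves at constant speed and stops when the perceived distance reaches the goal, where perception is corrupted either by a leaky integrator or by a slow-speed prior (modelled as a velocity gain below one). Parameter values below are arbitrary illustrations, not fits to the study's data; note that both mechanisms produce overshoot, which is why the authors needed a probabilistic framework to tell them apart.

```python
# Toy contrast of (a) leaky integration of unbiased velocity versus
# (b) perfect integration of a prior-biased (underestimated) velocity.
dt, v_true, goal = 0.01, 1.0, 10.0   # time step (s), speed (m/s), target (m)

def distance_travelled(leak_tau=None, speed_gain=1.0):
    """Move until the *perceived* distance reaches the goal; return the
    true distance actually covered (overshoot if greater than goal)."""
    x_perceived, t = 0.0, 0.0
    while x_perceived < goal:
        v_est = speed_gain * v_true              # prior-biased estimate
        if leak_tau is None:
            x_perceived += v_est * dt            # perfect integrator
        else:                                    # leaky integrator
            x_perceived += (v_est - x_perceived / leak_tau) * dt
        t += dt
    return v_true * t

print("leaky integrator:", round(distance_travelled(leak_tau=40.0), 2), "m")
print("slow-speed prior:", round(distance_travelled(speed_gain=0.8), 2), "m")
```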
Transportable Applications Environment Plus, Version 5.1
NASA Technical Reports Server (NTRS)
1994-01-01
Transportable Applications Environment Plus (TAE+) computer program providing integrated, portable programming environment for developing and running application programs based on interactive windows, text, and graphical objects. Enables both programmers and nonprogrammers to construct own custom application interfaces easily and to move interfaces and application programs to different computers. Used to define corporate user interface, with noticeable improvements in application developer's and end user's learning curves. Main components are: WorkBench, a What You See Is What You Get (WYSIWYG) software tool for design and layout of user interfaces; and the WPT (Window Programming Tools) Package, a set of callable subroutines controlling the user interface of an application program. WorkBench and the WPTs are written in C++; remaining code written in C.
O'Donnell, Michael
2015-01-01
State-and-transition simulation modeling relies on knowledge of vegetation composition and structure (states) that describe community conditions, mechanistic feedbacks such as fire that can affect vegetation establishment, and ecological processes that drive community conditions as well as the transitions between these states. However, as the need to model larger and more complex landscapes increases, a more advanced awareness of computing resources becomes essential. The objectives of this study include identifying challenges of executing state-and-transition simulation models, identifying common bottlenecks in computing resources, developing a workflow and software that enable parallel processing of Monte Carlo simulations, and identifying the advantages and disadvantages of different computing resources. To address these objectives, this study used the ApexRMS® SyncroSim software and embarrassingly parallel tasks of Monte Carlo simulations on a single multicore computer and on distributed computing systems. The results demonstrated that state-and-transition simulation models scale best in distributed computing environments, such as high-throughput and high-performance computing, because these environments disseminate the workloads across many compute nodes, thereby supporting analysis of larger landscapes, higher-spatial-resolution vegetation products, and more complex models. Using a case study and five different computing environments, the best result (high-throughput computing versus serial computation) was a decrease in computing time of approximately 96.6%. With a single multicore compute node, computing time decreased by 81.8% relative to serial computation. These results provide insight into the tradeoffs of using different computing resources when research necessitates advanced integration of ecoinformatics incorporating large and complicated data inputs and models.
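The embarrassingly parallel pattern described above is straightforward to sketch with Python's standard library: independent Monte Carlo replicates are farmed out across cores and aggregated. The replicate function below is a trivial stand-in, not SyncroSim's actual state-and-transition model.

```python
# Minimal sketch of embarrassingly parallel Monte Carlo replicates on
# a single multicore machine. run_replicate is a toy placeholder.
import random
from multiprocessing import Pool

def run_replicate(seed: int) -> float:
    """One Monte Carlo replicate: e.g. fraction of cells transitioning
    to a burned state under a fixed per-cell fire probability."""
    rng = random.Random(seed)
    cells = [rng.random() < 0.05 for _ in range(10_000)]
    return sum(cells) / len(cells)

if __name__ == "__main__":
    with Pool() as pool:                               # one worker per core
        results = pool.map(run_replicate, range(100))  # 100 replicates
    print("mean burned fraction:", sum(results) / len(results))
```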
Integration of Cloud resources in the LHCb Distributed Computing
NASA Astrophysics Data System (ADS)
Úbeda García, Mario; Méndez Muñoz, Víctor; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel
2014-06-01
This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing, which so far seamlessly integrated Grid resources and computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures: it is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack), and it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. With this in mind, the pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs, based on the idea of a Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.
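The multi-cloud integration VMDIRAC performs rests on a familiar design pattern: a single scheduler-facing interface with per-cloud drivers behind it. The Python sketch below illustrates that pattern only; the class and method names are invented and do not reflect the real VMDIRAC code.

```python
# Architectural sketch: one uniform endpoint interface, many drivers.
from abc import ABC, abstractmethod

class CloudEndpoint(ABC):
    """Uniform facade over EC2, OpenNebula, OpenStack, CloudStack..."""

    @abstractmethod
    def start_vm(self, image: str, flavor: str) -> str: ...

    @abstractmethod
    def vm_status(self, vm_id: str) -> str: ...

    @abstractmethod
    def stop_vm(self, vm_id: str) -> None: ...

class OpenStackEndpoint(CloudEndpoint):
    def start_vm(self, image: str, flavor: str) -> str:
        return "vm-0001"          # would call the OpenStack compute API
    def vm_status(self, vm_id: str) -> str:
        return "running"          # would query the real VM state
    def stop_vm(self, vm_id: str) -> None:
        pass                      # would terminate the real VM

# The scheduler sees only the abstract interface:
endpoint: CloudEndpoint = OpenStackEndpoint()
vm = endpoint.start_vm(image="worker-node", flavor="m1.large")
```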
Software development environments: Status and trends
NASA Technical Reports Server (NTRS)
Duffel, Larry E.
1988-01-01
Currently, software engineers are the essential integrating factor tying several components together. The components consist of processes, methods, computers, tools, support environments, and software engineers. Today the engineers empower the tools, rather than the tools empowering the engineers. Among the issues in software engineering are quality, managing the software engineering process, and productivity. A strategy to address them is to promote the evolution of software engineering from an ad hoc, labor-intensive activity to a managed, technology-supported discipline. This strategy may be implemented by putting the process under management control, adopting appropriate methods, inserting the technology that provides automated support for the process and methods, collecting automated tools into an integrated environment, and educating the personnel.
NASA Lighting Research, Test, & Analysis
NASA Technical Reports Server (NTRS)
Clark, Toni
2015-01-01
The Habitability and Human Factors Branch at Johnson Space Center in Houston, TX, provides technical guidance for the development of spaceflight lighting requirements, verification of light system performance, analysis of integrated environmental lighting systems, and research of lighting-related human performance issues. The Habitability & Human Factors Lighting Team maintains two physical facilities that are integrated to provide support. The Lighting Environment Test Facility (LETF) provides a controlled darkroom environment for physical verification of lighting systems with photometric and spectrographic measurement systems. The Graphics Research & Analysis Facility (GRAF) maintains the capability for computer-based analysis of operational lighting environments. The combined capabilities of the Lighting Team at Johnson Space Center have been used for a wide range of lighting-related issues.
GISpark: A Geospatial Distributed Computing Platform for Spatiotemporal Big Data
NASA Astrophysics Data System (ADS)
Wang, S.; Zhong, E.; Wang, E.; Zhong, Y.; Cai, W.; Li, S.; Gao, S.
2016-12-01
Geospatial data are growing exponentially because of the proliferation of cost-effective and ubiquitous positioning technologies such as global remote-sensing satellites and location-based devices. Analyzing large amounts of geospatial data can provide great value for both industrial and scientific applications. The data- and compute-intensive characteristics inherent in geospatial big data increasingly pose great challenges to technologies for storing, computing on, and analyzing data. Such challenges require a scalable and efficient architecture that can store, query, analyze, and visualize large-scale spatiotemporal data. Therefore, we developed GISpark, a geospatial distributed computing platform for processing large-scale vector, raster and stream data. GISpark is built on the latest virtualized computing infrastructures and a distributed computing architecture. OpenStack and Docker are used to build a multi-user hosting cloud computing infrastructure for GISpark. Virtual storage systems such as HDFS, Ceph and MongoDB are combined and adopted for spatiotemporal data storage management. A Spark-based algorithm framework is developed for efficient parallel computing. Within this framework, SuperMap GIScript and various open-source GIS libraries can be integrated into GISpark. GISpark can also be integrated with scientific computing environments (e.g., Anaconda), interactive computing web applications (e.g., Jupyter notebook), and machine learning tools (e.g., TensorFlow/Orange). The associated geospatial facilities of GISpark, in conjunction with the scientific computing environment, exploratory spatial data analysis tools, and temporal data management and analysis systems, make up a powerful geospatial computing tool. GISpark not only provides spatiotemporal big-data processing capacity in the geospatial field, but also provides spatiotemporal computational models and advanced geospatial visualization tools for other domains with spatial properties. We tested the performance of the platform with a taxi trajectory analysis. The results suggest that GISpark achieves excellent run-time performance in spatiotemporal big-data applications.
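As a flavour of the Spark-based analysis GISpark targets, the PySpark sketch below counts taxi pick-ups per grid cell from a CSV of trip records. The HDFS path and the record layout (longitude and latitude in the third and fourth fields) are assumptions for illustration, not GISpark's actual data schema.

```python
# Sketch of a spatiotemporal aggregation on Spark: pick-ups per cell.
from pyspark import SparkContext

sc = SparkContext(appName="taxi-grid-counts")

def grid_cell(line: str, cell_deg: float = 0.01):
    """Snap a pick-up point to a roughly 1 km lat/lon grid cell."""
    fields = line.split(",")
    lon, lat = float(fields[2]), float(fields[3])  # assumed layout
    return (round(lon / cell_deg), round(lat / cell_deg)), 1

counts = (sc.textFile("hdfs:///data/taxi/trips.csv")  # assumed path
            .map(grid_cell)
            .reduceByKey(lambda a, b: a + b))

for cell, n in counts.top(10, key=lambda kv: kv[1]):  # busiest cells
    print(cell, n)
```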
Incorporating Laptop Technologies into an Animal Sciences Curriculum
ERIC Educational Resources Information Center
Birrenkott, Glenn; Bertrand, Jean A.; Bolt, Brian
2005-01-01
Teaching animal sciences, like most agricultural disciplines, requires giving students hands-on learning opportunities in remote and often computer-unfriendly sites such as animal farms. How do faculty integrate laptop use into such an environment?
Music Teachers' Experiences in One-to-One Computing Environments
ERIC Educational Resources Information Center
Dorfman, Jay
2016-01-01
Ubiquitous computing scenarios such as the one-to-one model, in which every student is issued a device that is to be used across all subjects, have increased in popularity and have shown both positive and negative influences on education. Music teachers in schools that adopt one-to-one models may be inadequately equipped to integrate this kind of…
Learning from graphically integrated 2D and 3D representations improves retention of neuroanatomy
NASA Astrophysics Data System (ADS)
Naaz, Farah
Visualizations in the form of computer-based learning environments are highly encouraged in science education, especially for teaching spatial material. Some spatial material, such as sectional neuroanatomy, is very challenging to learn. It involves learning the two dimensional (2D) representations that are sampled from the three dimensional (3D) object. In this study, a computer-based learning environment was used to explore the hypothesis that learning sectional neuroanatomy from a graphically integrated 2D and 3D representation will lead to better learning outcomes than learning from a sequential presentation. The integrated representation explicitly demonstrates the 2D-3D transformation and should lead to effective learning. This study was conducted using a computer graphical model of the human brain. There were two learning groups:
Reducing Manpower for a Technologically Advanced Ship
2010-01-27
Reduces watchstations by 84% (119 to 34). The "Autonomic" Fire Suppression System (AFSS) is designed to automatically: (1) isolate damage to firemain piping... [Related systems named in the briefing figures: Integrated Power System (IPS), Advanced VLS, Autonomic Fire Suppression, Hull Form Scale Models, Total Ship Computing Environment (TSCE), Integrated Undersea Warfare (IUSW).]
Error Mitigation of Point-to-Point Communication for Fault-Tolerant Computing
NASA Technical Reports Server (NTRS)
Akamine, Robert L.; Hodson, Robert F.; LaMeres, Brock J.; Ray, Robert E.
2011-01-01
Fault-tolerant systems require the ability to detect and recover from physical damage caused by the hardware's environment, faulty connectors, and system degradation over time. This ability applies to military, space, and industrial computing applications. The integrity of Point-to-Point (P2P) communication, between two microcontrollers for example, is an essential part of fault-tolerant computing systems. In this paper, different methods of fault detection and recovery are presented and analyzed.
Integrated geometry and grid generation system for complex configurations
NASA Technical Reports Server (NTRS)
Akdag, Vedat; Wulf, Armin
1992-01-01
A grid generation system was developed that enables grid generation for complex configurations. The system, called ICEM/CFD, is described and its role in computational fluid dynamics (CFD) applications is presented. The capabilities of the system include full computer-aided design (CAD), grid generation on the actual CAD geometry definition using robust surface projection algorithms, easy interfacing with known CAD packages through common file formats for geometry transfer, grid quality evaluation of the volume grid, coupling of boundary-condition set-up for block faces with grid topology generation, multi-block grid generation with or without point continuity and block-to-block interface requirements, and generation of grid files directly compatible with known flow solvers. This interactive and integrated approach to computational grid generation not only substantially reduces manpower time but also increases the flexibility of later grid modifications and enhancements, which is required in an environment where CFD is integrated into a product design cycle.
Design and Development of ChemInfoCloud: An Integrated Cloud Enabled Platform for Virtual Screening.
Karthikeyan, Muthukumarasamy; Pandit, Deepak; Bhavasar, Arvind; Vyas, Renu
2015-01-01
The power of cloud computing and distributed computing has been harnessed to handle the vast and heterogeneous data required to be processed in any virtual screening protocol. A cloud computing platform, ChemInfoCloud, was built and integrated with several chemoinformatics and bioinformatics tools. The robust engine performs the core chemoinformatics tasks of lead generation, lead optimisation and property prediction in a fast and efficient manner. It has also been provided with some bioinformatics functionalities, including sequence alignment, active-site pose prediction and protein-ligand docking. Text mining, NMR chemical shift (1H, 13C) prediction and reaction fingerprint generation modules for efficient lead discovery are also implemented in this platform. We have developed an integrated problem-solving cloud environment for virtual screening studies that also provides workflow management, better usability and interaction with end users, using container-based virtualization with OpenVZ.
Tilson, Julie K; Loeb, Kathryn; Barbosa, Sabrina; Jiang, Fei; Lee, Karin T
2016-04-01
Physical therapists strive to integrate research into daily practice. The tablet computer is a potentially transformational tool for accessing information within the clinical practice environment. The purpose of this study was to measure and describe patterns of tablet computer use among physical therapy students during clinical rotation experiences. Doctor of physical therapy students (n = 13 users) tracked their use of tablet computers (iPad), loaded with commercially available apps, during 16 clinical experiences (6-16 weeks in duration). The tablets were used on 70% of 691 clinic days, averaging 1.3 uses per day. Information seeking represented 48% of uses; 33% of those were foreground searches for research articles and syntheses, and 66% were for background medical information. Other common uses included patient education (19%), medical record documentation (13%), and professional communication (9%). The most frequently used app was Safari, the preloaded web browser (representing 281 [36.5%] incidents of use). Users accessed 56 total apps to support clinical practice. Physical therapy students successfully integrated use of a tablet computer into their clinical experiences, including regular information-seeking activities. Our findings suggest that the tablet computer represents a potentially transformational tool for promoting knowledge translation in the clinical practice environment. Video Abstract available for more insights from the authors (see Supplemental Digital Content 1, http://links.lww.com/JNPT/A127).
Role of IAC in large space systems thermal analysis
NASA Technical Reports Server (NTRS)
Jones, G. K.; Skladany, J. T.; Young, J. P.
1982-01-01
Computer analysis programs to evaluate critical coupling effects that can significantly influence spacecraft system performance are described. These coupling effects arise from the varied parameters of the spacecraft systems, environments, and forcing functions associated with disciplines such as thermal, structures, and controls. Adverse effects can be expected to significantly impact system design aspects such as structural integrity, controllability, and mission performance. One needed design analysis capability is a software system that can integrate individual discipline computer codes into a highly user-oriented, interactive-graphics-based analysis capability. The integrated analysis capability (IAC) system can be viewed both as a core framework that serves as an integrating base, whereby users can readily add desired analysis modules, and as a self-contained interdisciplinary system analysis capability with a specific set of fully integrated multidisciplinary analysis programs that deal with the coupling of the thermal, structures, controls, antenna radiation performance, and instrument optical performance disciplines.
Development of a change management system
NASA Technical Reports Server (NTRS)
Parks, Cathy Bonifas
1993-01-01
The complexity and interdependence of software on a computer system can create a situation where a solution to one problem causes failures in dependent software. In the computer industry, software problems arise and are often solved with 'quick and dirty' solutions. But in implementing these solutions, documentation about the solution or user notification of changes is often overlooked, and new problems are frequently introduced because of insufficient review or testing. These problems increase when numerous heterogeneous systems are involved. Because of this situation, a change management system plays an integral part in the maintenance of any multisystem computing environment. At the NASA Ames Advanced Computational Facility (ACF), the Online Change Management System (OCMS) was designed and developed to manage the changes being applied to its multivendor computing environment. This paper documents the research, design, and modifications that went into the development of this change management system (CMS).
The HEPiX Virtualisation Working Group: Towards a Grid of Clouds
NASA Astrophysics Data System (ADS)
Cass, Tony
2012-12-01
The use of virtual machine images, as for example with Cloud services such as Amazon's Elastic Compute Cloud, is attractive for users as they have a guaranteed execution environment, something that cannot today be provided across sites participating in computing grids such as the Worldwide LHC Computing Grid. However, Grid sites often operate within computer security frameworks which preclude the use of remotely generated images. The HEPiX Virtualisation Working Group was set up with the objective of enabling the use of remotely generated virtual machine images at Grid sites and, to this end, has introduced the idea of trusted virtual machine images which are guaranteed to be secure and configurable by sites such that security policy commitments can be met. This paper describes the requirements and details of these trusted virtual machine images and presents a model for their use to facilitate the integration of Grid- and Cloud-based computing environments for High Energy Physics.
Mission Critical Computer Resources Management Guide
1988-09-01
As shown in Figure 13-2, showrooms of larger, more capable pieces are developed off-line for later integration and use in multiple systems. [Figure 13-2 residue names support-environment elements such as analyzers, generators, word processors, a workbench, compilers, formal specification libraries, a catalog, trackers, and an Ada showroom system structure.]
ERIC Educational Resources Information Center
Alqallaf, Nadeyah
2016-01-01
The purpose of this study was to examine Kuwaiti mathematical elementary teachers' perceptions about their ability to integrate M-learning (mobile learning) into their current teaching practices and the major barriers hindering teachers' ability to create an M-learning environment. Furthermore, this study sought to understand teachers' perceptions…
An Integrated Way of Using a Tangible User Interface in a Classroom
ERIC Educational Resources Information Center
Cuendet, Sébastien; Dehler-Zufferey, Jessica; Ortoleva, Giulia; Dillenbourg, Pierre
2015-01-01
Despite many years of research in CSCL, computers are still scarcely used in classrooms today. One reason for this is that the constraints of the classroom environment are neglected by designers. In this contribution, we present a CSCL environment designed for a classroom usage from the start. The system, called TapaCarp, is based on a tangible…
Advanced Collaborative Environments Supporting Systems Integration and Design
2003-03-01
These environments allow multiple individuals to concurrently view a virtual system or product model while maintaining natural, human communication. These virtual systems operate within a computer-generated... As a result, TARDEC researchers and system developers are using this advanced high-end visualization technology to develop future
Analytics and Visualization Pipelines for Big Data on the NASA Earth Exchange (NEX) and OpenNEX
NASA Astrophysics Data System (ADS)
Chaudhary, A.; Votava, P.; Nemani, R. R.; Michaelis, A.; Kotfila, C.
2016-12-01
We are developing capabilities for an integrated petabyte-scale Earth science collaborative analysis and visualization environment. The ultimate goal is to deploy this environment within the NASA Earth Exchange (NEX) and OpenNEX in order to enhance existing science data production pipelines in both high-performance computing (HPC) and cloud environments. Bridging of HPC and cloud is a fairly new concept under active research, and this system significantly enhances the ability of the scientific community to accelerate analysis and visualization of Earth science data from NASA missions, model outputs and other sources. We have developed a web-based system that seamlessly interfaces with both HPC and cloud environments, providing tools that enable science teams to develop and deploy large-scale analysis, visualization and QA pipelines of both the production process and the data products, and enable sharing results with the community. Our project is developed in several stages, each addressing a separate challenge: workflow integration, parallel execution in either cloud or HPC environments, and big-data analytics and visualization. This work benefits a number of existing and upcoming projects supported by NEX, such as the Web Enabled Landsat Data (WELD) project, where we are developing a new QA pipeline for the 25 PB system.
U.S. Army Research Laboratory (ARL) multimodal signatures database
NASA Astrophysics Data System (ADS)
Bennett, Kelly
2008-04-01
The U.S. Army Research Laboratory (ARL) Multimodal Signatures Database (MMSDB) is a centralized collection of sensor data of various modalities that are co-located and co-registered. The signatures include ground and air vehicles, personnel, mortar, artillery, small-arms gunfire from potential sniper weapons, explosives, and many other high-value targets. These data are made available to the Department of Defense (DoD) and DoD contractors, intelligence agencies, other government agencies (OGA), and academia for use in developing target detection, tracking, and classification algorithms and systems to protect our Soldiers. A platform-independent Web interface disseminates the signatures to researchers and engineers within the scientific community. Hierarchical Data Format 5 (HDF5) signature models provide an excellent solution for the sharing of complex multimodal signature data for algorithmic development and database requirements. Many open-source tools for viewing and plotting HDF5 signatures are available over the Web. Seamless integration of HDF5 signatures is possible in both proprietary computational environments, such as MATLAB, and Free and Open Source Software (FOSS) computational environments, such as Octave and Python, for performing signal processing, analysis, and algorithm development. Future developments include extending the Web interface into a portal system for accessing ARL algorithms and signatures, High Performance Computing (HPC) resources, and integrating existing database and signature architectures into sensor networking environments.
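Reading such signatures from Python is straightforward with the open-source h5py library, as sketched below. The group and dataset names are invented examples; the actual MMSDB layout is defined by ARL's HDF5 signature models.

```python
# Minimal sketch of reading a multimodal signature from an HDF5 file.
# Paths and attribute names below are hypothetical placeholders.
import h5py

with h5py.File("signature_0001.h5", "r") as f:
    acoustic = f["/sensors/acoustic/waveform"][:]   # loads a NumPy array
    seismic = f["/sensors/seismic/waveform"][:]
    rate = f["/sensors/acoustic"].attrs["sample_rate_hz"]
    print(acoustic.shape, seismic.shape, rate)
```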
Development of a COTS-Based Computing Environment Blueprint Application at KSC
NASA Technical Reports Server (NTRS)
Ghansah, Isaac; Boatright, Bryan
1996-01-01
This paper describes a blueprint that can be used for developing a distributed computing environment (DCE) for NASA in general, and the Kennedy Space Center (KSC) in particular. A comprehensive, open, secure, integrated, and multi-vendor DCE such as OSF DCE is suggested. Design issues, as well as recommendations for each component, are given. Where necessary, modifications are suggested to fit the needs of KSC, particularly in the areas of security and directory services. Readers requiring more comprehensive coverage are encouraged to refer to the eight-chapter document prepared for this work.
Run Environment and Data Management for Earth System Models
NASA Astrophysics Data System (ADS)
Widmann, H.; Lautenschlager, M.; Fast, I.; Legutke, S.
2009-04-01
The Integrating Model and Data Infrastructure (IMDI) developed and maintained by the Model and Data Group (M&D) comprises the Standard Compile Environment (SCE) and the Standard Run Environment (SRE). The IMDI software has a modular design, which allows a suite of model components to be combined and coupled, and the tasks to be executed independently and on various platforms. Furthermore, the modular structure enables extension to new model combinations and new platforms. The SRE presented here enables the configuration and execution of earth system model experiments, from model integration up to storage and visualization of data. We focus on recently implemented tasks such as synchronous database filling, graphical monitoring, and automatic generation of metadata in XML form during run time. We also address the capability to run experiments in heterogeneous IT environments with different computing systems for model integration, data processing and storage. These features are demonstrated for model configurations and on platforms used in current or upcoming projects, e.g., MILLENNIUM or IPCC AR5.
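A minimal sketch of one such run-time task, generating experiment metadata as XML with Python's standard library, is shown below. The element and attribute names are invented for illustration and do not reflect the actual IMDI/SRE metadata schema.

```python
# Sketch of run-time XML metadata generation for a model experiment.
# Every tag and attribute below is a hypothetical placeholder.
import xml.etree.ElementTree as ET

exp = ET.Element("experiment", id="millennium_run_001")
ET.SubElement(exp, "model", name="earth-system-model", version="1.3")
period = ET.SubElement(exp, "period")
period.set("start", "0800-01-01")
period.set("end", "1800-12-31")
ET.SubElement(exp, "output", variable="tos", frequency="monthly")

ET.ElementTree(exp).write("experiment_metadata.xml",
                          xml_declaration=True, encoding="utf-8")
```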
Prototyping an institutional IAIMS/UMLS information environment for an academic medical center.
Miller, P L; Paton, J A; Clyman, J I; Powsner, S M
1992-07-01
The paper describes a prototype information environment designed to link network-based information resources in an integrated fashion and thus enhance the information capabilities of an academic medical center. The prototype was implemented on a single Macintosh computer to permit exploration of the overall "information architecture" and to demonstrate the various desired capabilities prior to full-scale network-based implementation. At the heart of the prototype are two components: a diverse set of information resources available over an institutional computer network and an information sources map designed to assist users in finding and accessing information resources relevant to their needs. The paper describes these and other components of the prototype and presents a scenario illustrating its use. The prototype illustrates the link between the goals of two National Library of Medicine initiatives, the Integrated Academic Information Management System (IAIMS) and the Unified Medical Language System (UMLS).
NASA Technical Reports Server (NTRS)
Jacklin, S. A.; Leyland, J. A.; Warmbrodt, W.
1985-01-01
Modern control systems must typically perform real-time identification and control, as well as coordinate a host of other activities related to user interaction, online graphics, and file management. This paper discusses five global design considerations which are useful to integrate array processor, multimicroprocessor, and host computer system architectures into versatile, high-speed controllers. Such controllers are capable of very high control throughput, and can maintain constant interaction with the nonreal-time or user environment. As an application example, the architecture of a high-speed, closed-loop controller used to actively control helicopter vibration is briefly discussed. Although this system has been designed for use as the controller for real-time rotorcraft dynamics and control studies in a wind tunnel environment, the controller architecture can generally be applied to a wide range of automatic control applications.
Dynamic VM Provisioning for TORQUE in a Cloud Environment
NASA Astrophysics Data System (ADS)
Zhang, S.; Boland, L.; Coddington, P.; Sevior, M.
2014-06-01
Cloud computing, delivered as Infrastructure-as-a-Service (IaaS), is attracting more interest from the commercial and educational sectors as a way to provide cost-effective computational infrastructure. It is an ideal platform for researchers who must share common resources but need to be able to scale up to massive computational requirements for specific periods of time. This paper presents the tools and techniques developed to allow the open-source TORQUE distributed resource manager and Maui cluster scheduler to dynamically integrate OpenStack cloud resources into existing high-throughput computing clusters.
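The core of such dynamic provisioning is a simple control loop: boot cloud workers while jobs queue, release them when nodes fall idle. The Python sketch below shows only that decision logic; queued_jobs(), idle_nodes(), boot_worker(), and delete_worker() are hypothetical stand-ins for the TORQUE/Maui queries and OpenStack calls the real tools implement.

```python
# Toy elastic-provisioning loop; all four callables are assumptions.
import time

MAX_WORKERS = 20
workers = []

def provision_loop(queued_jobs, idle_nodes, boot_worker, delete_worker):
    while True:
        # Scale out: jobs waiting and headroom left -> boot a cloud VM.
        if queued_jobs() > 0 and len(workers) < MAX_WORKERS:
            workers.append(boot_worker())   # e.g. boot VM, register node
        # Scale in: release cloud nodes the scheduler reports as idle.
        for node in idle_nodes():
            if node in workers:
                delete_worker(node)         # drain, deregister, delete VM
                workers.remove(node)
        time.sleep(60)                      # re-evaluate each minute
```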
Enforcing compatibility and constraint conditions and information retrieval at the design action
NASA Technical Reports Server (NTRS)
Woodruff, George W.
1990-01-01
The design of complex entities is a multidisciplinary process involving several interacting groups and disciplines. There is a need to integrate the data in such environments to enhance the collaboration between these groups and to enforce compatibility between dependent data entities. This paper discusses the implementation of a workstation-based CAD system that is integrated with a DBMS and an expert system, CLIPS (both implemented on a minicomputer), to provide such collaborative and compatibility-enforcement capabilities. The current implementation allows for a three-way link between the CAD system, the DBMS, and CLIPS. The engineering design process associated with the design and fabrication of sheet-metal housings for computers in a large computer manufacturing facility provides the basis for this prototype system.
Creating an open environment software infrastructure
NASA Technical Reports Server (NTRS)
Jipping, Michael J.
1992-01-01
As the development of complex computer hardware accelerates at increasing rates, the ability of software to keep pace is essential. The development of software design tools, however, is falling behind the development of hardware for several reasons, the most prominent of which is the lack of a software infrastructure to provide an integrated environment for all parts of a software system. This research was undertaken to provide a basis for addressing this problem by investigating the requirements of open environments.
Aeroelastic Simulation Tool for Inflatable Ballute Aerocapture
NASA Technical Reports Server (NTRS)
Liever, P. A.; Sheta, E. F.; Habchi, S. D.
2006-01-01
A multidisciplinary analysis tool is under development for predicting the impact of aeroelastic effects on the functionality of inflatable ballute aeroassist vehicles in both the continuum and rarefied flow regimes. High-fidelity modules for continuum and rarefied aerodynamics, structural dynamics, heat transfer, and computational grid deformation are coupled in an integrated multi-physics, multi-disciplinary computing environment. This flexible and extensible approach allows the integration of state-of-the-art, stand-alone NASA and industry leading continuum and rarefied flow solvers and structural analysis codes into a computing environment in which the modules can run concurrently with synchronized data transfer. Coupled fluid-structure continuum flow demonstrations were conducted on a clamped ballute configuration. The feasibility of implementing a DSMC flow solver in the simulation framework was demonstrated, and loosely coupled rarefied flow aeroelastic demonstrations were performed. A NASA and industry technology survey identified CFD, DSMC and structural analysis codes capable of modeling non-linear shape and material response of thin-film inflated aeroshells. The simulation technology will find direct and immediate applications with NASA and industry in ongoing aerocapture technology development programs.
NASA Astrophysics Data System (ADS)
Jamroz, Benjamin F.; Klöfkorn, Robert
2016-08-01
The scalability of computational applications on current and next-generation supercomputers is increasingly limited by the cost of inter-process communication. We implement non-blocking asynchronous communication in the High-Order Methods Modeling Environment for the time integration of the hydrostatic fluid equations using both the spectral-element and discontinuous Galerkin methods. This allows the overlap of computation with communication, effectively hiding some of the costs of communication. A novel detail of our approach is that it allows some data movement to be performed during the asynchronous communication even in the absence of other computations. This method produces significant performance and scalability gains in large-scale simulations.
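The overlap pattern itself is generic and can be sketched with mpi4py on a one-dimensional ring: post non-blocking halo exchanges, update the interior points that need no remote data, then complete the boundary. This is an illustration of the technique, not the HOMME implementation.

```python
# Overlapping halo exchange with interior computation via non-blocking
# MPI on a 1-D ring, using a trivial averaging kernel as stand-in work.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size

u = np.random.rand(1000)
halo = np.empty(1)

# 1. Post non-blocking transfers of the boundary value.
reqs = [comm.Isend(u[-1:].copy(), dest=right, tag=0),
        comm.Irecv(halo, source=left, tag=0)]

# 2. Overlap: update interior points that need no remote data.
u_new = u.copy()
u_new[1:-1] = 0.5 * (u[:-2] + u[2:])

# 3. Complete communication, then finish the halo-dependent point.
MPI.Request.Waitall(reqs)
u_new[0] = 0.5 * (halo[0] + u[1])
```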
Diagnostics in the Extendable Integrated Support Environment (EISE)
NASA Technical Reports Server (NTRS)
Brink, James R.; Storey, Paul
1988-01-01
The Extendable Integrated Support Environment (EISE) is a real-time computer network consisting of commercially available hardware and software components to support systems-level integration, modifications, and enhancements to weapons systems. The EISE approach offers substantial potential savings by eliminating unique support environments in favor of sharing common modules for the support of operational weapon systems. An expert system is being developed to help diagnose faults in this network. It is a multi-level, multi-expert diagnostic system that uses experiential knowledge relating symptoms to faults and also reasons from structural and functional models of the underlying physical system when experiential reasoning is inadequate. The individual expert systems are orchestrated by a supervisory reasoning controller, a meta-level reasoner which plans the sequence of reasoning steps to solve the given specific problem. The overall system, termed the Diagnostic Executive, accesses systems-level performance checks and error reports, and issues remote test procedures to formulate and confirm fault hypotheses.
Navigation Ground Data System Engineering for the Cassini/Huygens Mission
NASA Technical Reports Server (NTRS)
Beswick, R. M.; Antreasian, P. G.; Gillam, S. D.; Hahn, Y.; Roth, D. C.; Jones, J. B.
2008-01-01
The launch of the Cassini/Huygens mission on October 15, 1997, began a seven-year journey across the solar system that culminated in the entry of the spacecraft into Saturnian orbit on June 30, 2004. Cassini/Huygens spacecraft navigation is the result of a complex interplay between several teams within the Cassini Project, performed on the Ground Data System. The work of spacecraft navigation involves rigorous requirements for accuracy and completeness, often carried out under uncompromising time pressures. To support the navigation function, a fault-tolerant, high-reliability/high-availability computational environment was necessary for data processing. Configuration Management (CM) was integrated with fault-tolerant design and security engineering, according to the cornerstone principles of Confidentiality, Integrity, and Availability. Integrated with this approach are security benchmarks and validation to meet strict confidence levels. In addition, similar approaches to CM were applied to the staffing and training of the system administration team supporting this effort. As a result, the current configuration of this computational environment incorporates a secure, modular system that provides for almost no downtime during tour operations.
Evaluation and recommendations for work group integration within the Materials and Processes Lab
NASA Technical Reports Server (NTRS)
Farrington, Phillip A.
1992-01-01
The goal of this study was to evaluate and make recommendations for improving the level of integration of several work groups within the Materials and Processes Lab at the Marshall Space Flight Center. This evaluation has uncovered a variety of projects that could improve the efficiency and operation of the work groups as well as the overall integration of the system. In addition, this study provides the foundation for specification of a computer integrated manufacturing test bed environment in the Materials and Processes Lab.
Directional templates for real-time detection of coronal axis rotated faces
NASA Astrophysics Data System (ADS)
Perez, Claudio A.; Estevez, Pablo A.; Garate, Patricio
2004-10-01
Real-time face and iris detection on video images has gained renewed attention because of multiple possible applications in studying eye function, drowsiness detection, virtual keyboard interfaces, face recognition, video processing, and multimedia retrieval. In this paper, a study is presented on using directional templates in the detection of faces rotated about the coronal axis. The templates are built by extracting the directional image information from the regions of the eyes, nose, and mouth. The face position is determined by computing a line integral using the templates over the face directional image. The line integral reaches a maximum when it coincides with the face position. An improvement in localization selectivity is shown by the increased value of the line integral computed with the directional template. Improvements in the line-integral value across face sizes and face rotation angles were also found. Based on these results, the new templates should improve selectivity and hence provide the means to restrict computation to fewer templates and to restrict the region of search during the face- and eye-tracking procedure. The proposed method runs in real time, is completely non-invasive, and was applied with no background limitation under normal indoor illumination conditions.
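A schematic NumPy version of the scoring step may help: the line integral becomes a discrete sum of agreement between the image's local gradient directions and the template's expected directions at a candidate position. The template array here is arbitrary; real templates are extracted from the eye, nose, and mouth regions of face images, and the detailed scoring used in the paper may differ.

```python
# Schematic directional-template scoring: mean cosine agreement
# between image gradient orientations and template orientations.
import numpy as np

def directional_image(gray: np.ndarray) -> np.ndarray:
    """Per-pixel gradient orientation (radians) of a grayscale image."""
    gy, gx = np.gradient(gray.astype(float))
    return np.arctan2(gy, gx)

def template_score(theta: np.ndarray, template: np.ndarray,
                   top: int, left: int) -> float:
    """Discretised line-integral score at one candidate position; the
    factor 2 removes the 180-degree ambiguity of gradient directions.
    Maximal when the template coincides with the face."""
    h, w = template.shape
    patch = theta[top:top + h, left:left + w]
    return float(np.mean(np.cos(2 * (patch - template))))

# Example: scan candidate positions and keep the best-scoring one.
# best = max(((r, c) for r in range(0, 100) for c in range(0, 100)),
#            key=lambda rc: template_score(theta, template, *rc))
```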
Lewinski, Allison A; Fisher, Edwin B
2016-06-01
Interventions via the internet provide support to individuals managing chronic illness. The purpose of this integrative review was to determine how the features of a computer-mediated environment influence social interactions among individuals with type 2 diabetes (T2D). A combination of MeSH and keyword terms, based on the cognates of three broad groupings: social interaction, computer-mediated environments, and chronic illness, was used to search the PubMed, PsycINFO, Sociology Research Database, and Cumulative Index to Nursing and Allied Health Literature databases. Eleven articles met the inclusion criteria. Computer-mediated environments enhance an individual's ability to interact with peers while increasing the convenience of obtaining personalized support. A matrix, focused on social interaction among peers, identified themes across all articles, and five characteristics emerged: (1) the presence of synchronous and asynchronous communication, (2) the ability to connect with similar peers, (3) the presence or absence of a moderator, (4) personalization of feedback regarding individual progress and self-management, and (5) the ability of individuals to maintain choice during participation. Individuals interact with peers to obtain relevant, situation-specific information and knowledge about managing their own care. Computer-mediated environments facilitate the ability of individuals to exchange this information despite temporal or geographical barriers that may be present, thus improving T2D self-management.
NASA Technical Reports Server (NTRS)
Shiva, S. G.
1978-01-01
Several high-level languages that have evolved over the past few years for describing and simulating the structure and behavior of digital systems on digital computers are assessed. The characteristics of the four prominent languages (CDL, DDL, AHPL, ISP) are summarized. A criterion for selecting a suitable hardware description language for use in an automatic integrated circuit design environment is provided.
Lord, Louis-David; Stevner, Angus B.; Kringelbach, Morten L.
2017-01-01
To survive in an ever-changing environment, the brain must seamlessly integrate a rich stream of incoming information into coherent internal representations that can then be used to efficiently plan for action. The brain must, however, balance its ability to integrate information from various sources with a complementary capacity to segregate information into modules which perform specialized computations in local circuits. Importantly, evidence suggests that imbalances in the brain's ability to bind together and/or segregate information over both space and time are a common feature of several neuropsychiatric disorders. Until recently, however, most studies have attempted to characterize the principles of integration and segregation only in static (i.e. time-invariant) representations of human brain networks, hence disregarding the complex spatio-temporal nature of these processes. In the present Review, we describe how the emerging discipline of whole-brain computational connectomics may be used to study the causal mechanisms of the integration and segregation of information on behaviourally relevant timescales. We emphasize how novel methods from network science and whole-brain computational modelling can expand beyond traditional neuroimaging paradigms and help to uncover the neurobiological determinants of the abnormal integration and segregation of information in neuropsychiatric disorders. This article is part of the themed issue ‘Mathematical methods in medicine: neuroscience, cardiology and pathology’. PMID:28507228
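For readers unfamiliar with how integration and segregation are quantified in practice, the following is a minimal sketch using two common graph measures, assuming NetworkX and a toy random network in place of an empirical connectome; the Review itself concerns richer, time-resolved models.

```python
# Hedged sketch: two standard graph metrics often used as proxies for
# integration (global efficiency) and segregation (modularity) in
# network neuroscience. The random graph is a stand-in for a subject's
# functional connectome, not the Review's method.
import networkx as nx
from networkx.algorithms import community

G = nx.watts_strogatz_graph(n=90, k=6, p=0.1, seed=1)  # toy 90-region network

integration = nx.global_efficiency(G)               # short paths everywhere
parts = community.greedy_modularity_communities(G)  # data-driven modules
segregation = community.modularity(G, parts)        # strength of the modules

print(f"integration (global efficiency): {integration:.3f}")
print(f"segregation (modularity, {len(parts)} modules): {segregation:.3f}")
```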
Strategic Implications of Cloud Computing for Modeling and Simulation (Briefing)
2016-04-01
of Promises with Cloud • Cost efficiency • Unlimited storage • Backup and recovery • Automatic software integration • Easy access to information...activities that wrap the actual exercise itself (e.g., travel for exercise support, data collection, integration, etc.). Cloud-based simulation would...requiring quick delivery rather than fewer large messages requiring high bandwidth. Cloud environments tend to be better at providing high-bandwidth
Globus | Informatics Technology for Cancer Research (ITCR)
Globus software services provide secure cancer research data transfer, synchronization, and sharing in distributed environments at large scale. These services can be integrated into applications and research data gateways, leveraging Globus identity management, single sign-on, search, and authorization capabilities. Globus Genomics integrates Globus with the Galaxy genomics workflow engine and Amazon Web Services to enable cancer genomics analysis that can elastically scale compute resources with demand.
NASA Astrophysics Data System (ADS)
Masson, Steve; Vázquez-Abad, Jesús
2006-10-01
This paper proposes a new way to integrate history of science in science education to promote conceptual change by introducing the notion of historical microworld, which is a computer-based interactive learning environment respecting historic conceptions. In this definition, "interactive" means that the user can act upon the virtual environment by changing some parameters to see what ensues. "Environment respecting historic conceptions" means that the "world" has been programmed to respect the conceptions of past scientists or philosophers. Three historical microworlds in the field of mechanics are presented in this article: an Aristotelian microworld respecting Aristotle's conceptions about movement, a Buridanian microworld respecting the theory of impetus and, finally, a Newtonian microworld respecting Galileo's conceptions and Newton's laws of movement.
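A historical microworld differs from a conventional simulation only in the physics it encodes. The sketch below, a deliberately simplified illustration and not the authors' software, contrasts an Aristotelian update rule (velocity proportional to applied force) with a Newtonian one (force produces acceleration).

```python
# Hedged sketch of how a "historical microworld" might encode different
# physics. The update rules and constants are illustrative assumptions.
def step_aristotelian(x, v, force, k=1.0, dt=0.1):
    v = force / k          # motion requires a mover: v tracks the force
    return x + v * dt, v

def step_newtonian(x, v, force, m=1.0, dt=0.1):
    v = v + (force / m) * dt   # F = ma: force changes velocity
    return x + v * dt, v

# Push for 1 s, then release: the Aristotelian body halts, the Newtonian coasts.
for step_fn, name in [(step_aristotelian, "Aristotle"), (step_newtonian, "Newton")]:
    x, v = 0.0, 0.0
    for t in range(20):
        f = 2.0 if t < 10 else 0.0
        x, v = step_fn(x, v, f)
    print(f"{name}: final x = {x:.2f}, final v = {v:.2f}")
```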
Probability theory, not the very guide of life.
Juslin, Peter; Nilsson, Håkan; Winman, Anders
2009-10-01
Probability theory has long been taken as the self-evident norm against which to evaluate inductive reasoning, and classical demonstrations of violations of this norm include the conjunction error and base-rate neglect. Many of these phenomena require multiplicative probability integration, whereas people seem more inclined to linear additive integration, in part, at least, because of well-known capacity constraints on controlled thought. In this article, the authors show with computer simulations that when based on approximate knowledge of probabilities, as is routinely the case in natural environments, linear additive integration can yield as accurate estimates, and as good average decision returns, as estimates based on probability theory. It is proposed that in natural environments people have little opportunity or incentive to induce the normative rules of probability theory and, given their cognitive constraints, linear additive integration may often offer superior bounded rationality.
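The simulation logic can be conveyed in a few lines. The following hedged sketch, with illustrative noise levels and additive weights rather than the authors' parameters, compares the normative multiplicative rule with a linear additive rule for estimating conjunctions from noisy component probabilities.

```python
# Hedged sketch of the article's core simulation idea: with approximate
# knowledge of component probabilities, a linear additive rule can be
# about as accurate as the normative multiplicative rule.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
p1, p2 = rng.uniform(size=n), rng.uniform(size=n)      # true components
true_conj = p1 * p2                                    # normative conjunction

noise = 0.15                                           # approximate knowledge
p1_hat = np.clip(p1 + rng.normal(0, noise, n), 0, 1)
p2_hat = np.clip(p2 + rng.normal(0, noise, n), 0, 1)

mult = p1_hat * p2_hat                                 # probability theory
additive = -0.25 + 0.5 * (p1_hat + p2_hat)             # a simple linear rule

for name, est in [("multiplicative", mult), ("linear additive", additive)]:
    rmse = np.sqrt(np.mean((est - true_conj) ** 2))
    print(f"{name:>15}: RMSE = {rmse:.3f}")
```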
Parikh, Priti P; Minning, Todd A; Nguyen, Vinh; Lalithsena, Sarasi; Asiaee, Amir H; Sahoo, Satya S; Doshi, Prashant; Tarleton, Rick; Sheth, Amit P
2012-01-01
Research on the biology of parasites requires a sophisticated and integrated computational platform to query and analyze large volumes of data, representing both unpublished (internal) and public (external) data sources. Effective analysis of an integrated data resource using knowledge discovery tools would significantly aid biologists in conducting their research, for example, in identifying various intervention targets in parasites and in deciding the future direction of ongoing as well as planned projects. A key challenge in achieving this objective is the heterogeneity between the internal lab data, usually stored as flat files, Excel spreadsheets or custom-built databases, and the external databases. Reconciling the different forms of heterogeneity and effectively integrating data from disparate sources is a nontrivial task for biologists and requires a dedicated informatics infrastructure. Thus, we developed an integrated environment using Semantic Web technologies that may provide biologists the tools for managing and analyzing their data without needing to acquire in-depth computer science knowledge. We developed a semantic problem-solving environment (SPSE) that uses ontologies to integrate internal lab data with external resources in a Parasite Knowledge Base (PKB), which can be queried across these resources in a unified manner. The SPSE includes Web Ontology Language (OWL)-based ontologies, experimental data with its provenance information represented using the Resource Description Framework (RDF), and a visual querying tool, Cuebee, that features integrated use of Web services. We demonstrate the use and benefit of SPSE using example queries for identifying gene knockout targets of Trypanosoma cruzi for vaccine development. Answers to these queries involve looking up multiple sources of data, linking them together and presenting the results. The SPSE helps parasitologists leverage the growing, but disparate, parasite data resources by offering an integrative platform that utilizes Semantic Web techniques while keeping the added workload minimal.
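To illustrate the flavor of such ontology-backed queries, here is a minimal sketch using rdflib; the namespace, predicates, and triples are invented for the example, whereas the actual SPSE queries OWL ontologies and RDF provenance in the PKB through Cuebee.

```python
# Hedged sketch of an RDF/SPARQL query over lab data with provenance.
# All terms below are illustrative assumptions, not the authors' schema.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/parasite#")
g = Graph()
g.add((EX.gene42, EX.expressedInStage, EX.amastigote))
g.add((EX.gene42, EX.hasOrtholog, EX.gene7))
g.add((EX.gene42, EX.derivedFrom, Literal("microarray_run_12")))  # provenance
g.add((EX.gene99, EX.expressedInStage, EX.epimastigote))

# Find candidate genes expressed in the amastigote stage, together with
# the experiment each assertion was derived from.
q = """
PREFIX ex: <http://example.org/parasite#>
SELECT ?gene ?source WHERE {
    ?gene ex:expressedInStage ex:amastigote .
    OPTIONAL { ?gene ex:derivedFrom ?source . }
}
"""
for gene, source in g.query(q):
    print(gene, source)
```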
Digital video applications in radiologic education: theory, technique, and applications.
Hennessey, J G; Fishman, E K; Ney, D R
1994-05-01
Computer-assisted instruction (CAI) has great potential in medical education. The recent explosion of multimedia platforms provides an environment for the seamless integration of text, images, and sound into a single program. This article discusses the role of digital video in the current educational environment as well as its future potential. An in-depth review of the technical decisions involved in this new technology is also presented.
Tavaxy: integrating Taverna and Galaxy workflows with cloud computing support.
Abouelhoda, Mohamed; Issa, Shadi Alaa; Ghanem, Moustafa
2012-05-04
Over the past decade the workflow system paradigm has evolved as an efficient and user-friendly approach for developing complex bioinformatics applications. Two popular workflow systems that have gained acceptance by the bioinformatics community are Taverna and Galaxy. Each system has a large user base and supports an ever-growing repository of application workflows. However, workflows developed for one system cannot be imported and executed easily on the other. The lack of interoperability is due to differences in the models of computation, workflow languages, and architectures of both systems, and it limits sharing of workflows between the user communities and leads to duplication of development efforts. In this paper, we present Tavaxy, a stand-alone system for creating and executing workflows based on an extensible set of re-usable workflow patterns. Tavaxy offers a set of new features that simplify and enhance the development of sequence analysis applications: it allows the integration of existing Taverna and Galaxy workflows in a single environment, and it supports the use of cloud computing capabilities. The integration of existing Taverna and Galaxy workflows is supported seamlessly at both run-time and design-time levels, based on the concepts of hierarchical workflows and workflow patterns. The use of cloud computing in Tavaxy is flexible: users can either instantiate the whole system on the cloud or delegate the execution of certain sub-workflows to the cloud infrastructure. Tavaxy reduces the workflow development cycle by introducing workflow patterns to simplify workflow creation. It enables the re-use and integration of existing (sub-)workflows from Taverna and Galaxy, and allows the creation of hybrid workflows. Its additional features exploit recent advances in high-performance cloud computing to cope with the increasing data size and complexity of analysis. The system can be accessed either through a cloud-enabled web interface or downloaded and installed to run within the user's local environment. All resources related to Tavaxy are available at http://www.tavaxy.org.
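The workflow-pattern idea can be illustrated compactly. The sketch below shows pattern composition and hierarchical nesting in general, with invented class names and toy tasks; it is not Tavaxy's pattern language.

```python
# Hedged sketch of "workflow patterns": workflows are built by composing
# a small set of re-usable patterns, and an imported (e.g., Taverna or
# Galaxy) workflow can be nested as a single node of a larger workflow.
class Task:
    def __init__(self, fn):
        self.fn = fn
    def run(self, data):
        return self.fn(data)

class Sequence:                      # pattern: pipeline of steps
    def __init__(self, *steps):
        self.steps = steps
    def run(self, data):
        for s in self.steps:
            data = s.run(data)
        return data

class ParallelSplit:                 # pattern: fan out, collect results
    def __init__(self, *branches):
        self.branches = branches
    def run(self, data):
        return [b.run(data) for b in self.branches]

# A nested Sequence acts like an imported sub-workflow (hierarchy).
clean = Task(str.strip)
upper = Task(str.upper)
lower = Task(str.lower)
sub_workflow = Sequence(clean, ParallelSplit(upper, lower))
print(Sequence(sub_workflow).run("  acgt  "))   # ['ACGT', 'acgt']
```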
Multidisciplinary analysis and design of printed wiring boards
NASA Astrophysics Data System (ADS)
Fulton, Robert E.; Hughes, Joseph L.; Scott, Waymond R., Jr.; Umeagukwu, Charles; Yeh, Chao-Pin
1991-04-01
Modern printed wiring board (PWB) design depends on electronic prototyping using computer-based simulation and design tools. Existing electrical computer-aided design (ECAD) tools emphasize circuit connectivity with only rudimentary analysis capabilities. This paper describes a prototype integrated PWB design environment, denoted Thermal Structural Electromagnetic Testability (TSET), being developed at Georgia Tech in collaboration with companies in the electronics industry. TSET provides design guidance based on enhanced electrical and mechanical CAD capabilities, including electromagnetic modeling, testability analysis, thermal management, and solid mechanics analysis. TSET development builds on a strong analytical and theoretical science base and incorporates an integrated information framework and a common database design founded on a systematic structured methodology.
Proceedings of the 3rd Annual Conference on Aerospace Computational Control, volume 2
NASA Technical Reports Server (NTRS)
Bernard, Douglas E. (Editor); Man, Guy K. (Editor)
1989-01-01
This volume of the conference proceedings contains papers and discussions in the following topical areas: parallel processing; emerging integrated capabilities; low-order controllers; real-time simulation; multibody component representation; user environment; and distributed parameter techniques.
Enantiomers of chiral molecules commonly exhibit differing pharmacokinetics and toxicities, which can introduce significant uncertainty when evaluating biological and environmental fates and potential risks to humans and the environment. However, racemization (the irreversible tr...
Conway, J; Sharkey, R
2002-10-01
The Faculty of Nursing, University of Newcastle, Australia, has been keen to initiate strategies that enhance student learning and nursing practice. Two such strategies are problem-based learning (PBL) and clinical practice. The Faculty has maintained a comparatively high proportion of the undergraduate hours in the clinical setting at a time when financial constraints suggest that simulations and on-campus laboratory experiences may be less expensive. Increasingly, computer-based technologies are becoming sufficiently refined to support the exploration of nursing practice in a non-traditional lecture/tutorial environment. In 1998, a group of faculty members proposed that computer-mediated instruction would provide an opportunity for partnership between students, academics and clinicians that would promote more positive outcomes for all and maintain the integrity of the PBL approach. This paper discusses the similarities between problem-based and practice-based learning and presents the findings of an evaluative study of the implementation of a practice-based learning model that uses computer-mediated communication to promote integration of practice experiences with the broader goals of the undergraduate curriculum.
Architecture for hospital information integration
NASA Astrophysics Data System (ADS)
Chimiak, William J.; Janariz, Daniel L.; Martinez, Ralph
1999-07-01
The integration of hospital information systems (HIS) is ongoing. Data storage systems, data networks and computers improve, databases grow and health-care applications increase. Some computer operating systems continue to evolve and some fade. Health-care delivery now depends on this computer-assisted environment. As a result, the critical harmonization of the various hospital information systems becomes increasingly difficult. The purpose of this paper is to present an architecture for HIS integration that is computer-language-neutral and computer-hardware-neutral for the informatics applications. The proposed architecture builds upon the work done at the University of Arizona on middleware, the work of the National Electrical Manufacturers Association, and the American College of Radiology. It is a fresh approach that allows applications engineers to access medical data easily and thus concentrate on the application techniques in which they are expert, without struggling with medical information syntaxes. The HIS can be modeled using a hierarchy of information sub-systems, thus facilitating its understanding. The architecture includes the resulting information model along with a strict but intuitive application programming interface, managed by CORBA. The CORBA requirement facilitates interoperability and should also reduce software and hardware development times.
Interactive design and analysis of future large spacecraft concepts
NASA Technical Reports Server (NTRS)
Garrett, L. B.
1981-01-01
An interactive computer-aided design program used to perform systems-level design and analysis of large spacecraft concepts is presented. Emphasis is on rapid design, analysis of integrated spacecraft, and automatic spacecraft modeling for lattice structures. Capabilities and performance of the multidiscipline applications modules, the executive and data-management software, and the graphics display features are reviewed. A single user at an interactive terminal can create, design, analyze, and conduct parametric studies of Earth-orbiting spacecraft with relative ease. Data generated in the design, analysis, and performance evaluation of an Earth-orbiting large-diameter antenna satellite are used to illustrate current capabilities. Computer run-time statistics for the individual modules quantify the speed at which modeling, analysis, and design evaluation of integrated spacecraft concepts are accomplished in a user-interactive computing environment.
Loeb, Kathryn; Barbosa, Sabrina; Jiang, Fei; Lee, Karin T.
2016-01-01
Background and Purpose: Physical therapists strive to integrate research into daily practice. The tablet computer is a potentially transformational tool for accessing information within the clinical practice environment. The purpose of this study was to measure and describe patterns of tablet computer use among physical therapy students during clinical rotation experiences. Methods: Doctor of physical therapy students (n = 13 users) tracked their use of tablet computers (iPad), loaded with commercially available apps, during 16 clinical experiences (6-16 weeks in duration). Results: The tablets were used on 70% of 691 clinic days, averaging 1.3 uses per day. Information seeking represented 48% of uses; 33% of those were foreground searches for research articles and syntheses and 66% were for background medical information. Other common uses included patient education (19%), medical record documentation (13%), and professional communication (9%). The most frequently used app was Safari, the preloaded web browser (representing 281 [36.5%] incidents of use). Users accessed 56 total apps to support clinical practice. Discussion and Conclusions: Physical therapy students successfully integrated use of a tablet computer into their clinical experiences including regular activities of information seeking. Our findings suggest that the tablet computer represents a potentially transformational tool for promoting knowledge translation in the clinical practice environment. Video Abstract available for more insights from the authors (see Supplemental Digital Content 1, http://links.lww.com/JNPT/A127). PMID:26945431
Beyond the Renderer: Software Architecture for Parallel Graphics and Visualization
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.
1996-01-01
As numerous implementations have demonstrated, software-based parallel rendering is an effective way to obtain the needed computational power for a variety of challenging applications in computer graphics and scientific visualization. To fully realize their potential, however, parallel renderers need to be integrated into a complete environment for generating, manipulating, and delivering visual data. We examine the structure and components of such an environment, including the programming and user interfaces, rendering engines, and image delivery systems. We consider some of the constraints imposed by real-world applications and discuss the problems and issues involved in bringing parallel rendering out of the lab and into production.
Wynden, Rob; Anderson, Nick; Casale, Marco; Lakshminarayanan, Prakash; Anderson, Kent; Prosser, Justin; Errecart, Larry; Livshits, Alice; Thimman, Tim; Weiner, Mark
2011-01-01
Within the CTSA (Clinical Translational Sciences Awards) program, academic medical centers are tasked with the storage of clinical formulary data within an Integrated Data Repository (IDR) and the subsequent exposure of that data over grid computing environments for hypothesis generation and cohort selection. Formulary data collected over long periods of time across multiple institutions requires normalization of terms before those data sets can be aggregated and compared. This paper sets forth a solution to the challenge of generating derived aggregated normalized views from large, distributed data sets of clinical formulary data intended for re-use within clinical translational research.
Integrating autonomous distributed control into a human-centric C4ISR environment
NASA Astrophysics Data System (ADS)
Straub, Jeremy
2017-05-01
This paper considers incorporating autonomy into human-centric Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance (C4ISR) environments. Specifically, it focuses on identifying ways that current autonomy technologies can augment human control and on the challenges presented by additive autonomy. Three approaches to this challenge are considered, stemming from prior work in two converging areas. In the first, automation augments what humans currently do. In the second, humans are treated as actors within a cyber-physical system-of-systems, an approach stemming from robotic distributed computing. A third approach combines elements of both.
The role of graphics super-workstations in a supercomputing environment
NASA Technical Reports Server (NTRS)
Levin, E.
1989-01-01
A new class of very powerful workstations has recently become available which integrate near supercomputer computational performance with very powerful and high quality graphics capability. These graphics super-workstations are expected to play an increasingly important role in providing an enhanced environment for supercomputer users. Their potential uses include: off-loading the supercomputer (by serving as stand-alone processors, by post-processing of the output of supercomputer calculations, and by distributed or shared processing), scientific visualization (understanding of results, communication of results), and by real time interaction with the supercomputer (to steer an iterative computation, to abort a bad run, or to explore and develop new algorithms).
PathCase-SB architecture and database design
2011-01-01
Background Integrating metabolic pathway resources and regulatory metabolic network models, and deploying new tools on the integrated platform, can support more effective and more efficient systems biology research on the regulation of metabolic networks. The tasks of (a) integrating regulatory metabolic networks and existing models under a single database environment and (b) building tools to help with modeling and analysis are therefore desirable and intellectually challenging computational tasks. Description PathCase Systems Biology (PathCase-SB) has been built and released. The PathCase-SB database provides data and an API for multiple user interfaces and software tools. The current PathCase-SB system provides a database-enabled framework and web-based computational tools for facilitating the development of kinetic models of biological systems. PathCase-SB aims to integrate data from selected biological data sources on the web (currently, the BioModels database and KEGG) and to provide more powerful and/or new capabilities via the new web-based integrative framework. This paper describes architecture and database design issues encountered in PathCase-SB's design and implementation, and presents the current design of PathCase-SB's architecture and database. Conclusions The PathCase-SB architecture and database provide a highly extensible and scalable environment with easy and fast (real-time) access to the data in the database. PathCase-SB itself is already being used by researchers across the world. PMID:22070889
Computer simulation of a single pilot flying a modern high-performance helicopter
NASA Technical Reports Server (NTRS)
Zipf, Mark E.; Vogt, William G.; Mickle, Marlin H.; Hoelzeman, Ronald G.; Kai, Fei; Mihaloew, James R.
1988-01-01
Presented is a computer simulation of a human response pilot model able to execute operational flight maneuvers and vehicle stabilization of a modern high-performance helicopter. Low-order, single-variable, human response mechanisms, integrated to form a multivariable pilot structure, provide a comprehensive operational control over the vehicle. Evaluations of the integrated pilot were performed by direct insertion into a nonlinear, total-force simulation environment provided by NASA Lewis. Comparisons between the integrated pilot structure and single-variable pilot mechanisms are presented. Static and dynamically alterable configurations of the pilot structure are introduced to simulate pilot activities during vehicle maneuvers. These configurations, in conjunction with higher level, decision-making processes, are considered for use where guidance and navigational procedures, operational mode transfers, and resource sharing are required.
Accurate computation of gravitational field of a tesseroid
NASA Astrophysics Data System (ADS)
Fukushima, Toshio
2018-02-01
We developed an accurate method to compute the gravitational field of a tesseroid. The method numerically integrates a surface integral representation of the gravitational potential of the tesseroid by conditionally splitting its line integration intervals and by using the double exponential quadrature rule. It then evaluates the gravitational acceleration vector and the gravity gradient tensor by numerically differentiating the numerically integrated potential. The numerical differentiation is conducted by appropriately switching between the central and the single-sided second-order difference formulas, with a suitable choice of the test argument displacement. If necessary, the new method extends to the case of a general tesseroid with a variable density profile, variable surface height functions, and/or variable intervals in longitude or in latitude. The new method is capable of computing the gravitational field of the tesseroid independently of the location of the evaluation point, namely whether outside, near the surface of, on the surface of, or inside the tesseroid. The achievable precision is 14-15 digits for the potential, 9-11 digits for the acceleration vector, and 6-8 digits for the gradient tensor in the double precision environment. The correct digits are roughly doubled when quadruple precision computation is employed. The new method provides a reliable procedure to compute the topographic gravitational field, especially near, on, and below the surface. It could also serve as a trusted reference to complement and refine the existing approaches using the Gauss-Legendre quadrature or other standard methods of numerical integration.
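The differentiation step can be illustrated with a short sketch. Below, a closed-form point-mass potential stands in for the quadrature-evaluated tesseroid potential, and the step size is an illustrative choice rather than the paper's tuned displacement.

```python
# Hedged sketch of obtaining the acceleration by numerically
# differentiating a (numerically integrated) potential, switching between
# central and one-sided second-order difference formulas near a boundary.
import numpy as np

def potential(z, gm=1.0):
    """Stand-in for the quadrature-evaluated tesseroid potential V(z)."""
    return gm / abs(z)

def accel_z(z, h=1e-4, boundary=None):
    """dV/dz via second-order differences.

    Use the central formula when both neighbours are safe to evaluate;
    fall back to the one-sided formula near a boundary (e.g., the
    tesseroid surface, where the field is less smooth).
    """
    if boundary is None or abs(z - boundary) > h:
        return (potential(z + h) - potential(z - h)) / (2 * h)      # central
    return (-3 * potential(z) + 4 * potential(z + h)
            - potential(z + 2 * h)) / (2 * h)                       # one-sided

z = 2.0
exact = -1.0 / z**2
print(f"central   : {accel_z(z):+.12f}  (exact {exact:+.12f})")
print(f"one-sided : {accel_z(z, boundary=2.0):+.12f}")
```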
Stochastic Simulation Service: Bridging the Gap between the Computational Expert and the Biologist
Banerjee, Debjani; Bellesia, Giovanni; Daigle, Bernie J.; Douglas, Geoffrey; Gu, Mengyuan; Gupta, Anand; Hellander, Stefan; Horuk, Chris; Nath, Dibyendu; Takkar, Aviral; Lötstedt, Per; Petzold, Linda R.
2016-01-01
We present StochSS: Stochastic Simulation as a Service, an integrated development environment for modeling and simulation of both deterministic and discrete stochastic biochemical systems in up to three dimensions. An easy to use graphical user interface enables researchers to quickly develop and simulate a biological model on a desktop or laptop, which can then be expanded to incorporate increasing levels of complexity. StochSS features state-of-the-art simulation engines. As the demand for computational power increases, StochSS can seamlessly scale computing resources in the cloud. In addition, StochSS can be deployed as a multi-user software environment where collaborators share computational resources and exchange models via a public model repository. We demonstrate the capabilities and ease of use of StochSS with an example of model development and simulation at increasing levels of complexity. PMID:27930676
Stochastic Simulation Service: Bridging the Gap between the Computational Expert and the Biologist
Drawert, Brian; Hellander, Andreas; Bales, Ben; ...
2016-12-08
We present StochSS: Stochastic Simulation as a Service, an integrated development environment for modeling and simulation of both deterministic and discrete stochastic biochemical systems in up to three dimensions. An easy to use graphical user interface enables researchers to quickly develop and simulate a biological model on a desktop or laptop, which can then be expanded to incorporate increasing levels of complexity. StochSS features state-of-the-art simulation engines. As the demand for computational power increases, StochSS can seamlessly scale computing resources in the cloud. In addition, StochSS can be deployed as a multi-user software environment where collaborators share computational resources and exchange models via a public model repository. We also demonstrate the capabilities and ease of use of StochSS with an example of model development and simulation at increasing levels of complexity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jamroz, Benjamin F.; Klofkorn, Robert
2016-08-26
The scalability of computational applications on current and next-generation supercomputers is increasingly limited by the cost of inter-process communication. We implement non-blocking asynchronous communication in the High-Order Methods Modeling Environment for the time integration of the hydrostatic fluid equations using both the spectral-element and discontinuous Galerkin methods. This allows computation to overlap communication, effectively hiding some of the communication costs. A novel aspect of our approach is that it allows some data movement to be performed during the asynchronous communication even in the absence of other computations. This method produces significant performance and scalability gains in large-scale simulations.
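The overlap technique itself is independent of the application. Below is a hedged mpi4py sketch of the general pattern (post non-blocking sends and receives, compute on interior data, then wait); the paper's implementation lives in the HOMME code, not in Python.

```python
# Hedged sketch of overlapping computation with non-blocking halo
# exchange. Run under mpiexec with exactly 2 ranks; buffer sizes and the
# "interior work" are illustrative.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
peer = 1 - rank                      # assumes exactly 2 ranks

halo_out = np.full(1000, float(rank))
halo_in = np.empty(1000)
interior = np.random.default_rng(rank).random(500_000)

# 1. Post the exchange first so the network can progress in the background.
reqs = [comm.Isend([halo_out, MPI.DOUBLE], dest=peer, tag=0),
        comm.Irecv([halo_in, MPI.DOUBLE], source=peer, tag=0)]

# 2. Do work that does not need the halo while messages are in flight
#    (this is the computation/communication overlap).
interior_sum = np.sin(interior).sum()

# 3. Only now wait, then use the received data.
MPI.Request.Waitall(reqs)
print(f"rank {rank}: interior={interior_sum:.3f}, halo mean={halo_in.mean():.1f}")
```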
Provenance based data integrity checking and verification in cloud environments.
Imran, Muhammad; Hlavacs, Helmut; Haq, Inam Ul; Jan, Bilal; Khan, Fakhri Alam; Ahmad, Awais
2017-01-01
Cloud computing is a recent trend in IT that moves computing and data away from desktop and hand-held devices into large-scale processing hubs and data centers, respectively. It has been proposed as an effective solution for data outsourcing and on-demand computing to control the rising cost of IT setups and management in enterprises. However, with Cloud platforms users' data is moved into remotely located storage, such that users lose control over their data. This unique feature of the Cloud raises many security and privacy challenges which need to be clearly understood and resolved. One of the important concerns is providing proof of data integrity, i.e., correctness of the user's data stored in the Cloud storage. The data in Clouds is physically not accessible to the users. Therefore, a mechanism is required through which users can check whether the integrity of their valuable data is maintained or compromised. For this purpose some methods have been proposed, such as mirroring, checksumming and the use of third-party auditors, among others. However, these methods use extra storage space by maintaining multiple copies of data, or they require the presence of a third-party verifier. In this paper, we address the problem of proving data integrity in Cloud computing by proposing a scheme through which users are able to check the integrity of their data stored in Clouds. In addition, users can track any violation of data integrity that has occurred. For this purpose, we utilize a relatively new concept in Cloud computing called "Data Provenance". Our scheme reduces the need for third-party services, additional hardware support, and the replication of data items on the client side for integrity checking. PMID:28545151
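As a generic illustration of provenance-based integrity checking (not the authors' exact scheme), the following sketch chains a hash over each provenance record so that tampering with either the data or its recorded history is detectable.

```python
# Hedged sketch: each operation on a stored object appends a provenance
# record whose hash covers the previous record's hash, so later tampering
# breaks the chain. A generic illustration, not the paper's scheme.
import hashlib, json

def record(prev_hash, op, data):
    """Append-only provenance record; its hash covers the previous hash."""
    entry = {"prev": prev_hash, "op": op,
             "data_digest": hashlib.sha256(data).hexdigest()}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    return entry

def verify(chain, data):
    """Recompute the chain; report where integrity breaks, if anywhere."""
    prev = "GENESIS"
    for i, e in enumerate(chain):
        body = {k: v for k, v in e.items() if k != "hash"}
        good = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != good:
            return f"integrity violation at record {i}"
        prev = e["hash"]
    if chain and chain[-1]["data_digest"] != hashlib.sha256(data).hexdigest():
        return "stored data no longer matches its recorded provenance"
    return "intact"

r1 = record("GENESIS", "create", b"cloud object v1")
r2 = record(r1["hash"], "update", b"cloud object v2")
chain = [r1, r2]
print(verify(chain, b"cloud object v2"))   # intact
print(verify(chain, b"tampered object"))   # data/provenance mismatch
```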
Grid-wide neuroimaging data federation in the context of the NeuroLOG project
Michel, Franck; Gaignard, Alban; Ahmad, Farooq; Barillot, Christian; Batrancourt, Bénédicte; Dojat, Michel; Gibaud, Bernard; Girard, Pascal; Godard, David; Kassel, Gilles; Lingrand, Diane; Malandain, Grégoire; Montagnat, Johan; Pélégrini-Issac, Mélanie; Pennec, Xavier; Rojas Balderrama, Javier; Wali, Bacem
2010-01-01
Grid technologies are appealing for dealing with the challenges raised by computational neurosciences and for supporting multi-centric brain studies. However, core grid middleware hardly copes with the complex neuroimaging data representation and multi-layer data federation needs. Moreover, legacy neuroscience environments need to be preserved and cannot simply be superseded by grid services. This paper describes the NeuroLOG platform design and implementation, shedding light on its Data Management Layer. It addresses the integration of brain image files, associated relational metadata and neuroscience semantic data in a heterogeneous distributed environment, integrating legacy data managers through a mediation layer. PMID:20543431
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kornreich, Drew E; Vaidya, Rajendra U; Ammerman, Curtt N
Integrated Computational Materials Engineering (ICME) is a novel overarching approach to bridge length and time scales in computational materials science and engineering. This approach integrates all elements of multi-scale modeling (including various empirical and science-based models) with materials informatics to give users the opportunity to tailor material selections to stringent application needs. Typically, materials engineering has focused on structural requirements (stress, strain, modulus, fracture toughness, etc.), while multi-scale modeling has been science-focused (mechanical threshold strength models, grain-size models, solid-solution strengthening models, etc.). Materials informatics (mechanical property inventories), on the other hand, is extensively data-focused. All of these elements are combined within the framework of ICME to create an architecture for the development, selection and design of new composite materials for challenging environments. We propose developing the foundations for applying ICME to composite materials development for nuclear and high-radiation environments (including nuclear-fusion energy reactors, nuclear-fission reactors, and accelerators). We expect to combine all elements of current material models (including thermo-mechanical and finite-element models) into the ICME framework. This will be accomplished through the use of various mathematical modeling constructs. These constructs will allow the integration of constituent models, which in turn would allow us to exploit the adaptive strengths of a combinatorial scheme (fabrication and computational) for creating new composite materials. A sample problem where these concepts are used is provided in this summary.
NASA Astrophysics Data System (ADS)
Yue, Songshan; Chen, Min; Wen, Yongning; Lu, Guonian
2016-04-01
The Earth environment is extremely complicated and constantly changing; thus, it is widely accepted that a single geo-analysis model cannot accurately represent all details when solving complex geo-problems. Over several years of research, numerous geo-analysis models have been developed; however, a collaborative barrier between model providers and model users still exists. The development of cloud computing has provided a new and promising approach for sharing and integrating geo-analysis models across an open web environment. To share and integrate these heterogeneous models, encapsulation studies are needed that shield the models' original execution differences and turn them into services that can be reused in the web environment. Although model service standards (such as the Web Processing Service (WPS) and Geo Processing Workflow (GPW)) have been designed and developed to help researchers construct model services, various problems regarding model encapsulation remain. (1) The descriptions of geo-analysis models are complicated and typically require rich-text descriptions and case-study illustrations, which are difficult to represent fully within a single web request (such as the GetCapabilities and DescribeProcess operations of the WPS standard). (2) Although Web Service technologies can be used to publish model services, model users who want to copy a model service onto another computer still encounter problems (e.g., they cannot access information about the model's deployment dependencies). This study presents a strategy for encapsulating geo-analysis models that reduces the problems encountered when sharing models between model providers and model users, and it supports these tasks with different web service standards (e.g., the WPS standard). A description method for heterogeneous geo-analysis models is studied. Based on the model description information, the strategy also includes methods for encapsulating the model-execution program into model services and for describing model-service deployment information. Hence, the model-description interface, the model-execution interface and the model-deployment interface are studied to help model providers and model users more easily share, reuse and integrate geo-analysis models in an open web environment. Finally, a prototype system is established, and the WPS standard is employed as an example to verify the capability and practicability of the model-encapsulation strategy. The results show that the strategy makes it more convenient for modellers to share and integrate heterogeneous geo-analysis models on cloud computing platforms.
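On the client side, the standard WPS operations the paper discusses look roughly as follows, sketched here with OWSLib's WPS client; the endpoint URL, process identifier, and inputs are hypothetical, and the call details should be checked against OWSLib's documentation.

```python
# Hedged sketch of a WPS client interaction, assuming OWSLib. The terse
# metadata returned by these operations is exactly why the paper adds
# richer description and deployment interfaces around the model service.
from owslib.wps import WebProcessingService

wps = WebProcessingService("http://example.org/wps")  # hypothetical endpoint

wps.getcapabilities()                       # discover published model services
for p in wps.processes:
    print(p.identifier, "-", p.title)

proc = wps.describeprocess("runoff_model")  # hypothetical geo-analysis model
for inp in proc.dataInputs:
    print("input:", inp.identifier, inp.dataType)

# Executing the model: inputs are (identifier, value) pairs.
execution = wps.execute("runoff_model", [("rainfall", "42.0")])
print(execution.status)
```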
Enhancing Knowledge Flow in a Health Care Context: A Mobile Computing Approach
Souza, Diego Da Silva; de Lima, Patrícia Zudio; da Silveira, Pedro C; de Souza, Jano Moreira
2014-01-01
Background Advances in mobile computing and wireless communication have allowed people to interact and exchange knowledge almost anywhere. These technologies support Medicine 2.0, where health knowledge flows among all the people involved (eg, patients, caregivers, doctors, and patients’ relatives). Objective Our paper proposes a knowledge-sharing environment that takes advantage of mobile computing and contextual information to support knowledge sharing among participants within a health care community (ie, from patients to health professionals). This software environment enables knowledge exchange using peer-to-peer (P2P) mobile networks based on users’ profiles, and it facilitates face-to-face interactions among people with similar health interests, needs, or goals. Methods First, we reviewed and analyzed relevant scientific articles and software apps to determine the current state of knowledge flow within health care. Although no proposal was capable of addressing every aspect of the Medicine 2.0 paradigm, a list of requirements was compiled. Using this requirement list and our previous works, a knowledge-sharing environment was created integrating Mobile Exchange of Knowledge (MEK) and the Easy to Deploy Indoor Positioning System (EDIPS), and a twofold qualitative evaluation was performed. Second, we analyzed the efficiency and reliability of the knowledge that the integrated MEK-EDIPS tool provided to users according to their interest topics, and then performed a proof of concept with health professionals to determine the feasibility and usefulness of using this solution in a real-world scenario. Results Using MEK, we reached 100% precision and 80% recall in the exchange of files within the peer-to-peer network. The mechanism that facilitated face-to-face interactions was evaluated by the difference between the location indicated by the EDIPS tool and the actual location of the people involved in the knowledge exchange; the average distance error was <6.28 m for an indoor environment. The usability and usefulness of this tool were assessed by questioning a sample of 18 health professionals: 94% (17/18) agreed the integrated MEK-EDIPS tool provides greater interaction among all the participants (eg, patients, caregivers, doctors, and patients’ relatives), most considered it extremely important in the health scenario, 72% (13/18) believed it could increase the knowledge flow in a health environment, and 67% (12/18) recommend it or would like to recommend its use. Conclusions The integrated MEK-EDIPS tool can provide more services than any other software tool analyzed in this paper and seems to be the best alternative for supporting health knowledge flow within the Medicine 2.0 paradigm. PMID:25427923
Integration of virtual and real scenes within an integral 3D imaging environment
NASA Astrophysics Data System (ADS)
Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm
2002-11-01
The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television because it avoids the adverse psychological effects of stereoscopic viewing. To create truly engaging three-dimensional television programs, a virtual studio that performs the task of generating, editing and integrating 3D content involving virtual and real scenes is required. The paper presents, for the first time, the procedures, factors and methods of integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, where the lens array is modelled instead of the physical camera, is described. In the model each micro-lens that captures different elemental images of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. The discussion then focuses in detail on depth extraction from captured integral 3D images. The method for calculating depth from disparity, and the multiple-baseline method used to improve the precision of depth estimation, are also presented. The concept of colour SSD, and its further improvement in precision, is proposed and verified.
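The depth step can be illustrated with a toy example. The sketch below uses 1-D signals in place of elemental-image scanlines and shows the multiple-baseline idea of resampling SSD curves onto a common disparity axis before summing; all parameters are illustrative.

```python
# Hedged sketch: sum-of-squared-differences (SSD) matching, plus the
# multiple-baseline idea of combining SSD curves from views at different
# baselines so the combined curve has a sharper, less ambiguous minimum.
import numpy as np

def ssd_curve(ref, other, window, max_shift):
    """SSD between a reference patch and shifted patches of another view."""
    c = len(ref) // 2
    patch = ref[c - window:c + window + 1]
    return np.array([
        np.sum((patch - other[c - s - window:c - s + window + 1]) ** 2)
        for s in range(max_shift)
    ])

rng = np.random.default_rng(3)
scene = rng.random(200)
ref = scene + 0.05 * rng.random(200)
view1 = np.roll(scene, -4) + 0.05 * rng.random(200)  # baseline b, disparity 4
view2 = np.roll(scene, -8) + 0.05 * rng.random(200)  # baseline 2b, disparity 8

# Disparity is proportional to baseline, so resample curve 2 onto the
# disparity axis of curve 1 before summing (the multiple-baseline trick).
c1 = ssd_curve(ref, view1, window=5, max_shift=16)
c2 = ssd_curve(ref, view2, window=5, max_shift=32)[::2]
combined = c1 + c2
print("single-baseline estimate:", np.argmin(c1))
print("multi-baseline estimate :", np.argmin(combined))   # expected: 4
```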
Parallels in Computer-Aided Design Framework and Software Development Environment Efforts.
1992-05-01
design kits, and tool and design management frameworks. Also, books about software engineering environments [Long 91] and electronic design...tool integration [Zarrella 90], and agreement upon a universal design automation framework, such as the CAD Framework Initiative (CFI) [Malasky 91...ments: identification, control, status accounting, and audit and review. The paper by Dart extracts 15 CM concepts from existing SDEs and tools
Display integration for ground combat vehicles
NASA Astrophysics Data System (ADS)
Busse, David J.
1998-09-01
The United States Army's requirement to employ high-resolution target acquisition sensors and information warfare to increase its dominance over enemy forces has led to the need to integrate advanced display devices into ground combat vehicle crew stations. The Army's force structure requires the integration of advanced displays on both existing and emerging ground combat vehicle systems. The fielding of second-generation target acquisition sensors, color digital terrain maps and high-volume digital command and control information networks on these platforms defines the display performance requirements. The greatest challenge facing the system integrator is the development and integration of advanced displays that meet operational, vehicle and human-computer interface performance requirements for the ground combat vehicle fleet. This paper addresses those challenges: operational and vehicle performance, non-soldier-centric crew station configurations, display performance limitations related to human-computer interfaces and vehicle physical environments, display technology limitations, and Department of Defense (DOD) acquisition reform initiatives. How the ground combat vehicle Program Manager and system integrator are addressing these challenges is discussed through the integration of displays on fielded, current and future close combat vehicle applications.
A hardware/software environment to support R&D in intelligent machines and mobile robotic systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mann, R.C.
1990-01-01
The Center for Engineering Systems Advanced Research (CESAR) serves as a focal point at the Oak Ridge National Laboratory (ORNL) for basic and applied research in intelligent machines. R&D at CESAR addresses issues related to autonomous systems, unstructured (i.e., incompletely known) operational environments, and multiple performing agents. Two mobile robot prototypes (HERMIES-IIB and HERMIES-III) are being used to test new developments in several robot component technologies. This paper briefly introduces the computing environment at CESAR, which includes three hypercube concurrent computers (two on board the mobile robots), a graphics workstation, a VAX, and multiple VME-based systems (several on board the mobile robots). The current software environment at CESAR is intended to satisfy several goals, e.g.: code portability; re-usability in different experimental scenarios; modularity; concurrent computer hardware transparent to the applications programmer; future support for multiple mobile robots; support for human-machine interface modules; and support for integration of software from other, geographically disparate laboratories with different hardware set-ups. 6 refs., 1 fig.
Using the iPlant collaborative discovery environment.
Oliver, Shannon L; Lenards, Andrew J; Barthelson, Roger A; Merchant, Nirav; McKay, Sheldon J
2013-06-01
The iPlant Collaborative is an academic consortium whose mission is to develop an informatics and social infrastructure to address the "grand challenges" in plant biology. Its cyberinfrastructure supports the computational needs of the research community and facilitates solving major challenges in plant science. The Discovery Environment provides a powerful and rich graphical interface to the iPlant Collaborative cyberinfrastructure by creating an accessible virtual workbench that enables all levels of expertise, ranging from students to traditional biology researchers and computational experts, to explore, analyze, and share their data. By providing access to iPlant's robust data-management system and high-performance computing resources, the Discovery Environment also creates a unified space in which researchers can access scalable tools. Researchers can use available Applications (Apps) to execute analyses on their data, as well as customize or integrate their own tools to better meet the specific needs of their research. These Apps can also be used in workflows that automate more complicated analyses. This module describes how to use the main features of the Discovery Environment, using bioinformatics workflows for high-throughput sequence data as examples.
NASA Astrophysics Data System (ADS)
Carvalho, D.; Gavillet, Ph.; Delgado, V.; Albert, J. N.; Bellas, N.; Javello, J.; Miere, Y.; Ruffinoni, D.; Smith, G.
Large scientific equipment is controlled by computer systems whose complexity is growing, driven on the one hand by the volume and variety of the information, its distributed nature, and the sophistication of its treatment, and on the other hand by the fast evolution of the computer and network market. Such systems are generically called Large-Scale Distributed Data Intensive Information Systems, or Distributed Computer Control Systems (DCCS) for those dealing more with real-time control. Taking advantage of (or forced by) the distributed architecture, the tasks are more and more often implemented as client-server applications. In this framework the monitoring of the computer nodes, the communications network and the applications becomes of primary importance for ensuring the safe running and guaranteed performance of the system. With the future generation of HEP experiments, such as those at the LHC, in view, it is proposed to integrate the various functions of DCCS monitoring into one general-purpose multi-layer system.
Central Limit Theorem: New SOCR Applet and Demonstration Activity
ERIC Educational Resources Information Center
Dinov, Ivo D.; Christou, Nicholas; Sanchez, Juana
2008-01-01
Modern approaches for information technology based blended education utilize a variety of novel instructional, computational and network resources. Such attempts employ technology to deliver integrated, dynamically linked, interactive content and multi-faceted learning environments, which may facilitate student comprehension and information…
Signal Coherence Recovery Using Acousto-Optic Fourier Transform Architectures
1990-06-14
processing of data in ground- and space-based applications. We have implemented a prototype one-dimensional time-integrating acousto-optic (AO) Fourier...theory of optimum coherence recovery (CR) applicable in computation-limited environments. We have demonstrated direct acousto-optic implementation of CR
EVALUATING LANDSCAPE CHANGE AND HYDROLOGICAL CONSEQUENCES IN A SEMI-ARID ENVIRONMENT
During the past two decades, important advances in the integration of remote imagery, computer processing, and spatial analysis technologies have been used to better understand the distribution of natural communities and ecosystems, and the ecological processes that affect these ...
NETL - Supercomputing: NETL Simulation Based Engineering User Center (SBEUC)
None
2018-02-07
NETL's Simulation-Based Engineering User Center, or SBEUC, integrates one of the world's largest high-performance computers with an advanced visualization center. The SBEUC offers a collaborative environment among researchers at NETL sites and those working through the NETL-Regional University Alliance.
Moertl, Peter M; Canning, John M; Gronlund, Scott D; Dougherty, Michael R P; Johansson, Joakim; Mills, Scott H
2002-01-01
Prior research examined how controllers plan in their traditional environment and identified various information uncertainties as detriments to planning. A planning aid was designed to reduce this uncertainty by perceptually representing important constraints. This included integrating spatial information on the radar screen with discrete information (planned sequences of air traffic). Previous research reported improved planning performance and decreased workload in the planning aid condition. The purpose of this paper was to determine the source of these performance improvements. Analysis of computer interactions using log-linear modeling showed that the planning interface led to less repetitive--but more integrated--information retrieval compared with the traditional planning environment. Ecological interface design principles helped explain how the integrated information retrieval gave rise to the performance improvements. Actual or potential applications of this research include the design and evaluation of interface automation that keeps users in active control by modification of perceptual task characteristics.
P3: a practice focused learning environment
NASA Astrophysics Data System (ADS)
Irving, Paul W.; Obsniuk, Michael J.; Caballero, Marcos D.
2017-09-01
There has been an increased focus on the integration of practices into physics curricula, with a particular emphasis on integrating computation into the undergraduate curriculum of scientists and engineers. In this paper, we present a university-level, introductory physics course for science and engineering majors at Michigan State University called P3 (projects and practices in physics) that is centred around providing introductory physics students with the opportunity to appropriate various science and engineering practices. The P3 design integrates computation with analytical problem solving and is built upon a curriculum foundation of problem-based learning, the principles of constructive alignment and the theoretical framework of community of practice. The design includes an innovative approach to computational physics instruction, instructional scaffolds, and a unique approach to assessment that enables instructors to guide students in the development of the practices of a physicist. We present the very positive student-related outcomes of the design, gathered via attitudinal and conceptual inventories and research interviews of students reflecting on their experiences in the P3 classroom.
Overview of ICE Project: Integration of Computational Fluid Dynamics and Experiments
NASA Technical Reports Server (NTRS)
Stegeman, James D.; Blech, Richard A.; Babrauckas, Theresa L.; Jones, William H.
2001-01-01
Researchers at the NASA Glenn Research Center have developed a prototype integrated environment for interactively exploring, analyzing, and validating information from computational fluid dynamics (CFD) computations and experiments. The Integrated CFD and Experiments (ICE) project is a first attempt at providing a researcher with a common user interface for control, manipulation, analysis, and data storage for both experiments and simulation. ICE can be used as a live, on-line system that displays and archives data as they are gathered; as a postprocessing system for dataset manipulation and analysis; and as a control interface or "steering mechanism" for simulation codes while visualizing the results. Although the full capabilities of ICE have not been completely demonstrated, this report documents the current system. Various applications of ICE are discussed: a low-speed compressor, a supersonic inlet, real-time data visualization, and a parallel-processing simulation code interface. A detailed data model for the compressor application is included in the appendix.
Radiation environment for ATS-F. [including ambient trapped particle fluxes
NASA Technical Reports Server (NTRS)
Stassinopoulos, E. G.
1974-01-01
The ambient trapped particle fluxes incident on the ATS-F satellite were determined. Several synchronous circular flight paths were evaluated and the effect of parking longitude on vehicle encountered intensities was investigated. Temporal variations in the electron environment were considered and partially accounted for. Magnetic field calculations were performed with a current field model extrapolated to a later epoch with linear time terms. Orbital flux integrations were performed with the latest proton and electron environment models using new improved computational methods. The results are presented in graphical and tabular form; they are analyzed, explained, and discussed. Estimates of energetic solar proton fluxes are given for a one year mission at selected integral energies ranging from 10 to 100 MeV, calculated for a year of maximum solar activity during the next solar cycle.
A Series of Computational Neuroscience Labs Increases Comfort with MATLAB.
Nichols, David F
2015-01-01
Computational simulations allow for a low-cost, reliable means to demonstrate complex and oftentimes inaccessible concepts to undergraduates. However, students without prior computer programming training may find working with code-based simulations to be intimidating and distracting. A series of computational neuroscience labs involving the Hodgkin-Huxley equations, an Integrate-and-Fire model, and a Hopfield memory network were used in the undergraduate laboratory component of an introductory neuroscience course. Using short, focused surveys before and after each lab, student comfort levels were shown to increase drastically, from a majority of students being uncomfortable or neutral about working in the MATLAB environment to a vast majority of students being comfortable working in it. Though change was reported within each lab, a series of labs was necessary in order to establish a lasting high level of comfort. Comfort working with code is important as a first step in acquiring the computational skills that are required to address many questions within neuroscience.
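The three models named here are standard teaching examples. As a sketch of the simplest of them, the following is a minimal leaky integrate-and-fire simulation, written in Python rather than the MATLAB used in the labs; all parameter values are illustrative and not taken from the course materials.

```python
import numpy as np

def simulate_lif(i_input=1.5, dt=0.1, t_max=100.0,
                 tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: tau * dV/dt = -(V - v_rest) + I."""
    steps = int(t_max / dt)
    v = np.full(steps, v_rest)
    spikes = []
    for t in range(1, steps):
        dv = (-(v[t - 1] - v_rest) + i_input) * dt / tau
        v[t] = v[t - 1] + dv
        if v[t] >= v_thresh:          # threshold crossing -> spike
            spikes.append(t * dt)
            v[t] = v_reset            # reset after the spike
    return v, spikes

v_trace, spike_times = simulate_lif()
print(f"{len(spike_times)} spikes in 100 ms")
```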
A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations
NASA Astrophysics Data System (ADS)
Demir, I.; Agliamzanov, R.
2014-12-01
Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that harness the computing power of the millions of computers on the Internet, and use them to run large-scale environmental simulations and models that serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language, JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Website owners can easily enable visitors to volunteer their computer resources toward running advanced hydrological models and simulations. A web-based system lets users start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation across thousands of nodes in small spatial and computational units. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform that enables large-scale hydrological simulations and model runs in an open and integrated environment.
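The queue-management idea described above can be sketched compactly. The following Python fragment, using SQLite, is an illustrative server-side task queue of the kind the abstract attributes to a relational database; all table and function names are hypothetical and not from the authors' framework, whose clients are written in JavaScript.

```python
import sqlite3

# Hypothetical server-side task queue for volunteer nodes (names illustrative).
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE tasks (
    id INTEGER PRIMARY KEY,
    cell TEXT,              -- small spatial unit of the model domain
    status TEXT DEFAULT 'pending',
    result REAL)""")
db.executemany("INSERT INTO tasks (cell) VALUES (?)",
               [(f"cell-{i}",) for i in range(4)])

def claim_task():
    """Hand the next pending task to a volunteer browser."""
    row = db.execute(
        "SELECT id, cell FROM tasks WHERE status = 'pending' LIMIT 1").fetchone()
    if row:
        db.execute("UPDATE tasks SET status = 'running' WHERE id = ?", (row[0],))
    return row

def submit_result(task_id, value):
    """Record a result returned by a volunteer node."""
    db.execute("UPDATE tasks SET status = 'done', result = ? WHERE id = ?",
               (value, task_id))

task = claim_task()
submit_result(task[0], 0.42)
print(db.execute("SELECT * FROM tasks WHERE id = ?", (task[0],)).fetchone())
```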
CONFIG: Integrated engineering of systems and their operation
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Ryan, Dan; Fleming, Land
1994-01-01
This article discusses CONFIG 3, a prototype software tool that supports integrated conceptual design evaluation from early in the product life cycle, by supporting isolated or integrated modeling, simulation, and analysis of the function, structure, behavior, failures and operations of system designs. Integration and reuse of models is supported in an object-oriented environment providing capabilities for graph analysis and discrete event simulation. CONFIG supports integration among diverse modeling approaches (component view, configuration or flow path view, and procedure view) and diverse simulation and analysis approaches. CONFIG is designed to support integrated engineering in diverse design domains, including mechanical and electro-mechanical systems, distributed computer systems, and chemical processing and transport systems.
Universal computer control system (UCCS) for space telerobots
NASA Technical Reports Server (NTRS)
Bejczy, Antal K.; Szakaly, Zoltan
1987-01-01
A universal computer control system (UCCS) is under development for all motor elements of a space telerobot. The basic hardware architecture and software design of UCCS are described, together with the rich motor sensing, control, and self-test capabilities of this all-computerized motor control system. UCCS is integrated into a multibus computer environment with a direct interface to higher-level control processors, uses pulse-width multiplier power amplifiers, and a single unit can control up to sixteen different motors simultaneously at a high I/O rate. UCCS performance capabilities are illustrated with representative data.
Enabling drug discovery project decisions with integrated computational chemistry and informatics
NASA Astrophysics Data System (ADS)
Tsui, Vickie; Ortwine, Daniel F.; Blaney, Jeffrey M.
2017-03-01
Computational chemistry/informatics scientists and software engineers in Genentech Small Molecule Drug Discovery collaborate with experimental scientists in a therapeutic project-centric environment. Our mission is to enable and improve pre-clinical drug discovery design and decisions. Our goal is to deliver timely data, analysis, and modeling to our therapeutic project teams using best-in-class software tools. We describe our strategy, the organization of our group, and our approaches to reach this goal. We conclude with a summary of the interdisciplinary skills required for computational scientists and recommendations for their training.
NASA Technical Reports Server (NTRS)
Mckay, Charles W.; Feagin, Terry; Bishop, Peter C.; Hallum, Cecil R.; Freedman, Glenn B.
1987-01-01
The principal focus of one of the RICIS (Research Institute for Computing and Information Systems) components is computer systems and software engineering in-the-large of the lifecycle of large, complex, distributed systems which: (1) evolve incrementally over a long time; (2) contain non-stop components; and (3) must simultaneously satisfy a prioritized balance of mission- and safety-critical requirements at run time. This focus is extremely important because of the contribution of the scaling-direction problem to the current software crisis. The Computer Systems and Software Engineering (CSSE) component addresses the lifecycle issues of three environments: host, integration, and target.
ORBIT: an integrated environment for user-customized bioinformatics tools.
Bellgard, M I; Hiew, H L; Hunter, A; Wiebrands, M
1999-10-01
There are a large number of computational programs freely available to bioinformaticians via client/server, web-based environments. However, the client interface to these tools (typically an HTML form page) cannot be customized from the client side, as it is created by the service provider. The form page is usually generic enough to cater for a wide range of users, which implies that a user cannot set advanced program parameters as defaults on the form or customize the interface to his or her specific requirements or preferences. Currently, there is a lack of end-user interface environments that can be modified by the user when accessing computer programs available on a remote server running on an intranet or over the Internet. We have implemented a client/server system called ORBIT (Online Researcher's Bioinformatics Interface Tools) in which individual clients can have interfaces created and customized to command-line-driven, server-side programs. Thus, Internet-based interfaces can be tailored to a user's specific bioinformatics needs. As interfaces are created on the client machine independently of the server, there can be different interfaces to the same server-side program to cater for different parameter settings. Interface customization is relatively quick (between 10 and 60 min), and all client interfaces are integrated into a single modular environment that will run on any computer platform supporting Java. The system has been developed to allow for a number of future enhancements and features. ORBIT represents an important advance in the way researchers gain access to bioinformatics tools on the Internet.
CLINICAL SURFACES - Activity-Based Computing for Distributed Multi-Display Environments in Hospitals
NASA Astrophysics Data System (ADS)
Bardram, Jakob E.; Bunde-Pedersen, Jonathan; Doryab, Afsaneh; Sørensen, Steffen
A multi-display environment (MDE) is made up of co-located and networked personal and public devices that form an integrated workspace enabling co-located group work. Traditionally, however, MDEs have mainly been designed to support a single “smart room”, and have had little sense of the tasks and activities that the MDE is being used for. This paper presents a novel approach to supporting activity-based computing in distributed MDEs, where displays are physically distributed across a large building. CLINICAL SURFACES was designed for clinical work in hospitals, and enables context-sensitive retrieval and browsing of patient data on public displays. We present the design and implementation of CLINICAL SURFACES and report from an evaluation of the system at a large hospital. The evaluation shows that using distributed public displays to support activity-based computing inside a hospital is very useful for clinical work, and that the apparent tension between maintaining the privacy of medical data and showing it in a public display environment can be mitigated by the use of CLINICAL SURFACES.
Development of a Multi-Disciplinary Computing Environment (MDICE)
NASA Technical Reports Server (NTRS)
Kingsley, Gerry; Siegel, John M., Jr.; Harrand, Vincent J.; Lawrence, Charles; Luker, Joel J.
1999-01-01
The growing need for and importance of multi-component and multi-disciplinary engineering analysis has been understood for many years. For many applications, loose (or semi-implicit) coupling is optimal, and allows the use of various legacy codes without requiring major modifications. For this purpose, CFDRC and NASA LeRC have developed a computational environment to enable coupling between various flow analysis codes at several levels of fidelity. This has been referred to as the Visual Computing Environment (VCE), and is being successfully applied to the analysis of several aircraft engine components. Recently, CFDRC and AFRL/VAAC (WL) have extended the framework and scope of VCE to enable complex multi-disciplinary simulations. The chosen initial focus is on aeroelastic aircraft applications. The developed software is referred to as MDICE-AE, an extensible system suitable for integration of several engineering analysis disciplines. This paper describes the methodology, basic architecture, chosen software technologies, salient library modules, and the current status of and plans for MDICE. A fluid-structure interaction application is described in a separate companion paper.
An Artificial Neural Network-Based Decision-Support System for Integrated Network Security
2014-09-01
group that they need to know in order to make team-based decisions in real-time environments, (c) employ secure cloud computing services to host mobile... THESIS presented to the faculty, Department of Electrical and Computer Engineering, Graduate School of Engineering and Management, Air Force... out-of-the-loop syndrome and create complexity creep. As a result, full automation efforts can lead to inappropriate decision-making despite a...
Mass storage system experiences and future needs at the National Center for Atmospheric Research
NASA Technical Reports Server (NTRS)
Olear, Bernard T.
1992-01-01
This presentation relates some of the experiences of the Scientific Computing Division at NCAR in dealing with the 'data problem'. A brief history and a development of some basic Mass Storage System (MSS) principles are given. An attempt is made to show how these principles apply to the integration of various components into NCAR's MSS. Future MSS needs for evolving computing environments are also discussed.
Technical integration of hippocampus, Basal Ganglia and physical models for spatial navigation.
Fox, Charles; Humphries, Mark; Mitchinson, Ben; Kiss, Tamas; Somogyvari, Zoltan; Prescott, Tony
2009-01-01
Computational neuroscience is increasingly moving beyond modeling individual neurons or neural systems to consider the integration of multiple models, often constructed by different research groups. We report on our preliminary technical integration of recent hippocampal formation, basal ganglia and physical environment models, together with visualisation tools, as a case study in the use of Python across the modelling tool-chain. We do not present new modeling results here. The architecture incorporates leaky-integrator and rate-coded neurons, a 3D environment with collision detection and tactile sensors, 3D graphics and 2D plots. We found Python to be a flexible platform, offering a significant reduction in development time, without a corresponding significant increase in execution time. We illustrate this by implementing a part of the model in various alternative languages and coding styles, and comparing their execution times. For very large-scale system integration, communication with other languages and parallel execution may be required, which we demonstrate using the BRAHMS framework's Python bindings.
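For readers unfamiliar with the neuron types mentioned, the following is a generic rate-coded leaky-integrator unit in Python. It is a textbook formulation rather than the authors' code, and the parameter values are arbitrary.

```python
import numpy as np

def leaky_integrator(inputs, dt=0.001, tau=0.02, gain=1.0):
    """Rate-coded leaky-integrator unit: tau * da/dt = -a + input.
    The output rate is a rectified function of the activation a."""
    a = 0.0
    rates = np.empty(len(inputs))
    for t, x in enumerate(inputs):
        a += dt / tau * (-a + x)
        rates[t] = max(0.0, gain * a)   # simple rectification
    return rates

# Step input: silence, a sustained drive, then silence again.
drive = np.concatenate([np.zeros(50), np.ones(200), np.zeros(50)])
rates = leaky_integrator(drive)
print(f"peak rate {rates.max():.3f} after step input")
```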
Can virtual reality be used to conduct mass prophylaxis clinic training? A pilot program.
Yellowlees, Peter; Cook, James N; Marks, Shayna L; Wolfe, Daniel; Mangin, Elanor
2008-03-01
To create and evaluate a pilot bioterrorism defense training environment using virtual reality technology. The present pilot project used Second Life, an internet-based virtual world system, to construct a virtual reality environment to mimic an actual setting that might be used as a Strategic National Stockpile (SNS) distribution site for northern California in the event of a bioterrorist attack. Scripted characters were integrated into the system as mock patients to analyze various clinic workflow scenarios. Users tested the virtual environment over two sessions. Thirteen users who toured the environment were asked to complete an evaluation survey. Respondents reported that the virtual reality system was relevant to their practice and had potential as a method of bioterrorism defense training. Computer simulations of bioterrorism defense training scenarios are feasible with existing personal computer technology. The use of internet-connected virtual environments holds promise for bioterrorism defense training. Recommendations are made for public health agencies regarding the implementation and benefits of using virtual reality for mass prophylaxis clinic training.
Law of Large Numbers: The Theory, Applications and Technology-Based Education
ERIC Educational Resources Information Center
Dinov, Ivo D.; Christou, Nicolas; Gould, Robert
2009-01-01
Modern approaches for technology-based blended education utilize a variety of recently developed novel pedagogical, computational and network resources. Such attempts employ technology to deliver integrated, dynamically-linked, interactive-content and heterogeneous learning environments, which may improve student comprehension and information…
Mobile Technologies and Augmented Reality in Open Education
ERIC Educational Resources Information Center
Kurubacak, Gulsun, Ed.; Altinpulluk, Hakan, Ed.
2017-01-01
Novel trends and innovations have enhanced contemporary educational environments. When applied properly, these computing advances can create enriched learning opportunities for students. "Mobile Technologies and Augmented Reality in Open Education" is a pivotal reference source for the latest academic research on the integration of…
Digital Literacy and Netiquette: Awareness and Perception in EFL Learning Context
ERIC Educational Resources Information Center
Nia, Sara Farshad; Marandi, Susan
2014-01-01
With the growing popularity of digital technologies and computer-mediated communication (CMC), various types of interactive communication technology are being increasingly integrated into foreign/second language learning environments. Nevertheless, due to its nature, online communication is susceptible to misunderstandings and miscommunications,…
The Wireless Student & the Library.
ERIC Educational Resources Information Center
Drew, Bill
2002-01-01
Describes a program at the State University of New York College of Agriculture and Technology at Morrisville (SUNY-Morrisville) developed with IBM called ThinkPad University that integrates computers into the teaching and learning environment. Explains a partnership with Raytheon that provides wireless connectivity; and discusses changes in…
NASA Astrophysics Data System (ADS)
Berres, A.; Karthik, R.; Nugent, P.; Sorokine, A.; Myers, A.; Pang, H.
2017-12-01
Building an integrated data infrastructure that can meet the needs of sustainable energy-water resource management requires a robust data management and geovisual analytics platform capable of cross-domain scientific discovery and knowledge generation. Such a platform can facilitate the investigation of diverse, complex research and policy questions for emerging priorities in Energy-Water Nexus (EWN) science areas. Using advanced data analytics, machine learning techniques, multi-dimensional statistical tools, and interactive geovisualization components, such a multi-layered federated platform, the Energy-Water Nexus Knowledge Discovery Framework (EWN-KDF), is being developed. The platform utilizes several enterprise-grade software design concepts and standards, such as extensible service-oriented architecture, open standard protocols, an event-driven programming model, an enterprise service bus, and adaptive user interfaces, to add strategic value to the integrative computational and data infrastructure. EWN-KDF is built on the Compute and Data Environment for Science (CADES) at Oak Ridge National Laboratory (ORNL).
NASA Astrophysics Data System (ADS)
Kovalets, Ivan V.; Efthimiou, George C.; Andronopoulos, Spyros; Venetsanos, Alexander G.; Argyropoulos, Christos D.; Kakosimos, Konstantinos E.
2018-05-01
In this work, we present an inverse computational method for the identification of the location, start time, duration and quantity of emitted substance of an unknown air pollution source of finite time duration in an urban environment. We considered a problem of transient pollutant dispersion under stationary meteorological fields, which is a reasonable assumption for the assimilation of available concentration measurements within 1 h from the start of an incident. We optimized the calculation of the source-receptor function by developing a method which requires integrating as many backward adjoint equations as the available measurement stations. This resulted in high numerical efficiency of the method. The source parameters are computed by maximizing the correlation function of the simulated and observed concentrations. The method has been integrated into the CFD code ADREA-HF and it has been tested successfully by performing a series of source inversion runs using the data of 200 individual realizations of puff releases, previously generated in a wind tunnel experiment.
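The source-identification step can be illustrated schematically. Assuming a precomputed source-receptor function (in the paper, obtained from one backward adjoint run per measurement station), a minimal Python sketch of correlation-based location followed by least-squares rate estimation might look as follows; the synthetic data and dimensions are invented for illustration and this is not the ADREA-HF implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical source-receptor matrix: srf[s, m] is the concentration that
# unit emission from candidate source s would produce at measurement m.
n_sources, n_measurements = 100, 24
srf = rng.random((n_sources, n_measurements))

true_source, true_rate = 42, 3.0
observed = true_rate * srf[true_source] + 0.05 * rng.normal(size=n_measurements)

def correlation(a, b):
    a, b = a - a.mean(), b - b.mean()
    return a @ b / np.sqrt((a @ a) * (b @ b))

# Source location: the candidate whose simulated pattern best matches the data.
scores = [correlation(srf[s], observed) for s in range(n_sources)]
best = int(np.argmax(scores))

# Emission rate by least squares once the location is fixed.
rate = (srf[best] @ observed) / (srf[best] @ srf[best])
print(f"identified source {best} (true {true_source}), rate {rate:.2f}")
```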
NASA Technical Reports Server (NTRS)
Bruce, E. A.
1980-01-01
The software developed by the IPAD project, a new and very powerful tool for the implementation of integrated Computer Aided Design (CAD) systems in the aerospace engineering community, is discussed. The IPAD software is a tool and, as such, can be well applied or misapplied in any particular environment. The many benefits of an integrated CAD system are well documented, but there are few such systems in existence, especially in the mechanical engineering disciplines, and therefore little available experience to guide the implementor.
NASA Astrophysics Data System (ADS)
Shi, X.
2015-12-01
As NSF has indicated, "Theory and experimentation have for centuries been regarded as two fundamental pillars of science. It is now widely recognized that computational and data-enabled science forms a critical third pillar." Geocomputation is that third pillar of GIScience and the geosciences. With the exponential growth of geodata, scalable, high-performance computing for big data analytics has become an urgent challenge, because many research activities are constrained by software or tools that cannot even complete the computation process. Heterogeneous geodata integration and analytics further magnify the complexity and the operational time frame. Many large-scale geospatial problems may not be processable at all if the computer system lacks sufficient memory or computational power. Emerging computer architectures, such as Intel's Many Integrated Core (MIC) architecture and the Graphics Processing Unit (GPU), together with advanced computing technologies, offer promising solutions that employ massive parallelism and hardware resources to achieve scalability and high performance for data-intensive computing over large spatiotemporal and social media data. Exploring novel algorithms and deploying solutions in massively parallel computing environments, to enable scalable data processing and analytics over large-scale, complex, heterogeneous geodata with consistent quality and high performance, has been the central theme of our research team in the Department of Geosciences at the University of Arkansas (UARK). New multi-core architectures combined with application accelerators hold the promise of achieving scalability and high performance by exploiting task- and data-level parallelism not supported by conventional computing systems. Such parallel or distributed computing environments are particularly suitable for large-scale geocomputation over big data, as demonstrated by our prior work, yet the potential of this advanced infrastructure remains unexplored in the domain. This presentation summarizes our prior and ongoing initiatives to exemplify how we exploit multicore CPUs, GPUs, and MICs, and clusters of CPUs, GPUs, and MICs, to accelerate geocomputation in different applications.
Demir, E; Babur, O; Dogrusoz, U; Gursoy, A; Nisanci, G; Cetin-Atalay, R; Ozturk, M
2002-07-01
The availability of entire genome sequences is shifting scientific curiosity toward large-scale identification of genome function, as in genome studies. In the near future, data about cellular processes at the molecular level will accumulate at an accelerating rate as a result of proteomics studies. In this regard, it is essential to develop tools for storing, integrating, accessing, and analyzing these data effectively. We define an ontology for a comprehensive representation of cellular events. The ontology presented here enables integration of fragmented or incomplete pathway information and supports manipulation and incorporation of the stored data, as well as multiple levels of abstraction. Based on this ontology, we present the architecture of an integrated environment named Patika (Pathway Analysis Tool for Integration and Knowledge Acquisition). Patika is composed of a server-side, scalable, object-oriented database and client-side editors that provide an integrated, multi-user environment for visualizing and manipulating networks of cellular events. The tool features automated pathway layout, functional computation support, advanced querying, and a user-friendly graphical interface. We expect Patika to be a valuable tool for rapid knowledge acquisition, interpretation of large-scale microarray data, disease gene identification, and drug development. A prototype of Patika is available upon request from the authors.
Process Integrated Mechanism for Human-Computer Collaboration and Coordination
2012-09-12
system we implemented the TAFLib library that provides the communication with TAF. The data received from the TAF server is collected in a data structure... send new commands and flight plans for the UAVs to the TAF server. Test scenarios: several scenarios have been implemented to test and prove our... areas. Shooting Enemies: the basic scenario proved the successful integration of PIM and the TAF simulation environment. Subsequently we improved the CP...
Bringing numerous methods for expression and promoter analysis to a public cloud computing service.
Polanski, Krzysztof; Gao, Bo; Mason, Sam A; Brown, Paul; Ott, Sascha; Denby, Katherine J; Wild, David L
2018-03-01
Every year, a large number of novel algorithms are introduced to the scientific community for a myriad of applications, but using them across different research groups is often troublesome due to suboptimal implementations and specific dependency requirements. This does not have to be the case, as public cloud computing services can easily house tractable implementations within self-contained dependency environments, making the methods accessible to a wider public. We have taken 14 popular methods, the majority related to expression data or promoter analysis, brought them up to a solid implementation standard, and housed the tools in isolated Docker containers, which we integrated into the CyVerse Discovery Environment, making them easily usable for a wide community as part of the CyVerse UK project. The integrated apps can be found at http://www.cyverse.org/discovery-environment, the raw code is available at https://github.com/cyversewarwick, and the corresponding Docker images are housed at https://hub.docker.com/r/cyversewarwick/. Contact: info@cyverse.warwick.ac.uk or D.L.Wild@warwick.ac.uk. Supplementary data are available at Bioinformatics online.
Condor-COPASI: high-throughput computing for biochemical networks
2012-01-01
Background: Mathematical modelling has become a standard technique to improve our understanding of complex biological systems. As models become larger and more complex, simulations and analyses require increasing amounts of computational power. Clusters of computers in a high-throughput computing environment can help to provide the resources required for computationally expensive model analysis. However, exploiting such a system can be difficult for users without the necessary expertise. Results: We present Condor-COPASI, a server-based software tool that integrates COPASI, a biological pathway simulation tool, with Condor, a high-throughput computing environment. Condor-COPASI provides a web-based interface, which makes it extremely easy for a user to run a number of model simulation and analysis tasks in parallel. Tasks are transparently split into smaller parts, and submitted for execution on a Condor pool. Result output is presented to the user in a number of formats, including tables and interactive graphical displays. Conclusions: Condor-COPASI can effectively use a Condor high-throughput computing environment to provide significant gains in performance for a number of model simulation and analysis tasks. Condor-COPASI is free, open source software, released under the Artistic License 2.0, and is suitable for use by any institution with access to a Condor pool. Source code is freely available for download at http://code.google.com/p/condor-copasi/, along with full instructions on deployment and usage.
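The transparent task-splitting described above can be sketched as follows: a Python fragment that divides a parameter scan into chunks and writes one HTCondor submit description per chunk. The executable and file names are hypothetical, and this is a sketch of the general pattern rather than Condor-COPASI's actual implementation.

```python
# Illustrative sketch: divide a parameter scan into chunks and emit one
# HTCondor submit description per chunk. Names and ranges are hypothetical.

SUBMIT_TEMPLATE = """universe   = vanilla
executable = run_copasi.sh
arguments  = model.cps {start} {stop}
output     = chunk_{idx}.out
error      = chunk_{idx}.err
log        = scan.log
queue
"""

def split_scan(n_points, chunk_size):
    """Yield (start, stop) index ranges covering the whole scan."""
    for start in range(0, n_points, chunk_size):
        yield start, min(start + chunk_size, n_points)

for idx, (start, stop) in enumerate(split_scan(n_points=1000, chunk_size=250)):
    with open(f"chunk_{idx}.sub", "w") as f:
        f.write(SUBMIT_TEMPLATE.format(start=start, stop=stop, idx=idx))
```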
ROS-IGTL-Bridge: an open network interface for image-guided therapy using the ROS environment.
Frank, Tobias; Krieger, Axel; Leonard, Simon; Patel, Niravkumar A; Tokuda, Junichi
2017-08-01
With the growing interest in advanced image guidance for surgical robot systems, rapid integration and testing of robotic devices and medical image computing software are becoming essential in research and development. Maximizing the use of existing engineering resources built on widely accepted platforms in different fields, such as the robot operating system (ROS) in robotics and 3D Slicer in medical image computing, can simplify these tasks. We propose a new open network bridge interface integrated in ROS to ensure seamless cross-platform data sharing. A ROS node named ROS-IGTL-Bridge was implemented. It establishes a TCP/IP network connection between the ROS environment and external medical image computing software using the OpenIGTLink protocol. The node exports ROS messages to the external software over the network and vice versa simultaneously, allowing seamless and transparent data sharing between ROS-based devices and medical image computing platforms. Performance tests demonstrated that the bridge could stream transforms, strings, points, and images at 30 fps in both directions successfully. The data transfer latency was <1.2 ms for transforms, strings, and points, and 25.2 ms for color VGA images. A separate test also demonstrated that the bridge could achieve 900 fps for transforms. Additionally, the bridge was demonstrated in two representative systems: a mock image-guided surgical robot setup consisting of 3D Slicer and Lego Mindstorms with ROS, as a prototyping and educational platform for IGT research; and the smart tissue autonomous robot surgical setup with 3D Slicer. The study demonstrated that the bridge enables cross-platform data sharing between ROS and medical image computing software, allowing rapid and seamless integration of the advanced image-based planning/navigation offered by medical image computing software such as 3D Slicer into ROS-based surgical robot systems.
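As an illustration of the bridge pattern itself (not of the OpenIGTLink wire protocol, which frames typed messages, nor of the ROS node's API), a minimal bidirectional TCP relay in Python might look like this; the ports and hosts are placeholders.

```python
import socket
import threading

def relay(src, dst):
    """Copy bytes one way between two endpoints until either side closes."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

def bridge(listen_port, peer_host, peer_port):
    """Accept one client and shuttle traffic to/from a peer, both directions."""
    server = socket.socket()
    server.bind(("", listen_port))
    server.listen(1)
    client, _ = server.accept()
    peer = socket.create_connection((peer_host, peer_port))
    threading.Thread(target=relay, args=(client, peer), daemon=True).start()
    relay(peer, client)

# bridge(18944, "localhost", 11111)  # placeholder endpoints, commented out
```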
Geometric modeling for computer aided design
NASA Technical Reports Server (NTRS)
Schwing, James L.
1993-01-01
Over the past several years, it has been the primary goal of this grant to design and implement software to be used in the conceptual design of aerospace vehicles. The work carried out under this grant was performed jointly with members of the Vehicle Analysis Branch (VAB) of NASA LaRC, Computer Sciences Corp., and Vigyan Corp. This has resulted in the development of several packages and design studies. Primary among these are the interactive geometric modeling tool, the Solid Modeling Aerospace Research Tool (smart), and the integration and execution tools provided by the Environment for Application Software Integration and Execution (EASIE). In addition, it is the purpose of the personnel of this grant to provide consultation in the areas of structural design, algorithm development, and software development and implementation, particularly in the areas of computer aided design, geometric surface representation, and parallel algorithms.
Open-Source, Distributed Computational Environment for Virtual Materials Exploration
2015-01-01
compromising structural integrity. For example, advanced designs could specify advanced materials processing techniques such as heat treatments in specific... orchestration of execution of multiple standalone codes at varying length scales will need advanced high-performance computing (HPC) integration in... possible hooks that could be used to coordinate larger workflows spanning tools developed by different groups. The high-level approach explored...
The radiation environment of OSO missions from 1974 to 1978
NASA Technical Reports Server (NTRS)
Stassinopoulos, E. G.
1973-01-01
Trapped particle radiation levels on several OSO missions were calculated for nominal trajectories using improved computational methods and new electron environment models. Temporal variations of the electron fluxes were considered and partially accounted for. Magnetic field calculations were performed with a current field model and extrapolated to a later epoch with linear time terms. Orbital flux integration results, which are presented in graphical and tabular form, are analyzed, explained, and discussed.
2011-09-30
capability to emulate the dive and movement behavior of marine mammals provides a significant advantage in modeling environmental impact compared with the historic... approaches used in Navy environmental assessments (EA) and impact statements (EIS). Many previous methods have been statistical or pseudo-statistical... Siderius. 2011. Comparison of methods used for computing the impact of sound on the marine environment, Marine Environmental Research, 71:342-350. [published
Audio-visual affective expression recognition
NASA Astrophysics Data System (ADS)
Huang, Thomas S.; Zeng, Zhihong
2007-11-01
Automatic affective expression recognition has attracted increasing attention from researchers in different disciplines, and will contribute significantly to a new paradigm for human-computer interaction (affect-sensitive interfaces, socially intelligent environments) and advance research in affect-related fields including psychology, psychiatry, and education. Multimodal information integration is a process that enables humans to assess affective states robustly and flexibly. In order to understand the richness and subtleness of human emotional behavior, the computer should be able to integrate information from multiple sensors. We introduce in this paper our efforts toward machine understanding of audio-visual affective behavior, based on both deliberate and spontaneous displays. Some promising methods are presented to integrate information from both audio and visual modalities. Our experiments show the advantage of audio-visual fusion in affective expression recognition over audio-only or visual-only approaches.
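One common way to integrate the two modalities is late fusion of per-modality classifier posteriors. The sketch below shows weighted late fusion in Python; the weights and class set are illustrative only, and the paper's own fusion methods may differ.

```python
import numpy as np

def late_fusion(p_audio, p_visual, w_audio=0.4):
    """Weighted late fusion of per-modality class posteriors."""
    p_audio, p_visual = np.asarray(p_audio), np.asarray(p_visual)
    fused = w_audio * p_audio + (1.0 - w_audio) * p_visual
    return fused / fused.sum()   # renormalize to a proper distribution

# Posteriors over (neutral, happy, angry) from each modality's classifier.
audio = [0.2, 0.5, 0.3]
visual = [0.1, 0.8, 0.1]
print(late_fusion(audio, visual))   # visual dominates with w_audio=0.4
```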
NASA Astrophysics Data System (ADS)
Wang, Xi Vincent; Wang, Lihui
2017-08-01
Cloud computing is a new enabling technology that offers centralised computing, flexible data storage, and scalable services. In the manufacturing context, it is possible to utilise Cloud technology to integrate and provide industrial resources and capabilities as Cloud services. In this paper, a function block-based integration mechanism is developed to connect various types of production resources. A Cloud-based architecture is also deployed to offer a service pool that maintains these resources as production services. The proposed system provides a flexible and integrated information environment for the Cloud-based production system. As a specific type of manufacturing, Waste Electrical and Electronic Equipment (WEEE) remanufacturing experiences difficulties in system integration, information exchange, and resource management. In this research, WEEE is selected as an example of the Internet of Things to demonstrate how the obstacles and bottlenecks are overcome with the help of a Cloud-based informatics approach. In the case studies, WEEE recycle/recovery capabilities are also integrated and deployed as flexible Cloud services. Supporting mechanisms and technologies are presented and evaluated towards the end of the paper.
NASA Astrophysics Data System (ADS)
Pierce, S. A.
2017-12-01
Decision making for groundwater systems is becoming increasingly important as shifting water demands increasingly impact aquifers. As buffer systems, aquifers provide room for resilient responses and lengthen the actual timeframe for hydrological response, yet the pace of impacts, climate shifts, and degradation of water resources is accelerating. To meet these new drivers, groundwater science is transitioning toward the emerging field of Integrated Water Resources Management (IWRM). IWRM incorporates a broad array of dimensions, methods, and tools to address problems that tend to be complex. Computational tools and accessible cyberinfrastructure (CI) are needed to cross the chasm between science and society. Fortunately, cloud computing environments, such as the new Jetstream system, are evolving rapidly. While still targeting scientific user groups, systems such as Jetstream offer configurable cyberinfrastructure to enable interactive computing and data analysis resources on demand. Web-based interfaces allow researchers to rapidly customize virtual machines and modify computing architecture, increasing the usability of, and access to, advanced compute environments for broader audiences. The result enables dexterous configurations and opens up opportunities for IWRM modelers to expand the reach of analyses, the number of case studies, and the quality of engagement with stakeholders and decision makers. The acute need to identify improved IWRM solutions, paired with advanced computational resources, refocuses the attention of IWRM researchers on applications, workflows, and intelligent systems that are capable of accelerating progress. IWRM must address key drivers of community concern, implement transdisciplinary methodologies, and adapt and apply decision support tools in order to effectively support decisions about groundwater resource management. This presentation will provide an overview of advanced computing services in the cloud, using integrated groundwater management case studies to highlight how Cloud CI streamlines the process of setting up an interactive decision support system. Moreover, advances in artificial intelligence offer new techniques for old problems, from integrating data to adaptive sensing and from interactive dashboards to optimizing multi-attribute problems. The combination of scientific expertise, flexible cloud computing solutions, and intelligent systems opens new research horizons.
User's Guide for Computer Program that Routes Signal Traces
NASA Technical Reports Server (NTRS)
Hedgley, David R., Jr.
2000-01-01
This disk contains a FORTRAN computer program and a corresponding user's guide that facilitates both its incorporation into your system and its use. The computer program implements an efficient algorithm that routes signal traces on layers of a printed circuit with both through-pins and surface mounts. It is an implementation of the ideas presented in the theoretical paper titled "A Formal Algorithm for Routing Signal Traces on a Printed Circuit Board", NASA TP-3639, published in 1996. The computer program in the "connects" file can be read with a FORTRAN compiler and readily integrated into software unique to each particular environment where it might be used.
Burton, Brett M; Tate, Jess D; Erem, Burak; Swenson, Darrell J; Wang, Dafang F; Steffen, Michael; Brooks, Dana H; van Dam, Peter M; Macleod, Rob S
2012-01-01
Computational modeling in electrocardiography often requires the examination of cardiac forward and inverse problems in order to non-invasively analyze physiological events that are otherwise inaccessible or unethical to explore. The study of these models can be performed in the open-source SCIRun problem solving environment developed at the Center for Integrative Biomedical Computing (CIBC). A new toolkit within SCIRun provides researchers with essential frameworks for constructing and manipulating electrocardiographic forward and inverse models in a highly efficient and interactive way. The toolkit contains sample networks, tutorials and documentation which direct users through SCIRun-specific approaches in the assembly and execution of these specific problems.
Aeroheating Design Issues for Reusable Launch Vehicles: A Perspective
NASA Technical Reports Server (NTRS)
Zoby, E. Vincent; Thompson, Richard A.; Wurster, Kathryn E.
2004-01-01
An overview of basic aeroheating design issues for Reusable Launch Vehicles (RLV), which addresses the application of hypersonic ground-based testing, and computational fluid dynamic (CFD) and engineering codes, is presented. Challenges inherent to the prediction of aeroheating environments required for the successful design of the RLV Thermal Protection System (TPS) are discussed in conjunction with the importance of employing appropriate experimental/computational tools. The impact of the information garnered by using these tools in the resulting analyses, ultimately enhancing the RLV TPS design is illustrated. A wide range of topics is presented in this overview; e.g. the impact of flow physics issues such as boundary-layer transition, including effects of distributed and discrete roughness, shock-shock interactions, and flow separation/reattachment. Also, the benefit of integrating experimental and computational studies to gain an improved understanding of flow phenomena is illustrated. From computational studies, the effect of low-density conditions and of uncertainties in material surface properties on the computed heating rates are highlighted as well as the significant role of CFD in improving the Outer Mold Line (OML) definition to reduce aeroheating while maintaining aerodynamic performance. Appropriate selection of the TPS design trajectories and trajectory shaping to mitigate aeroheating levels and loads are discussed. Lastly, an illustration of an aeroheating design process is presented whereby data from hypersonic wind-tunnel tests are integrated with predictions from CFD codes and engineering methods to provide heating environments along an entry trajectory as required for TPS design.
A Curriculum Review: The Voyage of the Mimi.
ERIC Educational Resources Information Center
Johns, Kenneth W.
1988-01-01
The curriculum package, "The Voyage of the Mimi," uses computer, videocassette, student text, and workbook for integrated study of the great whales and the impact of social actions on society and the environment. This review suggests that the package also offers many ancillary teaching opportunities. (CB)
An integrative architecture for a sensor-supported trust management system.
Trček, Denis
2012-01-01
Trust plays a key role not only in e-worlds and emerging pervasive computing environments, but has also done so for millennia in human societies. Trust management solutions, which have been around for some fifteen years, were primarily developed for the above-mentioned cyber environments and are typically focused on artificial agents, sensors, etc. This paper, however, presents extensions of a new methodology, together with an architecture for trust management support, that is focused on humans and human-like agents. In this methodology and architecture, sensors play a crucial role. The architecture presents an already deployable tool for multi- and interdisciplinary research in various areas where humans are involved. It provides new ways to gain insight into the dynamics and evolution of such structures, not only in pervasive computing environments, but also in other important areas like management and decision-making support.
Extending the granularity of representation and control for the MIL-STD CAIS 1.0 node model
NASA Technical Reports Server (NTRS)
Rogers, Kathy L.
1986-01-01
The Common APSE (Ada Program Support Environment) Interface Set (CAIS) (DoD85) node model provides an excellent baseline for interfaces in a single-host development environment. To encompass the entire spectrum of computing, however, the CAIS model should be extended in four areas. It should provide the interface between the engineering workstation and the host system throughout the entire lifecycle of the system. It should provide a basis for the communication and integration functions needed by distributed host environments. It should provide common interfaces for communications mechanisms to and among target processors. And it should provide facilities for integration, validation, and verification of test beds extending to distributed systems on geographically separate processors with heterogeneous instruction set architectures (ISAs). Additions to the CAIS process node model to extend the CAIS into these four areas are proposed.
2014-09-01
becoming a more and more prevalent technology in the business world today. According to Syal and Goswami (2012), cloud technology is seen as a... use of computing resources, applications, and personal files without reliance on a single computer or system (Syal & Goswami, 2012). By operating in... cloud services largely being web-based, which can be retrieved through most systems with access to the Internet (Syal & Goswami, 2012). The end user can...
SAVA 3: A testbed for integration and control of visual processes
NASA Technical Reports Server (NTRS)
Crowley, James L.; Christensen, Henrik
1994-01-01
The development of an experimental test-bed to investigate the integration and control of perception in a continuously operating vision system is described. The test-bed integrates a 12-axis robotic stereo camera head mounted on a mobile robot, dedicated computer boards for real-time image acquisition and processing, and a distributed system for image description. The architecture was designed to: (1) operate continuously; (2) integrate software contributions from geographically dispersed laboratories; (3) integrate description of the environment with 2D measurements, 3D models, and recognition of objects; (4) support diverse experiments in gaze control, visual servoing, navigation, and object surveillance; and (5) be dynamically reconfigurable.
NASA Astrophysics Data System (ADS)
Vassiliou, Marius S.; Sundareswaran, Venkataraman; Chen, S.; Behringer, Reinhold; Tam, Clement K.; Chan, M.; Bangayan, Phil T.; McGee, Joshua H.
2000-08-01
We describe new systems for improved integrated multimodal human-computer interaction and augmented reality for a diverse array of applications, including future advanced cockpits, tactical operations centers, and others. We have developed an integrated display system featuring: speech recognition of multiple concurrent users equipped with both standard air-coupled microphones and novel throat-coupled sensors (developed at Army Research Labs for increased noise immunity); lip reading for improving speech recognition accuracy in noisy environments; three-dimensional spatialized audio for improved display of warnings, alerts, and other information; wireless, coordinated handheld-PC control of a large display; real-time display of data and inferences from wireless integrated networked sensors with on-board signal processing and discrimination; gesture control with disambiguated point-and-speak capability; head- and eye-tracking coupled with speech recognition for 'look-and-speak' interaction; and integrated tetherless augmented reality on a wearable computer. The various interaction modalities (speech recognition, 3D audio, eye-tracking, etc.) are implemented as 'modality servers' in an Internet-based client-server architecture. Each modality server encapsulates and exposes commercial and research software packages, presenting a socket network interface that is abstracted to a high-level interface, minimizing both vendor dependencies and required changes on the client side as the server's technology improves.
Computer Simulation of Developmental Processes and ...
Rationale: Recent progress in systems toxicology and synthetic biology has paved the way to new thinking about in vitro/in silico modeling of developmental processes and toxicities, for both embryological and reproductive impacts. Novel in vitro platforms such as 3D organotypic culture models, engineered microscale tissues, and complex microphysiological systems (MPS), together with computational models and computer simulation of tissue dynamics, lend themselves to integrated testing strategies for predictive toxicology. As these emergent methodologies continue to evolve, they must be integrally tied to maternal/fetal physiology and the toxicity of the developing individual across early lifestage transitions, from fertilization to birth, through puberty and beyond. Scope: This symposium will focus on how the novel technology platforms can help, now and in the future, with in vitro/in silico modeling of complex biological systems for developmental and reproductive toxicity issues, and with translating systems models into integrative testing strategies. The symposium is based on three main organizing principles: (1) that novel in vitro platforms with human cells configured in nascent tissue architectures with native microphysiological environments yield mechanistic understanding of the developmental and reproductive impacts of drug/chemical exposures; (2) that novel in silico platforms with high-throughput screening (HTS) data, biologically-inspired computational models of
HCI∧2 framework: a software framework for multimodal human-computer interaction systems.
Shen, Jie; Pantic, Maja
2013-12-01
This paper presents a novel software framework for the development and research in the area of multimodal human-computer interface (MHCI) systems. The proposed software framework, which is called the HCI∧2 Framework, is built upon publish/subscribe (P/S) architecture. It implements a shared-memory-based data transport protocol for message delivery and a TCP-based system management protocol. The latter ensures that the integrity of system structure is maintained at runtime. With the inclusion of bridging modules, the HCI∧2 Framework is interoperable with other software frameworks including Psyclone and ActiveMQ. In addition to the core communication middleware, we also present the integrated development environment (IDE) of the HCI∧2 Framework. It provides a complete graphical environment to support every step in a typical MHCI system development process, including module development, debugging, packaging, and management, as well as the whole system management and testing. The quantitative evaluation indicates that our framework outperforms other similar tools in terms of average message latency and maximum data throughput under a typical single PC scenario. To demonstrate HCI∧2 Framework's capabilities in integrating heterogeneous modules, we present several example modules working with a variety of hardware and software. We also present an example of a full system developed using the proposed HCI∧2 Framework, which is called the CamGame system and represents a computer game based on hand-held marker(s) and low-cost camera(s).
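The publish/subscribe pattern underlying the framework can be reduced to a few lines. The Python sketch below is an in-process analogue only; the actual HCI∧2 Framework uses shared-memory transport and a TCP management protocol, and all names here are invented for illustration.

```python
from collections import defaultdict

class Broker:
    """Minimal in-process publish/subscribe broker."""
    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver the message to every module subscribed to this topic.
        for callback in self.subscribers[topic]:
            callback(message)

broker = Broker()
broker.subscribe("gesture", lambda m: print("gesture module got:", m))
broker.subscribe("gesture", lambda m: print("logger got:", m))
broker.publish("gesture", {"type": "point", "x": 0.3, "y": 0.7})
```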
Fang, Yu-Hua Dean; Asthana, Pravesh; Salinas, Cristian; Huang, Hsuan-Ming; Muzic, Raymond F
2010-01-01
An integrated software package, Compartment Model Kinetic Analysis Tool (COMKAT), is presented in this report. COMKAT is an open-source software package with many functions for incorporating pharmacokinetic analysis in molecular imaging research and has both command-line and graphical user interfaces. With COMKAT, users may load and display images, draw regions of interest, load input functions, select kinetic models from a predefined list, or create a novel model and perform parameter estimation, all without having to write any computer code. For image analysis, COMKAT image tool supports multiple image file formats, including the Digital Imaging and Communications in Medicine (DICOM) standard. Image contrast, zoom, reslicing, display color table, and frame summation can be adjusted in COMKAT image tool. It also displays and automatically registers images from 2 modalities. Parametric imaging capability is provided and can be combined with the distributed computing support to enhance computation speeds. For users without MATLAB licenses, a compiled, executable version of COMKAT is available, although it currently has only a subset of the full COMKAT capability. Both the compiled and the noncompiled versions of COMKAT are free for academic research use. Extensive documentation, examples, and COMKAT itself are available on its wiki-based Web site, http://comkat.case.edu. Users are encouraged to contribute, sharing their experience, examples, and extensions of COMKAT. With integrated functionality specifically designed for imaging and kinetic modeling analysis, COMKAT can be used as a software environment for molecular imaging and pharmacokinetic analysis.
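As an illustration of the kind of kinetic model COMKAT estimates, the following Python sketch simulates a standard one-tissue compartment model; the rate constants and plasma input function are hypothetical, and COMKAT itself is a MATLAB-based package, so this is not its code.

```python
import numpy as np
from scipy.integrate import odeint

def one_tissue(ct, t, k1, k2, cp_func):
    """One-tissue compartment model: dCt/dt = K1*Cp(t) - k2*Ct."""
    return k1 * cp_func(t) - k2 * ct

# Hypothetical plasma input function: bolus with exponential clearance.
cp = lambda t: 10.0 * np.exp(-0.5 * t)

t = np.linspace(0, 60, 300)                       # minutes
ct = odeint(one_tissue, y0=0.0, t=t, args=(0.1, 0.05, cp)).ravel()
print(f"tissue concentration peaks at {ct.max():.2f} near t={t[ct.argmax()]:.1f} min")
```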
Exploiting opportunistic resources for ATLAS with ARC CE and the Event Service
NASA Astrophysics Data System (ADS)
Cameron, D.; Filipčič, A.; Guan, W.; Tsulaia, V.; Walker, R.; Wenaus, T.;
2017-10-01
With ever-greater computing needs and fixed budgets, big scientific experiments are turning to opportunistic resources as a means to add much-needed extra computing power. These resources can be very different in design from those that comprise the Grid computing of most experiments, therefore exploiting them requires a change in strategy for the experiment. They may be highly restrictive in what can be run or in connections to the outside world, or tolerate opportunistic usage only on condition that tasks may be terminated without warning. The Advanced Resource Connector Computing Element (ARC CE) with its nonintrusive architecture is designed to integrate resources such as High Performance Computing (HPC) systems into a computing Grid. The ATLAS experiment developed the ATLAS Event Service (AES) primarily to address the issue of jobs that can be terminated at any point when opportunistic computing capacity is needed by someone else. This paper describes the integration of these two systems in order to exploit opportunistic resources for ATLAS in a restrictive environment. In addition to the technical details, results from deployment of this solution in the SuperMUC HPC centre in Munich are shown.
Bringing your tools to CyVerse Discovery Environment using Docker
Devisetty, Upendra Kumar; Kennedy, Kathleen; Sarando, Paul; Merchant, Nirav; Lyons, Eric
2016-01-01
Docker has become a very popular container-based virtualization platform for software distribution that has revolutionized the way in which scientific software and software dependencies (software stacks) can be packaged, distributed, and deployed. Docker makes the complex and time-consuming installation procedures needed for scientific software a one-time process. Because it enables platform-independent installation, versioning of software environments, and easy redeployment and reproducibility, Docker is an ideal candidate for deploying identical software stacks on different compute environments such as XSEDE and Amazon AWS. CyVerse’s Discovery Environment also uses Docker for integrating its powerful, community-recommended software tools into CyVerse’s production environment for public use. This paper will help users bring their tools into the CyVerse Discovery Environment (DE), which not only allows users to integrate their tools with relative ease compared to the earlier method of tool deployment in the DE, but also helps users share their apps with collaborators and release them for public use. PMID:27803802
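To make the packaging step concrete, below is a hedged sketch of containerizing a trivial tool with Docker, driven from Python so the example stays self-contained; the tool, image tag, and user name are hypothetical, and this is not CyVerse's integration procedure.

```python
# Package a toy tool into a Docker image and run it; requires a local
# Docker installation. Names are placeholders.
import pathlib, subprocess

# A trivial stand-in for the scientific tool being packaged.
pathlib.Path("mytool.py").write_text('print("hello from mytool")\n')

dockerfile = """\
FROM python:3.11-slim
COPY mytool.py /opt/mytool.py
ENTRYPOINT ["python", "/opt/mytool.py"]
"""
pathlib.Path("Dockerfile").write_text(dockerfile)

# Build once; the identical software stack then runs unchanged on any
# Docker-capable host (local workstation, XSEDE node, or AWS VM).
subprocess.run(["docker", "build", "-t", "myuser/mytool:1.0", "."], check=True)
subprocess.run(["docker", "run", "--rm", "myuser/mytool:1.0"], check=True)
```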
1988-01-24
vanes.-The new facility is currently being called the Engine Blade/Vane Facility (EB/VF). There are three primary goals in automating this proc..e...earlier, the search led primarily into the areas of CIM Justification, Automation Strategies, Performance Measurement, and Integration issues. Of...of living, has been steadily eroding. One dangerous trend that has developed in keenly competitive world markets, says Rohan [33], has been for U.S
A National Virtual Specimen Database for Early Cancer Detection
NASA Technical Reports Server (NTRS)
Crichton, Daniel; Kincaid, Heather; Kelly, Sean; Thornquist, Mark; Johnsey, Donald; Winget, Marcy
2003-01-01
Access to biospecimens is essential for enabling cancer biomarker discovery. The National Cancer Institute's (NCI) Early Detection Research Network (EDRN) comprises and integrates a large number of laboratories into a network in order to establish a collaborative scientific environment to discover and validate disease markers. The diversity of both the institutions and the collaborative focus has created the need for cross-disciplinary teams that integrate expertise in biomedical research, biostatistics and computation, and computer science. Given the collaborative design of the network, the EDRN needed an informatics infrastructure. The Fred Hutchinson Cancer Research Center, the National Cancer Institute, and NASA's Jet Propulsion Laboratory (JPL) teamed up to build an informatics infrastructure creating a collaborative, science-driven research environment despite the geographic separation and structural differences of the information systems that existed within the diverse network. EDRN investigators identified the need to share biospecimen data captured across the country and managed in disparate databases. As a result, the informatics team initiated an effort to create a virtual tissue database whereby scientists could search and locate details about specimens located at collaborating laboratories. Each database, however, was locally implemented and integrated into collection processes and methods unique to each institution. This meant that efforts to integrate databases needed to be done in a manner that did not require redesign or re-implementation of existing systems.
Workflow4Metabolomics: a collaborative research infrastructure for computational metabolomics
Giacomoni, Franck; Le Corguillé, Gildas; Monsoor, Misharl; Landi, Marion; Pericard, Pierre; Pétéra, Mélanie; Duperier, Christophe; Tremblay-Franco, Marie; Martin, Jean-François; Jacob, Daniel; Goulitquer, Sophie; Thévenot, Etienne A.; Caron, Christophe
2015-01-01
Summary: The complex, rapidly evolving field of computational metabolomics calls for collaborative infrastructures where the large volume of new algorithms for data pre-processing, statistical analysis and annotation can be readily integrated whatever the language, evaluated on reference datasets and chained to build ad hoc workflows for users. We have developed Workflow4Metabolomics (W4M), the first fully open-source and collaborative online platform for computational metabolomics. W4M is a virtual research environment built upon the Galaxy web-based platform technology. It enables ergonomic integration, exchange and running of individual modules and workflows. Alternatively, the whole W4M framework and computational tools can be downloaded as a virtual machine for local installation. Availability and implementation: http://workflow4metabolomics.org homepage enables users to open a private account and access the infrastructure. W4M is developed and maintained by the French Bioinformatics Institute (IFB) and the French Metabolomics and Fluxomics Infrastructure (MetaboHUB). Contact: contact@workflow4metabolomics.org PMID:25527831
Use of agents to implement an integrated computing environment
NASA Technical Reports Server (NTRS)
Hale, Mark A.; Craig, James I.
1995-01-01
Integrated Product and Process Development (IPPD) embodies the simultaneous application of both system and quality engineering methods throughout an iterative design process. The use of IPPD results in the time-conscious, cost-saving development of engineering systems. To implement IPPD, a Decision-Based Design perspective is encapsulated in an approach that focuses on the role of the human designer in product development. The approach has two parts and is outlined in this paper. First, an architecture, called DREAMS, is being developed that facilitates design from a decision-based perspective. Second, a supporting computing infrastructure, called IMAGE, is being designed. Agents are used to implement the overall infrastructure on the computer. Successful agent utilization requires that agents be made of three components: the resource, the model, and the wrap. Current work is focused on the development of generalized agent schemes and associated demonstration projects. When in place, this technology-independent computing infrastructure will aid the designer in systematically generating knowledge used to facilitate decision-making.
NASA Astrophysics Data System (ADS)
Gorelick, Noel
2013-04-01
The Google Earth Engine platform is a system designed to enable petabyte-scale, scientific analysis and visualization of geospatial datasets. Earth Engine provides a consolidated environment including a massive data catalog co-located with thousands of computers for analysis. The user-friendly front-end provides a workbench environment to allow interactive data and algorithm development and exploration and provides a convenient mechanism for scientists to share data, visualizations and analytic algorithms via URLs. The Earth Engine data catalog contains a wide variety of popular, curated datasets, including the world's largest online collection of Landsat scenes (> 2.0M), numerous MODIS collections, and many vector-based data sets. The platform provides a uniform access mechanism to a variety of data types, independent of their bands, projection, bit-depth, resolution, etc., facilitating easy multi-sensor analysis. Additionally, a user is able to add and curate their own data and collections. Using a just-in-time, distributed computation model, Earth Engine can rapidly process enormous quantities of geospatial data. All computation is performed lazily; nothing is computed until it is required either for output or as input to another step. This model allows real-time feedback and preview during algorithm development, supporting a rapid algorithm development, test, and improvement cycle that scales seamlessly to large-scale production data processing. Through integration with a variety of other services, Earth Engine is able to bring to bear considerable analytic and technical firepower in a transparent fashion, including: AI-based classification via integration with Google's machine learning infrastructure, publishing and distribution at Google scale through integration with the Google Maps API, Maps Engine and Google Earth, and support for in-the-field activities such as validation, ground-truthing, crowd-sourcing and citizen science through the Android Open Data Kit.
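The lazy, deferred computation model described above can be illustrated with the public Earth Engine Python API; the snippet below is a sketch, and the collection ID and region are illustrative choices rather than anything from the abstract.

```python
# Sketch of Earth Engine's lazy evaluation; assumes prior
# ee.Authenticate()/project setup.
import ee

ee.Initialize()

# Nothing below touches pixel data yet: each call only extends a
# deferred computation graph on the client side.
median = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
          .filterDate("2020-01-01", "2020-12-31")
          .filterBounds(ee.Geometry.Point([-122.3, 37.9]))
          .median())

# Computation runs on Google's servers only when a result is pulled.
print(median.getInfo()["bands"][:2])
```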
NASA Technical Reports Server (NTRS)
Katz, Daniel S.; Cwik, Tom; Fu, Chuigang; Imbriale, William A.; Jamnejad, Vahraz; Springer, Paul L.; Borgioli, Andrea
2000-01-01
The process of designing and analyzing a multiple-reflector system has traditionally been time-intensive, requiring large amounts of both computational and human time. At many frequencies, a discrete approximation of the radiation integral may be used to model the system. The code which implements this physical optics (PO) algorithm was developed at the Jet Propulsion Laboratory. It analyzes systems of antennas in pairs, and for each pair, the analysis can be computationally time-consuming. Additionally, the antennas must be described using a local coordinate system for each antenna, which makes it difficult to integrate the design into a multi-disciplinary framework in which there is traditionally one global coordinate system, even before considering deforming the antenna as prescribed by external structural and/or thermal factors. Finally, setting up the code to correctly analyze all the antenna pairs in the system can take a fair amount of time, and introduces possible human error. The use of parallel computing to reduce the computational time required for the analysis of a given pair of antennas has been previously discussed. This paper focuses on the other problems mentioned above. It will present a methodology and examples of use of an automated tool that performs the analysis of a complete multiple-reflector system in an integrated multi-disciplinary environment (including CAD modeling, and structural and thermal analysis) at the click of a button. This tool, named MOD Tool (Millimeter-wave Optics Design Tool), has been designed and implemented as a distributed tool, with a client that runs almost identically on Unix, Mac, and Windows platforms, and a server that runs primarily on a Unix workstation and can interact with parallel supercomputers with simple instruction from the user interacting with the client.
Numerical modeling of chemical spills and assessment of their environmental impacts
USDA-ARS?s Scientific Manuscript database
Chemical spills in surface water bodies occur frequently in modern societies and cause significant impacts on water quality, the eco-environment, and drinking water safety. In this paper, chemical spill contamination in water resources was studied using a depth-integrated computational model, CCHE2D, for p...
Code of Federal Regulations, 2011 CFR
2011-04-01
... integrity of the data can be verified. (6) Electronic record means any combination of text, graphics, data... computer data compilation of any symbol or series of symbols executed, adopted, or authorized by an... name or mark. (9) Open system means an environment in which system access is not controlled by persons...
Code of Federal Regulations, 2013 CFR
2013-04-01
... integrity of the data can be verified. (6) Electronic record means any combination of text, graphics, data... computer data compilation of any symbol or series of symbols executed, adopted, or authorized by an... name or mark. (9) Open system means an environment in which system access is not controlled by persons...
Code of Federal Regulations, 2010 CFR
2010-04-01
... integrity of the data can be verified. (6) Electronic record means any combination of text, graphics, data... computer data compilation of any symbol or series of symbols executed, adopted, or authorized by an... name or mark. (9) Open system means an environment in which system access is not controlled by persons...
ERIC Educational Resources Information Center
Scogin, Stephen C.; Stuessy, Carol L.
2015-01-01
Next Generation Science Standards (NGSS) call for integrating knowledge and practice in learning experiences in K-12 science education. "PlantingScience" (PS), an ideal curriculum for use as an NGSS model, is a computer-mediated collaborative learning environment intertwining scientific inquiry, classroom instruction, and online…
From Augmentation Media to Meme Media.
ERIC Educational Resources Information Center
Tanaka, Yuzuru
Computers as meta media are now evolving from augmentation media vehicles to meme media vehicles. While an augmentation media system provides a seamlessly integrated environment of various tools and documents, a meme media system provides further functions to edit and distribute tools and documents. Documents and tools on meme media can easily…
Teacher-Education Students' Views about Knowledge Building Theory and Practice
ERIC Educational Resources Information Center
Hong, Huang-Yao; Chen, Fei-Ching; Chai, Ching Sing; Chan, Wen-Ching
2011-01-01
This study investigated the effects of engaging students to collectively learn and work with knowledge in a computer-supported collaborative learning environment called Knowledge Forum on their views about knowledge building theory and practice. Participants were 24 teacher-education students who took a required course titled "Integrating Theory…
Teaching and Learning in the Mixed-Reality Science Classroom
ERIC Educational Resources Information Center
Tolentino, Lisa; Birchfield, David; Megowan-Romanowicz, Colleen; Johnson-Glenberg, Mina C.; Kelliher, Aisling; Martinez, Christopher
2009-01-01
As emerging technologies become increasingly inexpensive and robust, there is an exciting opportunity to move beyond general purpose computing platforms to realize a new generation of K-12 technology-based learning environments. Mixed-reality technologies integrate real world components with interactive digital media to offer new potential to…
ERIC Educational Resources Information Center
Chen, Yu-Lung; Pan, Pei-Rong; Sung, Yao-Ting; Chang, Kuo-En
2013-01-01
Computer simulation has significant potential as a supplementary tool for effective conceptual-change learning based on the integration of technology and appropriate instructional strategies. This study elucidates misconceptions in learning about diodes and constructs a conceptual-change learning system that incorporates…
The assessment of risk from dermal exposure for thousands of chemicals, such as consumer products, due to their potential to enter the environment as contaminants, is a daunting task. A strategy has been developed to integrate high-throughput technologies with toxicity, known as ...
Effects of Virtual Manipulatives with Different Approaches on Students' Knowledge of Slope
ERIC Educational Resources Information Center
Demir, Mustafa
2018-01-01
Virtual Manipulatives (VMs) are computer-based, dynamic, and visual representations of mathematical concepts that provide interactive learning environments to advance mathematics instruction (Moyer et al., 2002). Despite their broad use, little research has explored the integration of VMs into mathematics instruction (Moyer-Packenham & Westenskow, 2013).…
The (Campus) Empire Strikes Back
ERIC Educational Resources Information Center
Archibald, Fred
2008-01-01
When it comes to anti-malware protection, today's university IT departments have their work cut out for them. Network managers must walk the fine line between enabling a highly collaborative, non-restrictive environment, and ensuring the confidentiality, integrity, and availability of data and computing resources. This is no easy task, especially…
A Geospatial Information Grid Framework for Geological Survey.
Wu, Liang; Xue, Lei; Li, Chaoling; Lv, Xia; Chen, Zhanlong; Guo, Mingqiang; Xie, Zhong
2015-01-01
The use of digital information in geological fields is becoming very important. Thus, informatization in geological surveys should not stagnate as a result of the level of data accumulation. The integration and sharing of distributed, multi-source, heterogeneous geological information is an open problem in geological domains. Applications and services use geological spatial data with many features, including being cross-region and cross-domain and requiring real-time updating. As a result of these features, desktop and web-based geographic information systems (GISs) experience difficulties in meeting the demand for geological spatial information. To facilitate the real-time sharing of data and services in distributed environments, a GIS platform that is open, integrative, reconfigurable, reusable and elastic would represent an indispensable tool. The purpose of this paper is to develop a geological cloud-computing platform for integrating and sharing geological information based on a cloud architecture. Thus, the geological cloud-computing platform defines geological ontology semantics; designs a standard geological information framework and a standard resource integration model; builds a peer-to-peer node management mechanism; achieves the description, organization, discovery, computing and integration of the distributed resources; and provides the distributed spatial meta service, the spatial information catalog service, the multi-mode geological data service and the spatial data interoperation service. The geological survey information cloud-computing platform has been implemented, and based on the platform, some geological data services and geological processing services were developed. Furthermore, an iron mine resource forecast and an evaluation service is introduced in this paper.
NASA Astrophysics Data System (ADS)
Dong, J. Y.; Cheng, W.; Ma, C. P.; Tan, Y. T.; Xin, L. S.
2017-04-01
Residential public space is an important part of ecological residence design, and a proper physical environment in public space is of great significance to urban residents in China. Applying computer-aided design software to residential design can effectively avoid inconsistencies between design intent and actual conditions of use, as well as negative impacts on users caused by a poor architectural physics environment of buildings. The paper largely adopts a design method based on analyzing the architectural physics environment of residential public space. By analyzing and evaluating the various physical environments, a suitability assessment is obtained for residential public space, thereby guiding the space design.
Hybrid Cloud Computing Environment for EarthCube and Geoscience Community
NASA Astrophysics Data System (ADS)
Yang, C. P.; Qin, H.
2016-12-01
The NSF EarthCube Integration and Test Environment (ECITE) has built a hybrid cloud computing environment that provides cloud resources from private cloud environments using the cloud system software OpenStack and Eucalyptus, and also manages a public cloud, Amazon Web Services, allowing resources to be synchronized and burst between private and public clouds. On the ECITE hybrid cloud platform, the EarthCube and geoscience communities can deploy and manage applications using base or customized virtual machine images, analyze big datasets using virtual clusters, and monitor virtual resource usage on the cloud in real time. Currently, a number of EarthCube projects have deployed or started migrating to this platform, including CHORDS, BCube, CINERGI, OntoSoft, and other EarthCube building blocks. To accomplish a deployment or migration, the administrator of the ECITE hybrid cloud platform prepares the specific needs of each project (e.g., images, port numbers, usable cloud capacity) in advance, based on communications between ECITE and the participant projects; the scientists or IT technicians in those projects then launch one or more virtual machines, access the virtual machine(s) to set up the computing environment if need be, and migrate their code, documents, or data without having to deal with the heterogeneity in structure and operations among different cloud platforms.
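As a hedged sketch of the private-cloud side of such a platform, the openstacksdk snippet below launches a single virtual machine; the cloud name, image, flavor, and network UUID are placeholders, and this is not ECITE's actual deployment tooling.

```python
# Launch one VM on an OpenStack private cloud via openstacksdk;
# "ecite-private" must be a named cloud in the local clouds.yaml.
import openstack

conn = openstack.connect(cloud="ecite-private")
image = conn.compute.find_image("ubuntu-20.04")      # placeholder image
flavor = conn.compute.find_flavor("m1.medium")       # placeholder flavor
server = conn.compute.create_server(
    name="chords-node",                              # hypothetical project VM
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": "NETWORK-UUID"}])             # placeholder network
server = conn.compute.wait_for_server(server)
print(server.status)
```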
An optical brain computer interface for environmental control.
Ayaz, Hasan; Shewokis, Patricia A; Bunce, Scott; Onaral, Banu
2011-01-01
A brain computer interface (BCI) is a system that translates neurophysiological signals detected from the brain to supply input to a computer or to control a device. Volitional control of neural activity and its real-time detection through neuroimaging modalities are key constituents of BCI systems. The purpose of this study was to develop and test a new BCI design that utilizes intention-related cognitive activity within the dorsolateral prefrontal cortex using functional near infrared (fNIR) spectroscopy. fNIR is a noninvasive, safe, portable and affordable optical technique with which to monitor hemodynamic changes in the brain's cerebral cortex. Because of its portability and ease of use, fNIR is amenable to deployment in ecologically valid natural working environments. We integrated a control paradigm in a computerized 3D virtual environment to augment interactivity. Ten healthy participants volunteered for a two-day study in which they navigated a virtual environment with keyboard inputs, but were required to use the fNIR-BCI for interaction with virtual objects. Results showed that participants consistently utilized the fNIR-BCI with an overall success rate of 84% and volitionally increased their cerebral oxygenation level to trigger actions within the virtual environment.
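The volitional-control loop described above reduces to reading an oxygenation signal and triggering an action when it crosses a calibrated threshold. The following toy Python sketch shows that loop; the signal source, baseline, and threshold are hypothetical stand-ins for the real fNIR pipeline.

```python
# Toy fNIR-BCI control loop: trigger when oxygenation rises above a
# per-participant threshold. The sensor read is simulated.
import random

def read_oxygenation() -> float:
    """Placeholder for a real-time fNIR oxygenation sample (a.u.)."""
    return random.gauss(0.0, 0.5)

BASELINE, THRESHOLD = 0.0, 1.0   # calibrated per participant (assumed)

def bci_step() -> bool:
    """Return True when the user volitionally raises oxygenation."""
    return (read_oxygenation() - BASELINE) > THRESHOLD

for _ in range(100):
    if bci_step():
        print("trigger: interact with virtual object")
```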
NASA Astrophysics Data System (ADS)
Bessonov, O.; Silvestrov, P.
2017-02-01
This paper describes the general idea and the first implementation of the Interactive Information and Simulation System, an integrated environment that combines computational modules for modeling the aerodynamics and aerothermodynamics of re-entry space vehicles with a large collection of information materials on this topic. The internal organization and composition of the system are described and illustrated, and examples of the computational and informational output are presented. The system has a unified implementation for Windows and Linux operating systems and can be deployed on any modern high-performance personal computer.
Exploring the Integration of Computational Modeling in the ASU Modeling Curriculum
NASA Astrophysics Data System (ADS)
Schatz, Michael; Aiken, John; Burk, John; Caballero, Marcos; Douglas, Scott; Thoms, Brian
2012-03-01
We describe the implementation of computational modeling in a ninth-grade classroom in the context of the Arizona Modeling Instruction physics curriculum. Using a high-level programming environment (VPython), students develop computational models to predict the motion of objects under a variety of physical situations (e.g., constant net force), to simulate real-world phenomena (e.g., a car crash), and to visualize abstract quantities (e.g., acceleration). We discuss how VPython allows students to utilize all four structures that describe a model as given by the ASU Modeling Instruction curriculum. Implications for future work will also be discussed.
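For concreteness, here is a minimal sketch of the kind of VPython model students build, predicting motion under a constant net force with explicit Euler updates; the mass, force, and time step are illustrative choices, not the curriculum's own code.

```python
# Motion under a constant net force, visualized with VPython.
from vpython import sphere, vector, color, rate

ball = sphere(pos=vector(-5, 0, 0), radius=0.3, color=color.red,
              make_trail=True)
m = 1.0                        # mass, kg (assumed)
v = vector(0, 4, 0)            # initial velocity, m/s
F = vector(2, -9.8, 0)         # constant net force, N

dt = 0.01
while ball.pos.y > -10:
    rate(100)                  # cap the loop at 100 iterations/s
    v = v + (F / m) * dt       # update velocity from the net force
    ball.pos = ball.pos + v * dt
```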
Uncertainty Modeling of Pollutant Transport in Atmosphere and Aquatic Route Using Soft Computing
NASA Astrophysics Data System (ADS)
Datta, D.
2010-10-01
Hazardous radionuclides are released as pollutants into the atmospheric and aquatic environment (ATAQE) during the normal operation of nuclear power plants. Atmospheric and aquatic dispersion models are routinely used to assess the impact of radionuclide releases from any nuclear facility, or of hazardous chemicals from any chemical plant, on the ATAQE. The effect of exposure to the hazardous nuclides or chemicals is measured in terms of risk. Uncertainty modeling is an integral part of the risk assessment. The paper focuses on uncertainty modeling of pollutant transport in the atmospheric and aquatic environment using soft computing. Soft computing is adopted because of the lack of information on the parameters of the corresponding models. Soft computing in this domain basically means using fuzzy set theory to explore the uncertainty of the model parameters; this type of uncertainty is called epistemic uncertainty. Each uncertain input parameter of the model is described by a triangular membership function.
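A short sketch of that soft-computing ingredient follows: a triangular membership function and its alpha-cut intervals, which can be propagated through a model parameter; the parameter bounds are illustrative, not taken from the paper.

```python
# Triangular fuzzy number (a, m, b) and its alpha-cuts.
import numpy as np

def triangular(x, a, m, b):
    """Membership of x in the triangular fuzzy number (a, m, b)."""
    return np.maximum(np.minimum((x - a) / (m - a), (b - x) / (b - m)), 0.0)

def alpha_cut(alpha, a, m, b):
    """Interval of parameter values with membership >= alpha."""
    return a + alpha * (m - a), b - alpha * (b - m)

# Hypothetical uncertain dispersion coefficient: most likely 0.3,
# bounded below by 0.1 and above by 0.6.
for alpha in (0.0, 0.5, 1.0):
    lo, hi = alpha_cut(alpha, 0.1, 0.3, 0.6)
    print(f"alpha={alpha:.1f}: [{lo:.2f}, {hi:.2f}]")
```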
Parallel computing in genomic research: advances and applications
Ocaña, Kary; de Oliveira, Daniel
2015-01-01
Today’s genomic experiments have to process the so-called “biological big data” that is now reaching the size of terabytes and petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analysis of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article presents a systematic review of the literature surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that can be considered by scientists when running their genomic experiments to benefit from parallelism techniques and HPC capabilities. PMID:26604801
Modelling parallel programs and multiprocessor architectures with AXE
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Fineman, Charles E.
1991-01-01
AXE, An Experimental Environment for Parallel Systems, was designed to model and simulate parallel systems at the process level. It provides an integrated environment for specifying computation models, multiprocessor architectures, data collection, and performance visualization. AXE is being used at NASA-Ames for developing resource management strategies, parallel problem formulation, multiprocessor architectures, and operating system issues related to the High Performance Computing and Communications Program. AXE's simple, structured user interface enables the user to model parallel programs and machines precisely and efficiently. Its quick turn-around time keeps the user interested and productive. AXE models multicomputers. The user may easily modify various architectural parameters, including the number of sites, connection topologies, and overhead for operating system activities. Parallel computations in AXE are represented as collections of autonomous computing objects known as players; their use and behavior are described, and a toy rendering of the idea is sketched below. Performance data of the multiprocessor model can be observed on a color screen, including CPU and message routing bottlenecks and the dynamic status of the software.
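The sketch below is a toy illustration of process-level modeling, assuming nothing about AXE's real interface: two "players" are autonomous message-reacting objects, implemented here as Python threads with queues.

```python
# Two autonomous "players" exchanging messages through queues.
import queue, threading

def player(name, inbox, outbox, rounds=3):
    """An autonomous computing object that reacts to incoming messages."""
    for _ in range(rounds):
        msg = inbox.get()
        print(f"{name} received {msg!r}")
        outbox.put(f"ack from {name}")

a_in, b_in = queue.Queue(), queue.Queue()
t = threading.Thread(target=player, args=("site-B", b_in, a_in))
t.start()

for i in range(3):
    b_in.put(f"work item {i}")   # site-A sends work to site-B
    print("site-A got", a_in.get())
t.join()
```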
NASA Astrophysics Data System (ADS)
Neves, Rui Gomes; Teodoro, Vítor Duarte
2012-09-01
A teaching approach aiming at an epistemologically balanced integration of computational modelling in science and mathematics education is presented. The approach is based on interactive engagement learning activities built around computational modelling experiments that span the range of different kinds of modelling from explorative to expressive modelling. The activities are designed to make a progressive introduction to scientific computation without requiring prior development of a working knowledge of programming, generate and foster the resolution of cognitive conflicts in the understanding of scientific and mathematical concepts and promote performative competency in the manipulation of different and complementary representations of mathematical models. The activities are supported by interactive PDF documents which explain the fundamental concepts, methods and reasoning processes using text, images and embedded movies, and include free space for multimedia enriched student modelling reports and teacher feedback. To illustrate, an example from physics implemented in the Modellus environment and tested in undergraduate university general physics and biophysics courses is discussed.
Tavaxy: Integrating Taverna and Galaxy workflows with cloud computing support
2012-01-01
Background Over the past decade the workflow system paradigm has evolved as an efficient and user-friendly approach for developing complex bioinformatics applications. Two popular workflow systems that have gained acceptance by the bioinformatics community are Taverna and Galaxy. Each system has a large user-base and supports an ever-growing repository of application workflows. However, workflows developed for one system cannot be imported and executed easily on the other. The lack of interoperability is due to differences in the models of computation, workflow languages, and architectures of both systems. This lack of interoperability limits sharing of workflows between the user communities and leads to duplication of development efforts. Results In this paper, we present Tavaxy, a stand-alone system for creating and executing workflows based on using an extensible set of re-usable workflow patterns. Tavaxy offers a set of new features that simplify and enhance the development of sequence analysis applications: It allows the integration of existing Taverna and Galaxy workflows in a single environment, and supports the use of cloud computing capabilities. The integration of existing Taverna and Galaxy workflows is supported seamlessly at both run-time and design-time levels, based on the concepts of hierarchical workflows and workflow patterns. The use of cloud computing in Tavaxy is flexible, where the users can either instantiate the whole system on the cloud, or delegate the execution of certain sub-workflows to the cloud infrastructure. Conclusions Tavaxy reduces the workflow development cycle by introducing the use of workflow patterns to simplify workflow creation. It enables the re-use and integration of existing (sub-) workflows from Taverna and Galaxy, and allows the creation of hybrid workflows. Its additional features exploit recent advances in high performance cloud computing to cope with the increasing data size and complexity of analysis. The system can be accessed either through a cloud-enabled web-interface or downloaded and installed to run within the user's local environment. All resources related to Tavaxy are available at http://www.tavaxy.org. PMID:22559942
Technology transfer of operator-in-the-loop simulation
NASA Technical Reports Server (NTRS)
Yae, K. H.; Lin, H. C.; Lin, T. C.; Frisch, H. P.
1994-01-01
The technology developed for operator-in-the-loop simulation in space teleoperation has been applied to Caterpillar's backhoe, wheel loader, and off-highway truck. On an SGI workstation, the simulation integrates computer modeling of kinematics and dynamics, real-time computation and visualization, and an interface with the operator through the operator's console. The console is interfaced with the workstation through an IBM-PC in which the operator's commands are digitized and sent through an RS-232 serial port. The simulation gave visual feedback adequate for the operator in the loop, with the camera's field of vision projected on a large screen in multiple view windows. The view control can emulate either stationary or moving cameras. This simulator created an innovative engineering design environment by integrating computer software and hardware with the human operator's interactions. The backhoe simulation has been adopted by Caterpillar in building a virtual reality tool for backhoe design.
Development of Mobile Electronic Health Records Application in a Secondary General Hospital in Korea
Park, Min Ah; Hong, Eunseok; Kim, Sunhyu; Ahn, Ryeok; Hong, Jungseok; Song, Seungyeol; Kim, Tak; Kim, Jeongkeun; Yeo, Seongwoon
2013-01-01
Objectives: The recent evolution of mobile devices has opened new possibilities of providing strongly integrated mobile services in healthcare. The objective of this paper is to describe the decision driver, development, and implementation of an integrated mobile Electronic Health Record (EHR) application at Ulsan University Hospital. This application helps healthcare providers view patients' medical records and information without a stationary computer workstation. Methods: We developed an integrated mobile application prototype that aimed to improve the mobility and usability of healthcare providers during their daily medical activities. The Android and iOS platforms were used to create the mobile EHR application. The first working version was completed in 5 months and required 1,080 development hours. Results: The mobile EHR application provides patient vital signs, patient data, text communication, and integrated EHR. The application allows our healthcare providers to know the status of patients within and outside the hospital environment. The application provides a consistent user environment on several compatible Android and iOS devices. A group of 10 beta testers has consistently used and maintained our copy of the application, suggesting user acceptance. Conclusions: We are developing the integrated mobile EHR application with the goals of implementing an environment that is user-friendly, implementing a patient-centered system, and increasing the hospital's competitiveness. PMID:24523996
Choi, Wookjin; Park, Min Ah; Hong, Eunseok; Kim, Sunhyu; Ahn, Ryeok; Hong, Jungseok; Song, Seungyeol; Kim, Tak; Kim, Jeongkeun; Yeo, Seongwoon
2013-12-01
The recent evolution of mobile devices has opened new possibilities of providing strongly integrated mobile services in healthcare. The objective of this paper is to describe the decision driver, development, and implementation of an integrated mobile Electronic Health Record (EHR) application at Ulsan University Hospital. This application helps healthcare providers view patients' medical records and information without a stationary computer workstation. We developed an integrated mobile application prototype that aimed to improve the mobility and usability of healthcare providers during their daily medical activities. The Android and iOS platform was used to create the mobile EHR application. The first working version was completed in 5 months and required 1,080 development hours. The mobile EHR application provides patient vital signs, patient data, text communication, and integrated EHR. The application allows our healthcare providers to know the status of patients within and outside the hospital environment. The application provides a consistent user environment on several compatible Android and iOS devices. A group of 10 beta testers has consistently used and maintained our copy of the application, suggesting user acceptance. We are developing the integrated mobile EHR application with the goals of implementing an environment that is user-friendly, implementing a patient-centered system, and increasing the hospital's competitiveness.
Protecting genomic data analytics in the cloud: state of the art and opportunities.
Tang, Haixu; Jiang, Xiaoqian; Wang, Xiaofeng; Wang, Shuang; Sofia, Heidi; Fox, Dov; Lauter, Kristin; Malin, Bradley; Telenti, Amalio; Xiong, Li; Ohno-Machado, Lucila
2016-10-13
The outsourcing of genomic data into public cloud computing settings raises concerns over privacy and security. Significant advancements in secure computation methods have emerged over the past several years, but such techniques need to be rigorously evaluated for their ability to support the analysis of human genomic data in an efficient and cost-effective manner. With respect to public cloud environments, there are concerns about the inadvertent exposure of human genomic data to unauthorized users. In analyses involving multiple institutions, there is additional concern about data being used beyond the agreed research scope and being processed in untrusted computational environments, which may not satisfy institutional policies. To systematically investigate these issues, the NIH-funded National Center for Biomedical Computing iDASH (integrating Data for Analysis, 'anonymization' and SHaring) hosted the second Critical Assessment of Data Privacy and Protection competition to assess the capacity of cryptographic technologies for protecting computation over human genomes in the cloud and promoting cross-institutional collaboration. Data scientists were challenged to design and engineer practical algorithms for secure outsourcing of genome computation tasks in working software, whereby analyses are performed only on encrypted data. They were also challenged to develop approaches to enable secure collaboration on data from genomic studies generated by multiple organizations (e.g., medical centers) to jointly compute aggregate statistics without sharing individual-level records. The results of the competition indicated that secure computation techniques can enable comparative analysis of human genomes, but greater efficiency (in terms of compute time and memory utilization) is needed before they are sufficiently practical for real-world environments.
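The cross-institutional aggregate-statistics challenge can be illustrated with the simplest secure-computation primitive, additive secret sharing; the toy Python sketch below is illustrative only and far simpler than the competition entries.

```python
# Two centers jointly compute a sum without revealing their inputs,
# via additive secret sharing over a prime field.
import random

P = 2**61 - 1  # prime modulus for the shares

def share(value, n=3):
    """Split an integer into n additive shares modulo P."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % P)
    return parts

# Two hypothetical medical centers each share a local allele count.
center_a, center_b = 42, 57
shares = [share(center_a), share(center_b)]

# Each of three compute parties sums only the shares it received...
partial = [sum(s[i] for s in shares) % P for i in range(3)]
# ...and only the recombined total is revealed.
print(sum(partial) % P)  # 99, with no party having seen 42 or 57
```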
NASA Astrophysics Data System (ADS)
Watari, S.; Morikawa, Y.; Yamamoto, K.; Inoue, S.; Tsubouchi, K.; Fukazawa, K.; Kimura, E.; Tatebe, O.; Kato, H.; Shimojo, S.; Murata, K. T.
2010-12-01
In the Solar-Terrestrial Physics (STP) field, the spatio-temporal resolution of computer simulations is getting higher and higher because of the tremendous advancement of supercomputers. A further advance is Grid computing, which integrates distributed computational resources to provide scalable computing power. In simulation research, it is effective for researchers to design their own physical models, perform calculations on a supercomputer, and analyze and visualize the results with familiar methods. A supercomputer, however, is far from an analysis and visualization environment. In general, a researcher analyzes and visualizes on a workstation (WS) managed at hand, because installing and operating software on a WS is easy. It is therefore necessary to copy data from the supercomputer to the WS manually, and in practice the time needed to transfer the data through a long-delay network disturbs high-accuracy simulations. In terms of usefulness, it is important to integrate a supercomputer and an analysis and visualization environment seamlessly with a researcher's familiar methods. NICT has been developing a cloud computing environment (the NICT Space Weather Cloud). In the NICT Space Weather Cloud, disk servers are located near its supercomputer and the WSs for data analysis and visualization. They are connected to JGN2plus, a high-speed network for research and development. A distributed virtual high-capacity storage system is also constructed with Grid Datafarm (Gfarm v2). Huge data sets output from the supercomputer are transferred to the virtual storage through JGN2plus. A researcher can thus concentrate on research using familiar methods, without regard to the distance between the supercomputer and the analysis and visualization environment. At present, a total of 16 disk servers are set up at NICT headquarters (Koganei, Tokyo), the JGN2plus NOC (Otemachi, Tokyo), the Okinawa Subtropical Environment Remote-Sensing Center, and the Cybermedia Center, Osaka University. They are connected over JGN2plus and constitute a 1 PB (physical size) virtual storage system under Gfarm v2. These disk servers are connected with the supercomputers of NICT and Osaka University, and a system that automatically transfers data output from the supercomputers to the virtual storage has been built. The measured transfer rate is about 50 GB/hour, a performance estimated to be reasonable for certain simulations and analyses for the reconstruction of coronal magnetic fields. This research is regarded as an experiment with the system, and verification of its practicality is advanced at the same time. Herein we introduce an overview of the space weather cloud system we have developed so far and demonstrate several scientific results obtained with it. We also introduce several web applications of the cloud, offered as a service of the space weather cloud named "e-SpaceWeather" (e-SW). The e-SW provides a variety of online space weather services from many aspects.
NASA Technical Reports Server (NTRS)
Strutzenberg, L. L.; Dougherty, N. S.; Liever, P. A.; West, J. S.; Smith, S. D.
2007-01-01
This paper details advances being made in the development of Reynolds-Averaged Navier-Stokes numerical simulation tools, models, and methods for the integrated Space Shuttle Vehicle at launch. The conceptual model and modeling approach described includes the development of multiple computational models to appropriately analyze the potential debris transport for critical debris sources at Lift-Off. The conceptual model described herein involves the integration of propulsion analysis for the nozzle/plume flow with the overall 3D vehicle flowfield at Lift-Off. Debris Transport Analyses are being performed using the Shuttle Lift-Off models to assess the risk to the vehicle from Lift-Off debris and appropriately prioritized mitigation of potential debris sources to continue to reduce vehicle risk. These integrated simulations are being used to evaluate plume-induced debris environments where the multi-plume interactions with the launch facility can potentially accelerate debris particles toward the vehicle.
Neural mechanisms underlying human consensus decision-making
Suzuki, Shinsuke; Adachi, Ryo; Dunne, Simon; Bossaerts, Peter; O'Doherty, John P.
2015-01-01
Consensus building in a group is a hallmark of animal societies, yet little is known about its underlying computational and neural mechanisms. Here, we applied a novel computational framework to behavioral and fMRI data from human participants performing a consensus decision-making task with up to five other participants. We found that participants reached consensus decisions through integrating their own preferences with information about the majority of group-members’ prior choices, as well as inferences about how much each option was stuck to by the other people. These distinct decision variables were separately encoded in distinct brain areas: the ventromedial prefrontal cortex, posterior superior temporal sulcus/temporoparietal junction and intraparietal sulcus, and were integrated in the dorsal anterior cingulate cortex. Our findings provide support for a theoretical account in which collective decisions are made through integrating multiple types of inference about oneself, others and environments, processed in distinct brain modules. PMID:25864634
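As a toy rendering of that integration of decision variables, the sketch below combines one's own preference with the observed majority choice through a logistic rule; the weights and values are hypothetical, and this is not the authors' model.

```python
# Toy choice model: own preference plus social evidence, passed
# through a logistic function. All parameters are illustrative.
import math

def choice_prob(own_value, frac_others_choosing, w_self=1.0, w_group=1.5):
    """P(choose option) from own preference plus others' prior choices."""
    dv = w_self * own_value + w_group * (2 * frac_others_choosing - 1)
    return 1.0 / (1.0 + math.exp(-dv))

# Weak personal preference, but 4 of 5 others already chose the option.
print(choice_prob(own_value=0.2, frac_others_choosing=0.8))
```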
Adaptation disrupts motion integration in the primate dorsal stream
Patterson, Carlyn A.; Wissig, Stephanie C.; Kohn, Adam
2014-01-01
Sensory systems adjust continuously to the environment. The effects of recent sensory experience—or adaptation—are typically assayed by recording in a relevant subcortical or cortical network. However, adaptation effects cannot be localized to a single, local network. Adjustments in one circuit or area will alter the input provided to others, with unclear consequences for computations implemented in the downstream circuit. Here we show that prolonged adaptation with drifting gratings, which alters responses in the early visual system, impedes the ability of area MT neurons to integrate motion signals in plaid stimuli. Perceptual experiments reveal a corresponding loss of plaid coherence. A simple computational model shows how the altered representation of motion signals in early cortex can derail integration in MT. Our results suggest that the effects of adaptation cascade through the visual system, derailing the downstream representation of distinct stimulus attributes. PMID:24507198
NASA Technical Reports Server (NTRS)
Mayer, Richard
1988-01-01
The integrated development support environment (IDSE) is a suite of integrated software tools that provide intelligent support for information modeling. These tools assist in function, information, and process modeling; additional tools assist in gathering and analyzing the information to be modeled. This is a user's guide to the application of the IDSE. Sections covering the requirements and design of each of the tools are presented. There are currently three integrated computer-aided manufacturing definition (IDEF) modeling methodologies: IDEF0, IDEF1, and IDEF2. Four appendices describe hardware and software requirements, installation procedures, and basic hardware usage.
NASA Technical Reports Server (NTRS)
Kerr, Andrew W.
1989-01-01
Programs related to rotorcraft aeromechanics and man-machine integration are discussed which will support advanced army rotorcraft design. In aeromechanics, recent advances in computational fluid dynamics will be used to characterize the complex unsteady flowfields of rotorcraft, and a second-generation comprehensive helicopter analysis system will be used along with models of aerodynamics, engines, and control systems to study the structural dynamics of rotor/body configurations. The man-machine integration program includes the development of advanced cockpit design technology and the evaluation of cockpit and mission equipment concepts in a real-time full-combat environment.
Adapting line integral convolution for fabricating artistic virtual environment
NASA Astrophysics Data System (ADS)
Lee, Jiunn-Shyan; Wang, Chung-Ming
2003-04-01
Vector fields occur extensively not only in scientific applications but also in treasured art such as sculptures and paintings. Artists depict our natural environment stressing valued directional features besides color and shape information. Line integral convolution (LIC), developed for imaging vector fields in scientific visualization, has the potential to produce directional images. In this paper we present several techniques that explore LIC to generate impressionistic images forming an artistic virtual environment. We take advantage of the directional information given by a photograph and incorporate several additions to the work, including a non-photorealistic shading technique and statistical detail control. In particular, the non-photorealistic shading technique blends cool and warm colors into the photograph to imitate artists' painting conventions. In addition, we adopt a statistical technique that controls the integration length according to image variance in order to preserve details. Furthermore, we propose a method for generating a series of mip-maps that reveal constant stroke width under multi-resolution viewing and achieve frame coherence in an interactive walkthrough system. The experimental results show the merits of satisfying emulation and efficient computation; consequently, the proposed technique successfully fabricates a wide category of non-photorealistic rendering (NPR) applications such as interactive virtual environments with artistic perception.
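A minimal numpy sketch of the basic LIC operation, convolving white noise along streamlines of a vector field, follows; the field, kernel length, and resolution are illustrative, and the paper's artistic extensions (shading, variance-controlled length, mip-maps) are not included.

```python
# Basic line integral convolution: average a noise texture along the
# streamline through each pixel of a 2D unit vector field.
import numpy as np

def lic(field, noise, length=15):
    """field: (H, W, 2) unit vectors; noise: (H, W) white noise."""
    h, w = noise.shape
    out = np.zeros_like(noise)
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            # Trace forward and backward from (x, y); the seed pixel is
            # sampled in both directions, which is fine for a sketch.
            for sign in (+1.0, -1.0):
                px, py = float(x), float(y)
                for _ in range(length):
                    ix, iy = int(px), int(py)
                    if not (0 <= ix < w and 0 <= iy < h):
                        break
                    total += noise[iy, ix]
                    count += 1
                    vx, vy = field[iy, ix]
                    px += sign * vx
                    py += sign * vy
            out[y, x] = total / max(count, 1)
    return out

# Example: a circular flow field convolved with white noise.
h = w = 64
ys, xs = np.mgrid[0:h, 0:w]
u, v = -(ys - h / 2), (xs - w / 2)
mag = np.hypot(u, v) + 1e-9
field = np.dstack([u / mag, v / mag])
image = lic(field, np.random.rand(h, w))
```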
A Prototyping Effort for the Integrated Spacecraft Analysis System
NASA Technical Reports Server (NTRS)
Wong, Raymond; Tung, Yu-Wen; Maldague, Pierre
2011-01-01
Computer modeling and simulation have recently become an essential technique for predicting and validating spacecraft performance. However, most computer models only examine spacecraft subsystems, and the independent nature of the models creates integration problems, which limits the possibility of simulating a spacecraft as an integrated unit despite a desire for this type of analysis. A new project called Integrated Spacecraft Analysis was proposed to serve as a framework for an integrated simulation environment. The project is still in its infancy, but a software prototype would help future developers assess design issues. The prototype explores a service oriented design paradigm that theoretically allows programs written in different languages to communicate with one another. It includes creating a uniform interface to the SPICE libraries such that different in-house tools like APGEN or SEQGEN can exchange information with it without much change. Service orientation may result in a slower system compared to a single application, and more research needs to be done on the different available technologies, but a service oriented approach could increase long term maintainability and extensibility.
Sign: large-scale gene network estimation environment for high performance computing.
Tamada, Yoshinori; Shimamura, Teppei; Yamaguchi, Rui; Imoto, Seiya; Nagasaki, Masao; Miyano, Satoru
2011-01-01
Our research group is currently developing software for estimating large-scale gene networks from gene expression data. The software, called SiGN, is specifically designed for the Japanese flagship supercomputer "K computer," which is planned to achieve 10 petaflops in 2012, and for other high performance computing environments including the Human Genome Center (HGC) supercomputer system. SiGN is a collection of gene network estimation software with three different sub-programs: SiGN-BN, SiGN-SSM and SiGN-L1. Across these three programs, five different models are available: static and dynamic nonparametric Bayesian networks, state space models, graphical Gaussian models, and vector autoregressive models. All of these models require a huge amount of computational resources for estimating large-scale gene networks, and the programs are therefore designed to exploit the speed of 10 petaflops. The software will be available freely for "K computer" and HGC supercomputer system users. The estimated networks can be viewed and analyzed by Cell Illustrator Online and SBiP (Systems Biology integrative Pipeline). The software project web site is available at http://sign.hgc.jp/.
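Of the model classes listed, the vector autoregressive model is the easiest to sketch: each gene's expression at time t is regressed on all genes at time t-1, and large coefficients suggest directed edges. A minimal NumPy illustration with synthetic data (the threshold is an arbitrary assumption; SiGN's actual estimators are far more sophisticated):

```python
import numpy as np

def var1_network(X, threshold=0.3):
    """Estimate a VAR(1) model X[t] = A @ X[t-1] + noise by least
    squares; return A and a thresholded adjacency matrix."""
    past, future = X[:-1], X[1:]             # (T-1, genes) each
    A, *_ = np.linalg.lstsq(past, future, rcond=None)
    A = A.T                                  # A[i, j]: edge gene j -> gene i
    return A, (np.abs(A) > threshold)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))           # 200 time points, 10 genes
A, edges = var1_network(X)
```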
Computer interfaces for the visually impaired
NASA Technical Reports Server (NTRS)
Higgins, Gerry
1991-01-01
Information access via computer terminals extends to blind and low vision persons employed in many technical and nontechnical disciplines. Two aspects of providing computer technology for persons with a vision-related handicap are detailed. First, research was conducted into the most effective means of integrating existing adaptive technologies into information systems, with the goal of combining off-the-shelf products and adaptive equipment into cohesive, integrated information processing systems. Details are included that describe the type of functionality required in software to facilitate its incorporation into a speech and/or braille system. The second aspect is research into providing audible and tactile access to graphics-based interfaces. Parameters are included for the design and development of the Mercator Project, which will develop a prototype system for audible access to graphics-based interfaces. The system is being built within the public domain architecture of X windows to show that it is possible to provide access to text-based applications within a graphical environment. This information will be valuable to suppliers of ADP equipment, since new legislation requires manufacturers to provide electronic access to the visually impaired.
EC FP6 Enviro-RISKS project outcomes in area of Earth and Space Science Informatics applications
NASA Astrophysics Data System (ADS)
Gordov, E. P.; Zakarin, E. A.
2009-04-01
The community now acknowledges that properly understanding the dynamics of a regional environment, and assessing it on the basis of monitoring and modeling, requires stronger involvement of information-computational technologies (ICT), which should lead to the development of an information-computational infrastructure as an inherent part of such investigations. This paper is based on the Report & Recommendations (www.dmi.dk/dmi/sr08-05-4.pdf) of the Enviro-RISKS (Man-induced Environmental Risks: Monitoring, Management and Remediation of Man-made Changes in Siberia) project thematic expert group for Information Systems, Integration and Synthesis, and presents the results of the project partners' work on developing and using information technologies for the environmental sciences. Both web-based and GIS-based information technologies are described, and a path toward their integration is outlined. In particular, the Enviro-RISKS web portal developed during the project and its climate site (http://climate.risks.scert.ru/) are described in detail; the site provides access to an interactive web system for regional climate assessment based on standard meteorological data archives and is a key element of the information-computational infrastructure of the Siberia Integrated Regional Study (SIRS). A GIS-based system for monitoring and modeling the transport and transformation of air and water pollution is also described. The latter is useful for practical applications of geoinformation modeling, in which the relevant mathematical models are embedded in a GIS and all modeling and analysis phases are accomplished in the information sphere, based on real data including satellite observations. Current efforts aim to combine GIS-based environmental applications with web accessibility, computing power, and data interoperability, so as to fully exploit the potential of web-based technologies. In particular, development of a regional web portal following approaches suggested by the Open Geospatial Consortium has recently started. The current state of the information-computational infrastructure in the target region is a significant step toward a distributed collaborative information-computational environment supporting multidisciplinary investigations of the regional environment, especially those requiring meteorology, atmospheric pollution transport, and climate modeling. The cooperative links established during the project, new partner initiatives, and the expertise gained give us hope that this infrastructure will soon contribute significantly to understanding regional environmental processes in their relationship with global change. In particular, it will serve as the 'underlying mechanics' of the research work, leaving earth scientists free to concentrate on their investigations while providing an environment that makes research results available and understandable to everyone. In addition to the core FP6 Enviro-RISKS project (INCO-CT-2004-013427), this activity was partially supported by SB RAS Integration Project 34, SB RAS Basic Program Project 4.5.2.2, and APN Project CBA2007-08NSY. Valuable input into the expert group's work from Profs. V. Lykosov and A. Starchenko and Drs. D. Belikov, M. Korets, S. Kostrykin, B. Mirkarimova, I. Okladnikov, A. Titov and A. Tridvornov is acknowledged.
A Cloud Computing Based Patient Centric Medical Information System
NASA Astrophysics Data System (ADS)
Agarwal, Ankur; Henehan, Nathan; Somashekarappa, Vivek; Pandya, A. S.; Kalva, Hari; Furht, Borko
This chapter discusses an emerging concept of a cloud computing based Patient Centric Medical Information System framework that will allow various authorized users to securely access patient records from various Care Delivery Organizations (CDOs) such as hospitals, urgent care centers, doctors, laboratories, and imaging centers, from any location. Such a system must seamlessly integrate all patient records, including images such as CT scans and MRIs, which can easily be accessed from any location and reviewed by any authorized user. In such a scenario the storage and transmission of medical records will have to be conducted in a totally secure and safe environment with a very high standard of data integrity, protecting patient privacy and complying with all Health Insurance Portability and Accountability Act (HIPAA) regulations.
MRIVIEW: An interactive computational tool for investigation of brain structure and function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ranken, D.; George, J.
MRIVIEW is a software system which uses image processing and visualization to provide neuroscience researchers with an integrated environment for combining functional and anatomical information. Key features of the software include semi-automated segmentation of volumetric head data and an interactive coordinate reconciliation method which utilizes surface visualization. The current system is a precursor to a computational brain atlas. We describe features this atlas will incorporate, including methods under development for visualizing brain functional data obtained from several different research modalities.
An Integrative Architecture for a Sensor-Supported Trust Management System
Trček, Denis
2012-01-01
Trust plays a key role not only in e-worlds and emerging pervasive computing environments, but has also played one for millennia in human societies. Trust management solutions, which have now been around for some fifteen years, were primarily developed for the above mentioned cyber environments and are typically focused on artificial agents, sensors, etc. This paper, however, presents extensions of a new methodology, together with an architecture, for trust management support that is focused on humans and human-like agents. In this methodology and architecture, sensors play a crucial role. The architecture presents an already deployable tool for multi- and interdisciplinary research in various areas where humans are involved. It provides new ways to gain insight into the dynamics and evolution of trust structures, not only in pervasive computing environments, but also in other important areas like management and decision making support. PMID:23112628
Using PVM to host CLIPS in distributed environments
NASA Technical Reports Server (NTRS)
Myers, Leonard; Pohl, Kym
1994-01-01
It is relatively easy to enhance CLIPS (C Language Integrated Production System) to support multiple expert systems running in a distributed environment with heterogeneous machines. The task is minimized by using the PVM (Parallel Virtual Machine) code from Oak Ridge Labs to provide the distributed utility. PVM is a library of C and FORTRAN subprograms that supports distributed computing on many different UNIX platforms. A PVM daemon is easily installed on each CPU that enters the virtual machine environment. Any user with rsh or rexec access to a machine can use the one PVM daemon to obtain a generous set of distributed facilities. The ready availability of both CLIPS and PVM makes the combination of software particularly attractive for budget-conscious experimentation with heterogeneous distributed computing using multiple CLIPS executables. This paper presents a design that is sufficient to provide essential message passing functions in CLIPS and enable the full range of PVM facilities.
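The master/worker message-passing pattern described here can be illustrated without PVM itself; the sketch below uses Python's multiprocessing as a stand-in for the pvm_send()/pvm_recv() style of exchange between a master and several CLIPS-like workers (the fact strings and worker logic are invented for illustration):

```python
from multiprocessing import Process, Queue

def expert(inbox, outbox):
    """Stand-in for one CLIPS executable: receive a fact, run the
    rules on it, and send any conclusion back to the master."""
    while (fact := inbox.get()) is not None:
        # a real worker would assert the fact into CLIPS and run()
        outbox.put(f"(conclusion-from {fact})")

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    workers = [Process(target=expert, args=(inbox, outbox)) for _ in range(3)]
    for w in workers:
        w.start()
    for fact in ["(sensor-1 hot)", "(sensor-2 cold)", "(valve-3 open)"]:
        inbox.put(fact)                      # analogous to pvm_send()
    for _ in range(3):
        print(outbox.get())                  # analogous to pvm_recv()
    for _ in workers:
        inbox.put(None)                      # one shutdown sentinel each
    for w in workers:
        w.join()
```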
BIM based virtual environment for fire emergency evacuation.
Wang, Bin; Li, Haijiang; Rezgui, Yacine; Bradley, Alex; Ong, Hoang N
2014-01-01
Recent building emergency management research has highlighted the need for the effective utilization of dynamically changing building information. BIM (building information modelling) can play a significant role in this process due to its comprehensive and standardized data format and integrated process. This paper introduces a BIM based virtual environment supported by virtual reality (VR) and a serious game engine to address several key issues in building emergency management, for example, timely two-way information updating and better emergency awareness training. The focus of this paper lies on how to utilize BIM as a comprehensive building information provider, working with virtual reality technologies, to build an adaptable immersive serious game environment that provides real-time fire evacuation guidance. The innovation lies in the seamless integration between BIM and a serious game based VR environment, aimed at practical problem solving by leveraging state-of-the-art computing technologies. The system has been tested for its robustness and functionality against the development requirements, and the results showed promising potential to support more effective emergency management.
A progress report on a NASA research program for embedded computer systems software
NASA Technical Reports Server (NTRS)
Foudriat, E. C.; Senn, E. H.; Will, R. W.; Straeter, T. A.
1979-01-01
The paper presents the results of the second stage of the Multipurpose User-oriented Software Technology (MUST) program. Four primary areas of activities are discussed: programming environment, HAL/S higher-order programming language support, the Integrated Verification and Testing System (IVTS), and distributed system language research. The software development environment is provided by the interactive software invocation system. The higher-order programming language (HOL) support chosen for consideration is HAL/S mainly because at the time it was one of the few HOLs with flight computer experience and it is the language used on the Shuttle program. The overall purpose of IVTS is to provide a 'user-friendly' software testing system which is highly modular, user controlled, and cooperative in nature.
Third CLIPS Conference Proceedings, volume 2
NASA Technical Reports Server (NTRS)
Riley, Gary (Editor)
1994-01-01
Expert systems are computer programs which emulate human expertise in well defined problem domains. The C Language Integrated Production System (CLIPS) is an expert system building tool, developed at the Johnson Space Center, which provides a complete environment for the development and delivery of rule and/or object based expert systems. CLIPS was specifically designed to provide a low cost option for developing and deploying expert system applications across a wide range of hardware platforms. The development of CLIPS has helped to improve the ability to deliver expert system technology throughout the public and private sectors for a wide range of applications and diverse computing environments. The Third Conference on CLIPS provided a forum for CLIPS users to present and discuss papers relating to CLIPS applications, uses, and extensions.
Models and techniques for evaluating the effectiveness of aircraft computing systems
NASA Technical Reports Server (NTRS)
Meyer, J. F.
1982-01-01
Models, measures, and techniques for evaluating the effectiveness of aircraft computing systems were developed. By "effectiveness" in this context we mean the extent to which the user, i.e., a commercial air carrier, may expect to benefit from the computational tasks accomplished by a computing system in the environment of an advanced commercial aircraft. Thus, the concept of effectiveness involves aspects of system performance, reliability, and worth (value, benefit) which are appropriately integrated in the process of evaluating system effectiveness. Specifically, the primary objectives are: the development of system models that provide a basis for the formulation and evaluation of aircraft computer system effectiveness, the formulation of quantitative measures of system effectiveness, and the development of analytic and simulation techniques for evaluating the effectiveness of a proposed or existing aircraft computer.
NASA Technical Reports Server (NTRS)
Molthan, Andrew; Case, Jonathan; Venner, Jason; Moreno-Madrinan, Max J.; Delgado, Francisco
2012-01-01
Two projects at NASA Marshall Space Flight Center have collaborated to develop a high resolution weather forecast model for Mesoamerica: the NASA Short-term Prediction Research and Transition (SPoRT) Center, which integrates unique NASA satellite and weather forecast modeling capabilities into the operational weather forecasting community; and NASA's SERVIR Program, which integrates satellite observations, ground-based data, and forecast models to improve disaster response in Central America, the Caribbean, Africa, and the Himalayas.
Hanus, Josef; Nosek, Tomas; Zahora, Jiri; Bezrouk, Ales; Masin, Vladimir
2013-01-01
We designed and evaluated an innovative computer-aided-learning environment based on the on-line integration of computer controlled medical diagnostic devices and a medical information system for use in the preclinical medical physics education of medical students. Our learning system simulates the actual clinical environment in a hospital or primary care unit. It uses a commercial medical information system for on-line storage and processing of clinical type data acquired during physics laboratory classes. Every student adopts two roles, the role of 'patient' and the role of 'physician'. As a 'physician' the student operates the medical devices to clinically assess 'patient' colleagues and records all results in an electronic 'patient' record. We also introduced an innovative approach to the use of supportive education materials, based on the methods of adaptive e-learning. A survey of student feedback is included and statistically evaluated. The results from the student feedback confirm students' positive response to this novel implementation of medical physics and informatics in preclinical education. This approach not only significantly improves the learning of medical physics and informatics skills but has the added advantage of facilitating students' transition from preclinical to clinical subjects. Copyright © 2011 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
A Web-Based Monitoring System for Multidisciplinary Design Projects
NASA Technical Reports Server (NTRS)
Rogers, James L.; Salas, Andrea O.; Weston, Robert P.
1998-01-01
In today's competitive environment, both industry and government agencies are under pressure to reduce the time and cost of multidisciplinary design projects. New tools have been introduced to assist in this process by facilitating the integration of and communication among diverse disciplinary codes. One such tool, a framework for multidisciplinary computational environments, is defined as a hardware and software architecture that enables integration, execution, and communication among diverse disciplinary processes. An examination of current frameworks reveals weaknesses in various areas, such as sequencing, displaying, monitoring, and controlling the design process. The objective of this research is to explore how Web technology, integrated with an existing framework, can improve these areas of weakness. This paper describes a Web-based system that optimizes and controls the execution sequence of design processes and monitors the project status and results. The three-stage evolution of the system with increasingly complex problems demonstrates the feasibility of this approach.
A component-based software environment for visualizing large macromolecular assemblies.
Sanner, Michel F
2005-03-01
The interactive visualization of large biological assemblies poses a number of challenging problems, including the development of multiresolution representations and new interaction methods for navigating and analyzing these complex systems. An additional challenge is the development of flexible software environments that will facilitate the integration and interoperation of computational models and techniques from a wide variety of scientific disciplines. In this paper, we present a component-based software development strategy centered on the high-level, object-oriented, interpretive programming language: Python. We present several software components, discuss their integration, and describe some of their features that are relevant to the visualization of large molecular assemblies. Several examples are given to illustrate the interoperation of these software components and the integration of structural data from a variety of experimental sources. These examples illustrate how combining visual programming with component-based software development facilitates the rapid prototyping of novel visualization tools.
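The component-based strategy can be sketched abstractly: each component exposes one small interface and a mediator chains them at run time, so tools from different disciplines interoperate without knowing about each other. A schematic Python sketch (the component names and the shared-dict protocol are hypothetical, not the paper's actual components):

```python
from typing import Protocol

class Component(Protocol):
    def process(self, data: dict) -> dict: ...

class PDBReader:
    def process(self, data: dict) -> dict:
        data["atoms"] = [("CA", 1.0, 2.0, 3.0)]   # stub: parse a structure
        return data

class SurfaceBuilder:
    def process(self, data: dict) -> dict:
        data["surface"] = f"mesh({len(data['atoms'])} atoms)"  # stub
        return data

def run_pipeline(components: list[Component], data: dict) -> dict:
    """Mediator: components interoperate only through the shared dict."""
    for c in components:
        data = c.process(data)
    return data

result = run_pipeline([PDBReader(), SurfaceBuilder()], {})
print(result["surface"])
```

Because each piece satisfies the same small protocol, new tools can be prototyped and swapped in without touching the others, which is the rapid-prototyping benefit the abstract claims.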
Using a medical simulation center as an electronic health record usability laboratory
Landman, Adam B; Redden, Lisa; Neri, Pamela; Poole, Stephen; Horsky, Jan; Raja, Ali S; Pozner, Charles N; Schiff, Gordon; Poon, Eric G
2014-01-01
Usability testing is increasingly being recognized as a way to increase the usability and safety of health information technology (HIT). Medical simulation centers can serve as testing environments for HIT usability studies. We integrated the quality assurance version of our emergency department (ED) electronic health record (EHR) into our medical simulation center and piloted a clinical care scenario in which emergency medicine resident physicians evaluated a simulated ED patient and documented electronically using the ED EHR. Meticulous planning and close collaboration with expert simulation staff were important for designing test scenarios, pilot testing, and running the sessions. Similarly, working with information systems teams was important for integration of the EHR. Electronic tools are needed to facilitate entry of fictitious clinical results while the simulation scenario is unfolding. EHRs can be successfully integrated into existing simulation centers, which may provide realistic environments for usability testing, training, and evaluation of human–computer interactions. PMID:24249778
The IDEAS**2 computing environment
NASA Technical Reports Server (NTRS)
Racheli, Ugo
1990-01-01
This document presents block diagrams of the IDEAS**2 computing environment. IDEAS**2 is the computing environment selected for system engineering (design and analysis) by the Center for Space Construction (CSC) at the University of Colorado (UCB). It is intended to support integration and analysis of any engineering system and at any level of development, from Pre-Phase A conceptual studies to fully mature Phase C/D projects. The University of Colorado (through the Center for Space Construction) has joined the Structural Dynamics Research Corporation (SDRC) University Consortium which makes available unlimited software licenses for instructional purposes. In addition to providing the backbone for the implementation of the IDEAS**2 computing environment, I-DEAS can be used as a stand-alone product for undergraduate CAD/CAE instruction. Presently, SDRC is in the process of releasing I-DEAS level 5.0 which represents a substantial improvement in both the user interface and graphic processing capabilities. IDEAS**2 will be immediately useful for a number of current programs within CSC (such as DYCAM and the 'interruptability problem'). In the future, the following expansions of the basic IDEAS**2 program will be pursued, consistent with the overall objectives of the Center and of the College: upgrade I-DEAS and IDEAS**2 to level 5.0; create new analytical programs for applications not limited to orbital platforms; research the semantic organization of engineering databases; and create an 'interoperability' testbed.
NASA Technical Reports Server (NTRS)
Tennille, Geoffrey M.; Howser, Lona M.
1993-01-01
The use of the CONVEX computers that are an integral part of the Supercomputing Network Subsystems (SNS) of the Central Scientific Computing Complex of LaRC is briefly described. Features of the CONVEX computers that are significantly different from those of the CRAY supercomputers are covered, including: FORTRAN, C, the architecture of the CONVEX computers, the CONVEX environment, batch job submittal, debugging, performance analysis, utilities unique to CONVEX, and documentation. This revision reflects the addition of the Applications Compiler and the X-based debugger, CXdb. The document is intended for all CONVEX users as a ready reference to frequently asked questions and to more detailed information contained within the vendor manuals. It is appropriate for both the novice and the experienced user.
Scaling to diversity: The DERECHOS distributed infrastructure for analyzing and sharing data
NASA Astrophysics Data System (ADS)
Rilee, M. L.; Kuo, K. S.; Clune, T.; Oloso, A.; Brown, P. G.
2016-12-01
Integrating Earth Science data from diverse sources such as satellite imagery and simulation output can be expensive and time-consuming, limiting scientific inquiry and the quality of our analyses. Reducing these costs will improve innovation and quality in science. The current Earth Science data infrastructure focuses on downloading data based on requests formed from the search and analysis of associated metadata. And while the data products provided by archives may use the best available data sharing technologies, scientist end-users generally do not have such resources (including staff) available to them. Furthermore, only once an end-user has received the data from multiple diverse sources and has integrated them can the actual analysis and synthesis begin. The cost of getting from idea to where synthesis can start dramatically slows progress. In this presentation we discuss a distributed computational and data storage framework that eliminates much of the aforementioned cost. The SciDB distributed array database is central as it is optimized for scientific computing involving very large arrays, performing better than less specialized frameworks like Spark. Adding spatiotemporal functions to the SciDB creates a powerful platform for analyzing and integrating massive, distributed datasets. SciDB allows Big Earth Data analysis to be performed "in place" without the need for expensive downloads and end-user resources. Spatiotemporal indexing technologies such as the hierarchical triangular mesh enable the compute and storage affinity needed to efficiently perform co-located and conditional analyses minimizing data transfers. These technologies automate the integration of diverse data sources using the framework, a critical step beyond current metadata search and analysis. Instead of downloading data into their idiosyncratic local environments, end-users can generate and share data products integrated from diverse multiple sources using a common shared environment, turning distributed active archive centers (DAACs) from warehouses into distributed active analysis centers.
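The co-location idea behind spatiotemporal indexes such as the hierarchical triangular mesh can be illustrated with a simpler scheme: recursively quarter the lat-lon box and interleave the branch bits into one integer key, so nearby observations share key prefixes and can be chunked together. A toy Python version (HTM proper subdivides spherical triangles; this flat quadtree is only an analogy):

```python
def quad_key(lat, lon, levels=10):
    """Interleave quadtree branch bits for a lat/lon point; points
    that are spatially close share long key prefixes."""
    lat0, lat1, lon0, lon1 = -90.0, 90.0, -180.0, 180.0
    key = 0
    for _ in range(levels):
        mid_lat, mid_lon = (lat0 + lat1) / 2, (lon0 + lon1) / 2
        bit_lat = int(lat >= mid_lat)
        bit_lon = int(lon >= mid_lon)
        key = (key << 2) | (bit_lat << 1) | bit_lon
        lat0, lat1 = (mid_lat, lat1) if bit_lat else (lat0, mid_lat)
        lon0, lon1 = (mid_lon, lon1) if bit_lon else (lon0, mid_lon)
    return key

# Nearby observations from two instruments get nearby keys, so an
# array database can store and join them with chunk-level affinity.
print(quad_key(38.9, -77.0), quad_key(38.91, -77.01))
```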
2010-01-01
Background Simulation of sophisticated biological models requires considerable computational power. These models typically integrate together numerous biological phenomena such as spatially-explicit heterogeneous cells, cell-cell interactions, cell-environment interactions and intracellular gene networks. The recent advent of programming for graphical processing units (GPU) opens up the possibility of developing more integrative, detailed and predictive biological models while at the same time decreasing the computational cost to simulate those models. Results We construct a 3D model of epidermal development and provide a set of GPU algorithms that executes significantly faster than sequential central processing unit (CPU) code. We provide a parallel implementation of the subcellular element method for individual cells residing in a lattice-free spatial environment. Each cell in our epidermal model includes an internal gene network, which integrates cellular interaction of Notch signaling together with environmental interaction of basement membrane adhesion, to specify cellular state and behaviors such as growth and division. We take a pedagogical approach to describing how modeling methods are efficiently implemented on the GPU including memory layout of data structures and functional decomposition. We discuss various programmatic issues and provide a set of design guidelines for GPU programming that are instructive to avoid common pitfalls as well as to extract performance from the GPU architecture. Conclusions We demonstrate that GPU algorithms represent a significant technological advance for the simulation of complex biological models. We further demonstrate with our epidermal model that the integration of multiple complex modeling methods for heterogeneous multicellular biological processes is both feasible and computationally tractable using this new technology. We hope that the provided algorithms and source code will be a starting point for modelers to develop their own GPU implementations, and encourage others to implement their modeling methods on the GPU and to make that code available to the wider community. PMID:20696053
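The kernel being parallelized in the subcellular element method is a pairwise force sum over the elements of each cell. A serial NumPy sketch of that kernel under assumed parameters (the Morse-type potential constants are generic placeholders, not the paper's values; on the GPU each element's force sum maps naturally to one thread):

```python
import numpy as np

def element_forces(pos, u0=1.0, u1=0.2, xi0=0.5, xi1=2.0):
    """Pairwise forces between subcellular elements from a Morse-type
    potential V(r) = u0*exp(-r/xi0) - u1*exp(-r/xi1):
    short-range repulsion plus longer-range adhesion."""
    diff = pos[:, None, :] - pos[None, :, :]    # (n, n, 3) displacements
    r = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(r, np.inf)                 # no self-interaction
    dVdr = -u0 / xi0 * np.exp(-r / xi0) + u1 / xi1 * np.exp(-r / xi1)
    # force on element i from j: -dV/dr along the unit vector (i - j)
    return -(dVdr / r)[:, :, None] * diff       # (n, n, 3)

pos = np.random.rand(50, 3)                     # 50 elements of one cell
net = element_forces(pos).sum(axis=1)           # net force per element
```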
NASA Astrophysics Data System (ADS)
Hamlet, C. L.; Hoffman, K.; Fauci, L.; Tytell, E.
2016-02-01
The lamprey is a model organism for both neurophysiology and locomotion studies. To study the role of sensory feedback as an organism moves through its environment, a 2D, integrative, multi-scale model of an anguilliform swimmer driven by neural activation from a central pattern generator (CPG) is constructed. The CPG in turn drives muscle kinematics and is fully coupled to the surrounding fluid. The system is numerically evolved in time using an immersed boundary framework, producing an emergent swimming mode. Proprioceptive feedback to the CPG, based on experimental observations, adjusts the activation signal as the organism interacts with its environment. Effects on the speed, stability and cost (metabolic work) of swimming due to nonlinear dependencies associated with muscle force development, combined with proprioceptive feedback to neural activation, are estimated and examined.
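A CPG for anguilliform swimming is commonly modeled as a chain of coupled phase oscillators whose imposed phase lag sets the body wave, with proprioceptive feedback entering as an extra term driven by local body curvature. A minimal sketch under those assumptions (segment count, gains, and the feedback form are illustrative, not the paper's model):

```python
import numpy as np

def cpg_step(theta, curvature, dt=0.01, omega=2*np.pi,
             w=4.0, lag=2*np.pi/10, k_fb=0.5):
    """One Euler step of a 10-segment phase-oscillator chain with
    nearest-neighbor coupling and curvature (stretch-receptor) feedback."""
    n = len(theta)
    dtheta = np.full(n, omega)
    for i in range(n):
        if i > 0:    # coupling from the rostral neighbor, with phase lag
            dtheta[i] += w * np.sin(theta[i-1] - theta[i] + lag)
        if i < n-1:  # coupling from the caudal neighbor
            dtheta[i] += w * np.sin(theta[i+1] - theta[i] - lag)
        dtheta[i] += k_fb * curvature[i] * np.cos(theta[i])  # feedback
    return theta + dt * dtheta

theta = np.zeros(10)
for _ in range(1000):
    theta = cpg_step(theta, curvature=np.zeros(10))
activation = np.sin(theta)       # drives muscle force along the body
```

In a fully coupled simulation like the paper's, `curvature` would come from the immersed boundary solver at each step, closing the sensory loop.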
NEAMS Update. Quarterly Report for October - December 2011.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradley, K.
2012-02-16
The Advanced Modeling and Simulation Office within the DOE Office of Nuclear Energy (NE) has been charged with revolutionizing the design tools used to build nuclear power plants during the next 10 years. To accomplish this, the DOE has brought together the national laboratories, U.S. universities, and the nuclear energy industry to establish the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Program. The mission of NEAMS is to modernize computer modeling of nuclear energy systems and improve the fidelity and validity of modeling results using contemporary software environments and high-performance computers. NEAMS will create a set of engineering-level codes aimed at designing and analyzing the performance and safety of nuclear power plants and reactor fuels. The truly predictive nature of these codes will be achieved by modeling the governing phenomena at the spatial and temporal scales that dominate the behavior. These codes will be executed within a simulation environment that orchestrates code integration with respect to spatial meshing, computational resources, and execution to give the user a common 'look and feel' for setting up problems and displaying results. NEAMS is building upon a suite of existing simulation tools, including those developed by the federal Scientific Discovery through Advanced Computing and Advanced Simulation and Computing programs. NEAMS also draws upon existing simulation tools for materials and nuclear systems, although many of these are limited in terms of scale, applicability, and portability (their ability to be integrated into contemporary software and hardware architectures). NEAMS investments have directly and indirectly supported additional NE research and development programs, including those devoted to waste repositories, safeguarded separations systems, and long-term storage of used nuclear fuel. NEAMS is organized into two broad efforts, each comprising four elements. The quarterly highlights for October-December 2011 are: (1) Version 1.0 of AMP, the fuel assembly performance code, was tested on the JAGUAR supercomputer and released on November 1, 2011; a detailed discussion of this new simulation tool is given. (2) A coolant sub-channel model and a preliminary UO2 smeared-cracking model were implemented in BISON, the single-pin fuel code; more information on how these models were developed and benchmarked is given. (3) The Object Kinetic Monte Carlo model was implemented to account for nucleation events in meso-scale simulations, and a discussion of the significance of this advance is given. (4) The SHARP neutronics module, PROTEUS, was expanded to be applicable to all types of reactors, and a discussion of the importance of PROTEUS is given. (5) A plan has been finalized for integrating the high-fidelity, three-dimensional reactor code SHARP with both the systems-level code RELAP7 and the fuel assembly code AMP; this is a new initiative. (6) Work began to evaluate the applicability of AMP to the problem of dry storage of used fuel and to define a relevant problem to test the applicability. (7) A code to obtain phonon spectra from the force-constant matrix for a crystalline lattice has been completed; this important bridge between subcontinuum and continuum phenomena is discussed. (8) Benchmarking was begun on the meso-scale, finite-element fuels code MARMOT to validate its new variable splitting algorithm. (9) A very computationally demanding simulation of diffusion-driven nucleation of new microstructural features has been completed; an explanation of the difficulty of this simulation is given. (10) Experiments were conducted with deformed steel to validate a crystal plasticity finite-element code for body-centered cubic iron. (11) The Capability Transfer Roadmap was completed and published as an internal laboratory technical report. (12) The AMP fuel assembly code input generator was integrated into the NEAMS Integrated Computational Environment (NiCE); more details on the planned NEAMS computing environment are given. (13) The NEAMS program website (neams.energy.gov) is nearly ready to launch.
Information Management for a Large Multidisciplinary Project
NASA Technical Reports Server (NTRS)
Jones, Kennie H.; Randall, Donald P.; Cronin, Catherine K.
1992-01-01
In 1989, NASA's Langley Research Center (LaRC) initiated the High-Speed Airframe Integration Research (HiSAIR) Program to develop and demonstrate an integrated environment for high-speed aircraft design using advanced multidisciplinary analysis and optimization procedures. The major goals of this program were to evolve the interactions among disciplines and promote sharing of information, to provide a timely exchange of information among aeronautical disciplines, and to increase the awareness of the effects each discipline has upon other disciplines. LaRC historically has emphasized the advancement of analysis techniques. HiSAIR was founded to synthesize these advanced methods into a multidisciplinary design process emphasizing information feedback among disciplines and optimization. Crucial to the development of such an environment are the definition of the required data exchanges and the methodology for both recording the information and providing the exchanges in a timely manner. These requirements demand extensive use of data management techniques, graphic visualization, and interactive computing. HiSAIR represents the first attempt at LaRC to promote interdisciplinary information exchange on a large scale using advanced data management methodologies combined with state-of-the-art, scientific visualization techniques on graphics workstations in a distributed computing environment. The subject of this paper is the development of the data management system for HiSAIR.
NASA Technical Reports Server (NTRS)
Rasmussen, John
1990-01-01
Structural optimization has attracted attention since the days of Galileo. Olhoff and Taylor have produced an excellent overview of the classical research within this field. However, interest in structural optimization has increased greatly during the last decade due to the advent of reliable general numerical analysis methods and the computer power necessary to use them efficiently. This has created the possibility of developing general numerical systems for shape optimization. Several authors, e.g., Esping; Braibant & Fleury; Bennet & Botkin; Botkin, Yang, and Bennet; and Stanton, have published practical and successful applications of general optimization systems. Ding and Homlein have produced extensive overviews of available systems. Furthermore, a number of commercial optimization systems based on well-established finite element codes have been introduced; systems like ANSYS, IDEAS, OASIS, and NISAOPT are widely known examples. In parallel with this development, the technology of computer aided design (CAD) has gained a large influence on the design process of mechanical engineering. CAD technology has already undergone rapid development driven by the drastically growing capabilities of digital computers. However, the systems of today are still considered to be only the first generation of a long line of computer integrated manufacturing (CIM) systems. The systems to come will offer an integrated environment for design, analysis, and fabrication of products of almost any character. Thus, the CAD system can be regarded as a database for geometrical information equipped with a number of tools to help the user in the design process. Among these tools are facilities for structural analysis and optimization, as well as standard CAD features like drawing, modeling, and visualization tools. The state of the art of structural optimization is that a large number of mathematical and mechanical techniques are available for the solution of individual problems. By implementing collections of the available techniques in general software systems, operational environments for structural optimization have been created. The forthcoming years must bring solutions to the problem of integrating such systems into more general design environments. The result of this work should be CAD systems for rational design in which structural optimization is one important design tool among many others.
GATE Monte Carlo simulation in a cloud computing environment
NASA Astrophysics Data System (ADS)
Rowedder, Blake Austin
The GEANT4-based GATE is a unique and powerful Monte Carlo (MC) platform, which provides a single code library allowing the simulation of specific medical physics applications, e.g. PET, SPECT, CT, radiotherapy, and hadron therapy. However, this rigorous yet flexible platform is used only sparingly in the clinic due to its lengthy calculation time. By accessing the powerful computational resources of a cloud computing environment, GATE's runtime can be reduced significantly to clinically feasible levels without the sizable investment of a local high performance cluster. This study investigated the reliable and efficient execution of GATE MC simulations using a commercial cloud computing service. Amazon's Elastic Compute Cloud was used to launch several nodes equipped with GATE. Job data were initially broken up on the local computer, then uploaded to the worker nodes on the cloud. The results were automatically downloaded and aggregated on the local computer for display and analysis. Five simulations were repeated for every cluster size between 1 and 20 nodes. Ultimately, increasing cluster size resulted in a decrease in calculation time that could be expressed with an inverse power model. Comparing the benchmark results to the published values and error margins indicated that the simulation results were not affected by the cluster size and thus that the integrity of a calculation is preserved in a cloud computing environment. The runtime of a 53 minute long simulation was decreased to 3.11 minutes when run on a 20-node cluster. The ability to improve the speed of simulation suggests that fast MC simulations are viable for imaging and radiotherapy applications. With high performance computing continuing to fall in price and grow in accessibility, implementing Monte Carlo techniques with cloud computing for clinical applications will continue to become more attractive.
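The reported scaling, runtime falling with cluster size as an inverse power model, can be recovered by fitting T(n) = a·n^(-b) in log-log space. A short NumPy sketch (the 1-node and 20-node times come from the abstract; the intermediate points are made up for illustration):

```python
import numpy as np

nodes = np.array([1, 2, 4, 8, 16, 20])
runtime = np.array([53.0, 27.5, 14.2, 7.6, 4.0, 3.11])  # minutes (illustrative)

# log T = log a - b log n, so a straight-line fit recovers a and b
slope, log_a = np.polyfit(np.log(nodes), np.log(runtime), 1)
a, b = np.exp(log_a), -slope
print(f"T(n) ~= {a:.1f} * n^(-{b:.2f})")
print("predicted 20-node runtime:", a * 20**-b)
```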
Automation of multi-agent control for complex dynamic systems in heterogeneous computational network
NASA Astrophysics Data System (ADS)
Oparin, Gennady; Feoktistov, Alexander; Bogdanova, Vera; Sidorov, Ivan
2017-01-01
The rapid progress of high-performance computing entails new challenges related to solving large scientific problems in various subject domains within a heterogeneous distributed computing environment (e.g., a network, Grid system, or Cloud infrastructure). Specialists in the field of parallel and distributed computing pay special attention to the scalability of applications for problem solving. Effective management of a scalable application in a heterogeneous distributed computing environment is still a non-trivial issue; control systems that operate in networks are especially affected. We propose a new approach to multi-agent management of scalable applications in a heterogeneous computational network. The fundamentals of our approach are the integrated use of conceptual programming, simulation modeling, network monitoring, multi-agent management, and service-oriented programming. We have developed a special framework for automating problem solving. The advantages of the proposed approach are demonstrated on the example of parametric synthesis of a static linear regulator for complex dynamic systems. The benefits of the scalable application for solving this problem include automation of multi-agent control for such systems in a parallel mode with various degrees of detail.
Control Law Design in a Computational Aeroelasticity Environment
NASA Technical Reports Server (NTRS)
Newsom, Jerry R.; Robertshaw, Harry H.; Kapania, Rakesh K.
2003-01-01
A methodology for designing active control laws in a computational aeroelasticity environment is given. The methodology involves employing a systems identification technique to develop an explicit state-space model for control law design from the output of a computational aeroelasticity code. The particular computational aeroelasticity code employed in this paper solves the transonic small disturbance aerodynamic equation using a time-accurate, finite-difference scheme. Linear structural dynamics equations are integrated simultaneously with the computational fluid dynamics equations to determine the time responses of the structure. These structural responses are employed as the input to a modern systems identification technique that determines the Markov parameters of an "equivalent linear system". The Eigensystem Realization Algorithm is then employed to develop an explicit state-space model of the equivalent linear system. The Linear Quadratic Gaussian control law design technique is employed to design a control law. The computational aeroelasticity code is modified to accept control laws and perform closed-loop simulations. Flutter control of a rectangular wing model is chosen to demonstrate the methodology. Various cases are used to illustrate the usefulness of the methodology as the nonlinearity of the aeroelastic system is increased through increased angle-of-attack changes.
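The Eigensystem Realization Algorithm step admits a compact sketch: stack the identified Markov parameters into block Hankel matrices, take an SVD, and read off a reduced-order state-space model. A single-input/single-output NumPy version (the Hankel size and model order are illustrative):

```python
import numpy as np

def era(markov, order, m=20):
    """Eigensystem Realization Algorithm for a SISO pulse response:
    markov[k] ~ C A^k B. Returns (A, B, C) of the given order."""
    H0 = np.array([[markov[i + j]     for j in range(m)] for i in range(m)])
    H1 = np.array([[markov[i + j + 1] for j in range(m)] for i in range(m)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :order], s[:order], Vt[:order]   # truncate to the order
    S_half, S_inv = np.diag(s**0.5), np.diag(s**-0.5)
    A = S_inv @ U.T @ H1 @ Vt.T @ S_inv
    B = (S_half @ Vt)[:, :1]         # first column of controllability matrix
    C = (U @ S_half)[:1, :]          # first row of observability matrix
    return A, B, C

# Verify on a known 2nd-order system: markov[k] = C A^k B
A_true = np.array([[0.9, 0.2], [-0.2, 0.9]])
h = [float(np.array([1.0, 0.0]) @ np.linalg.matrix_power(A_true, k)
           @ np.array([1.0, 0.0])) for k in range(60)]
A, B, C = era(h, order=2)            # eigenvalues of A match A_true's
```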
Bio and health informatics meets cloud: BioVLab as an example.
Chae, Heejoon; Jung, Inuk; Lee, Hyungro; Marru, Suresh; Lee, Seong-Whan; Kim, Sun
2013-01-01
The exponential increase of genomic data brought by the advent of next- and third-generation sequencing (NGS) technologies and the dramatic drop in sequencing cost have turned the biological and medical sciences into data-driven sciences. This revolutionary paradigm shift comes with challenges in terms of data transfer, storage, computation, and analysis of big bio/medical data. Cloud computing is a service model sharing a pool of configurable resources, which makes it a suitable workbench for addressing these challenges. From the medical or biological perspective, providing computing power and storage is the most attractive feature of cloud computing in handling ever-increasing biological data. As data increase in size, many research organizations start to experience a lack of computing power, which becomes a major hurdle in achieving research goals. In this paper, we review the features of publicly available bio and health cloud systems in terms of graphical user interface, external data integration, security, and extensibility of features. We then discuss the issues and limitations of current cloud systems and conclude by suggesting the concept of a biological cloud environment, which can be defined as a total workbench environment assembling computational tools and databases for analyzing bio/medical big data in particular application domains.
Computer-aided operations engineering with integrated models of systems and operations
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Ryan, Dan; Fleming, Land
1994-01-01
CONFIG 3 is a prototype software tool that supports integrated conceptual design evaluation from early in the product life cycle, by supporting isolated or integrated modeling, simulation, and analysis of the function, structure, behavior, failures and operation of system designs. Integration and reuse of models is supported in an object-oriented environment providing capabilities for graph analysis and discrete event simulation. Integration is supported among diverse modeling approaches (component view, configuration or flow path view, and procedure view) and diverse simulation and analysis approaches. Support is provided for integrated engineering in diverse design domains, including mechanical and electro-mechanical systems, distributed computer systems, and chemical processing and transport systems. CONFIG supports abstracted qualitative and symbolic modeling, for early conceptual design. System models are component structure models with operating modes, with embedded time-related behavior models. CONFIG supports failure modeling and modeling of state or configuration changes that result in dynamic changes in dependencies among components. Operations and procedure models are activity structure models that interact with system models. CONFIG is designed to support evaluation of system operability, diagnosability and fault tolerance, and analysis of the development of system effects of problems over time, including faults, failures, and procedural or environmental difficulties.
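CONFIG's discrete event simulation capability rests on the classic event-queue pattern: a priority queue of timestamped events popped in time order, where each handler may schedule further events. A generic Python sketch (the valve/tank events are invented for illustration):

```python
import heapq

def simulate(initial_events, horizon=100.0):
    """Run time-ordered events; each handler may schedule new ones."""
    queue = list(initial_events)         # entries: (time, name, handler)
    heapq.heapify(queue)
    while queue:
        time, name, handler = heapq.heappop(queue)
        if time > horizon:
            break
        print(f"t={time:6.2f}  {name}")
        for delay, next_name, next_handler in handler(time):
            heapq.heappush(queue, (time + delay, next_name, next_handler))

def valve_opens(t):
    # opening the valve causes the tank to report full 2s later
    return [(2.0, "tank-full", lambda t: [])]

simulate([(0.0, "valve-open", valve_opens)])
```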
NASA Technical Reports Server (NTRS)
Panczak, Tim; Ring, Steve; Welch, Mark
1999-01-01
Thermal engineering has long been left out of the concurrent engineering environment dominated by CAD (computer aided design) and FEM (finite element method) software. Current tools attempt to force the thermal design process into an environment primarily created to support structural analysis, which results in inappropriate thermal models. As a result, many thermal engineers either build models "by hand" or use geometric user interfaces that are separate from and have little useful connection, if any, to CAD and FEM systems. This paper describes the development of a new thermal design environment called the Thermal Desktop. This system, while fully integrated into a neutral, low cost CAD system, and which utilizes both FEM and FD methods, does not compromise the needs of the thermal engineer. Rather, the features needed for concurrent thermal analysis are specifically addressed by combining traditional parametric surface based radiation and FD based conduction modeling with CAD and FEM methods. The use of flexible and familiar temperature solvers such as SINDA/FLUINT (Systems Improved Numerical Differencing Analyzer/Fluid Integrator) is retained.
Children and Their Changing Media Environment: A European Comparative Study.
ERIC Educational Resources Information Center
Livingstone, Sonia, Ed.; Bovill, Moira, Ed.
Integrating broadcasting, video, computing, games, and the Internet, the domestic television screen is being transformed into the site of a multimedia culture. To address questions about the meaning and uses of such new media, this volume brings together work by researchers in 12 countries--Belgium, Denmark, Finland, France, Germany, the United…
ERIC Educational Resources Information Center
Ocal, Mehmet Fatih
2017-01-01
Integrating the properties of computer algebra systems and dynamic geometry environments, Geogebra became an effective and powerful tool for teaching and learning mathematics. One of the reasons that teachers use Geogebra in mathematics classrooms is to make students learn mathematics meaningfully and conceptually. From this perspective, the…
The Impact of Integrated Coaching and Collaboration within an Inquiry Learning Environment
ERIC Educational Resources Information Center
Dragon, Toby
2013-01-01
This thesis explores the design and evaluation of a collaborative, inquiry learning Intelligent Tutoring System for ill-defined problem spaces. The common ground in the fields of Artificial Intelligence in Education and Computer-Supported Collaborative Learning is investigated to identify ways in which tutoring systems can employ both automated…
ERIC Educational Resources Information Center
Street, Garrett M.; Laubach, Timothy A.
2013-01-01
We provide a 5E structured-inquiry lesson so that students can learn more of the mathematics behind the logistic model of population biology. By using models and mathematics, students understand how population dynamics can be influenced by relatively simple changes in the environment.
Supporting Blended-Learning: Tool Requirements and Solutions with OWLish
ERIC Educational Resources Information Center
Álvarez, Ainhoa; Martín, Maite; Fernández-Castro, Isabel; Urretavizcaya, Maite
2016-01-01
Currently, most of the educational approaches applied to higher education combine face-to-face (F2F) and computer-mediated instruction in a Blended-Learning (B-Learning) approach. One of the main challenges of these approaches is fully integrating the traditional brick-and-mortar classes with online learning environments in an efficient and…
Research efforts by the US Environmental Protection Agency have set out to develop alternative testing programs to prioritize limited testing resources toward chemicals that likely represent the greatest hazard to human health and the environment. Efforts such as EPA’s ToxCast r...
Leading Pedagogical Change with Innovative Web Tools and Social Media
ERIC Educational Resources Information Center
McLoughlin, Catherine
2011-01-01
Today, in a globalised, digital world, leadership challenges in the adoption and integration of emerging social software tools to support learning abound. Today's students, who have grown up in technology saturated environments, have never known a world without the internet, mobile phones, video on demand and personal computers. Leaders and…
Towards the Successful Integration of Design Thinking in Industrial Design Education
ERIC Educational Resources Information Center
Mubin, Omar; Novoa, Mauricio; Al Mahmud, Abdullah
2016-01-01
This paper narrates a case study on design thinking based education work in an industrial design honours program. Student projects were developed in a multi-disciplinary setting across a Computing and Engineering faculty that allowed promoting technologically and user driven innovation strategies. A renewed culture and environment for Industrial…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-28
... environment. Through the IRIS Program, EPA provides the highest quality science-based human health... external review draft human health assessment titled, "Toxicological Review of Urea: In Support of Summary... register, please indicate if you will need audio-visual equipment (e.g., laptop computer and slide...
Integration of an Intelligent Tutoring System in a Course of Computer Network Design
ERIC Educational Resources Information Center
Verdú, Elena; Regueras, Luisa M.; Gal, Eran; de Castro, Juan P.; Verdú, María J.; Kohen-Vacs, Dan
2017-01-01
INTUITEL is a research project aiming to offer a personalized learning environment. The INTUITEL approach includes an Intelligent Tutoring System that gives students recommendations and feedback about what the best learning path is for them according to their profile, learning progress, context and environmental influences. INTUITEL combines…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williamson, Richard L.; Kochunas, Brendan; Adams, Brian M.
The Virtual Environment for Reactor Applications components included in this distribution comprise selected computational tools and supporting infrastructure that solve neutronics, thermal-hydraulics, fuel performance, and coupled neutronics-thermal-hydraulics problems. The infrastructure components provide a simplified common user input capability and provide for physics integration with data transfer and coupled-physics iterative solution algorithms.
Providing a parallel and distributed capability for JMASS using SPEEDES
NASA Astrophysics Data System (ADS)
Valinski, Maria; Driscoll, Jonathan; McGraw, Robert M.; Meyer, Bob
2002-07-01
The Joint Modeling And Simulation System (JMASS) is a Tri-Service simulation environment that supports engineering and engagement-level simulations. As JMASS is expanded to support other Tri-Service domains, the current set of modeling services must be extended for High Performance Computing (HPC) applications by adding support for advanced time-management algorithms, parallel and distributed topologies, and high speed communications. By providing these services, JMASS can better address modeling domains requiring parallel, computationally intensive calculations, such as clutter, vulnerability and lethality calculations, and underwater-based scenarios. A risk reduction effort implementing some HPC services for JMASS using the SPEEDES (Synchronous Parallel Environment for Emulation and Discrete Event Simulation) Simulation Framework has recently concluded. As an artifact of the JMASS-SPEEDES integration, not only can HPC functionality be brought to the JMASS program through SPEEDES, but an additional HLA-based capability can be demonstrated that further addresses interoperability issues. The JMASS-SPEEDES integration provided a means of adding HLA capability to preexisting JMASS scenarios through an implementation of the standard JMASS port communication mechanism that allows players to communicate.
High-Performance Integrated Virtual Environment (HIVE) Tools and Applications for Big Data Analysis.
Simonyan, Vahan; Mazumder, Raja
2014-09-30
The High-performance Integrated Virtual Environment (HIVE) is a high-throughput cloud-based infrastructure developed for the storage and analysis of genomic and associated biological data. HIVE consists of a web-accessible interface for authorized users to deposit, retrieve, share, annotate, compute and visualize Next-generation Sequencing (NGS) data in a scalable and highly efficient fashion. The platform contains a distributed storage library and a distributed computational powerhouse linked seamlessly. Resources available through the interface include algorithms, tools and applications developed exclusively for the HIVE platform, as well as commonly used external tools adapted to operate within the parallel architecture of the system. HIVE is composed of a flexible infrastructure, which allows for simple implementation of new algorithms and tools. Currently, available HIVE tools include sequence alignment and nucleotide variation profiling tools, metagenomic analyzers, phylogenetic tree-building tools using NGS data, clone discovery algorithms, and recombination analysis algorithms. In addition to tools, HIVE also provides knowledgebases that can be used in conjunction with the tools for NGS sequence and metadata analysis.
Integrating Cache Performance Modeling and Tuning Support in Parallelization Tools
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)
1998-01-01
With the resurgence of distributed shared memory (DSM) systems based on cache-coherent Non-Uniform Memory Access (ccNUMA) architectures and the increasing disparity between memory and processor speeds, data locality overheads are becoming the greatest bottleneck to realizing the potential high performance of these systems. While parallelization tools and compilers help users port their sequential applications to a DSM system, considerable time and effort are needed to tune the memory performance of these applications to achieve reasonable speedup. In this paper, we show that integrating cache performance modeling and tuning support within a parallelization environment can alleviate this problem. The Cache Performance Modeling and Prediction Tool (CPMP) employs trace-driven simulation techniques without the overhead of generating and managing detailed address traces. CPMP predicts the cache performance impact of source-code-level "what-if" modifications in a program to assist a user in the tuning process. CPMP is built on top of a customized version of the Computer Aided Parallelization Tools (CAPTools) environment. Finally, we demonstrate how CPMP can be applied to tune a real Computational Fluid Dynamics (CFD) application.
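The "what-if" workflow described above can be illustrated with a minimal sketch. The following toy simulator is not CPMP itself; it assumes a direct-mapped cache and synthetic address traces (all parameters illustrative) to show how the miss counts of two candidate access patterns can be compared without generating and managing full address-trace files.

```python
# Illustrative sketch only, not CPMP's implementation: count misses for a
# direct-mapped cache over a list of byte addresses, so the effect of a
# source-level "what-if" (e.g., changing loop stride) can be estimated.

def cache_misses(trace, cache_size=32768, line_size=64):
    """Simulate a direct-mapped cache; return the number of misses."""
    n_lines = cache_size // line_size
    tags = [None] * n_lines          # one tag slot per cache line
    misses = 0
    for addr in trace:
        line = addr // line_size
        idx = line % n_lines
        if tags[idx] != line:        # tag mismatch -> miss, fill the line
            tags[idx] = line
            misses += 1
    return misses

# "What-if": compare a unit-stride loop against a large-stride loop.
unit_stride = [8 * i for i in range(10000)]
big_stride = [4096 * i for i in range(10000)]
print(cache_misses(unit_stride), cache_misses(big_stride))
```

The large stride maps every access to a new line, so the second trace misses on essentially every reference; this is the kind of source-level effect a cache tuning tool surfaces before the user commits to a code change.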
NASA Astrophysics Data System (ADS)
Le, Anh H.; Park, Young W.; Ma, Kevin; Jacobs, Colin; Liu, Brent J.
2010-03-01
Multiple Sclerosis (MS) is a progressive neurological disease affecting myelin pathways in the brain. Multiple lesions in the white matter can cause paralysis and severe motor disabilities in affected patients. To address the inconsistency and user-dependency of manual lesion measurement in MRI, we have proposed a 3-D automated lesion quantification algorithm to enable objective and efficient lesion volume tracking. The computer-aided detection (CAD) of MS, written in MATLAB, utilizes the K-Nearest Neighbors (KNN) method to compute the probability of lesions on a per-voxel basis. Despite the highly optimized image-processing algorithms used in CAD development, integrating and evaluating MS CAD in the clinical workflow is technically challenging because the recursive nature of the algorithm demands high computation rates and memory bandwidth. In this paper, we present the development and evaluation of a computing engine in the graphical processing unit (GPU) with MATLAB for segmentation of MS lesions. The paper investigates the utilization of a high-end GPU for parallel computing of KNN in the MATLAB environment to improve algorithm performance. The integration is accomplished using NVIDIA's CUDA developmental toolkit for MATLAB. The results of this study will validate the practicality and effectiveness of the prototype MS CAD in a clinical setting. The GPU method may allow MS CAD to be rapidly integrated into an electronic patient record or any disease-centric health care system.
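As a rough illustration of the per-voxel KNN step described above, here is a hedged NumPy sketch, not the authors' MATLAB/CUDA implementation; the feature choices and labels are invented for the example.

```python
# Hedged sketch: each voxel gets P(lesion) = fraction of its k nearest
# training samples that are lesion voxels. Features and labels are toy data.
import numpy as np

def knn_lesion_probability(train_feats, train_labels, voxel_feats, k=10):
    """train_feats: (n, d) feature vectors; train_labels: (n,) 0/1 labels;
    voxel_feats: (m, d) features for the voxels to classify."""
    probs = np.empty(len(voxel_feats))
    for i, v in enumerate(voxel_feats):
        d2 = np.sum((train_feats - v) ** 2, axis=1)   # squared distances
        nearest = np.argpartition(d2, k)[:k]          # indices of k nearest
        probs[i] = train_labels[nearest].mean()       # lesion fraction
    return probs

rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 3))          # e.g., per-voxel MR intensities
labels = (train[:, 0] > 0.5).astype(float)  # toy lesion labels
voxels = rng.normal(size=(5, 3))
print(knn_lesion_probability(train, labels, voxels))
```

The inner distance computation is embarrassingly parallel across voxels, which is exactly why a GPU port of this step pays off.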
Feature integration and object representations along the dorsal stream visual hierarchy
Perry, Carolyn Jeane; Fallah, Mazyar
2014-01-01
The visual system is split into two processing streams: a ventral stream that receives color and form information and a dorsal stream that receives motion information. Each stream processes that information hierarchically, with each stage building upon the previous. In the ventral stream this leads to the formation of object representations that ultimately allow for object recognition regardless of changes in the surrounding environment. In the dorsal stream, this hierarchical processing has classically been thought to lead to the computation of complex motion in three dimensions. However, there is evidence to suggest that there is integration of both dorsal and ventral stream information into motion computation processes, giving rise to intermediate object representations, which facilitate object selection and decision making mechanisms in the dorsal stream. First we review the hierarchical processing of motion along the dorsal stream and the building up of object representations along the ventral stream. Then we discuss recent work on the integration of ventral and dorsal stream features that lead to intermediate object representations in the dorsal stream. Finally we propose a framework describing how and at what stage different features are integrated into dorsal visual stream object representations. Determining the integration of features along the dorsal stream is necessary to understand not only how the dorsal stream builds up an object representation but also which computations are performed on object representations instead of local features. PMID:25140147
A high performance scientific cloud computing environment for materials simulations
NASA Astrophysics Data System (ADS)
Jorissen, K.; Vila, F. D.; Rehr, J. J.
2012-09-01
We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offer functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and performance monitoring, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability into a Java-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.
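A hedged sketch of the virtual-cluster idea, written against the public boto3 EC2 API rather than the SCC toolset itself; the AMI ID, region, and instance type are placeholders.

```python
# Illustration of the general idea (not the SCC toolset): start a small
# virtual cluster on EC2 with boto3, then block until the nodes are up.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder: VM image with science codes baked in
    InstanceType="c5.xlarge",
    MinCount=4, MaxCount=4,            # a 4-node virtual cluster
)
ids = [inst["InstanceId"] for inst in resp["Instances"]]

waiter = ec2.get_waiter("instance_status_ok")
waiter.wait(InstanceIds=ids)           # wait until nodes pass status checks
print("cluster ready:", ids)
```

A production toolset would additionally wire the nodes into an MPI machine file and stage input/output data, which is the part the abstract's I/O utilities address.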
Attention in a Bayesian Framework
Whiteley, Louise; Sahani, Maneesh
2012-01-01
The behavioral phenomena of sensory attention are thought to reflect the allocation of a limited processing resource, but there is little consensus on the nature of the resource or why it should be limited. Here we argue that a fundamental bottleneck emerges naturally within Bayesian models of perception, and use this observation to frame a new computational account of the need for, and action of, attention – unifying diverse attentional phenomena in a way that goes beyond previous inferential, probabilistic and Bayesian models. Attentional effects are most evident in cluttered environments, and include both selective phenomena, where attention is invoked by cues that point to particular stimuli, and integrative phenomena, where attention is invoked dynamically by endogenous processing. However, most previous Bayesian accounts of attention have focused on describing relatively simple experimental settings, where cues shape expectations about a small number of upcoming stimuli and thus convey “prior” information about clearly defined objects. While operationally consistent with the experiments it seeks to describe, this view of attention as prior seems to miss many essential elements of both its selective and integrative roles, and thus cannot be easily extended to complex environments. We suggest that the resource bottleneck stems from the computational intractability of exact perceptual inference in complex settings, and that attention reflects an evolved mechanism for approximate inference which can be shaped to refine the local accuracy of perception. We show that this approach extends the simple picture of attention as prior, so as to provide a unified and computationally driven account of both selective and integrative attentional phenomena. PMID:22712010
Liu, Charles Y; Spicer, Mark; Apuzzo, Michael L J
2003-01-01
The future development of the neurosurgical operative environment is driven principally by concurrent development in science and technology. In the new millennium, these developments are taking on a Jules Verne quality, with the ability to construct and manipulate the human organism and its surroundings at the level of atoms and molecules seemingly at hand. Thus, an examination of currents in technology advancement from the neurosurgical perspective can provide insight into the evolution of the neurosurgical operative environment. In the future, the optimal design solution for the operative environment requirements of specialized neurosurgery may take the form of composites of venues that are currently mutually distinct. Advances in microfabrication technology and laser optical manipulators are expanding the scope and role of robotics, with novel opportunities for bionic integration. Assimilation of biosensor technology into the operative environment promises to provide neurosurgeons of the future with a vastly expanded set of physiological data, which will require concurrent simplification and optimization of analysis and presentation schemes to facilitate practical usefulness. Nanotechnology derivatives are shattering the maximum limits of resolution and magnification allowed by conventional microscopes. Furthermore, quantum computing and molecular electronics promise to greatly enhance computational power, allowing the emerging reality of simulation and virtual neurosurgery for rehearsal and training purposes. Progressive minimalism is evident throughout, leading ultimately to a paradigm shift as the nanoscale is approached. At the interface between the old and new technological paradigms, issues related to integration may dictate the ultimate emergence of the products of the new paradigm. Once initiated, however, history suggests that the process of change will proceed rapidly and dramatically, with the ultimate neurosurgical operative environment of the future being far more complex in functional capacity but strikingly simple in apparent form.
Dolgov, Igor; Birchfield, David A; McBeath, Michael K; Thornburg, Harvey; Todd, Christopher G
2009-04-01
Perception of floor-projected moving geometric shapes was examined in the context of the Situated Multimedia Arts Learning Laboratory (SMALLab), an immersive, mixed-reality learning environment. As predicted, the projected destinations of shapes that retreated in depth (proximal origin) were judged significantly less accurately than those that approached (distal origin). Participants maintained similar magnitudes of error throughout the session, and no effect of practice was observed. Shape perception in an immersive multimedia environment is thus comparable to that in the real world. One may conclude that systematic exploration of basic psychological phenomena in novel mediated environments is integral to an understanding of human behavior in novel human-computer interaction architectures.
Achievements and Challenges in Computational Protein Design.
Samish, Ilan
2017-01-01
Computational protein design (CPD), a still-evolving field, includes computer-aided engineering for partial or full de novo design of proteins of interest. Designs are defined by a requested structure, function, or working environment. This chapter describes the birth and maturation of the field by presenting 101 CPD examples in chronological order, emphasizing achievements and pending challenges. Integrating these aspects presents the plethora of CPD approaches with the hope of providing a "CPD 101". These reflect on the broader structural bioinformatics and computational biophysics field and include: (1) integration of knowledge-based and energy-based methods, (2) a hierarchical approach to local, regional, and global motifs and the integration of high- and low-resolution design schemes that fit each such region, (3) systematic differential approaches towards different protein regions, (4) identification of key hot-spot residues and the relative effect of remote regions, (5) assessment of shape complementarity, electrostatics, and solvation effects, (6) integration of thermal plasticity and functional dynamics, (7) negative design, (8) systematic integration of experimental approaches, (9) objective cross-assessment of methods, and (10) successful ranking of potential designs. Future challenges also include dissemination of CPD software for general use by life-sciences researchers and an emphasis on success within an in vivo milieu. CPD increases our understanding of protein structure and function and the relationships between the two, along with the application of such know-how for the benefit of mankind. Applied aspects range from biological drugs, via healthier and tastier food products, to nanotechnology and environmentally friendly enzymes replacing toxic chemicals utilized in industry.
NASA Astrophysics Data System (ADS)
Murga, Alicia; Sano, Yusuke; Kawamoto, Yoichi; Ito, Kazuhide
2017-10-01
Mechanical and passive ventilation strategies directly impact indoor air quality. Passive ventilation has recently become widespread owing to its ability to reduce energy demand in buildings, as in the case of natural or cross ventilation. To understand the effect of natural ventilation on indoor environmental quality, outdoor-indoor flow paths need to be analyzed as functions of urban atmospheric conditions, the topology of the built environment, and indoor conditions. Wind-driven natural ventilation (e.g., cross ventilation) can be calculated from the wind pressure coefficient distributions of outdoor wall surfaces and openings of a building, allowing the study of indoor air parameters and airborne contaminant concentrations. Variations in outside parameters will directly impact indoor air quality and residents' health. Numerical modeling can help in understanding these various parameters because it allows full control of boundary conditions and sampling points. In this study, numerical weather prediction modeling was used to calculate wind profiles/distributions at the atmospheric scale, and computational fluid dynamics was used to model detailed urban and indoor flows; these were then integrated into a dynamic downscaling analysis to predict specific urban wind parameters from the atmospheric to the built-environment scale. Wind velocity and contaminant concentration distributions inside a factory building were analyzed to assess the quality of the human working environment by using a computer-simulated person. The impact of cross-ventilation flows and their variations on the local average contaminant concentration around a factory worker, and on the inhaled contaminant dose, were then discussed.
Integrating CAD modules in a PACS environment using a wide computing infrastructure.
Suárez-Cuenca, Jorge J; Tilve, Amara; López, Ricardo; Ferro, Gonzalo; Quiles, Javier; Souto, Miguel
2017-04-01
This paper describes a project designed to achieve full integration of different CAD algorithms into the PACS environment by using a wide computing infrastructure. The goal is to build a system for the entire region of Galicia, Spain, that makes CAD accessible to multiple hospitals employing different PACSs and clinical workstations. The new CAD model seeks to connect different devices (CAD systems, acquisition modalities, workstations and PACS) by means of networking based on a platform that offers different CAD services. This paper describes aspects of the health services of the region where the project was developed, the CAD algorithms that were either employed or selected for inclusion in the project, and several technical aspects and results. We have built a standards-based platform with which users can request a CAD service and receive the results in their local PACS. The process runs through a web interface that allows sending data to the different CAD services. A DICOM SR object is received with the results of the algorithms, stored inside the original study in the proper folder with the original images. As a result, a homogeneous service will be offered to the different hospitals of the region. End users will benefit from a homogeneous workflow and a standardised integration model to request and obtain results from CAD systems in any modality, not dependent on commercial integration models. This new solution will foster the deployment of these technologies in the entire region of Galicia.
FPGA-based real-time embedded system for RISS/GPS integrated navigation.
Abdelfatah, Walid Farid; Georgy, Jacques; Iqbal, Umar; Noureldin, Aboelmagd
2012-01-01
Navigation algorithms that integrate measurements from multi-sensor systems overcome the problems arising from the use of GPS navigation systems in standalone mode. Algorithms that fuse data from a low-cost 2D reduced inertial sensor system (RISS), consisting of a gyroscope and an odometer or wheel encoders, with a GPS receiver via a Kalman filter have proved to provide a more consistent and reliable navigation solution than standalone GPS receivers. They have also been shown to be beneficial, especially in GPS-denied environments such as urban canyons and tunnels. The main objective of this paper is to narrow the idea-to-implementation gap that follows algorithm development by realizing a low-cost real-time embedded navigation system capable of computing the data-fused positioning solution. The role of the developed system is to synchronize the measurements from the three sensors, relative to the pulse-per-second signal generated by the GPS, after which the navigation algorithm is applied to the synchronized measurements to compute the navigation solution in real time. Employing a customizable soft-core processor on an FPGA in the kernel of the navigation system provided the flexibility for communicating with the various sensors and the computation capability required by the Kalman filter integration algorithm.
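The fusion idea is easiest to see in one dimension. The sketch below is a minimal 1-D Kalman filter assuming odometer-derived speed and intermittent GPS fixes; it is not the paper's full 2D RISS mechanization, and all noise values are illustrative.

```python
# Minimal 1-D sketch of RISS/GPS-style fusion: dead-reckon with odometer
# speed, correct with GPS position fixes whenever they are available.
def kalman_step(x, P, speed, dt, gps=None, q=0.5, r=4.0):
    """x: position estimate, P: its variance, speed: odometer speed,
    gps: GPS position fix or None (e.g., inside a tunnel)."""
    x = x + speed * dt           # prediction from dead reckoning
    P = P + q                    # process noise grows the uncertainty
    if gps is not None:          # GPS available: measurement update
        K = P / (P + r)          # Kalman gain
        x = x + K * (gps - x)
        P = (1 - K) * P
    return x, P

x, P = 0.0, 1.0
for t, (v, fix) in enumerate([(10, 9.8), (10, None), (10, None), (10, 30.5)]):
    x, P = kalman_step(x, P, v, dt=1.0, gps=fix)
    print(f"t={t+1}s  pos={x:.2f}  var={P:.2f}")
```

During the GPS outage the variance grows steadily; the next fix pulls the estimate back and shrinks it, which is the behavior that makes the fused solution robust in urban canyons.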
Improving the Aircraft Design Process Using Web-Based Modeling and Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.; Follen, Gregory J. (Technical Monitor)
2000-01-01
Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.
NASA Technical Reports Server (NTRS)
Clark, T. A.; Brainard, G.; Salazar, G.; Johnston, S.; Schwing, B.; Litaker, H.; Kolomenski, A.; Venus, D.; Tran, K.; Hanifin, J.;
2017-01-01
NASA has demonstrated an interest in improving astronaut health and performance through the installation of a new lighting countermeasure on the International Space Station. The Solid State Lighting Assembly (SSLA) system is designed to positively influence astronaut health by providing a daily change in light spectrum to improve circadian entrainment. Unfortunately, while existing NASA standards and requirements define ambient light level requirements for crew sleep and other tasks, the number of light-emitting diode (LED) indicators and displays within a habitable volume is currently uncontrolled. Because each of these light sources has its own unique spectral properties, the additive lighting environment ends up differing from what was planned or researched. Restricting the use of displays and indicators is not a solution, because these systems provide beneficial feedback to the crew. The research team for this grant used computer-based computational modeling and real-world lighting mockups to document the impact that light sources other than the ambient lighting system have on the ambient spectral lighting environment. In particular, the team focused on understanding the impacts of long-term tasks performed in front of avionics or computer displays. The team also wanted to understand options for mitigating changes to the ambient light spectrum in the interest of maintaining the performance of a lighting countermeasure. The project utilized a variety of physical and computer-based simulations to determine direct relationships between system implementation and light spectrum. Using real-world data, computer models were built in the commercially available optics analysis software Zemax Optics Studio(c). The team also built a mockup test facility that had the same volume and configuration as one of the Zemax models. The team collected over 1200 spectral irradiance measurements, each representing a different configuration of the mockup. Analysis of the data showed a measurable impact on ambient light spectrum. These data showed that obvious design techniques exist that can keep the ambient light spectrum closer to the planned spectral operating environment at the observer's eye point. The following observations should be considered when designing an operational environment that is dominated by computer displays. The more light that is directed into the field of view of the observer, the greater the impact it will have on the various human factors issues that depend on spectral shape and intensity. Because viewing angle plays a large part in the amount of light flux on the crewmember's retina, beam shape, combined with light source location, is an important factor for determining the percent probable incident flux on the observer from any combination of light sources. Computer graphics design and display lumen output are major factors influencing the amount of spectrally intense light projected into the environment and in the viewer's direction. Use of adjustable white-point display software was useful only if the predominant background color was white and if it matched the ambient lighting system's color. Display graphics that used a predominantly black background had the least influence on unplanned spectral energy projected into the environment.
Percent reflectance makes a difference in the total energy reflected back into an environment, and within certain architectural geometries, reflectance can be used to control how much of a light spectrum is allowed to perpetuate in the environment. The data showed that room volume and distance from significant light sources influence the total spectrum in a room. Smaller environments had a homogenizing effect on total light spectrum, whereas light from multiple sources in larger environments was less mixed. The findings indicated above should be considered when making recommendations for practice or standards for architectural systems. The ambient lighting system, surface reflectance, and display and indicator implementation all factor into the user's spectral environment. A variety of low-cost solutions exist to mitigate the impact of light from non-architectural lighting systems, and there is much potential for system automation and for integrating display systems with the ambient environment. This team believes that proper planning can be used to avoid integration problems, and that human-in-the-loop evaluations, real-world test and measurement, and computer modeling can be used to determine how changes to a process, display graphics, and architecture will help maintain the planned spectral operating lighting environment.
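The additive-spectrum effect described above can be caricatured numerically. This toy sketch (all spectra and view factors invented; it is not the study's Zemax models) adds display spectra, scaled by a geometric view factor, to an ambient source and shows how displays in the field of view shift energy toward their own primaries.

```python
# Toy illustration only: the spectrum at the eye point is approximated as
# the ambient source plus each display's spectrum scaled by a view factor.
import numpy as np

wavelengths = np.arange(400, 701, 10)                      # nm
ambient = np.exp(-((wavelengths - 580) ** 2) / 2e3)        # warm ambient source (invented)
display = np.exp(-((wavelengths - 450) ** 2) / 8e2)        # blue-heavy display primary (invented)

def eye_point_spectrum(view_factors):
    """view_factors: fraction of each display's flux reaching the observer."""
    return ambient + sum(f * display for f in view_factors)

dim = eye_point_spectrum([0.05])          # one distant display
bright = eye_point_spectrum([0.4, 0.4])   # two displays filling the field of view
print("relative energy near 450 nm:", bright[5] / dim[5])
```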
Forward and backward inference in spatial cognition.
Penny, Will D; Zeidman, Peter; Burgess, Neil
2013-01-01
This paper shows that the various computations underlying spatial cognition can be implemented using statistical inference in a single probabilistic model. Inference is implemented using a common set of 'lower-level' computations involving forward and backward inference over time. For example, to estimate where you are in a known environment, forward inference is used to optimally combine location estimates from path integration with those from sensory input. To decide which way to turn to reach a goal, forward inference is used to compute the likelihood of reaching that goal under each option. To work out which environment you are in, forward inference is used to compute the likelihood of sensory observations under the different hypotheses. For reaching sensory goals that require a chaining together of decisions, forward inference can be used to compute a state trajectory that will lead to that goal, and backward inference to refine the route and estimate control signals that produce the required trajectory. We propose that these computations are reflected in recent findings of pattern replay in the mammalian brain. Specifically, that theta sequences reflect decision making, theta flickering reflects model selection, and remote replay reflects route and motor planning. We also propose a mapping of the above computational processes onto lateral and medial entorhinal cortex and hippocampus.
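The 'lower-level' forward and backward computations the paper refers to are the standard message passes of a hidden Markov model. Below is a minimal sketch with toy 'locations' as hidden states and noisy sensory observations; all numbers are illustrative.

```python
# Minimal forward-backward sketch over a discrete state space
# (states ~ locations, observations ~ sensory input).
import numpy as np

def forward_backward(T, E, obs, prior):
    """T: (S,S) transition matrix, E: (S,O) emission matrix,
    obs: observation indices, prior: (S,) initial state probabilities."""
    S, n = len(prior), len(obs)
    alpha = np.zeros((n, S))                     # forward messages
    beta = np.ones((n, S))                       # backward messages
    alpha[0] = prior * E[:, obs[0]]
    for t in range(1, n):
        alpha[t] = (alpha[t - 1] @ T) * E[:, obs[t]]
    for t in range(n - 2, -1, -1):
        beta[t] = T @ (E[:, obs[t + 1]] * beta[t + 1])
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)  # P(state_t | all obs)

T = np.array([[0.9, 0.1], [0.1, 0.9]])   # two toy "locations"
E = np.array([[0.8, 0.2], [0.3, 0.7]])   # noisy sensory likelihoods
print(forward_backward(T, E, obs=[0, 0, 1], prior=np.array([0.5, 0.5])))
```

The forward pass alone gives the filtered location estimate (combining path integration with sensory input); adding the backward pass refines past estimates once later observations arrive, mirroring the route-refinement role the paper assigns to backward inference.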
IPython: components for interactive and parallel computing across disciplines. (Invited)
NASA Astrophysics Data System (ADS)
Perez, F.; Bussonnier, M.; Frederic, J. D.; Froehle, B. M.; Granger, B. E.; Ivanov, P.; Kluyver, T.; Patterson, E.; Ragan-Kelley, B.; Sailer, Z.
2013-12-01
Scientific computing is an inherently exploratory activity that requires constantly cycling between code, data and results, each time adjusting the computations as new insights and questions arise. To support such a workflow, good interactive environments are critical. The IPython project (http://ipython.org) provides a rich architecture for interactive computing with: 1. Terminal-based and graphical interactive consoles. 2. A web-based Notebook system with support for code, text, mathematical expressions, inline plots and other rich media. 3. Easy to use, high performance tools for parallel computing. Despite its roots in Python, the IPython architecture is designed in a language-agnostic way to facilitate interactive computing in any language. This allows users to mix Python with Julia, R, Octave, Ruby, Perl, Bash and more, as well as to develop native clients in other languages that reuse the IPython clients. In this talk, I will show how IPython supports all stages in the lifecycle of a scientific idea: 1. Individual exploration. 2. Collaborative development. 3. Production runs with parallel resources. 4. Publication. 5. Education. In particular, the IPython Notebook provides an environment for "literate computing" with a tight integration of narrative and computation (including parallel computing). These Notebooks are stored in a JSON-based document format that provides an "executable paper": notebooks can be version controlled, exported to HTML or PDF for publication, and used for teaching.
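A brief usage sketch of the parallel tools described, written against the IPython 1.x API of the era (this module later became the separate ipyparallel package); it assumes a local cluster started with `ipcluster start -n 4`.

```python
# Usage sketch: connect to a running IPython cluster and map a function
# across the engines with dynamic load balancing.
from IPython.parallel import Client

rc = Client()                         # connect to the running cluster
view = rc.load_balanced_view()        # load-balanced scheduling across engines

def simulate(seed):
    import random                     # imports must happen on the engine
    random.seed(seed)
    return sum(random.random() for _ in range(100000))

results = view.map_sync(simulate, range(8))   # parallel map, gather results
print(results)
```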
Experimental Realization of High-Efficiency Counterfactual Computation.
Kong, Fei; Ju, Chenyong; Huang, Pu; Wang, Pengfei; Kong, Xi; Shi, Fazhan; Jiang, Liang; Du, Jiangfeng
2015-08-21
Counterfactual computation (CFC) exemplifies the fascinating quantum process by which the result of a computation may be learned without actually running the computer. In previous experimental studies, the counterfactual efficiency is limited to below 50%. Here we report an experimental realization of the generalized CFC protocol, in which the counterfactual efficiency can break the 50% limit and even approach unity in principle. The experiment is performed with the spins of a negatively charged nitrogen-vacancy color center in diamond. Taking advantage of the quantum Zeno effect, the computer can remain in the not-running subspace due to the frequent projection by the environment, while the computation result can be revealed by final detection. The counterfactual efficiency up to 85% has been demonstrated in our experiment, which opens the possibility of many exciting applications of CFC, such as high-efficiency quantum integration and imaging.
A comparison of queueing, cluster and distributed computing systems
NASA Technical Reports Server (NTRS)
Kaplan, Joseph A.; Nelson, Michael L.
1993-01-01
Using workstation clusters for distributed computing has become popular with the proliferation of inexpensive, powerful workstations. Workstation clusters offer both a cost-effective alternative to batch processing and an easy entry into parallel computing. However, a number of workstations on a network does not by itself constitute a cluster; cluster management software is necessary to harness the collective computing power. A variety of cluster management and queuing systems are compared: Distributed Queueing System (DQS), Condor, Load Leveler, Load Balancer, Load Sharing Facility (LSF, formerly Utopia), Distributed Job Manager (DJM), Computing in Distributed Networked Environments (CODINE), and NQS/Exec. The systems differ in their design philosophy and implementation. Based on published reports on the different systems and conversations with the systems' developers and vendors, a comparison of the systems is made on the integral issues of clustered computing.
SOFIP: A Short Orbital Flux Integration Program
NASA Technical Reports Server (NTRS)
Stassinopoulos, E. G.; Hebert, J. J.; Butler, E. L.; Barth, J. L.
1979-01-01
A computer code was developed to evaluate the space radiation environment encountered by geocentric satellites. The Short Orbital Flux Integration Program (SOFIP) is a compact routine of modular composition, designed mostly with structured programming techniques in order to provide core and time economy and ease of use. In its simplest form, the program produces, for a given input trajectory, a composite integral orbital spectrum of either protons or electrons. Additional features are available separately or in combination through the inclusion of the corresponding (optional) modules. The code is described in detail, and the function and usage of the various modules are explained. A program listing and sample outputs are attached.
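In outline, an orbital flux integration steps along a trajectory, evaluates an environment model at each point, and accumulates a mission-integrated spectrum. The sketch below is purely conceptual: the flux model is a made-up placeholder, whereas real codes of this kind draw on trapped-particle environment maps.

```python
# Conceptual sketch of an orbital flux integration (not SOFIP's models):
# step along a trajectory, look up particle flux at each point, and
# accumulate an integral fluence per energy over the sampled orbit.
import math

def flux_model(altitude_km, energy_mev):
    """Placeholder flux [1/(cm^2 s)]; real codes use trapped-particle maps."""
    return 1e4 * math.exp(-energy_mev / 30.0) * max(0.0, (altitude_km - 400) / 1000)

def orbital_spectrum(trajectory, energies, dt):
    """trajectory: altitudes sampled every dt seconds; returns fluence per energy."""
    return {e: sum(flux_model(alt, e) for alt in trajectory) * dt
            for e in energies}

orbit = [500 + 300 * math.sin(2 * math.pi * i / 90) for i in range(90)]  # toy altitude profile
print(orbital_spectrum(orbit, energies=[10, 30, 100], dt=60.0))
```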
Avola, Danilo; Spezialetti, Matteo; Placidi, Giuseppe
2013-06-01
Rehabilitation is often required after stroke, surgery, or degenerative disease. It has to be specific to each patient and can be easily calibrated if assisted by human-computer interfaces and virtual reality. Recognition and tracking of different human body landmarks represent the basic features for the design of the next generation of human-computer interfaces. The most advanced systems for capturing human gestures are based on vision techniques which, on the one hand, may require compromises in real-time performance and spatial precision and, on the other hand, ensure a natural interaction experience. The integration of vision-based interfaces with thematic virtual environments encourages the development of novel applications and services for rehabilitation activities. The algorithmic processes involved in gesture recognition, as well as the characteristics of the virtual environments, can be developed with different levels of accuracy. This paper describes the architectural aspects of a framework supporting real-time vision-based gesture recognition and virtual environments for fast prototyping of customized exercises for rehabilitation purposes. The goal is to provide the therapist with a tool for fast implementation and modification of specific rehabilitation exercises for specific patients during functional recovery. Pilot examples of designed applications and a preliminary system evaluation are reported and discussed.
Surfer: An Extensible Pull-Based Framework for Resource Selection and Ranking
NASA Technical Reports Server (NTRS)
Zolano, Paul Z.
2004-01-01
Grid computing aims to connect large numbers of geographically and organizationally distributed resources to increase computational power, resource utilization, and resource accessibility. In order to utilize grids effectively, users need to be connected to the best available resources at any given time. As grids are in constant flux, users cannot be expected to keep up with the configuration and status of the grid; thus, they must be provided with automatic resource brokering for selecting and ranking resources that meet the constraints and preferences they specify. This paper presents a new OGSI-compliant resource selection and ranking framework called Surfer that has been implemented as part of NASA's Information Power Grid (IPG) project. Surfer is highly extensible and may be integrated into any grid environment by adding information providers knowledgeable about that environment.
Real-Time Simulation of Ares I Launch Vehicle
NASA Technical Reports Server (NTRS)
Tobbe, Patrick; Matras, Alex; Wilson, Heath; Alday, Nathan; Walker, David; Betts, Kevin; Hughes, Ryan; Turbe, Michael
2009-01-01
The Ares Real-Time Environment for Modeling, Integration, and Simulation (ARTEMIS) has been developed for use by the Ares I launch vehicle System Integration Laboratory (SIL) at the Marshall Space Flight Center (MSFC). The primary purpose of the Ares SIL is to test the vehicle avionics hardware and software in a hardware-in-the-loop (HWIL) environment to certify that the integrated system is prepared for flight. ARTEMIS has been designed to be the real-time software backbone to stimulate all required Ares components through high-fidelity simulation. ARTEMIS has been designed to take full advantage of the advances in underlying computational power now available to support HWIL testing. A modular real-time design relying on a fully distributed computing architecture has been achieved. Two fundamental requirements drove ARTEMIS to pursue the use of high-fidelity simulation models in a real-time environment. First, ARTEMIS must be used to test a man-rated integrated avionics hardware and software system, thus requiring a wide variety of nominal and off-nominal simulation capabilities to certify system robustness. The second driving requirement, derived from a nationwide review of current state-of-the-art HWIL facilities, was that preserving digital model fidelity significantly reduced overall vehicle lifecycle cost by reducing testing time for certification runs and increasing flight tempo through an expanded operational envelope. These two driving requirements necessitated the use of high-fidelity models throughout the ARTEMIS simulation. The nature of the Ares mission profile imposed a variety of additional requirements on the ARTEMIS simulation. The Ares I vehicle is composed of multiple elements, including the First Stage Solid Rocket Booster (SRB), the Upper Stage powered by the J-2X engine, the Orion Crew Exploration Vehicle (CEV) which houses the crew, the Launch Abort System (LAS), and various secondary elements that separate from the vehicle. At launch, the integrated vehicle stack is composed of these stages, and throughout the mission, various elements separate from the integrated stack and tumble back towards the earth. ARTEMIS must be capable of simulating the integrated stack through the flight as well as propagating each individual element after separation. In addition, abort sequences can lead to other unique configurations of the integrated stack as the timing and sequence of the stage separations are altered.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Qishi; Zhu, Mengxia; Rao, Nageswara S
We propose an intelligent decision support system based on sensor and computer networks that incorporates various component techniques for sensor deployment, data routing, distributed computing, and information fusion. The integrated system is deployed in a distributed environment composed of both wireless sensor networks for data collection and wired computer networks for data processing in support of homeland security defense. We present the system framework, formulate the analytical problems, and develop approximate or exact solutions for the subtasks: (i) a sensor deployment strategy based on a two-dimensional genetic algorithm to achieve maximum coverage under cost constraints; (ii) a data routing scheme to achieve maximum signal strength with minimum path loss, high energy efficiency, and effective fault tolerance; (iii) a network mapping method to assign computing modules to network nodes for high-performance distributed data processing; and (iv) a binary decision fusion rule that derives threshold bounds to improve the system hit rate and false alarm rate. These component solutions are implemented and evaluated through either experiments or simulations in various application scenarios. The extensive results demonstrate that these component solutions imbue the integrated system with the desirable and useful quality of intelligence in decision making.
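Component (iv) can be illustrated compactly. Assuming independent sensors with identical per-sensor rates (values invented), a k-out-of-n fusion rule trades the system hit rate against the false alarm rate as the threshold moves.

```python
# Sketch of binary decision fusion: N sensors each report a 0/1 decision;
# the fusion center declares a detection when at least `threshold` agree.
from math import comb

def fused_rate(p, n, threshold):
    """P(at least `threshold` of n sensors fire) given per-sensor rate p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(threshold, n + 1))

n = 7
p_hit, p_fa = 0.8, 0.05                    # per-sensor hit / false-alarm rates
for thr in range(1, n + 1):
    print(f"k>={thr}: hit={fused_rate(p_hit, n, thr):.3f} "
          f"fa={fused_rate(p_fa, n, thr):.4f}")
```

Sweeping the threshold exposes the bounds the abstract refers to: low thresholds maximize hits at the cost of false alarms, high thresholds the reverse.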
ERIC Educational Resources Information Center
Dickes, Amanda Catherine; Sengupta, Pratim; Farris, Amy Voss; Basu, Satabdi
2016-01-01
In this paper, we present a third-grade ecology learning environment that integrates two forms of modeling--embodied modeling and agent-based modeling (ABMs)--through the generation of mathematical representations that are common to both forms of modeling. The term "agent" in the context of ABMs indicates individual computational objects…
New directions for Artificial Intelligence (AI) methods in optimum design
NASA Technical Reports Server (NTRS)
Hajela, Prabhat
1989-01-01
Developments and applications of artificial intelligence (AI) methods in the design of structural systems are reviewed. Principal shortcomings of the current approach are emphasized, and the need for some degree of formalism in the development environment for such design tools is underscored. Emphasis is placed on efforts to integrate algorithmic computations in expert systems.
ERIC Educational Resources Information Center
Canto, Silvia; Jauregi Ondarra, Kristi
2017-01-01
This article attempts to shed some light on the possible learning benefits for language acquisition and intercultural development of authentic social interaction with expert peers through computer mediated communication (CMC) tools. The environments used in this study are video communication and the 3D virtual world "Second Life." For…
Faculty Attitude towards Integrating Technology in Teaching at a Four-Year Southeastern University
ERIC Educational Resources Information Center
Palmore, Donna Venetta
2011-01-01
Studies have shown that computer technology has brought about a noticeable change in the manner in which education is delivered to students. Further research suggests that the use of technology enables educators to effectively communicate with their students in an interactive learning environment designed to meet their individual needs. Moreover,…
Teaching with a Dual-Channel Classroom Feedback System in the Digital Classroom Environment
ERIC Educational Resources Information Center
Yu, Yuan-Chih
2017-01-01
Teaching with a classroom feedback system can benefit both teaching and learning practices of interactivity. In this paper, we propose a dual-channel classroom feedback system integrated with a back-end e-Learning system. The system consists of learning agents running on the students' computers and a teaching agent running on the instructor's…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-31
... found in the environment. Through the IRIS Program, EPA provides the highest quality science-based human... for the external review draft human health assessment titled, ``Toxicological Review of n-Butanol: In... will need audio-visual equipment (e.g., laptop computer and slide projector). In general, each...
An integrated approach to system design, reliability, and diagnosis
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Iverson, David L.
1990-01-01
The requirement for ultradependability of computer systems in future avionics and space applications necessitates a top-down, integrated systems engineering approach for design, implementation, testing, and operation. The functional analyses of hardware and software systems must be combined by models that are flexible enough to represent their interactions and behavior. The information contained in these models must be accessible throughout all phases of the system life cycle in order to maintain consistency and accuracy in design and operational decisions. One approach being taken by researchers at Ames Research Center is the creation of an object-oriented environment that integrates information about system components required in the reliability evaluation with behavioral information useful for diagnostic algorithms. Procedures have been developed at Ames that perform reliability evaluations during design and failure diagnoses during system operation. These procedures utilize information from a central source, structured as object-oriented fault trees. Fault trees were selected because they are a flexible model widely used in aerospace applications and because they give a concise, structured representation of system behavior. The utility of this integrated environment for aerospace applications in light of our experiences during its development and use is described. The techniques for reliability evaluation and failure diagnosis are discussed, and current extensions of the environment and areas requiring further development are summarized.
Search and Determine Integrated Environment (SADIE)
NASA Astrophysics Data System (ADS)
Sabol, C.; Schumacher, P.; Segerman, A.; Coffey, S.; Hoskins, A.
2012-09-01
A new and integrated high performance computing software applications package called the Search and Determine Integrated Environment (SADIE) is being jointly developed and refined by the Air Force and Naval Research Laboratories (AFRL and NRL) to automatically resolve uncorrelated tracks (UCTs) and build a more complete space object catalog for improved Space Situational Awareness (SSA). The motivation for SADIE is to respond to very challenging needs identified by, and guidance received from, Air Force Space Command (AFSPC) and other senior leaders to develop this technology in support of the evolving Joint Space Operations Center (JSpOC) and Alternate Space Control Center (ASC2)-Dahlgren. The JSpOC and JMS SSA mission requirements and threads flow down from the United States Strategic Command (USSTRATCOM). The SADIE suite includes the modification and integration of legacy applications and software components, including Search And Determine (SAD), Satellite Identification (SID), and the Parallel Catalog (Parcat), as well as other utilities and scripts that enable end-to-end catalog building and maintenance in a parallel processing environment. SADIE is being developed to handle large catalog-building challenges in all orbit regimes and includes the automatic processing of radar, fence, and optical data. Real-data results are provided for the processing of Air Force Space Surveillance System fence observations and Space Surveillance Telescope optical data.
Life sciences research in space: The requirement for animal models
NASA Technical Reports Server (NTRS)
Fuller, C. A.; Philips, R. W.; Ballard, R. W.
1987-01-01
Use of animals in NASA space programs is reviewed. Animals are needed because life science experimentation frequently requires long-term controlled exposure to environments, statistical validation, invasive instrumentation or biological tissue sampling, tissue destruction, exposure to dangerous or unknown agents, or sacrifice of the subject. The availability and use of human subjects inflight is complicated by the multiple needs and demands upon crew time. Because only living organisms can sense, integrate and respond to the environment around them, the sole use of tissue culture and computer models is insufficient for understanding the influence of the space environment on intact organisms. Equipment for spaceborne experiments with animals is described.
PATHFINDER: Probing Atmospheric Flows in an Integrated and Distributed Environment
NASA Technical Reports Server (NTRS)
Wilhelmson, R. B.; Wojtowicz, D. P.; Shaw, C.; Hagedorn, J.; Koch, S.
1995-01-01
PATHFINDER is a software effort to create a flexible, modular, collaborative, and distributed environment for studying atmospheric, astrophysical, and other fluid flows in the evolving networked metacomputer environment of the 1990s. It uses existing software, such as HDF (Hierarchical Data Format), DTM (Data Transfer Mechanism), GEMPAK (General Meteorological Package), AVS, SGI Explorer, and Inventor to provide the researcher with the ability to harness the latest in desktop to teraflop computing. Software modules developed during the project are available in the public domain via anonymous FTP from the National Center for Supercomputing Applications (NCSA). The address is ftp.ncsa.uiuc.edu, and the directory is /SGI/PATHFINDER.
Exploring Contextual Models in Chemical Patent Search
NASA Astrophysics Data System (ADS)
Urbain, Jay; Frieder, Ophir
We explore the development of probabilistic retrieval models for integrating term statistics with entity search using multiple levels of document context to improve the performance of chemical patent search. A distributed indexing model was developed to enable efficient named entity search and aggregation of term statistics at multiple levels of patent structure including individual words, sentences, claims, descriptions, abstracts, and titles. The system can be scaled to an arbitrary number of compute instances in a cloud computing environment to support concurrent indexing and query processing operations on large patent collections.
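As a hedged sketch of the general approach (not the authors' exact model), the scorer below interpolates maximum-likelihood term statistics from several levels of patent context, Jelinek-Mercer style; the level names, weights, and smoothing constant are all illustrative.

```python
# Sketch: score a query by interpolating term statistics from multiple
# levels of patent structure (e.g., sentence, claim, full document).
def level_prob(term, text):
    """Maximum-likelihood probability of `term` in one context level."""
    words = text.lower().split()
    return words.count(term) / len(words) if words else 0.0

def contextual_score(query, levels, weights):
    """levels: {name: text}; weights: {name: lambda}, summing to 1."""
    score = 1.0
    for term in query.lower().split():
        p = sum(w * level_prob(term, levels[name]) for name, w in weights.items())
        score *= p or 1e-9            # crude smoothing for unseen terms
    return score

patent = {
    "sentence": "the compound comprises a benzene ring",
    "claim": "a compound comprising a substituted benzene ring and a halogen",
    "document": "chemical patent describing benzene derivatives " * 3,
}
print(contextual_score("benzene ring", patent,
                       {"sentence": 0.5, "claim": 0.3, "document": 0.2}))
```

Weighting the narrower levels more heavily rewards patents whose claims mention the query entities directly, rather than only somewhere in a long description.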
Mass storage system experiences and future needs at the National Center for Atmospheric Research
NASA Technical Reports Server (NTRS)
Olear, Bernard T.
1991-01-01
A summary and viewgraphs of a discussion presented at the National Space Science Data Center (NSSDC) Mass Storage Workshop is included. Some of the experiences of the Scientific Computing Division at the National Center for Atmospheric Research (NCAR) dealing the the 'data problem' are discussed. A brief history and a development of some basic mass storage system (MSS) principles are given. An attempt is made to show how these principles apply to the integration of various components into NCAR's MSS. Future MSS needs for future computing environments is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanfilippo, Antonio P.; Riensche, Roderick M.; Haack, Jereme N.
“Gamification”, the application of gameplay to real-world problems, enables the development of human computation systems that support decision-making through the integration of social and machine intelligence. One of gamification’s major benefits includes the creation of a problem solving environment where the influence of cognitive and cultural biases on human judgment can be curtailed through collaborative and competitive reasoning. By reducing biases on human judgment, gamification allows human computation systems to exploit human creativity relatively unhindered by human error. Operationally, gamification uses simulation to harvest human behavioral data that provide valuable insights for the solution of real-world problems.
Integrated modeling of advanced optical systems
NASA Astrophysics Data System (ADS)
Briggs, Hugh C.; Needels, Laura; Levine, B. Martin
1993-02-01
This poster session paper describes an integrated modeling and analysis capability being developed at JPL under funding provided by the JPL Director's Discretionary Fund and the JPL Control/Structure Interaction Program (CSI). The posters briefly summarize the program capabilities and illustrate them with an example problem. The computer programs developed under this effort will provide an unprecedented capability for integrated modeling and design of high performance optical spacecraft. The engineering disciplines supported include structural dynamics, controls, optics and thermodynamics. Such tools are needed in order to evaluate the end-to-end system performance of spacecraft such as OSI, POINTS, and SMMM. This paper illustrates the proof-of-concept tools that have been developed to establish the technology requirements and demonstrate the new features of integrated modeling and design. The current program also includes implementation of a prototype tool based upon the CAESY environment being developed under the NASA Guidance and Control Research and Technology Computational Controls Program. This prototype will be available late in FY-92. The development plan proposes a major software production effort to fabricate, deliver, support and maintain a national-class tool from FY-93 through FY-95.
Greenes, R A
1991-11-01
Education and decision-support resources useful to radiologists are proliferating for the personal computer/workstation user or are potentially accessible via high-speed networks. These resources are typically made available through a set of application programs that tend to be developed in isolation and operate independently. Nonetheless, there is a growing need for an integrated environment for access to these resources in the context of professional work, during clinical problem-solving and decision-making activities, and for use in conjunction with other information resources. New application development environments are required to provide these capabilities. One such architecture for applications, which we have implemented in a prototype environment called DeSyGNER, is based on separately delineating the component information resources required for an application, termed entities, and the user interface and organizational paradigms, or composition methods, by which the entities are used to provide particular kinds of capability. Examples include composition methods to support query, book browsing, hyperlinking, tutorials, simulations, or question/answer testing. Future steps must address true integration of such applications with existing clinical information systems. We believe that the most viable approach for evolving this capability is based on the use of new software engineering methodologies, open systems, client-server communication, and delineation of standard message protocols.
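The entity/composition-method separation described above lends itself to a plugin-style sketch: the same entities are presented through different composition methods to yield different capabilities. The class and method names below are invented for illustration and are not the actual DeSyGNER interfaces.

```python
class Entity:
    """A component information resource (image atlas, tutorial text, quiz bank)."""
    def __init__(self, name, content):
        self.name, self.content = name, content

class CompositionMethod:
    """A user-interface/organizational paradigm that turns entities into a capability."""
    def present(self, entities):
        raise NotImplementedError

class BookBrowser(CompositionMethod):
    def present(self, entities):
        return "\n".join(f"Chapter: {e.name}" for e in entities)

class QuestionAnswerTest(CompositionMethod):
    def present(self, entities):
        return "\n".join(f"Q: What does '{e.name}' show?" for e in entities)

entities = [Entity("chest radiograph", "..."), Entity("CT of abdomen", "...")]
for method in (BookBrowser(), QuestionAnswerTest()):
    print(method.present(entities))  # same entities, different capabilities
```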
New frontiers in design synthesis
NASA Technical Reports Server (NTRS)
Goldin, D. S.; Venneri, S. L.; Noor, A. K.
1999-01-01
The Intelligent Synthesis Environment (ISE), which is one of the major strategic technologies under development at NASA centers and the University of Virginia, is described. One of the major objectives of ISE is to significantly enhance the rapid creation of innovative affordable products and missions. ISE uses a synergistic combination of leading-edge technologies, including high performance computing, high capacity communications and networking, human-centered computing, knowledge-based engineering, computational intelligence, virtual product development, and product information management. The environment will link scientists, design teams, manufacturers, suppliers, and consultants who participate in the mission synthesis as well as in the creation and operation of the aerospace system. It will radically advance the process by which complex science missions are synthesized and high-tech engineering systems are designed, manufactured, and operated. The five major components critical to ISE are human-centered computing, infrastructure for distributed collaboration, rapid synthesis and simulation tools, life cycle integration and validation, and cultural change in both the engineering and science creative process. The five components and their subelements are described. Related U.S. government programs are outlined and the future impact of ISE on engineering research and education is discussed.
PROTO-PLASM: parallel language for adaptive and scalable modelling of biosystems.
Bajaj, Chandrajit; DiCarlo, Antonio; Paoluzzi, Alberto
2008-09-13
This paper discusses the design goals and the first developments of PROTO-PLASM, a novel computational environment to produce libraries of executable, combinable and customizable computer models of natural and synthetic biosystems, aiming to provide a supporting framework for predictive understanding of structure and behaviour through multiscale geometric modelling and multiphysics simulations. Admittedly, the PROTO-PLASM platform is still in its infancy. Its computational framework--language, model library, integrated development environment and parallel engine--intends to provide patient-specific computational modelling and simulation of organs and biosystems, exploiting novel functionalities resulting from the symbolic combination of parametrized models of parts at various scales. PROTO-PLASM may define the model equations, but it is currently focused on the symbolic description of model geometry and on the parallel support of simulations. Conversely, CellML and SBML could be viewed as defining the behavioural functions (the model equations) to be used within a PROTO-PLASM program. Here we exemplify the basic functionalities of PROTO-PLASM, by constructing a schematic heart model. We also discuss multiscale issues with reference to the geometric and physical modelling of neuromuscular junctions.
Proto-Plasm: parallel language for adaptive and scalable modelling of biosystems
Bajaj, Chandrajit; DiCarlo, Antonio; Paoluzzi, Alberto
2008-01-01
This paper discusses the design goals and the first developments of Proto-Plasm, a novel computational environment to produce libraries of executable, combinable and customizable computer models of natural and synthetic biosystems, aiming to provide a supporting framework for predictive understanding of structure and behaviour through multiscale geometric modelling and multiphysics simulations. Admittedly, the Proto-Plasm platform is still in its infancy. Its computational framework—language, model library, integrated development environment and parallel engine—intends to provide patient-specific computational modelling and simulation of organs and biosystems, exploiting novel functionalities resulting from the symbolic combination of parametrized models of parts at various scales. Proto-Plasm may define the model equations, but it is currently focused on the symbolic description of model geometry and on the parallel support of simulations. Conversely, CellML and SBML could be viewed as defining the behavioural functions (the model equations) to be used within a Proto-Plasm program. Here we exemplify the basic functionalities of Proto-Plasm, by constructing a schematic heart model. We also discuss multiscale issues with reference to the geometric and physical modelling of neuromuscular junctions. PMID:18559320
Analysis, Mining and Visualization Service at NCSA
NASA Astrophysics Data System (ADS)
Wilhelmson, R.; Cox, D.; Welge, M.
2004-12-01
NCSA's goal is to create a balanced system that fully supports high-end computing as well as: 1) high-end data management and analysis; 2) visualization of massive, highly complex data collections; 3) large databases; 4) geographically distributed Grid computing; and 5) collaboratories, all based on a secure computational environment and driven with workflow-based services. To this end, NCSA has defined a new technology path that includes the integration and provision of cyberservices in support of data analysis, mining, and visualization. NCSA has begun to develop and apply a data mining system, NCSA Data-to-Knowledge (D2K), in conjunction with both the application and research communities. NCSA D2K will enable the formation of model-based application workflows and visual programming interfaces for rapid data analysis. The Java-based D2K framework, which integrates analytical data mining methods with data management, data transformation, and information visualization tools, will be configurable from the cyberservices (web and grid services, tools, etc.) viewpoint to solve a wide range of important data mining problems. This effort will use modules, such as new classification methods for the detection of high-risk geoscience events, and existing D2K data management, machine learning, and information visualization modules. A D2K cyberservices interface will be developed to seamlessly connect client applications with remote back-end D2K servers, providing computational resources for data mining and integration with local or remote data stores. This work is being coordinated with SDSC's data and services efforts. The new NCSA Visualization embedded workflow environment (NVIEW) will be integrated with D2K functionality to tightly couple informatics and scientific visualization with the data analysis and management services. Visualization services will access and filter disparate data sources, simplifying tasks such as fusing related data from distinct sources into a coherent visual representation. This approach enables collaboration among geographically dispersed researchers via portals and front-end clients, and the coupling with data management services enables recording associations among datasets and building annotation systems into visualization tools and portals, giving scientists a persistent, shareable, virtual lab notebook. To facilitate provision of these cyberservices to the national community, NCSA will be providing a computational environment for large-scale data assimilation, analysis, mining, and visualization. This will be initially implemented on the new 512-processor shared-memory SGIs recently purchased by NCSA. In addition to standard batch capabilities, NCSA will provide on-demand capabilities for those projects requiring rapid response (e.g., development of severe weather, earthquake events) for decision makers. It will also be used for non-sequential interactive analysis of data sets where it is important to have access to large data volumes over space and time.
Distributed GPU Computing in GIScience
NASA Astrophysics Data System (ADS)
Jiang, Y.; Yang, C.; Huang, Q.; Li, J.; Sun, M.
2013-12-01
Geoscientists strive to discover the potential principles and patterns hidden inside ever-growing Big Data for scientific discoveries. To better achieve this objective, more capable computing resources are required to process, analyze and visualize Big Data (Ferreira et al., 2003; Li et al., 2013). Current CPU-based computing techniques cannot promptly meet the computing challenges caused by the increasing amount of datasets from different domains, such as social media, earth observation, and environmental sensing (Li et al., 2013). Meanwhile, CPU-based computing resources structured as clusters or supercomputers are costly. In the past several years, as GPU-based technology has matured in both capability and performance, GPU-based computing has emerged as a new computing paradigm. Compared to the traditional microprocessor, the modern GPU, as a compelling alternative microprocessor, has outstanding parallel processing capability with cost-effectiveness and efficiency (Owens et al., 2008), although it was initially designed for graphical rendering in the visualization pipeline. This presentation reports a distributed GPU computing framework for integrating GPU-based computing within a distributed environment. Within this framework, 1) for each single computer, both GPU-based and CPU-based computing resources can be fully utilized to improve the performance of visualizing and processing Big Data; 2) within a network environment, a variety of computers can be used to build up a virtual supercomputer to support CPU-based and GPU-based computing in a distributed computing environment; 3) GPUs, as graphics-targeted devices, are used to greatly improve the rendering efficiency in distributed geo-visualization, especially for 3D/4D visualization. Key words: Geovisualization, GIScience, Spatiotemporal Studies References: 1. Ferreira de Oliveira, M. C., & Levkowitz, H. (2003). From visual data exploration to visual data mining: A survey. Visualization and Computer Graphics, IEEE Transactions on, 9(3), 378-394. 2. Li, J., Jiang, Y., Yang, C., Huang, Q., & Rice, M. (2013). Visualizing 3D/4D Environmental Data Using Many-core Graphics Processing Units (GPUs) and Multi-core Central Processing Units (CPUs). Computers & Geosciences, 59(9), 78-89. 3. Owens, J. D., Houston, M., Luebke, D., Green, S., Stone, J. E., & Phillips, J. C. (2008). GPU computing. Proceedings of the IEEE, 96(5), 879-899.
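A minimal sketch of the single-machine CPU/GPU split described in point 1: the same array reduction runs on the CPU with NumPy and, when available, on the GPU with CuPy. This assumes a CUDA-capable GPU and the cupy package, and is unrelated to the authors' actual framework.

```python
import numpy as np

data = np.random.rand(10_000_000).astype(np.float32)
cpu_result = float(np.sum(data))          # CPU-based computation

try:
    import cupy as cp
    gpu_data = cp.asarray(data)           # transfer to GPU memory
    gpu_result = float(cp.sum(gpu_data))  # GPU-based computation
    print("CPU:", cpu_result, "GPU:", gpu_result)
except ImportError:
    print("CuPy not installed; CPU only:", cpu_result)
```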
Dynamic Collaboration Infrastructure for Hydrologic Science
NASA Astrophysics Data System (ADS)
Tarboton, D. G.; Idaszak, R.; Castillo, C.; Yi, H.; Jiang, F.; Jones, N.; Goodall, J. L.
2016-12-01
Data and modeling infrastructure is becoming increasingly accessible to water scientists. HydroShare is a collaborative environment that currently offers water scientists the ability to access modeling and data infrastructure in support of data-intensive modeling and analysis. It supports the sharing of and collaboration around "resources", which are social objects defined to include both data and models in a structured, standardized format. Users collaborate around these objects via comments, ratings, and groups. HydroShare also supports web services and cloud-based computation for the execution of hydrologic models and the analysis and visualization of hydrologic data. However, the quantity and variety of data and modeling infrastructure that can be accessed from environments like HydroShare are increasing. Storage infrastructure can range from one's local PC to campus or organizational storage to storage in the cloud. Modeling or computing infrastructure can range from one's desktop to departmental clusters to national HPC resources to grid and cloud computing resources. How does one orchestrate this vast array of data and computing infrastructure without needing to learn each new system? A common limitation across these systems is the lack of efficient integration between data transport mechanisms and the corresponding high-level services to support large distributed data and compute operations. A scientist running a hydrology model from their desktop may need to process a large collection of files across the aforementioned storage and compute resources and various national databases. To address these community challenges, a proof-of-concept prototype was created integrating HydroShare with RADII (Resource Aware Data-centric collaboration Infrastructure) to provide software infrastructure enabling the comprehensive and rapid dynamic deployment of what we refer to as "collaborative infrastructure." In this presentation we discuss the results of this proof-of-concept prototype, which enabled HydroShare users to readily instantiate virtual infrastructure marshaling arbitrary combinations, varieties, and quantities of distributed data and computing infrastructure in addressing big problems in hydrology.
The Interplanetary Meteoroid Environment for eXploration
NASA Astrophysics Data System (ADS)
Soja, R.; Sommer, M.; Srama, R.; Strub, P.; Grün, E.; Rodmann, J.; Vaubaillon, J.; Hornig, A.; Bausch, L.
2014-07-01
The Interplanetary Meteoroid Environment for eXploration (IMEX) project, funded by the European Space Agency (ESA), aims to characterize dust trails and streams produced by comets in the inner solar system. The goal is to predict meteor showers at any position or time in the solar system, such as at specific spacecraft or planets. This model will allow for the assessment of the dust impact hazard to spacecraft, which is important because hypervelocity impacts of micrometeoroids can damage or destroy spacecraft or their subsystems through physical damage or electromagnetic effects. Such considerations are particularly important in the context of human exploration of the solar system. Additionally, such a model will allow for scientific study of specific trails and their connections to observed dust phenomena, such as cometary trails and new meteor showers at Earth. We have recently expanded the model to include explicit integrations of large numbers of particles from each comet, utilizing the Constellation platform to perform the calculations. This is a distributed computing system, where currently 10,000 users are donating their idle computing time at home and thus generating a virtual supercomputer of 40,000 host PCs connected via the Internet (aerospaceresearch.net). This form of citizen science provides the required computing performance for simulating millions of particles ejected by each of the ~400 comets, while developing the relationship between scientists and the general public. The result will be a unique set of saved orbital information for a large number of cometary streams, allowing efficient computation of their locations at any point in space and time. Here we will present the results from several test streams and discuss the progress towards obtaining the full set of integrated particles for each of the selected ~400 short-period comets. We thank the individual Constellation users for their computing time.
Adaptation of a Control Center Development Environment for Industrial Process Control
NASA Technical Reports Server (NTRS)
Killough, Ronnie L.; Malik, James M.
1994-01-01
In the control center, raw telemetry data is received for storage, display, and analysis. This raw data must be combined and manipulated in various ways by mathematical computations to facilitate analysis, provide diversified fault detection mechanisms, and enhance display readability. A development tool called the Graphical Computation Builder (GCB) has been implemented which provides flight controllers with the capability to implement computations for use in the control center. The GCB provides a language that contains both general programming constructs and language elements specifically tailored for the control center environment. The GCB concept allows staff who are not skilled in computer programming to author and maintain computer programs. The GCB user is isolated from the details of external subsystem interfaces and has access to high-level functions such as matrix operators, trigonometric functions, and unit conversion macros. The GCB provides a high level of feedback during computation development that improves upon the often cryptic errors produced by computer language compilers. An equivalent need can be identified in the industrial data acquisition and process control domain: that of an integrated graphical development tool tailored to the application to hide the operating system, computer language, and data acquisition interface details. The GCB features a modular design which makes it suitable for technology transfer without significant rework. Control center-specific language elements can be replaced by elements specific to industrial process control.
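The flavor of a GCB-style computation can be sketched as follows: high-level helpers (unit-conversion macros, trigonometric functions, matrix operators) are exposed to authors who are not programmers, while subsystem interfaces stay hidden. All names here are hypothetical illustrations, not the actual GCB language.

```python
import numpy as np

def ft_to_m(x):          # unit-conversion "macro"
    return x * 0.3048

def deg_to_rad(x):
    return np.radians(x)

def telemetry_computation(alt_ft, pitch_deg, dcm):
    """Combine raw telemetry values into derived quantities for display."""
    alt_m = ft_to_m(alt_ft)                 # unit conversion
    pitch = deg_to_rad(pitch_deg)           # trigonometric helper
    body_vec = dcm @ np.array([np.cos(pitch), 0.0, np.sin(pitch)])  # matrix op
    return alt_m, body_vec

print(telemetry_computation(30000, 2.5, np.eye(3)))
```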
The Numerical Propulsion System Simulation: An Overview
NASA Technical Reports Server (NTRS)
Lytle, John K.
2000-01-01
Advances in computational technology and in physics-based modeling are making large-scale, detailed simulations of complex systems possible within the design environment. For example, the integration of computing, communications, and aerodynamics has reduced the time required to analyze major propulsion system components from days and weeks to minutes and hours. This breakthrough has enabled the detailed simulation of major propulsion system components to become a routine part of designing systems, providing the designer with critical information about the components early in the design process. This paper describes the development of the numerical propulsion system simulation (NPSS), a modular and extensible framework for the integration of multicomponent and multidisciplinary analysis tools using geographically distributed resources such as computing platforms, data bases, and people. The analysis is currently focused on large-scale modeling of complete aircraft engines. This will provide the product developer with a "virtual wind tunnel" that will reduce the number of hardware builds and tests required during the development of advanced aerospace propulsion systems.
Virtual reality and brain computer interface in neurorehabilitation
Dahdah, Marie; Driver, Simon; Parsons, Thomas D.; Richter, Kathleen M.
2016-01-01
The potential benefit of technology to enhance recovery after central nervous system injuries is an area of increasing interest and exploration. The primary emphasis to date has been motor recovery/augmentation and communication. This paper introduces two original studies to demonstrate how advanced technology may be integrated into subacute rehabilitation. The first study addresses the feasibility of brain computer interface with patients on an inpatient spinal cord injury unit. The second study explores the validity of two virtual environments with acquired brain injury as part of an intensive outpatient neurorehabilitation program. These preliminary studies support the feasibility of advanced technologies in the subacute stage of neurorehabilitation. These modalities were well tolerated by participants and could be incorporated into patients' inpatient and outpatient rehabilitation regimens without schedule disruptions. This paper expands the limited literature base regarding the use of advanced technologies in the early stages of recovery for neurorehabilitation populations and speaks favorably to the potential integration of brain computer interface and virtual reality technologies as part of a multidisciplinary treatment program. PMID:27034541
1982-11-12
Figure 3: the KAPSE/Host interface provides file I/O, program invocation, and other access and control services over the host operating system, peripherals, and networks. 3.2.4.3.8.5 Transitory Windows: the TRANSITORY flag is used to prevent permanent dependence on temporary windows created simply for focusing on a part of the display. The KAPSE/Tool interfaces are defined in terms of these low-level host-independent interfaces. In addition, the KAPSE/Host interface packages prevent the application from depending directly on the host.
Computer-aided acquisition and logistics support (CALS): Concept of Operations for Depot Maintenance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bourgeois, N.C.; Greer, D.K.
1993-04-01
This CALS Concept of Operations for Depot Maintenance provides the foundation strategy and the near-term tactical plan for CALS implementation in the depot maintenance environment. The user requirements enumerated and the overarching architecture outlined serve as the primary framework for implementation planning. The seamless integration of depot maintenance business processes and supporting information systems with the emerging global CALS environment will be critical to the efficient realization of depot users' information requirements and, as such, will be a fundamental theme in depot implementations.
Fusion interfaces for tactical environments: An application of virtual reality technology
NASA Technical Reports Server (NTRS)
Haas, Michael W.
1994-01-01
The term Fusion Interface is defined as a class of interface which integrally incorporates both virtual and nonvirtual concepts and devices across the visual, auditory, and haptic sensory modalities. A fusion interface is a multisensory virtually-augmented synthetic environment. A new facility has been developed within the Human Engineering Division of the Armstrong Laboratory dedicated to exploratory development of fusion interface concepts. This new facility, the Fusion Interfaces for Tactical Environments (FITE) Facility, is a specialized flight simulator enabling efficient concept development through rapid prototyping and direct experience of new fusion concepts. The FITE Facility also supports evaluation of fusion concepts by operational fighter pilots in an air combat environment. The facility is utilized by a multidisciplinary design team composed of human factors engineers, electronics engineers, computer scientists, experimental psychologists, and operational pilots. The FITE computational architecture is composed of twenty-five 80486-based microcomputers operating in real time. The microcomputers generate out-the-window visuals, in-cockpit and head-mounted visuals, localized auditory presentations, and haptic displays on the stick and rudder pedals, as well as executing weapons models, aerodynamic models, and threat models.
Tolaymat, Thabet; El Badawy, Amro; Sequeira, Reynold; Genaidy, Ash
2015-11-15
There is an urgent need for broad and integrated studies that address the risks of engineered nanomaterials (ENMs) along the different endpoints of the society, environment, and economy (SEE) complex adaptive system. This article presents an integrated science-based methodology to assess the potential risks of engineered nanomaterials. To achieve the study objective, two major tasks are accomplished: knowledge synthesis and an algorithmic computational methodology. The knowledge synthesis task is designed to capture "what is known" and to outline the gaps in knowledge from an ENM risk perspective. The algorithmic computational methodology is geared toward the provision of decisions and an understanding of the risks of ENMs along different endpoints for the constituents of the SEE complex adaptive system. The approach presented herein allows for addressing the formidable task of assessing the implications and risks of exposure to ENMs, with the long-term goal to build a decision-support system to guide key stakeholders in the SEE system towards building sustainable ENMs and nano-enabled products. Published by Elsevier B.V.
Simulating Humans as Integral Parts of Spacecraft Missions
NASA Technical Reports Server (NTRS)
Bruins, Anthony C.; Rice, Robert; Nguyen, Lac; Nguyen, Heidi; Saito, Tim; Russell, Elaine
2006-01-01
The Collaborative-Virtual Environment Simulation Tool (C-VEST) software was developed for use in a NASA project entitled "3-D Interactive Digital Virtual Human." The project is oriented toward the use of a comprehensive suite of advanced software tools in computational simulations for the purposes of human-centered design of spacecraft missions and of the spacecraft, space suits, and other equipment to be used on the missions. The C-VEST software affords an unprecedented suite of capabilities for three-dimensional virtual-environment simulations with plug-in interfaces for physiological data, haptic interfaces, plug-and-play software, realtime control, and/or playback control. Mathematical models of the mechanics of the human body and of the aforementioned equipment are implemented in software and integrated to simulate forces exerted on and by astronauts as they work. The computational results can then support the iterative processes of design, building, and testing in applied systems engineering and integration. The results of the simulations provide guidance for devising measures to counteract effects of microgravity on the human body and for the rapid development of virtual (that is, simulated) prototypes of advanced space suits, cockpits, and robots to enhance the productivity, comfort, and safety of astronauts. The unique ability to implement human-in-the-loop immersion also makes the C-VEST software potentially valuable for use in commercial and academic settings beyond the original space-mission setting.
Automation of the CFD Process on Distributed Computing Systems
NASA Technical Reports Server (NTRS)
Tejnil, Ed; Gee, Ken; Rizk, Yehia M.
2000-01-01
A script system was developed to automate and streamline portions of the CFD process. The system was designed to facilitate the use of CFD flow solvers on supercomputer and workstation platforms within a parametric design event. Integrating solver pre- and postprocessing phases, the fully automated ADTT script system marshalled the required input data, submitted the jobs to available computational resources, and processed the resulting output data. A number of codes were incorporated into the script system, which itself was part of a larger integrated design environment software package. The IDE and scripts were used in a design event involving a wind tunnel test. This experience highlighted the need for efficient data and resource management in all parts of the CFD process. To facilitate the use of CFD methods to perform parametric design studies, the script system was developed using UNIX shell and Perl languages. The goal of the work was to minimize the user interaction required to generate the data necessary to fill a parametric design space. The scripts wrote out the required input files for the user-specified flow solver, transferred all necessary input files to the computational resource, submitted and tracked the jobs using the resource queuing structure, and retrieved and post-processed the resulting dataset. For computational resources that did not run queueing software, the script system established its own simple first-in-first-out queueing structure to manage the workload. A variety of flow solvers were incorporated in the script system, including INS2D, PMARC, TIGER and GASP. Adapting the script system to a new flow solver was made easier through the use of object-oriented programming methods. The script system was incorporated into an ADTT integrated design environment and evaluated as part of a wind tunnel experiment. The system successfully generated the data required to fill the desired parametric design space. This stressed the computational resources required to compute and store the information. The scripts were continually modified to improve the utilization of the computational resources and reduce the likelihood of data loss due to failures. An ad-hoc file server was created to manage the large amount of data being generated as part of the design event. Files were stored and retrieved as needed to create new jobs and analyze the results. Additional information is contained in the original.
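A minimal sketch of the fallback first-in-first-out queue described above for hosts without queueing software. The original system used UNIX shell and Perl, so this Python version is illustrative only, and the solver commands are placeholders.

```python
import subprocess
from collections import deque

queue = deque()  # simple first-in-first-out job queue

def submit(cmd):
    """Queue a flow-solver run (e.g., an INS2D or PMARC case)."""
    queue.append(cmd)

def run_all():
    """Execute queued jobs strictly in submission order."""
    while queue:
        cmd = queue.popleft()
        print("running:", cmd)
        result = subprocess.run(cmd, shell=True)
        if result.returncode != 0:
            print("job failed, continuing with next:", cmd)

submit("echo solver case_01")
submit("echo solver case_02")
run_all()
```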
Zhan, X.
2005-01-01
A parallel Fortran-MPI (Message Passing Interface) software for numerical inversion of the Laplace transform based on a Fourier series method is developed to meet the need of solving intensive computational problems involving the oscillatory water-level response to hydraulic tests in a groundwater environment. The software is a parallel version of ACM (The Association for Computing Machinery) Transactions on Mathematical Software (TOMS) Algorithm 796. Running 38 test examples indicated that implementation of MPI techniques with a distributed memory architecture speeds up the processing and improves the efficiency. Applications to oscillatory water levels in a well during aquifer tests are presented to illustrate how this package can be applied to solve complicated environmental problems involving differential and integral equations. The package is free and is easy to use for people with little or no previous experience in using MPI but who wish to get off to a quick start in parallel computing. © 2004 Elsevier Ltd. All rights reserved.
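The underlying Fourier series method can be sketched serially as below (the paper's package parallelizes the series sum with MPI across processes). The parameters a, T, and N are illustrative accuracy choices, and this sketch is not TOMS Algorithm 796 itself.

```python
import numpy as np

def invert_laplace(F, t, T=10.0, a=0.5, N=2000):
    """Approximate f(t) from its Laplace transform F(s) for 0 < t < 2T.

    Fourier series approximation:
      f(t) ~ (e^(a t)/T) * [ F(a)/2 + sum_k Re( F(a + i k pi/T) e^(i k pi t/T) ) ]
    """
    k = np.arange(1, N + 1)
    s = a + 1j * k * np.pi / T
    series = 0.5 * np.real(F(a + 0j)) + np.sum(
        np.real(F(s) * np.exp(1j * k * np.pi * t / T)))
    return np.exp(a * t) / T * series

# Check against a known transform pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t).
F = lambda s: 1.0 / (s + 1.0)
print(invert_laplace(F, 1.0), "vs", np.exp(-1.0))
```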
BIM Based Virtual Environment for Fire Emergency Evacuation
Rezgui, Yacine; Ong, Hoang N.
2014-01-01
Recent building emergency management research has highlighted the need for the effective utilization of dynamically changing building information. BIM (building information modelling) can play a significant role in this process due to its comprehensive and standardized data format and integrated process. This paper introduces a BIM based virtual environment supported by virtual reality (VR) and a serious game engine to address several key issues for building emergency management, for example, timely two-way information updating and better emergency awareness training. The focus of this paper lies on how to utilize BIM as a comprehensive building information provider to work with virtual reality technologies to build an adaptable immersive serious game environment to provide real-time fire evacuation guidance. The innovation lies on the seamless integration between BIM and a serious game based virtual reality (VR) environment aiming at practical problem solving by leveraging state-of-the-art computing technologies. The system has been tested for its robustness and functionality against the development requirements, and the results showed promising potential to support more effective emergency management. PMID:25197704
NASA Technical Reports Server (NTRS)
Hribar, Michelle R.; Frumkin, Michael; Jin, Haoqiang; Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)
1998-01-01
Over the past decade, high performance computing has evolved rapidly; systems based on commodity microprocessors have been introduced in quick succession from at least seven vendors/families. Porting codes to every new architecture is a difficult problem; in particular, here at NASA, there are many large CFD applications that are very costly to port to new machines by hand. The LCM ("Legacy Code Modernization") Project is the development of an integrated parallelization environment (IPE) which performs the automated mapping of legacy CFD (Fortran) applications to state-of-the-art high performance computers. While most projects to port codes focus on the parallelization of the code, we consider porting to be an iterative process consisting of several steps: 1) code cleanup, 2) serial optimization, 3) parallelization, 4) performance monitoring and visualization, 5) intelligent tools for automated tuning using performance prediction, and 6) machine-specific optimization. The approach for building this parallelization environment is to build the components for each of the steps simultaneously and then integrate them together. The demonstration will exhibit our latest research in building this environment: 1. Parallelizing tools and compiler evaluation. 2. Code cleanup and serial optimization using automated scripts. 3. Development of a code generator for performance prediction. 4. Automated partitioning. 5. Automated insertion of directives. These demonstrations will exhibit the effectiveness of an automated approach for all the steps involved with porting and tuning a legacy code application for a new architecture.
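Step 5 (automated insertion of directives) might look like the toy below, which places an OpenMP-style directive above simple Fortran DO loops. A real IPE performs dependence analysis before inserting directives; this regex sketch does not, and is illustration only.

```python
import re

def insert_directives(fortran_src):
    """Insert an OpenMP-style directive above each simple DO loop."""
    out = []
    for line in fortran_src.splitlines():
        if re.match(r"\s*do\s+\w+\s*=", line, re.IGNORECASE):
            indent = line[:len(line) - len(line.lstrip())]
            out.append(indent + "!$omp parallel do")
        out.append(line)
    return "\n".join(out)

src = """      do i = 1, n
         a(i) = b(i) + c(i)
      end do"""
print(insert_directives(src))
```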
NASA Astrophysics Data System (ADS)
Klieger, Aviva; Ben-Hur, Yehuda; Bar-Yossef, Nurit
2010-04-01
The study examines the professional development of junior-high-school teachers participating in the Israeli "Katom" (Computer for Every Class, Student and Teacher) Program, begun in 2004. A three-circle support and training model was developed for teachers' professional development. The first circle applies to all teachers in the program; the second, to all teachers at individual schools; the third, to teachers of specific disciplines. The study reveals and describes the attitudes of science teachers to the integration of laptop computers and to the accompanying professional development model. Semi-structured interviews were conducted with eight science teachers from the four schools participating in the program. The interviews were analyzed using a relational framework derived from the information that emerged from them. Two factors influenced science teachers' professional development: (1) the introduction of laptops to the teachers and students, and (2) the support and training system. Interview analysis shows that the disciplinary training is most relevant to teachers and that they are very interested in belonging to the professional science teachers' community. They also prefer face-to-face meetings in their school. Among the difficulties they noted were the new learning environment, including control of student computers, computer integration in laboratory work, and technical problems. Laptop computers contributed significantly to teachers' professional and personal development and to a shift from teacher-centered to student-centered teaching. One-to-one laptops also changed the schools' digital culture. The findings are important for designing concepts and models for professional development when introducing technological innovation into the educational system.
Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Yier
As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified data, are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both private and public sectors will require robust technologies to protect their computing infrastructure. The research outcomes from this project try to address all these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks, and is therefore highly practical. We also introduced HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We also perform testing using HA2lloc in a simulation environment and find that the approach is capable of preventing common memory vulnerabilities.
Rodriguez, Blanca; Carusi, Annamaria; Abi-Gerges, Najah; Ariga, Rina; Britton, Oliver; Bub, Gil; Bueno-Orovio, Alfonso; Burton, Rebecca A B; Carapella, Valentina; Cardone-Noott, Louie; Daniels, Matthew J; Davies, Mark R; Dutta, Sara; Ghetti, Andre; Grau, Vicente; Harmer, Stephen; Kopljar, Ivan; Lambiase, Pier; Lu, Hua Rong; Lyon, Aurore; Minchole, Ana; Muszkiewicz, Anna; Oster, Julien; Paci, Michelangelo; Passini, Elisa; Severi, Stefano; Taggart, Peter; Tinker, Andy; Valentin, Jean-Pierre; Varro, Andras; Wallman, Mikael; Zhou, Xin
2016-09-01
Both biomedical research and clinical practice rely on complex datasets for the physiological and genetic characterization of human hearts in health and disease. Given the complexity and variety of approaches and recordings, there is now growing recognition of the need to embed computational methods in cardiovascular medicine and science for analysis, integration and prediction. This paper describes a Workshop on Computational Cardiovascular Science that created an international, interdisciplinary and inter-sectorial forum to define the next steps for a human-based approach to disease supported by computational methodologies. The main ideas highlighted were (i) a shift towards human-based methodologies, spurred by advances in new in silico, in vivo, in vitro, and ex vivo techniques and the increasing acknowledgement of the limitations of animal models. (ii) Computational approaches complement, expand, bridge, and integrate in vitro, in vivo, and ex vivo experimental and clinical data and methods, and as such they are an integral part of human-based methodologies in pharmacology and medicine. (iii) The effective implementation of multi- and interdisciplinary approaches, teams, and training combining and integrating computational methods with experimental and clinical approaches across academia, industry, and healthcare settings is a priority. (iv) The human-based cross-disciplinary approach requires experts in specific methodologies and domains, who also have the capacity to communicate and collaborate across disciplines and cross-sector environments. (v) This new translational domain for human-based cardiology and pharmacology requires new partnerships supported financially and institutionally across sectors. Institutional, organizational, and social barriers must be identified, understood and overcome in each specific setting. © The Author 2015. Published by Oxford University Press on behalf of the European Society of Cardiology.
Construction of dynamic stochastic simulation models using knowledge-based techniques
NASA Technical Reports Server (NTRS)
Williams, M. Douglas; Shiva, Sajjan G.
1990-01-01
Over the past three decades, computer-based simulation models have proven themselves to be cost-effective alternatives to the more structured deterministic methods of systems analysis. During this time, many techniques, tools and languages for constructing computer-based simulation models have been developed. More recently, advances in knowledge-based system technology have led many researchers to note the similarities between knowledge-based programming and simulation technologies and to investigate the potential application of knowledge-based programming techniques to simulation modeling. The integration of conventional simulation techniques with knowledge-based programming techniques is discussed to provide a development environment for constructing knowledge-based simulation models. A comparison of the techniques used in the construction of dynamic stochastic simulation models and those used in the construction of knowledge-based systems provides the requirements for the environment. This leads to the design and implementation of a knowledge-based simulation development environment. These techniques were used in the construction of several knowledge-based simulation models including the Advanced Launch System Model (ALSYM).
Development of a Space Radiation Monte Carlo Computer Simulation
NASA Technical Reports Server (NTRS)
Pinsky, Lawrence S.
1997-01-01
The ultimate purpose of this effort is to undertake the development of a computer simulation of the radiation environment encountered in spacecraft which is based upon the Monte Carlo technique. The current plan is to adapt and modify a Monte Carlo calculation code known as FLUKA, which is presently used in high energy and heavy ion physics, to simulate the radiation environment present in spacecraft during missions. The initial effort would be directed towards modeling the MIR and Space Shuttle environments, but the long range goal is to develop a program for the accurate prediction of the radiation environment likely to be encountered on future planned endeavors such as the Space Station, a Lunar Return Mission, or a Mars Mission. The longer the mission, especially missions that will not have the shielding protection of the Earth's magnetic field, the more critical the radiation threat will be. The ultimate goal of this research is to produce a code that will be useful to mission planners and engineers who need to have detailed projections of radiation exposures at specified locations within the spacecraft and for either specific times during the mission or integrated over the entire mission. In concert with the development of the simulation, it is desired to integrate it with a state-of-the-art interactive 3-D graphics-capable analysis package known as ROOT, to allow easy investigation and visualization of the results. The efforts reported on here include the initial development of the program and the demonstration of the efficacy of the technique through a model simulation of the MIR environment. This information was used to write a proposal to obtain follow-on permanent funding for this project.
LXtoo: an integrated live Linux distribution for the bioinformatics community
2012-01-01
Background Recent advances in high-throughput technologies dramatically increase biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Findings Unlike most of the existing live Linux distributions for bioinformatics, which limit their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. Conclusions LXtoo aims to provide a well-supported computing environment tailored for bioinformatics research, reducing duplication of efforts in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo. PMID:22813356
LXtoo: an integrated live Linux distribution for the bioinformatics community.
Yu, Guangchuang; Wang, Li-Gen; Meng, Xiao-Hua; He, Qing-Yu
2012-07-19
Recent advances in high-throughput technologies dramatically increase biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Unlike most of the existing live Linux distributions for bioinformatics, which limit their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. LXtoo aims to provide a well-supported computing environment tailored for bioinformatics research, reducing duplication of efforts in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo.
Lahav, Orly; Schloerb, David W.; Srinivasan, Mandayam A.
2014-01-01
This paper presents the integration of a virtual environment (BlindAid) into an orientation and mobility rehabilitation program as a training aid for people who are blind. BlindAid allows users to interact with different virtual structures and objects through auditory and haptic feedback. This research explores if and how use of the BlindAid in conjunction with a rehabilitation program can help people who are blind train themselves in familiar and unfamiliar spaces. The study focused on nine participants who were congenitally, adventitiously, or newly blind, during their orientation and mobility rehabilitation program at the Carroll Center for the Blind (Newton, Massachusetts, USA). The research was implemented using virtual environment (VE) exploration tasks and orientation tasks in virtual environments and real spaces. The methodology encompassed both qualitative and quantitative methods, including interviews, a questionnaire, videotape recording, and user computer logs. The results demonstrated that the BlindAid training gave participants additional time to explore the virtual environment systematically. Secondly, it helped elucidate several issues concerning the potential strengths of the BlindAid system as a training aid for orientation and mobility for both adults and teenagers who are congenitally, adventitiously, or newly blind. PMID:25284952
NASA Technical Reports Server (NTRS)
Babrauckas, Theresa
2000-01-01
The Affordable High Performance Computing (AHPC) project demonstrated that high-performance computing based on a distributed network of computer workstations is a cost-effective alternative to vector supercomputers for running CPU- and memory-intensive design and analysis tools. The AHPC project created an integrated system called a Network Supercomputer. By connecting computer workstations through a network and utilizing the workstations when they are idle, the resulting distributed-workstation environment has the same performance and reliability levels as the Cray C90 vector supercomputer at less than 25 percent of the C90 cost. In fact, the cost comparison between a Cray C90 supercomputer and Sun workstations showed that the number of distributed networked workstations equivalent to a C90 costs approximately 8 percent of the C90.
A computing method for spatial accessibility based on grid partition
NASA Astrophysics Data System (ADS)
Ma, Linbing; Zhang, Xinchang
2007-06-01
An accessibility computing method and process based on grid partition is put forward in this paper. As two important factors affecting traffic, the density of the road network and the relative spatial resistance of different land uses were integrated into the traffic cost computed for each grid cell. The A* algorithm was introduced to search for the path of optimum traffic cost through the grid; a detailed search process and the definition of the heuristic evaluation function are described in the paper. The method can therefore be implemented simply, and its data sources are easy to obtain. Moreover, by changing the heuristic search information, more reasonable results can be obtained. To validate the research, a software package was developed in the C# language under the ArcEngine9 environment. Applying the computing method, a case study on the accessibility of business districts in Guangzhou city was carried out.
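A compact version of the grid search described above. The per-cell costs and the admissible heuristic (Manhattan distance scaled by the minimum cell cost) are illustrative assumptions, not the paper's actual cost model.

```python
import heapq

def astar(cost, start, goal):
    """Cheapest accumulated traffic cost from start to goal over a cost grid."""
    rows, cols = len(cost), len(cost[0])
    cmin = min(min(row) for row in cost)
    h = lambda p: cmin * (abs(p[0] - goal[0]) + abs(p[1] - goal[1]))  # admissible
    frontier = [(h(start), 0.0, start)]
    best = {start: 0.0}
    while frontier:
        _, g, p = heapq.heappop(frontier)
        if p == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            q = (p[0] + dr, p[1] + dc)
            if 0 <= q[0] < rows and 0 <= q[1] < cols:
                ng = g + cost[q[0]][q[1]]  # cost of entering cell q
                if ng < best.get(q, float("inf")):
                    best[q] = ng
                    heapq.heappush(frontier, (ng + h(q), ng, q))
    return float("inf")

# Per-cell traffic cost combining road density and land-use resistance (toy values).
grid = [[1, 1, 5],
        [9, 1, 5],
        [9, 1, 1]]
print(astar(grid, (0, 0), (2, 2)))  # optimal accumulated cost: 4
```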
PCSIM: A Parallel Simulation Environment for Neural Circuits Fully Integrated with Python
Pecevski, Dejan; Natschläger, Thomas; Schuch, Klaus
2008-01-01
The Parallel Circuit SIMulator (PCSIM) is a software package for simulation of neural circuits. It is primarily designed for distributed simulation of large scale networks of spiking point neurons. Although its computational core is written in C++, PCSIM's primary interface is implemented in the Python programming language, which is a powerful programming environment and allows the user to easily integrate the neural circuit simulator with data analysis and visualization tools to manage the full neural modeling life cycle. The main focus of this paper is to describe PCSIM's full integration into Python and the benefits thereof. In particular we will investigate how the automatically generated bidirectional interface and PCSIM's object-oriented modular framework enable the user to adopt a hybrid modeling approach: using and extending PCSIM's functionality either employing pure Python or C++ and thus combining the advantages of both worlds. Furthermore, we describe several supplementary PCSIM packages written in pure Python and tailored towards setting up and analyzing neural simulations. PMID:19543450
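The hybrid modeling approach can be illustrated generically: a compiled core would do the heavy lifting while pure Python extends a model concept and analyzes results with standard tools. The module and class names below are hypothetical stand-ins, not the actual PCSIM API.

```python
import numpy as np
# import pcsim  # hypothetical: the compiled simulator core would be imported here

class PoissonInputNeuron:
    """Pure-Python extension of a (hypothetical) core input-neuron concept."""
    def __init__(self, rate_hz):
        self.rate = rate_hz

    def spikes(self, duration_s, seed=0):
        rng = np.random.default_rng(seed)
        n = rng.poisson(self.rate * duration_s)
        return np.sort(rng.uniform(0.0, duration_s, n))

train = PoissonInputNeuron(rate_hz=5.0).spikes(duration_s=10.0)
print(len(train), "spikes; mean inter-spike interval:",
      float(np.diff(train).mean()))
# The train could then be handed to the compiled core for network simulation,
# with recorded results pulled back as NumPy arrays for analysis and plotting.
```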
NASA Astrophysics Data System (ADS)
Jana, Suman; Biswas, Pabitra Kumar; Das, Upama
2018-04-01
The analytical and simulation-based study presented in this paper reports a comparison between a two-level inverter and a five-level inverter with the integration of supercapacitive storage in a renewable energy system. Time-dependent numerical models are used to measure the voltage and current response of the two-level and five-level inverters in a MATLAB Simulink based environment. In this study, supercapacitive sources fed by solar cells are used as input sources to examine the response of the multilevel inverter with the integration of a supercapacitor as the storage device of the renewable energy system. An RL load is used to compute the time response in the MATLAB Simulink based environment. From the simulation results, a comparative study has been made of the two inverter types with reference to their electrical behavior. It is also shown by simulation that the multilevel inverter can convert the energy stored in the supercapacitor, which is extracted from the renewable energy system.
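The qualitative difference between the two topologies can be sketched numerically: a five-level staircase tracks a sine reference more closely than a two-level one, so its distortion is lower. The levels and error metric below are illustrative and independent of the paper's Simulink models.

```python
import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)
ref = np.sin(2 * np.pi * t)  # per-unit sinusoidal reference

def quantize(signal, levels):
    """Round the reference onto a fixed set of inverter output levels."""
    lv = np.asarray(levels)
    return lv[np.argmin(np.abs(signal[:, None] - lv[None, :]), axis=1)]

two_level = quantize(ref, [-1.0, 1.0])
five_level = quantize(ref, [-1.0, -0.5, 0.0, 0.5, 1.0])

for name, out in (("2-level", two_level), ("5-level", five_level)):
    err = out - ref
    print(name, "rms error:", round(float(np.sqrt(np.mean(err ** 2))), 3))
```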
CONDUIT: A New Multidisciplinary Integration Environment for Flight Control Development
NASA Technical Reports Server (NTRS)
Tischler, Mark B.; Colbourne, Jason D.; Morel, Mark R.; Biezad, Daniel J.; Levine, William S.; Moldoveanu, Veronica
1997-01-01
A state-of-the-art computational facility for aircraft flight control design, evaluation, and integration called CONDUIT (Control Designer's Unified Interface) has been developed. This paper describes the CONDUIT tool and case study applications to complex rotary- and fixed-wing fly-by-wire flight control problems. Control system analysis and design optimization methods are presented, including definition of design specifications and system models within CONDUIT, and the multi-objective function optimization (CONSOL-OPTCAD) used to tune the selected design parameters. Design examples are based on flight test programs for which extensive data are available for validation. CONDUIT is used to analyze baseline control laws against pertinent military handling qualities and control system specifications. In both case studies, CONDUIT successfully exploits trade-offs between forward loop and feedback dynamics to significantly improve the expected handling qualities and minimize the required actuator authority. The CONDUIT system provides a new environment for integrated control system analysis and design, and has potential for significantly reducing the time and cost of control system flight test optimization.
Topological Schemas of Cognitive Maps and Spatial Learning.
Babichev, Andrey; Cheng, Sen; Dabaghian, Yuri A
2016-01-01
Spatial navigation in mammals is based on building a mental representation of their environment, a cognitive map. However, both the nature of this cognitive map and its underpinning in neural structures and activity remain vague. A key difficulty is that these maps are collective, emergent phenomena that cannot be reduced to a simple combination of inputs provided by individual neurons. In this paper we suggest computational frameworks for integrating the spiking signals of individual cells into a spatial map, which we call schemas. We provide examples of four schemas defined by different types of topological relations that may be neurophysiologically encoded in the brain and demonstrate that each schema provides its own large-scale characteristics of the environment, the schema integrals. Moreover, we find that, in all cases, these integrals are learned at a rate which is faster than the rate of complete training of neural networks. Thus, the proposed schema framework differentiates between the cognitive aspect of spatial learning and the physiological aspect at the neural network level.
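A toy rendition of a schema: place-cell firing fields are integrated into a coarse map by linking cells whose fields overlap, and the resulting graph's connectivity serves as one large-scale characteristic of the environment. Field positions, radii, and the use of the networkx package are assumptions for illustration, not the authors' construction.

```python
import itertools
import networkx as nx

fields = {            # cell id -> (x, y, radius) of its place field (invented)
    "c1": (0.0, 0.0, 1.0),
    "c2": (1.2, 0.0, 1.0),
    "c3": (2.4, 0.0, 1.0),
    "c4": (9.0, 9.0, 1.0),
}

G = nx.Graph()
G.add_nodes_from(fields)
for a, b in itertools.combinations(fields, 2):
    (xa, ya, ra), (xb, yb, rb) = fields[a], fields[b]
    if (xa - xb) ** 2 + (ya - yb) ** 2 <= (ra + rb) ** 2:  # overlapping fields
        G.add_edge(a, b)  # co-active cells are linked in the schema

print(nx.number_connected_components(G))  # disjoint regions of the coarse map
```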
Community Coordinated Modeling Center Support of Science Needs for Integrated Data Environment
NASA Technical Reports Server (NTRS)
Kuznetsova, M. M.; Hesse, M.; Rastatter, L.; Maddox, M.
2007-01-01
Space science models are an essential component of an integrated data environment. Space science models are indispensable tools to facilitate effective use of a wide variety of distributed scientific sources and to place multi-point local measurements into a global context. The Community Coordinated Modeling Center (CCMC) hosts a set of state-of-the-art space science models ranging from the solar atmosphere to the Earth's upper atmosphere. The majority of models residing at the CCMC are comprehensive, computationally intensive, physics-based models. To allow the models to be driven by data relevant to particular events, the CCMC developed an online data file generation tool that automatically downloads data from data providers and transforms them into the required format. The CCMC provides a tailored web-based visualization interface for the model output, as well as the capability to download simulation output in a portable standard format with comprehensive metadata, and a user-friendly model output analysis library of routines that can be called from any language that supports calling C. The CCMC is developing data interpolation tools that make it possible to present model output in the same format as observations. The CCMC invites community comments and suggestions to better address science needs for the integrated data environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayes, Birchard P; Michel, Kelly D; Few, Douglas A
From stereophonic, positional sound to high-definition imagery that is crisp and clean, high-fidelity computer graphics enhance our view, insight, and intuition regarding our environments and conditions. Contemporary 3-D modeling tools offer an open architecture framework that enables integration with other technologically innovative arenas. One innovation of great interest is Augmented Reality, the merging of virtual, digital environments with physical, real-world environments, creating a mixed reality where relevant data and information augment the real experience in real time by spatial or semantic context. Pairing 3-D virtual immersive models with a dynamic platform such as semi-autonomous robotics or personnel odometry systems to create a mixed reality offers a new and innovative design-information verification and inspection capability, improved evaluation accuracy, and an information gathering capability for nuclear facilities. Our paper discusses the integration of two innovative technologies, 3-D visualizations with inertial positioning systems, and the resulting augmented reality offered to the human inspector. The discussion in the paper includes an exploration of human and non-human (surrogate) inspections of a nuclear facility, integrated safeguards knowledge within a synchronized virtual model operated, or worn, by a human inspector, and the anticipated benefits to safeguards evaluations of facility operations.
Computer based human-centered display system
NASA Technical Reports Server (NTRS)
Temme, Leonard A. (Inventor); Still, David L. (Inventor)
2002-01-01
A human-centered informational display is disclosed that can be used with vehicles (e.g., aircraft) and in other operational environments where rapid, human-centered comprehension of an operational environment is required. The display, called OZ, integrates all cockpit information into a single display in such a way that the pilot can clearly understand at a glance his or her spatial orientation, flight performance, engine status and power management issues, radio aids, and the location of other air traffic, runways, weather, and terrain features. Because OZ presents the information as an integrated whole, the pilot instantly recognizes flight path deviations and is instinctively drawn to the corrective maneuvers. Our laboratory studies indicate that OZ transfers all of the integrated display information to the pilot in less than 200 milliseconds, and the reacquisition of scan can be accomplished just as quickly. Thus, the time constants for forming a mental model are near instantaneous, and the pilot's ability to keep up with rapidly changing and threatening environments is greatly enhanced. OZ is most easily compatible with aircraft in which flight path information is encoded electronically; with the correct sensors (which are currently available), OZ can be installed in essentially all current aircraft.
Computer simulation as a teaching aid in pharmacy management--Part 1: Principles of accounting.
Morrison, D J
1987-06-01
The need for pharmacists to develop management expertise through participation in formal courses is now widely acknowledged. Many schools of pharmacy lay the foundations for future management training by providing introductory courses as an integral or elective part of the undergraduate syllabus. The benefit of such courses may, however, be limited by the lack of opportunity for the student to apply the concepts and procedures in a practical working environment. Computer simulations provide a means to overcome this problem, particularly in the field of resource management. In this, the first of two articles, the use of a computer model to demonstrate basic accounting principles is described.
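The article itself does not list its model's internals, but the kind of accounting demonstration it describes can be sketched in a few lines: deriving a simple income statement for a trading period. All figures and category names below are hypothetical.

```python
# Minimal sketch of a pharmacy-management teaching model: basic accrual
# accounting for one trading period. All numbers are invented.

def income_statement(sales, opening_stock, purchases, closing_stock, expenses):
    """Cost of goods sold, gross profit, then net profit."""
    cost_of_goods_sold = opening_stock + purchases - closing_stock
    gross_profit = sales - cost_of_goods_sold
    net_profit = gross_profit - sum(expenses.values())
    return {"COGS": cost_of_goods_sold,
            "gross profit": gross_profit,
            "net profit": net_profit}

result = income_statement(
    sales=120_000.0,
    opening_stock=18_000.0,
    purchases=70_000.0,
    closing_stock=16_000.0,
    expenses={"wages": 22_000.0, "rent": 6_000.0, "utilities": 2_500.0},
)
for line, value in result.items():
    print(f"{line:>12}: {value:10.2f}")
```

A simulation built on relations like these lets students change one input (say, purchasing policy) and observe its effect on profit, which is precisely the practical exposure the article argues lecture courses lack.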
Access control and privacy in large distributed systems
NASA Technical Reports Server (NTRS)
Leiner, B. M.; Bishop, M.
1986-01-01
Large-scale distributed systems consist of workstations, mainframe computers, supercomputers, and other types of servers, all connected by a computer network. These systems are being used in a variety of applications, including the support of collaborative scientific research. In such an environment, issues of access control and privacy arise. Access control is required for several reasons, including the protection of sensitive resources and cost control. Privacy is required for similar reasons, including the protection of a researcher's proprietary results. A possible architecture for integrating available computer and communications security technologies into a system that meets these requirements is described. This architecture is meant as a starting point for discussion, rather than as the final answer.
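The paper's architecture is not reproduced in the abstract; as a minimal sketch of the access-control requirement it raises (protecting a researcher's proprietary results), one common mechanism is a per-resource access-control list. The principals, resource names, and rights below are illustrative.

```python
# Minimal sketch (not the paper's architecture): per-resource ACLs
# protecting proprietary results in a shared distributed system.
from enum import Flag, auto

class Right(Flag):
    READ = auto()
    WRITE = auto()

# ACL: resource -> {principal: granted rights}
acl = {
    "results/experiment-42": {"alice": Right.READ | Right.WRITE,
                              "bob": Right.READ},
}

def check_access(principal, resource, wanted):
    """Allow only if every requested right is granted (default deny)."""
    granted = acl.get(resource, {}).get(principal, Right(0))
    return (granted & wanted) == wanted

assert check_access("bob", "results/experiment-42", Right.READ)
assert not check_access("bob", "results/experiment-42", Right.WRITE)
assert not check_access("carol", "results/experiment-42", Right.READ)
```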
Vision Based Autonomous Robotic Control for Advanced Inspection and Repair
NASA Technical Reports Server (NTRS)
Wehner, Walter S.
2014-01-01
The advanced inspection system is an autonomous control and analysis system that improves inspection and remediation operations for ground and surface systems. It uses optical imaging technology with intelligent computer vision algorithms to analyze physical features of the real-world environment, make decisions, and learn from experience. The advanced inspection system is planned to control a robotic manipulator arm, an unmanned ground vehicle, and cameras remotely, automatically, and autonomously. Many computer vision, image processing, and machine learning techniques are available as open source for using vision as sensory feedback in decision-making and autonomous robotic movement. My responsibilities for the advanced inspection system are to create a software architecture that integrates and provides a framework for all the different subsystem components, to identify suitable open-source algorithms and techniques, and to integrate the robot hardware.
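As a hedged illustration of the open-source vision-as-feedback loop the abstract mentions (not the project's actual code), the sketch below uses OpenCV to locate the largest feature in a camera frame and derive a normalized steering offset that a robot controller could consume. The camera index and thresholds are assumptions.

```python
# Illustrative sketch: OpenCV feature detection as sensory feedback.
# Assumes OpenCV 4.x (two-value findContours return).
import cv2

def steering_offset(frame):
    """Normalized horizontal offset (-1..1) of the largest contour, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None  # nothing detected; caller should hold position
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    cx = m["m10"] / m["m00"]                      # contour centroid, x pixel
    half_width = frame.shape[1] / 2
    return (cx - half_width) / half_width

cap = cv2.VideoCapture(0)                         # hypothetical camera feed
ok, frame = cap.read()
if ok:
    print("steer command:", steering_offset(frame))  # feed to the UGV/arm controller
cap.release()
```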
Growth Control and Disease Mechanisms in Computational Embryogeny
NASA Technical Reports Server (NTRS)
Shapiro, Andrew A.; Yogev, Or; Antonsson, Erik K.
2008-01-01
This paper presents a novel approach to applying growth control and disease mechanisms in computational embryogeny. Our method, which mimics fundamental processes from biology, enables individuals to reach maturity in a controlled process within a stochastic environment. Three different mechanisms were implemented: disease mechanisms, gene suppression, and thermodynamic balancing. This approach was integrated into a structural evolutionary model that evolved continuum 3-D structures supporting an external load. Using these mechanisms, we were able to evolve individuals that reached a fixed size limit through the growth process, which was an integral part of the complete development process. The size of each individual was determined purely by the evolutionary process, with different individuals maturing to different sizes. Individuals that evolved with these characteristics were found to be very robust, supporting a wide range of external loads.
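A toy illustration of the core idea (not the authors' model): growth proceeds stochastically, and a disease-like suppression event, more likely as the individual approaches a size limit, halts development, so different individuals mature at different but bounded sizes. The rates and limit below are assumptions.

```python
# Toy sketch: stochastic growth halted by a disease/gene-suppression
# event, bounding individual size. All parameters are illustrative.
import random

def grow(max_size=100, disease_rate=0.02, seed=None):
    rng = random.Random(seed)
    size, suppressed = 1, False
    while not suppressed and size < max_size:
        size += 1                                  # add one structural element
        if rng.random() < disease_rate * size / max_size:
            suppressed = True                      # suppression event halts growth
    return size

sizes = [grow(seed=s) for s in range(10)]
print(sizes)  # individuals mature at different sizes, all below the limit
```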
Evolving software reengineering technology for the emerging innovative-competitive era
NASA Technical Reports Server (NTRS)
Hwang, Phillip Q.; Lock, Evan; Prywes, Noah
1994-01-01
This paper reports on a multi-tool commercial/military environment that combines software Domain Analysis techniques with Reusable Software and Reengineering of Legacy Software. It is based on the development of a military version for the Department of Defense (DOD). The integrated tools in the military version are: the Software Specification Assistant (SSA) and Software Reengineering Environment (SRE), developed by Computer Command and Control Company (CCCC) for the Naval Surface Warfare Center (NSWC) and Joint Logistics Commanders (JLC), and the Advanced Research Projects Agency (ARPA) STARS Software Engineering Environment (SEE), developed by Boeing for NAVAIR PMA 205. The paper describes transitioning these integrated tools to commercial use. There is a critical need for this transition, for the following reasons. First, to date, 70 percent of programmers' time is spent on software maintenance, and this work has not been facilitated by existing tools; the addition of Software Reengineering will facilitate software maintenance and upgrading, and the integrated tools will in fact support the entire software life cycle. Second, the integrated tools are essential to Business Process Reengineering, which seeks radical process innovations to achieve breakthrough results. Done well, process reengineering delivers extraordinary gains in process speed, productivity, and profitability; most importantly, it discovers new opportunities for products and services in collaboration with other organizations. Legacy computer software must be changed rapidly to support innovative business processes, and the integrated tools will give commercial organizations important competitive advantages, which in turn will increase employment by creating new business opportunities. Third, the integrated system will produce much higher quality software than use of the tools separately, because producing or upgrading software requires a keen understanding of extremely complex applications, an understanding the integrated tools facilitate. The radical savings in the time and cost of software, due to CASE tools that support combined Reuse of Software and Reengineering of Legacy Code, will add an important impetus to improving the automation of enterprises, reflected both in continuing operations and in innovating new business processes. The proposed multi-tool software development is based on state-of-the-art technology, which will be further advanced through the use of open systems for adding new tools and through experience in their use.
Computational biology for cardiovascular biomarker discovery.
Azuaje, Francisco; Devaux, Yvan; Wagner, Daniel
2009-07-01
Computational biology is essential in the process of translating biological knowledge into clinical practice, as well as in the understanding of biological phenomena based on the resources and technologies originating from the clinical environment. One such key contribution of computational biology is the discovery of biomarkers for predicting clinical outcomes using 'omic' information. This process involves the predictive modelling and integration of different types of data and knowledge for screening, diagnostic or prognostic purposes. Moreover, this requires the design and combination of different methodologies based on statistical analysis and machine learning. This article introduces key computational approaches and applications to biomarker discovery based on different types of 'omic' data. Although we emphasize applications in cardiovascular research, the computational requirements and advances discussed here are also relevant to other domains. We will start by introducing some of the contributions of computational biology to translational research, followed by an overview of methods and technologies used for the identification of biomarkers with predictive or classification value. The main types of 'omic' approaches to biomarker discovery will be presented with specific examples from cardiovascular research. This will include a review of computational methodologies for single-source and integrative data applications. Major computational methods for model evaluation will be described together with recommendations for reporting models and results. We will present recent advances in cardiovascular biomarker discovery based on the combination of gene expression and functional network analyses. The review will conclude with a discussion of key challenges for computational biology, including perspectives from the biosciences and clinical areas.
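A common instance of the predictive-modelling workflow this review surveys is a cross-validated classifier over an 'omic' feature matrix, with an L1 penalty doubling as candidate-biomarker selection. The sketch below uses synthetic data purely for illustration; in practice the feature matrix would hold, for example, gene-expression profiles, and the labels clinical outcomes.

```python
# Sketch of a typical biomarker-discovery workflow (synthetic data):
# cross-validated, sparsity-regularized classification of outcomes.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for an 'omic' matrix: 120 patients x 500 features, few informative.
X, y = make_classification(n_samples=120, n_features=500,
                           n_informative=10, random_state=0)

# L1 penalty shrinks uninformative features to zero (feature selection).
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
)
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {scores.mean():.2f} +/- {scores.std():.2f}")
```

Reporting the cross-validated score rather than the training fit reflects the model-evaluation recommendations the review discusses: with far more features than samples, unvalidated fits are almost always over-optimistic.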