ERIC Educational Resources Information Center
Koszalka, Tiffany A.; Wu, Yiyan
2010-01-01
Changes in engineering practices have spawned changes in engineering education and prompted the use of distributed learning environments. A distributed collaborative engineering design (CED) course was designed to engage engineering students in learning about and solving engineering design problems. The CED incorporated an advanced interactive…
Designing Distributed Learning Environments with Intelligent Software Agents
ERIC Educational Resources Information Center
Lin, Fuhua, Ed.
2005-01-01
"Designing Distributed Learning Environments with Intelligent Software Agents" reports on the most recent advances in agent technologies for distributed learning. Chapters are devoted to the various aspects of intelligent software agents in distributed learning, including the methodological and technical issues on where and how intelligent agents…
Using IMPRINT to Guide Experimental Design with Simulated Task Environments
2015-06-18
USING IMPRINT TO GUIDE EXPERIMENTAL DESIGN WITH SIMULATED TASK ENVIRONMENTS. Master's thesis (AFIT-ENG-MS-15-J-052), presented to the faculty of the Air Force Institute of Technology, June 2015. Distribution Statement A: approved for public release; distribution unlimited.
Arcade: A Web-Java Based Framework for Distributed Computing
NASA Technical Reports Server (NTRS)
Chen, Zhikai; Maly, Kurt; Mehrotra, Piyush; Zubair, Mohammad; Bushnell, Dennis M. (Technical Monitor)
2000-01-01
Distributed heterogeneous environments are being increasingly used to execute a variety of large-scale simulations and computational problems. We are developing Arcade, a web-based environment to design, execute, monitor, and control distributed applications. These targeted applications consist of independent heterogeneous modules which can be executed in a distributed heterogeneous environment. In this paper we describe the overall design of the system and discuss the prototype implementation of the core functionalities required to support such a framework.
ERIC Educational Resources Information Center
Fells, Stephanie
2012-01-01
The design of online or distributed problem-based learning (dPBL) is a nascent, complex design problem. Instructional designers are challenged to effectively unite the constructivist principles of problem-based learning (PBL) with appropriate media in order to create quality dPBL environments. While computer-mediated communication (CMC) tools and…
Advanced air distribution: improving health and comfort while reducing energy use.
Melikov, A K
2016-02-01
The indoor environment affects the health, comfort, and performance of building occupants. The energy used for heating, cooling, ventilating, and air conditioning of buildings is substantial. Ventilation based on total-volume air distribution in spaces is not always an efficient way to provide high-quality indoor environments at low energy consumption. Advanced air distribution, designed to supply clean air where, when, and in the quantity needed, makes it possible to efficiently achieve thermal comfort, control exposure to contaminants, provide high-quality air for breathing, and minimize the risk of airborne cross-infection while reducing energy use. This study justifies the need for improving present air distribution design in occupied spaces and, more generally, the need for a paradigm shift from the design of collective environments to the design of individually controlled environments. The focus is on advanced air distribution in spaces, its guiding principles, and its advantages and disadvantages. Examples of advanced air distribution solutions in spaces for different uses, such as offices, hospital rooms, and vehicle compartments, are presented. The potential of advanced air distribution, and of the individually controlled macro-environment in general, for achieving shared values, that is, improved health, comfort, and performance, energy savings, reduced healthcare costs, and improved well-being, is demonstrated. Performance criteria are defined and further research in the field is outlined. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Software Tools for Formal Specification and Verification of Distributed Real-Time Systems
1994-07-29
The goals of Phase 1 are to design in detail a toolkit environment based on formal methods for the specification and verification of distributed real-time systems and to evaluate the design. The evaluation of the design includes investigation of both the capability and potential usefulness of the toolkit environment and the feasibility of its implementation.
NASA Technical Reports Server (NTRS)
Amonlirdviman, Keith; Farley, Todd C.; Hansman, R. John, Jr.; Ladik, John F.; Sherer, Dana Z.
1998-01-01
A distributed real-time simulation of the civil air traffic environment developed to support human factors research in advanced air transportation technology is presented. The distributed environment is based on a custom simulation architecture designed for simplicity and flexibility in human experiments. Standard Internet protocols are used to create the distributed environment, linking an advanced cockpit simulator, an Air Traffic Control simulator, and a pseudo-aircraft control and simulation management station. The pseudo-aircraft control station also functions as a scenario design tool for coordinating human factors experiments. This station incorporates a pseudo-pilot interface designed to reduce workload for human operators piloting multiple aircraft simultaneously in real time. The application of this distributed simulation facility to support a study of the effect of shared information (via air-ground datalink) on pilot/controller shared situation awareness and re-route negotiation is also presented.
One approach for evaluating the Distributed Computing Design System (DCDS)
NASA Technical Reports Server (NTRS)
Ellis, J. T.
1985-01-01
The Distributed Computing Design System (DCDS) provides an integrated environment to support the life cycle of developing real-time distributed computing systems. The primary focus of DCDS is to significantly increase system reliability and software development productivity, and to minimize schedule and cost risk. DCDS consists of integrated methodologies, languages, and tools to support the life cycle of developing distributed software and systems. Smooth and well-defined transitions from phase to phase, language to language, and tool to tool provide a unique and unified environment. An approach to evaluating DCDS highlights its benefits.
An Exploratory Review of Design Principles in Constructivist Gaming Learning Environments
ERIC Educational Resources Information Center
Rosario, Roberto A. Munoz; Widmeyer, George R.
2009-01-01
Creating a design theory for Constructivist Gaming Learning Environments necessitates, among other things, the establishment of design principles. These principles have the potential to help designers produce games in which users achieve higher levels of learning. This paper focuses on twelve design principles: Probing, Distributed, Multiple Routes,…
CoLeMo: A Collaborative Learning Environment for UML Modelling
ERIC Educational Resources Information Center
Chen, Weiqin; Pedersen, Roger Heggernes; Pettersen, Oystein
2006-01-01
This paper presents the design, implementation, and evaluation of a distributed collaborative UML modelling environment, CoLeMo. CoLeMo is designed for students studying UML modelling. It can also be used as a platform for collaborative design of software. We conducted formative evaluations and a summative evaluation to improve the environment and…
ERIC Educational Resources Information Center
Friedman, Robert S.; Deek, Fadi P.
2002-01-01
Discusses how the design and implementation of problem-solving tools used in programming instruction are complementary with both the theories of problem-based learning (PBL), including constructivism, and the practices of distributed education environments. Examines how combining PBL, Web-based distributed education, and a problem-solving…
ERIC Educational Resources Information Center
Dillenbourg, Pierre
1996-01-01
Maintains that diagnosis, explanation, and tutoring, the functions of an interactive learning environment, are collaborative processes. Examines how human-computer interaction can be improved using a distributed cognition framework. Discusses situational and distributed knowledge theories and provides a model on how they can be used to redesign…
NASA Astrophysics Data System (ADS)
de Faria Scheidt, Rafael; Vilain, Patrícia; Dantas, M. A. R.
2014-10-01
Petroleum reservoir engineering is a complex and interesting field that requires large amounts of computational resources to achieve successful results. Usually, software environments for this field are developed without accounting for the interactions and extensibility required by reservoir engineers. In this paper, we present a research effort characterized by the design and implementation of a software product line model for a real distributed reservoir engineering environment. Experimental results indicate the successful use of this approach for the design of a distributed software architecture. In addition, all components of the proposal gave reservoir engineers greater visibility into the organization and its processes.
CLINICAL SURFACES - Activity-Based Computing for Distributed Multi-Display Environments in Hospitals
NASA Astrophysics Data System (ADS)
Bardram, Jakob E.; Bunde-Pedersen, Jonathan; Doryab, Afsaneh; Sørensen, Steffen
A multi-display environment (MDE) is made up of co-located and networked personal and public devices that form an integrated workspace enabling co-located group work. Traditionally, however, MDEs have mainly been designed to support a single “smart room” and have had little sense of the tasks and activities that the MDE is being used for. This paper presents a novel approach to supporting activity-based computing in distributed MDEs, where displays are physically distributed across a large building. CLINICAL SURFACES was designed for clinical work in hospitals and enables context-sensitive retrieval and browsing of patient data on public displays. We present the design and implementation of CLINICAL SURFACES and report from an evaluation of the system at a large hospital. The evaluation shows that using distributed public displays to support activity-based computing inside a hospital is very useful for clinical work, and that the apparent tension between maintaining the privacy of medical data and presenting it in a public display environment can be mitigated by the use of CLINICAL SURFACES.
Center for the Built Environment: UFAD Cooling Load Design Tool
The impact of distributed computing on education
NASA Technical Reports Server (NTRS)
Utku, S.; Lestingi, J.; Salama, M.
1982-01-01
In this paper, developments in digital computer technology since the early Fifties are reviewed briefly, and the parallelism which exists between these developments and developments in analysis and design procedures of structural engineering is identified. The recent trends in digital computer technology are examined in order to establish the fact that distributed processing is now an accepted philosophy for further developments. The impact of this on the analysis and design practices of structural engineering is assessed by first examining these practices from a data processing standpoint to identify the key operations and data bases, and then fitting them to the characteristics of distributed processing. The merits and drawbacks of the present philosophy in educating structural engineers are discussed and projections are made for the industry-academia relations in the distributed processing environment of structural analysis and design. An ongoing experiment of distributed computing in a university environment is described.
Center for the Built Environment: Research on Building HVAC Systems
Distributed Collaborative Homework Activities in a Problem-Based Usability Engineering Course
ERIC Educational Resources Information Center
Carroll, John M.; Jiang, Hao; Borge, Marcela
2015-01-01
Teams of students in an upper-division undergraduate Usability Engineering course used a collaborative environment to carry out a series of three distributed collaborative homework assignments. Assignments were case-based analyses structured using a jigsaw design; students were provided a collaborative software environment and introduced to a…
Aerostructural interaction in a collaborative MDO environment
NASA Astrophysics Data System (ADS)
Ciampa, Pier Davide; Nagel, Björn
2014-10-01
The work presents an approach for aircraft design and optimization, developed to account for fluid-structure interactions in MDO applications. The approach makes use of a collaborative distributed design environment, and focuses on the influence of multiple physics based aerostructural models, on the overall aircraft synthesis and optimization. The approach is tested for the design of large transportation aircraft.
Design of compact freeform LED flashlight capable of two different light distributions
NASA Astrophysics Data System (ADS)
Isaac, Annie Shalom; Neumann, Cornelius
2016-04-01
Free-form optical surfaces are designed to meet specified intensity requirements for applications ranging from general to automotive lighting, but a single compact free-form optic that satisfies two different intensity distributions has not previously been presented. In this work, a compact LED flashlight fulfilling two different intensity requirements, suitable for use in potentially explosive atmospheres, is designed and validated. The first target was selected after a study of visibility in fog, dust, and smoke environments; studies showed that a ring-like distribution (5°-10°) gives better visual recognition at short distances in smoky environments. The second target was selected to have maximum intensity at the peak, providing visibility at longer distances. These two different intensity requirements are realized by moving the LED with respect to the optics along the optical axis. To fulfill the required intensity distributions, a hybrid TIR optic was designed from free-form curves calculated by combining several geometric-optics methods, and validated using Monte Carlo ray-trace simulation. The optic is 29 mm in diameter and 10 mm thick. The simulated results showed an optical efficiency of about 84% in realizing both target light distributions with a single optic. A complete flashlight was then designed, consisting of the LED, the PMMA hybrid optic, a PC glass casing, and a housing including the critical thermal management for explosive environments. To validate the results, a prototype of the designed optic was fabricated; the measured results showed overall agreement with the simulations.
Modeling of Radiowave Propagation in a Forested Environment
2014-09-01
Propagation models used in wireless communication system design play an important role in overall link performance. Applications in both domains require communication devices and sensors to be operated in forested environments. Various methods have been… Propagation models in a forested environment, in particular…
A distributed programming environment for Ada
NASA Technical Reports Server (NTRS)
Brennan, Peter; Mcdonnell, Tom; Mcfarland, Gregory; Timmins, Lawrence J.; Litke, John D.
1986-01-01
Despite considerable commercial exploitation of fault tolerance systems, significant and difficult research problems remain in such areas as fault detection and correction. A research project is described which constructs a distributed computing test bed for loosely coupled computers. The project is constructing a tool kit to support research into distributed control algorithms, including a distributed Ada compiler, distributed debugger, test harnesses, and environment monitors. The Ada compiler is being written in Ada and will implement distributed computing at the subsystem level. The design goal is to provide a variety of control mechanisms for distributed programming while retaining total transparency at the code level.
Advanced S-Band studies using the TDRSS communications satellite
NASA Technical Reports Server (NTRS)
Jenkins, Jeffrey D.; Osborne, William P.; Fan, Yiping
1994-01-01
This report will describe the design, implementation, and results of a propagation experiment which used TDRSS to transmit spread signals at S-Band to an instrumented mobile receiver. The results consist of fade measurements and distribution functions in 21 environments across the Continental United States (CONUS). From these distribution functions, some idea may be gained about what system designers should expect for excess path loss in many mobile environments. Some of these results may be compared against similar measurements made with narrowband beacon measurements. Such comparisons provide insight into what gains the spread signaling system may or may not have in multipath and shadowing environments.
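The fade distributions described above are, in essence, empirical cumulative distribution functions of excess path loss. As a minimal sketch (with invented sample values, not the TDRSS measurements), such a distribution function can be estimated from fade-depth samples like this:

```python
# Hypothetical illustration: estimating a fade-depth distribution function
# from received-power samples, in the spirit of the CONUS measurements
# described above. The sample values are invented for demonstration.
import random

def empirical_cdf(samples):
    """Return (value, P[X <= value]) pairs for the sorted samples."""
    xs = sorted(samples)
    n = len(xs)
    return [(x, (i + 1) / n) for i, x in enumerate(xs)]

random.seed(1)
# Fade depth in dB relative to line-of-sight, one invented "environment".
fades_db = [abs(random.gauss(3.0, 2.5)) for _ in range(1000)]

cdf = empirical_cdf(fades_db)
# Excess path loss exceeded only 10% of the time (90th-percentile fade):
p90 = next(x for x, p in cdf if p >= 0.90)
print(f"Fade depth not exceeded 90% of the time: {p90:.1f} dB")
```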
FODEM: Developing Digital Learning Environments in Widely Dispersed Learning Communities
ERIC Educational Resources Information Center
Suhonen, Jarkko; Sutinen, Erkki
2006-01-01
FODEM (FOrmative DEvelopment Method) is a design method for developing digital learning environments for widely dispersed learning communities. These are communities in which the geographical distribution and density of learners is low when compared to the kind of learning communities in which there is a high distribution and density of learners…
Integrating labview into a distributed computing environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kasemir, K. U.; Pieck, M.; Dalesio, L. R.
2001-01-01
Because it is easy to learn and well suited to a self-contained desktop laboratory setup, many casual programmers prefer the National Instruments LabVIEW environment to develop their logic. An ActiveX interface is presented that allows integration into a plant-wide distributed environment based on the Experimental Physics and Industrial Control System (EPICS). This paper discusses the design decisions and provides performance information, especially considering requirements for the Spallation Neutron Source (SNS) diagnostics system.
LaRC local area networks to support distributed computing
NASA Technical Reports Server (NTRS)
Riddle, E. P.
1984-01-01
The Langley Research Center's (LaRC) Local Area Network (LAN) effort is discussed. LaRC initiated the development of a LAN to support a growing distributed computing environment at the Center. The purpose of the network is to provide an improved capability (over interactive and RJE terminal access) for sharing multivendor computer resources. Specifically, the network will provide a data highway for the transfer of files between mainframe computers, minicomputers, workstations, and personal computers. An important influence on the overall network design was the vital need of LaRC researchers to efficiently utilize the large CDC mainframe computers in the central scientific computing facility. Although there has been a steady migration from a centralized to a distributed computing environment at LaRC in recent years, the workload on the central resources has increased. Major emphasis in the network design was therefore on communication with the central resources within the distributed environment. The network to be implemented will allow researchers to utilize the central resources, distributed minicomputers, workstations, and personal computers to obtain the proper level of computing power to efficiently perform their jobs.
Virtual Collaborative Environments for System of Systems Engineering and Applications for ISAT
NASA Technical Reports Server (NTRS)
Dryer, David A.
2002-01-01
This paper describes a system-of-systems (metasystems) approach and models developed to help prepare engineering organizations for distributed engineering environments. These changes in engineering enterprises include competition in increasingly global environments, new partnering opportunities created by advances in information and communication technologies, and virtual collaboration issues associated with dispersed teams. To address challenges and needs in this environment, a framework is proposed that can be customized and adapted for NASA to assist in improved engineering activities conducted in distributed, enhanced engineering environments. The approach is designed to prepare engineers for such distributed collaborative environments by learning and applying e-engineering methods and tools in a real-world engineering development scenario. The approach consists of two phases: an e-engineering basics phase, which addresses the skills required for e-engineering, and an e-engineering application phase, which applies these skills to system development projects in a distributed collaborative environment.
Data management in an object-oriented distributed aircraft conceptual design environment
NASA Astrophysics Data System (ADS)
Lu, Zhijie
In the competitive global marketplace, aerospace companies are forced to deliver the right products to the right market, at the right cost, and at the right time. However, the rapid development of technologies and new business opportunities, such as mergers, acquisitions, and supply chain management, has dramatically increased the complexity of designing an aircraft, so the pressure to reduce design cycle time and cost is enormous. One way to address this dilemma is to develop and apply advanced engineering environments (AEEs), which are distributed collaborative virtual design environments linking researchers, technologists, designers, and others by incorporating application tools and advanced computational, communications, and networking facilities. Aircraft conceptual design, as the first design stage, provides a major opportunity to compress design cycle time and is the cheapest place to make design changes. However, traditional aircraft conceptual design programs, which are monolithic programs, cannot provide satisfactory functionality to meet new design requirements because they lack domain flexibility and analysis scalability; hence the need for a next-generation aircraft conceptual design environment (NextADE). To build the NextADE, the framework and the data management problem are the two major problems that must be addressed at the forefront. Solving these two problems, particularly the data management problem, is the focus of this research. In this dissertation, in light of AEEs, a distributed object-oriented framework is first formulated and tested for the NextADE. To improve interoperability and simplify the integration of heterogeneous application tools, data management is one of the major problems to be tackled. To solve it, taking into account the characteristics of aircraft conceptual design data, a robust, extensible object-oriented data model is proposed within the distributed object-oriented framework. By overcoming the shortcomings of the traditional approach to modeling aircraft conceptual design data, this data model makes it possible to capture the specific, detailed information of aircraft conceptual design without sacrificing generality, one of the most desired features of a data model for aircraft conceptual design. Based upon this data model, a prototype of the data management system, one of the fundamental building blocks of the NextADE, is implemented using state-of-the-art information technologies. Using a general-purpose integration software package to demonstrate the efficacy of the proposed framework and the data management system, the NextADE is initially implemented by integrating the prototype data management system with the other building blocks of the design environment, such as disciplinary analysis programs and mission analysis programs. As experiments, two case studies are conducted in the integrated design environment: one based on a simplified conceptual design of a notional conventional aircraft, the other a simplified conceptual design of an unconventional aircraft. The experiments show the proposed framework and data management approach to be feasible solutions to the research problems.
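A data model of the kind the dissertation proposes can be pictured, in heavily simplified form, as a hierarchy of parameterized design objects. The sketch below is hypothetical Python with invented class and attribute names, meant only to illustrate the extensible object-oriented modeling idea, not the dissertation's actual model:

```python
# A minimal sketch of an extensible object-oriented data model for
# conceptual design data. All names here are hypothetical illustrations.
class DesignObject:
    """Base class: every design entity carries named, unit-tagged parameters."""
    def __init__(self, name):
        self.name = name
        self.parameters = {}      # parameter name -> (value, unit)
        self.children = []        # component hierarchy

    def set_param(self, key, value, unit=""):
        self.parameters[key] = (value, unit)

    def add(self, child):
        self.children.append(child)
        return child

class Aircraft(DesignObject):
    pass

class Wing(DesignObject):
    pass

concept = Aircraft("notional-transport")
wing = concept.add(Wing("main-wing"))
wing.set_param("area", 125.0, "m^2")
wing.set_param("aspect_ratio", 9.5)
print(concept.name, wing.parameters)
```

New component types extend the base class without changing existing code, which is one way a model can stay general while capturing design-specific detail.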
Learner-Controlled Scaffolding Linked to Open-Ended Problems in a Digital Learning Environment
ERIC Educational Resources Information Center
Edson, Alden Jack
2017-01-01
This exploratory study reports on how students activated learner-controlled scaffolding and navigated through sequences of connected problems in a digital learning environment. A design experiment was completed to (re)design, iteratively develop, test, and evaluate a digital version of an instructional unit focusing on binomial distributions and…
Indiva: a middleware for managing distributed media environment
NASA Astrophysics Data System (ADS)
Ooi, Wei-Tsang; Pletcher, Peter; Rowe, Lawrence A.
2003-12-01
This paper presents a unified set of abstractions and operations for hardware devices, software processes, and media data in a distributed audio and video environment. These abstractions, which are provided through a middleware layer called Indiva, use a file system metaphor to access resources and high-level commands to simplify the development of Internet webcast and distributed collaboration control applications. The design and implementation of Indiva are described and examples are presented to illustrate the usefulness of the abstractions.
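Indiva's file-system metaphor maps devices and processes to hierarchical names. A toy illustration of that idea follows; the paths, device records, and API are invented, not Indiva's actual interface:

```python
# Toy sketch of a file-system metaphor for distributed media resources:
# devices are "mounted" at hierarchical paths and opened by name.
class ResourceTree:
    def __init__(self):
        self._table = {}

    def mount(self, path, resource):
        self._table[path] = resource

    def open(self, path):
        try:
            return self._table[path]
        except KeyError:
            raise FileNotFoundError(path)

tree = ResourceTree()
tree.mount("/rooms/studio-a/camera0", {"type": "camera", "host": "10.0.0.5"})
tree.mount("/rooms/studio-a/mixer", {"type": "audio-mixer", "host": "10.0.0.7"})

cam = tree.open("/rooms/studio-a/camera0")
print(cam["type"], "on", cam["host"])
```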
Optimized distributed computing environment for mask data preparation
NASA Astrophysics Data System (ADS)
Ahn, Byoung-Sup; Bang, Ju-Mi; Ji, Min-Kyu; Kang, Sun; Jang, Sung-Hoon; Choi, Yo-Han; Ki, Won-Tai; Choi, Seong-Woon; Han, Woo-Sung
2005-11-01
As the critical dimension (CD) becomes smaller, various resolution enhancement techniques (RET) are widely adopted. In developing sub-100nm devices, the complexity of optical proximity correction (OPC) increases severely, and OPC is applied beyond critical layers to non-critical layers. The transformation of designed pattern data by the OPC operation adds complexity, which causes runtime overhead in subsequent steps such as mask data preparation (MDP) and collapses the existing design hierarchy. Therefore, many mask shops exploit distributed computing to reduce the runtime of mask data preparation rather than exploiting the design hierarchy. Distributed computing uses a cluster of computers connected to a local network. However, two factors limit the benefit of distributed computing in MDP. First, a sequential MDP job that uses the maximum number of available CPUs is not efficient compared with parallel MDP job execution, owing to the characteristics of the input data. Second, the runtime improvement per unit cost is insufficient because the scalability of fracturing tools is limited. In this paper, we discuss an optimal load-balancing environment that increases the uptime of a distributed computing system by assigning an appropriate number of CPUs to each input design data set. We also describe distributed processing (DP) parameter optimization to obtain maximum throughput in MDP job processing.
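The load-balancing idea, assigning CPUs in proportion to each job's input rather than giving every job the maximum, can be sketched as a simple proportional allocator. All job names and sizes below are illustrative assumptions, not parameters from the paper:

```python
# A hedged sketch of proportional CPU allocation across MDP jobs.
def allocate_cpus(job_sizes_gb, total_cpus, min_cpus=1):
    """Assign CPUs to jobs roughly in proportion to input data size."""
    total = sum(job_sizes_gb.values())
    alloc = {job: max(min_cpus, round(total_cpus * size / total))
             for job, size in job_sizes_gb.items()}
    # Trim if rounding oversubscribed the pool.
    while sum(alloc.values()) > total_cpus:
        biggest = max(alloc, key=alloc.get)
        alloc[biggest] -= 1
    return alloc

jobs = {"metal1": 40.0, "via1": 12.0, "poly": 8.0}   # invented layer sizes
print(allocate_cpus(jobs, total_cpus=32))
```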
Flexible radiator thermal vacuum test report
NASA Technical Reports Server (NTRS)
Oren, J. A.; Hixon, C. W.
1982-01-01
Two flexible, deployable/retractable radiators were designed and fabricated. The two radiator panels are distinguished by their mission-life design. One panel is designed with a 90 percent probability of withstanding the micrometeoroid environment of low Earth orbit for 30 days; it is designated the soft-tube radiator after the PFA Teflon tubes which distribute the transport fluid over the panel. The second panel is designed with armored flow tubes to withstand the same micrometeoroid environment for 5 years; it is designated the hard-tube radiator after its stainless steel flow tubes. The thermal performance of the radiators was tested under anticipated environmental conditions, and the two deployment systems were evaluated in a thermal vacuum environment.
About Distributed Simulation-based Optimization of Forming Processes using a Grid Architecture
NASA Astrophysics Data System (ADS)
Grauer, Manfred; Barth, Thomas
2004-06-01
Permanently increasing complexity of products and their manufacturing processes, combined with a shorter "time-to-market," leads to more and more use of simulation and optimization software systems in product design. Finding a "good" design of a product implies the solution of computationally expensive optimization problems based on the results of simulation. Due to the computational load caused by the solution of these problems, the requirements on the Information & Telecommunication (IT) infrastructure of an enterprise or research facility are shifting from stand-alone resources towards the integration of software and hardware resources in a distributed environment for high-performance computing. Resources can comprise software systems, hardware systems, or communication networks. An appropriate IT infrastructure must provide the means to integrate all these resources and enable their use across a network to cope with requirements from geographically distributed scenarios, e.g., in computational and/or collaborative engineering. Integrating experts' knowledge into the optimization process is inevitable in order to reduce the complexity caused by the number of design variables and the high dimensionality of the design space. Hence, utilization of knowledge-based systems must be supported by providing data management facilities as a basis for knowledge extraction from product data. In this paper, the focus is on a distributed problem solving environment (PSE) capable of providing access to a variety of necessary resources and services. A distributed approach integrating simulation and optimization on a network of workstations and cluster systems is presented. For geometry generation, the CAD system CATIA is used, coupled with the FEM simulation system INDEED for the simulation of sheet-metal forming processes and with the problem solving environment OpTiX for distributed optimization.
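The master/worker pattern underlying such distributed simulation-based optimization can be sketched in a few lines. The stand-in cost function below replaces the expensive CATIA/INDEED simulation chain and is purely illustrative:

```python
# A minimal master/worker sketch of distributed simulation-based
# optimization: candidate designs are evaluated in parallel and the best
# one is kept. The "simulation" here is an invented analytic stand-in.
from multiprocessing import Pool

def simulate(design):
    thickness, radius = design
    # Placeholder for an expensive forming simulation run.
    return (thickness - 1.2) ** 2 + (radius - 5.0) ** 2

if __name__ == "__main__":
    candidates = [(t / 10, r) for t in range(5, 25) for r in range(2, 9)]
    with Pool(4) as pool:                  # 4 local "workstations"
        costs = pool.map(simulate, candidates)
    best_cost, best_design = min(zip(costs, candidates))
    print("best design:", best_design, "cost:", round(best_cost, 3))
```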
Design and Implementation of a Distributed Version of the NASA Engine Performance Program
NASA Technical Reports Server (NTRS)
Cours, Jeffrey T.
1994-01-01
Distributed NEPP is a new version of the NASA Engine Performance Program that runs in parallel on a collection of Unix workstations connected through a network. The program is fault-tolerant, efficient, and shows significant speed-up in a multi-user, heterogeneous environment. This report describes the issues involved in designing distributed NEPP, the algorithms the program uses, and the performance distributed NEPP achieves. It develops an analytical model to predict and measure the performance of the simple distribution, multiple distribution, and fault-tolerant distribution algorithms that distributed NEPP incorporates. Finally, the appendices explain how to use distributed NEPP and document the organization of the program's source code.
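The analytical performance model the report describes can be illustrated with a simple farming model: N independent engine cases distributed over P workers, each case paying a fixed startup overhead. The constants below are invented, not NEPP measurements:

```python
# A sketch of an analytic runtime/speedup model for farming independent
# cases over workers. t_case and t_overhead are illustrative assumptions.
import math

def predicted_time(n_cases, n_workers, t_case=2.0, t_overhead=0.3):
    """Cases are dealt out evenly; each batch costs compute + overhead."""
    batches = math.ceil(n_cases / n_workers)
    return batches * (t_case + t_overhead)

for p in (1, 2, 4, 8):
    t = predicted_time(n_cases=64, n_workers=p)
    print(f"{p} workers: {t:6.1f} s, speedup {predicted_time(64, 1) / t:.2f}x")
```

Comparing predictions of this kind against measured runtimes is one way to validate simple, multiple, and fault-tolerant distribution strategies.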
Design and Implement of Astronomical Cloud Computing Environment In China-VO
NASA Astrophysics Data System (ADS)
Li, Changhua; Cui, Chenzhou; Mi, Linying; He, Boliang; Fan, Dongwei; Li, Shanshan; Yang, Sisi; Xu, Yunfei; Han, Jun; Chen, Junyi; Zhang, Hailong; Yu, Ce; Xiao, Jian; Wang, Chuanjun; Cao, Zihuang; Fan, Yufeng; Liu, Liang; Chen, Xiao; Song, Wenming; Du, Kangyu
2017-06-01
The astronomy cloud computing environment is a cyber-infrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) under funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences). Based on virtualization technology, the astronomy cloud computing environment was designed and implemented by the China-VO team. It consists of five distributed nodes across the mainland of China. Astronomers can obtain computing and storage resources in this cloud computing environment. Through this environment, astronomers can easily search and analyze astronomical data collected by different telescopes and data centers, and avoid large-scale dataset transportation.
NASA Technical Reports Server (NTRS)
Kamhawi, Hilmi N.
2011-01-01
This report documents the work performed from March 2010 to October 2011. The Integrated Design and Engineering Analysis (IDEA) environment is a collaborative environment based on an object-oriented, multidisciplinary, distributed framework using the Adaptive Modeling Language (AML) as the underlying framework. This report focuses on describing the work done to extend the aerodynamics and aerothermodynamics module using S/HABP, CBAERO, PREMIN and LANMIN. It also details the work done integrating EXITS as the TPS sizing tool.
Using PVM to host CLIPS in distributed environments
NASA Technical Reports Server (NTRS)
Myers, Leonard; Pohl, Kym
1994-01-01
It is relatively easy to enhance CLIPS (C Language Integrated Production System) to support multiple expert systems running in a distributed environment with heterogeneous machines. The task is minimized by using the PVM (Parallel Virtual Machine) code from Oak Ridge Labs to provide the distribution utility. PVM is a library of C and FORTRAN subprograms that supports distributed computing on many different UNIX platforms. A PVM daemon is easily installed on each CPU that enters the virtual machine environment. Any user with rsh or rexec access to a machine can use the one PVM daemon to obtain a generous set of distributed facilities. The ready availability of both CLIPS and PVM makes the combination of software particularly attractive for budget-conscious experimentation with heterogeneous distributed computing using multiple CLIPS executables. This paper presents a design that is sufficient to provide essential message-passing functions in CLIPS and enable the full range of PVM facilities.
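The essential message-passing functions added to CLIPS can be mimicked, as a rough analogy only (Python queues standing in for PVM's send/receive, which is a different API), by passing fact-like strings between processes:

```python
# Rough analogy to PVM-style send/receive between cooperating expert
# systems; multiprocessing queues are a stand-in, not the PVM API.
from multiprocessing import Process, Queue

def expert(name, inbox, outbox):
    fact = inbox.get()                      # blocking receive
    outbox.put(f"({name} processed {fact})")

if __name__ == "__main__":
    to_expert, from_expert = Queue(), Queue()
    p = Process(target=expert, args=("diagnoser", to_expert, from_expert))
    p.start()
    to_expert.put("(engine-temp high)")     # send a fact
    print(from_expert.get())
    p.join()
```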
Distributed Scaffolding: Synergy in Technology-Enhanced Learning Environments
ERIC Educational Resources Information Center
Ustunel, Hale H.; Tokel, Saniye Tugba
2018-01-01
When technology is employed challenges increase in learning environments. Kim et al. ("Sci Educ" 91(6):1010-1030, 2007) presented a pedagogical framework that provides a valid technology-enhanced learning environment. The purpose of the present design-based study was to investigate the micro context dimension of this framework and to…
1994-04-18
…because they represent a microkernel and a monolithic kernel approach to MLS operating system issues. TMACH is based on MACH, a distributed operating… whether the operating system is based on a microkernel design or a monolithic kernel design. This distinction requires some caution, since monolithic operating… are provided by user-level processes, in contrast to standard UNIX, which has a large monolithic kernel…
ERIC Educational Resources Information Center
Hsiao, E-Ling; Moore, David Richard
2009-01-01
Instruction is increasingly being delivered through distributed multimedia applications. Instruction delivered through these online environments creates robust opportunities for content presentation and learner interaction. These environments give the designer control over every aspect of the instructional experience. With some simple…
A distributed data acquisition software scheme for the Laboratory Telerobotic Manipulator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butler, P.L.; Glassell, R.L.; Rowe, J.C.
1990-01-01
A custom software architecture was developed for use in the Laboratory Telerobotic Manipulator (LTM) to provide support for the distributed data acquisition electronics. This architecture was designed to provide a comprehensive development environment that proved to be useful for both hardware and software debugging. This paper describes the development environment and the operational characteristics of the real-time data acquisition software. 8 refs., 5 figs.
WaveJava: Wavelet-based network computing
NASA Astrophysics Data System (ADS)
Ma, Kun; Jiao, Licheng; Shi, Zhuoer
1997-04-01
Wavelet theory is powerful, but its successful application requires suitable programming tools. Java is a simple, object-oriented, distributed, interpreted, robust, secure, architecture-neutral, portable, high-performance, multithreaded, dynamic language. This paper addresses the design and development of a cross-platform software environment for experimenting with and applying wavelet theory. WaveJava, a wavelet class library designed with object-oriented programming, is developed to take advantage of wavelet features such as multi-resolution analysis and parallel processing in network computing. A new application architecture is designed for the net-wide distributed client-server environment. Data are transmitted as multi-resolution packets. At distributed sites around the net, these data packets undergo matching or recognition processing in parallel, and the results are fed back to determine the next operation, so more robust results can be obtained quickly. WaveJava is easy to use and to extend for special applications. The paper gives a solution for a distributed fingerprint information processing system; it also fits other net-based multimedia information processing, such as network libraries, remote teaching, and filmless picture archiving and communications.
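The multi-resolution packet idea can be illustrated with a self-contained Haar decomposition: the coarsest approximation is sent (and matched) first, with detail packets following. This is an illustrative sketch, not WaveJava code:

```python
# Haar-wavelet sketch of "multi-resolution packets": a signal splits into
# a coarse approximation plus detail packets, coarse data transmitted first.
def haar_step(signal):
    approx = [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

def decompose(signal, levels):
    packets, current = [], signal
    for _ in range(levels):
        current, detail = haar_step(current)
        packets.append(detail)
    packets.append(current)      # coarsest approximation
    return packets[::-1]         # coarse first, fine last

packets = decompose([4, 6, 10, 12, 8, 6, 5, 5], levels=3)
for i, p in enumerate(packets):
    print(f"packet {i} (coarse -> fine): {p}")
```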
1983-11-01
…transmission, FM(R) will only have to hold one message. The Program Control Block (PCB) [Deitel 82] will be maintained by the Executive in… …examples of fully distributed systems in operation. An objective of the NPS research program for SPLICE is to advance our knowledge of distributed…
Less is More: DoD’s Strategy for Facility Energy Security and Environmental Sustainability
2012-05-22
(Installations & Environment), E2S2 Symposium, May 22, 2012. Approved for public release; distribution unlimited. Presented at the NDIA Environment… Ensure compliance with federal mandates; draw elements from ASHRAE 189.1; require life-cycle cost analysis of building design; due out in…
Internet-based distributed collaborative environment for engineering education and design
NASA Astrophysics Data System (ADS)
Sun, Qiuli
2001-07-01
This research investigates the use of the Internet for engineering education, design, and analysis through the presentation of a Virtual City environment. The main focus of this research was to provide an infrastructure for engineering education, test the concept of distributed collaborative design and analysis, develop and implement the Virtual City environment, and assess the environment's effectiveness in the real world. A three-tier architecture was adopted in the development of the prototype, which contains an online database server, a Web server as well as multi-user servers, and client browsers. The environment is composed of five components: a 3D virtual world, multiple Internet-based multimedia modules, an online database, a collaborative geometric modeling module, and a collaborative analysis module. The environment was designed using multiple Internet-based technologies, such as Shockwave, Java, Java 3D, VRML, Perl, ASP, SQL, and a database. These technologies together formed the basis of the environment and were programmed to communicate smoothly with each other. Three assessments were conducted over a period of three semesters. The Virtual City is open to the public at www.vcity.ou.edu. The online database was designed to manage the changeable data related to the environment. The virtual world was used to implement 3D visualization and tie the multimedia modules together. Students are allowed to build segments of the 3D virtual world upon completion of appropriate undergraduate courses in civil engineering; the end result is a complete virtual world that contains designs from all of their coursework and is viewable on the Internet. The environment is a content-rich educational system which can be used to teach multiple engineering topics with the help of 3D visualization, animations, and simulations. The concept of collaborative design and analysis using the Internet was investigated and implemented. Geographically dispersed users can build the same geometric model simultaneously over the Internet and communicate with each other through a chat room. They can also conduct finite element analysis collaboratively on the same object over the Internet: they can mesh the same object, apply and edit the same boundary conditions and forces, obtain the same analysis results, and then discuss the results through the Internet.
An Automatic Instrument to Study the Spatial Scaling Behavior of Emissivity
Tian, Jing; Zhang, Renhua; Su, Hongbo; Sun, Xiaomin; Chen, Shaohui; Xia, Jun
2008-01-01
In this paper, the design of an automatic instrument for measuring the spatial distribution of land surface emissivity is presented, which makes direct in situ measurement of the spatial distribution of emissivity possible. The significance of this new instrument lies in two aspects: it helps to investigate the spatial scaling behavior of emissivity and temperature, and its design provides theoretical and practical foundations for measuring the distribution of surface emissivity from airborne or spaceborne platforms. To improve the accuracy of the measurements, the emissivity measurement and its uncertainty are examined in a series of carefully designed experiments. The impact of the variation of target temperature and the environmental irradiance on the measurement of emissivity is analyzed as well. In addition, the ideal temperature difference between the hot and cool environments is obtained from numerical simulations. Finally, the scaling behavior of surface emissivity caused by the heterogeneity of the target is discussed. PMID:27879735
DAVE: A plug and play model for distributed multimedia application development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mines, R.F.; Friesen, J.A.; Yang, C.L.
1994-07-01
This paper presents a model being used for the development of distributed multimedia applications. The Distributed Audio Video Environment (DAVE) was designed to support the development of a wide range of distributed applications. The implementation of this model is described. DAVE is unique in that it combines a simple "plug and play" programming interface, supports both centralized and fully distributed applications, provides device and media extensibility, promotes object reusability, and supports interoperability and network independence. This model enables application developers to easily develop distributed multimedia applications and create reusable multimedia toolkits. DAVE was designed for developing applications such as video conferencing, media archival, remote process control, and distance learning.
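A "plug and play" programming interface of the kind DAVE describes is commonly realized as a registry that decouples applications from concrete device classes. The sketch below is a generic Python illustration with invented device names, not the DAVE API:

```python
# Generic plug-and-play registry sketch: devices register by kind, and
# applications instantiate them by name without importing concrete classes.
DEVICE_REGISTRY = {}

def register(kind):
    def wrap(cls):
        DEVICE_REGISTRY[kind] = cls
        return cls
    return wrap

@register("camera")
class Camera:
    def start(self):
        return "camera streaming"

@register("microphone")
class Microphone:
    def start(self):
        return "microphone capturing"

def plug(kind):
    return DEVICE_REGISTRY[kind]()   # look up and instantiate by name

print(plug("camera").start())
```

New device types are added by registering another class, which is one way an environment stays extensible without changes to application code.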
The R-Shell approach - Using scheduling agents in complex distributed real-time systems
NASA Technical Reports Server (NTRS)
Natarajan, Swaminathan; Zhao, Wei; Goforth, Andre
1993-01-01
Large, complex real-time systems such as space and avionics systems are extremely demanding in their scheduling requirements. Current OS design approaches are quite limited in the capabilities they provide for task scheduling: typically, they simply implement a particular uniprocessor scheduling strategy and do not provide any special support for network scheduling, overload handling, fault tolerance, distributed processing, etc. Our design of the R-Shell real-time environment facilitates the implementation of a variety of sophisticated but efficient scheduling strategies, incorporating all of these capabilities. This is accomplished by the use of scheduling agents which reside in the application run-time environment and are responsible for coordinating the scheduling of the application.
A model for the distribution of watermarked digital content on mobile networks
NASA Astrophysics Data System (ADS)
Frattolillo, Franco; D'Onofrio, Salvatore
2006-10-01
Although digital watermarking can be considered one of the key technologies for implementing copyright protection of digital contents distributed on the Internet, most content distribution models based on watermarking protocols proposed in the literature have been designed for fixed networks and cannot be easily adapted to mobile networks. However, mobile devices now enable new types of services and business models, making the development of new content distribution models for mobile environments strategically important in the current Internet scenario. This paper presents and discusses a distribution model of watermarked digital contents for such environments that achieves a trade-off between the needs of efficiency and security.
NASA Integrated Services Environment
NASA Technical Reports Server (NTRS)
Ing, Sharon
2005-01-01
This slide presentation will begin with a discussion on NASA's current distributed environment for directories, identity management and account management. We will follow with information concerning the drivers, design, reviews and implementation of the NISE Project. The final component of the presentation discusses processes used, status and conclusions.
Communication Needs Assessment for Distributed Turbine Engine Control
NASA Technical Reports Server (NTRS)
Culley, Dennis E.; Behbahani, Alireza R.
2008-01-01
Control system architecture is a major contributor to future propulsion engine performance enhancement and life cycle cost reduction. The control system architecture can be a means to effect net weight reduction in future engine systems, provide a streamlined approach to system design and implementation, and enable new opportunities for performance optimization and increased awareness about system health. The transition from a centralized, point-to-point analog control topology to a modular, networked, distributed system is paramount to extracting these system improvements. However, distributed engine control systems are only possible through the successful design and implementation of a suitable communication system. In a networked system, understanding the data flow between control elements is a fundamental requirement for specifying the communication architecture which, itself, is dependent on the functional capability of electronics in the engine environment. This paper presents an assessment of the communication needs for distributed control using strawman designs and relates how system design decisions relate to overall goals as we progress from the baseline centralized architecture, through partially distributed and fully distributed control systems.
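The data-flow question at the heart of specifying such a communication architecture reduces to arithmetic over node counts, update rates, and payload sizes. The figures below are invented for illustration, not taken from the paper's strawman designs:

```python
# Back-of-the-envelope sizing of a distributed engine control network.
# All node counts, rates, and payload sizes are illustrative assumptions.
nodes = [
    # (name, count, update rate in Hz, payload in bytes)
    ("pressure sensor", 12, 100, 8),
    ("temperature sensor", 16, 50, 8),
    ("actuator command", 8, 200, 12),
]

overhead_bytes = 16   # assumed per-message framing/CRC overhead

total_bps = sum(count * rate * (payload + overhead_bytes) * 8
                for _, count, rate, payload in nodes)
print(f"aggregate load: {total_bps / 1e6:.2f} Mbit/s")
```

Estimates of this kind feed directly into the choice of bus technology and topology as a design moves from centralized to fully distributed control.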
Distributed semantic networks and CLIPS
NASA Technical Reports Server (NTRS)
Snyder, James; Rodriguez, Tony
1991-01-01
Semantic networks of frames are commonly used as a method of reasoning in many problems. In most of these applications the semantic network exists as a single entity in a single process environment. Advances in workstation hardware provide support for more sophisticated applications involving multiple processes, interacting in a distributed environment. In these applications the semantic network may well be distributed over several concurrently executing tasks. This paper describes the design and implementation of a frame based, distributed semantic network in which frames are accessed both through C Language Integrated Production System (CLIPS) expert systems and procedural C++ language programs. The application area is a knowledge based, cooperative decision making model utilizing both rule based and procedural experts.
NASA Technical Reports Server (NTRS)
Murphy, James R.; Otto, Neil M.
2017-01-01
NASA's Unmanned Aircraft Systems Integration in the National Airspace System Project is conducting human in the loop simulations and flight testing intended to reduce barriers associated with enabling routine airspace access for unmanned aircraft. The primary focus of these tests is interaction of the unmanned aircraft pilot with the display of detect and avoid alerting and guidance information. The project's integrated test and evaluation team was charged with developing the test infrastructure. As with any development effort, compromises in the underlying system architecture and design were made to allow for the rapid prototyping and open-ended nature of the research. In order to accommodate these design choices, a distributed test environment was developed incorporating Live, Virtual, Constructive, (LVC) concepts. The LVC components form the core infrastructure support simulation of UAS operations by integrating live and virtual aircraft in a realistic air traffic environment. This LVC infrastructure enables efficient testing by leveraging the use of existing assets distributed across multiple NASA Centers. Using standard LVC concepts enable future integration with existing simulation infrastructure.
Methods for Combining Payload Parameter Variations with Input Environment
NASA Technical Reports Server (NTRS)
Merchant, D. H.; Straayer, J. W.
1975-01-01
Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the methods are also presented.
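The extreme-value reasoning in the abstract can be made concrete with the standard result for the maximum of n independent load peaks; the specific distributions used in the report are not given here, so this is the generic form:

```latex
F_{X_{\max}}(x) \;=\; \bigl[F(x)\bigr]^{n}
\;\approx\;
\exp\!\left[-e^{-(x-\mu_n)/\beta_n}\right]
\quad \text{(Gumbel limit)},
\qquad
x_p \;=\; \mu_n - \beta_n \,\ln\!\bigl(-\ln p\bigr),
```

where F is the distribution of a single load peak, n is the number of peaks in a mission, and x_p is the design limit load with non-exceedance probability p.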
Ada(R) Test and Verification System (ATVS)
NASA Technical Reports Server (NTRS)
Strelich, Tom
1986-01-01
The Ada Test and Verification System (ATVS) functional description and high level design are completed and summarized. The ATVS will provide a comprehensive set of test and verification capabilities specifically addressing the features of the Ada language, support for embedded system development, distributed environments, and advanced user interface capabilities. Its design emphasis was on effective software development environment integration and flexibility to ensure its long-term use in the Ada software development community.
NASA Astrophysics Data System (ADS)
Megherbi, Dalila B.; Yan, Yin; Tanmay, Parikh; Khoury, Jed; Woods, C. L.
2004-11-01
Surveillance and Automatic Target Recognition (ATR) applications are increasing as the cost of the computing power needed to process massive amounts of information continues to fall. This computing power has been made possible partly by the latest advances in FPGAs and SOPCs. In particular, designing and implementing state-of-the-art electro-optical imaging systems that provide advanced surveillance capabilities requires integrating several technologies (e.g., telescopes, precise optics, cameras, and image/computer vision algorithms, which can be geographically distributed or share distributed resources) into programmable and DSP systems. Additionally, pattern recognition techniques and fast information retrieval are often important components of intelligent systems. The aim of this work is to use an embedded FPGA as a fast, configurable, and synthesizable search engine for image pattern recognition/retrieval in a distributed hardware/software co-design environment. In particular, we propose and demonstrate a low-cost Content Addressable Memory (CAM)-based distributed embedded FPGA hardware architecture with real-time recognition and computing capabilities for pattern look-up, pattern recognition, and image retrieval. We show how the distributed CAM-based architecture offers a performance advantage of an order of magnitude over RAM-based (Random Access Memory) search for implementing high-speed pattern recognition for image retrieval. The methods of designing, implementing, and analyzing the proposed CAM-based embedded architecture are described here. Other SOPC solutions and design issues are covered. Finally, experimental results, hardware verification, and performance evaluations using both the Xilinx Virtex-II and the Altera Apex20k are provided to show the potential and power of the proposed method for low-cost reconfigurable fast image pattern recognition/retrieval at the hardware/software co-design level.
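The CAM-versus-RAM comparison has a direct software analogy: an associative (hash) lookup against a linear scan. The sketch below, with synthetic pattern data, shows the order-of-magnitude gap in lookup cost that motivates a hardware CAM:

```python
# Software analogy for CAM vs. RAM search: an associative lookup
# (hash table, O(1) average) against a linear scan (O(n)).
import time

patterns = [f"pattern-{i:06d}" for i in range(200_000)]   # synthetic data
cam = {p: i for i, p in enumerate(patterns)}              # content -> address
query = "pattern-173205"

t0 = time.perf_counter()
addr_cam = cam[query]                                     # associative match
t1 = time.perf_counter()
addr_ram = next(i for i, p in enumerate(patterns) if p == query)
t2 = time.perf_counter()

assert addr_cam == addr_ram
print(f"CAM-style: {(t1 - t0) * 1e6:.1f} us, "
      f"RAM-style: {(t2 - t1) * 1e6:.1f} us")
```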
Derived virtual devices: a secure distributed file system mechanism
NASA Technical Reports Server (NTRS)
VanMeter, Rodney; Hotz, Steve; Finn, Gregory
1996-01-01
This paper presents the design of derived virtual devices (DVDs). DVDs are the mechanism used by the Netstation Project to provide secure shared access to network-attached peripherals distributed in an untrusted network environment. DVDs improve Input/Output efficiency by allowing user processes to perform I/O operations directly from devices without intermediate transfer through the controlling operating system kernel. The security enforced at the device through the DVD mechanism includes resource boundary checking, user authentication, and restricted operations, e.g., read-only access. To illustrate the application of DVDs, we present the interactions between a network-attached disk and a file system designed to exploit the DVD abstraction. We further discuss third-party transfer as a mechanism intended to provide for efficient data transfer in a typical NAP environment. We show how DVDs facilitate third-party transfer, and provide the security required in a more open network environment.
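The per-device security checks described above (resource boundary checking, read-only restriction) can be sketched as a thin wrapper around a raw device. The class below is an invented illustration, not the Netstation implementation:

```python
# Sketch of derived-virtual-device style checks: a view onto a device
# restricted to a byte range and, optionally, to read-only access.
class DerivedVirtualDevice:
    def __init__(self, device, start, length, read_only=True):
        self.device, self.start, self.length = device, start, length
        self.read_only = read_only

    def _check(self, offset, size):
        if offset < 0 or offset + size > self.length:
            raise PermissionError("resource boundary violation")

    def read(self, offset, size):
        self._check(offset, size)
        base = self.start + offset
        return bytes(self.device[base : base + size])

    def write(self, offset, data):
        if self.read_only:
            raise PermissionError("read-only derived virtual device")
        self._check(offset, len(data))
        base = self.start + offset
        self.device[base : base + len(data)] = data

disk = bytearray(b"x" * 1024)          # stand-in for a raw block device
dvd = DerivedVirtualDevice(disk, start=128, length=256, read_only=True)
print(dvd.read(0, 4))
```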
Improving the Aircraft Design Process Using Web-Based Modeling and Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.; Follen, Gregory J. (Technical Monitor)
2000-01-01
Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.
Improving the Aircraft Design Process Using Web-based Modeling and Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.
2003-01-01
Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.
Distributed dynamic simulations of networked control and building performance applications.
Yahiaoui, Azzedine
2018-02-01
The use of computer-based automation and control systems for smart sustainable buildings, often called Automated Buildings (ABs), has become an effective way to automatically control, optimize, and supervise a wide range of building performance applications over a network while achieving the lowest possible energy consumption; such systems are generally referred to as the Building Automation and Control Systems (BACS) architecture. Instead of costly and time-consuming experiments, this paper focuses on using distributed dynamic simulations to analyze the real-time performance of network-based building control systems in ABs and to improve the functions of the BACS technology. The paper also presents the development and design of a distributed dynamic simulation environment capable of representing the BACS architecture in simulation by run-time coupling two or more different software tools over a network. The application and capability of this new dynamic simulation environment are demonstrated by an experimental design in this paper.
Distributed dynamic simulations of networked control and building performance applications
Yahiaoui, Azzedine
2017-01-01
The use of computer-based automation and control systems for smart sustainable buildings, often called Automated Buildings (ABs), has become an effective way to automatically control, optimize, and supervise a wide range of building performance applications over a network while achieving the lowest possible energy consumption; such systems are generally referred to as the Building Automation and Control Systems (BACS) architecture. Instead of costly and time-consuming experiments, this paper focuses on using distributed dynamic simulations to analyze the real-time performance of network-based building control systems in ABs and to improve the functions of the BACS technology. The paper also presents the development and design of a distributed dynamic simulation environment capable of representing the BACS architecture in simulation by run-time coupling two or more different software tools over a network. The application and capability of this new dynamic simulation environment are demonstrated by an experimental design in this paper. PMID:29568135
Data analysis environment (DASH2000) for the Subaru telescope
NASA Astrophysics Data System (ADS)
Mizumoto, Yoshihiko; Yagi, Masafumi; Chikada, Yoshihiro; Ogasawara, Ryusuke; Kosugi, George; Takata, Tadafumi; Yoshida, Michitoshi; Ishihara, Yasuhide; Yanaka, Hiroshi; Yamamoto, Tadahiro; Morita, Yasuhiro; Nakamoto, Hiroyuki
2000-06-01
A new data analysis framework (DASH) has been developed for the Subaru Telescope. It is designed using object-oriented methodology and adopts a restaurant model. DASH shares CPU and I/O load among distributed heterogeneous computers. The distributed object environment of the system is implemented with Java and CORBA. DASH has been evaluated through several prototypes. DASH2000 is the latest version, which will be released as the beta version of the data analysis system for the Subaru Telescope.
NASA Technical Reports Server (NTRS)
Kamhawi, Hilmi N.
2012-01-01
This report documents the work performed from March 2010 to March 2012. The Integrated Design and Engineering Analysis (IDEA) environment is a collaborative environment based on an object-oriented, multidisciplinary, distributed framework built with the Adaptive Modeling Language (AML), supporting configuration design and parametric CFD grid generation. This report focuses on the work in the area of parametric CFD grid generation, using novel concepts for defining the interaction between the mesh topology and the geometry in such a way as to separate the mesh topology from the geometric topology while maintaining the link between the mesh topology and the actual geometry.
NASA Technical Reports Server (NTRS)
Merchant, D. H.
1976-01-01
Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the method are also presented.
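As a hedged sketch of the kind of extreme-value formulation this abstract describes, suppose the per-mission maximum load follows a Gumbel (Type I extreme-value) distribution; the location and scale symbols mu and beta and the non-exceedance probability p are generic notation, not taken from the report.

```latex
% Gumbel distribution of the mission-maximum load, and the design limit load
% taken as its p-th percentile (symbols \mu, \beta, p are assumed notation).
F_L(\ell) = \exp\!\left[-\exp\!\left(-\frac{\ell - \mu}{\beta}\right)\right]
\qquad\Longrightarrow\qquad
\ell_{\mathrm{design}} = F_L^{-1}(p) = \mu - \beta \,\ln\!\left(-\ln p\right)
```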
Thinking Globally and Acting Locally: Environmental Education Teaching Activities.
ERIC Educational Resources Information Center
Mann, Lori D.; Stapp, William B.
Provided are teaching activities related to: (1) food production and distribution; (2) energy; (3) transportation; (4) solid waste; (5) chemicals in the environment; (6) resource management; (7) pollution; (8) population; (9) world linkages; (10) endangered species; and (11) lifestyle and environment. The activities, designed to help learners…
ERIC Educational Resources Information Center
Moller, Leslie; Prestera, Gustavo E.; Harvey, Douglas; Downs-Keller, Margaret; McCausland, Jo-Ann
2002-01-01
Discusses organic architecture and suggests that learning environments should be designed and constructed using an organic approach, so that learning is not viewed as a distinct human activity but incorporated into everyday performance. Highlights include an organic knowledge-building model; information objects; scaffolding; discourse action…
Proceedings of Tenth Annual Software Engineering Workshop
NASA Technical Reports Server (NTRS)
1985-01-01
Papers are presented on the following topics: measurement of software technology, recent studies of the Software Engineering Lab, software management tools, expert systems, error seeding as a program validation technique, software quality assurance, software engineering environments (including knowledge-based environments), the Distributed Computing Design System, and various Ada experiments.
Exploiting virtual synchrony in distributed systems
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.; Joseph, Thomas A.
1987-01-01
Applications of a virtually synchronous environment are described for distributed programming, which underlies a collection of distributed programming tools in the ISIS2 system. A virtually synchronous environment allows processes to be structured into process groups, and makes events like broadcasts to the group as an entity, group membership changes, and even migration of an activity from one place to another appear to occur instantaneously, in other words, synchronously. A major advantage to this approach is that many aspects of a distributed application can be treated independently without compromising correctness. Moreover, user code that is designed as if the system were synchronous can often be executed concurrently. It is argued that this approach to building distributed and fault tolerant software is more straightforward, more flexible, and more likely to yield correct solutions than alternative approaches.
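A minimal sketch of one classic way to realize this illusion of synchrony is a fixed sequencer that imposes a single global order on broadcasts and membership changes; this is an assumed textbook mechanism, not necessarily how ISIS implements it.

```python
# Sketch (assumed design, not the ISIS implementation): a fixed sequencer
# stamps every broadcast and membership change with a global sequence number,
# so all group members observe the same event order, making events appear
# to occur instantaneously with respect to one another.
import queue

class Sequencer:
    def __init__(self):
        self.seq = 0
        self.members = {}                 # member name -> delivery queue

    def join(self, name):
        self.broadcast("system", ("join", name))  # membership change is ordered too
        self.members[name] = queue.Queue()

    def broadcast(self, sender, event):
        self.seq += 1
        for q in self.members.values():
            q.put((self.seq, sender, event))      # identical order everywhere

group = Sequencer()
group.join("p1"); group.join("p2")
group.broadcast("p1", ("update", 42))
print(group.members["p1"].get(), group.members["p2"].get())
```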
Distributed Motor Controller (DMC) for Operation in Extreme Environments
NASA Technical Reports Server (NTRS)
McKinney, Colin M.; Yager, Jeremy A.; Mojarradi, Mohammad M.; Some, Rafi; Sirota, Allen; Kopf, Ted; Stern, Ryan; Hunter, Don
2012-01-01
This paper presents an extreme environment capable Distributed Motor Controller (DMC) module suitable for operation with a distributed architecture of future spacecraft systems. This motor controller is designed to be a bus-based electronics module capable of operating a single Brushless DC motor in extreme space environments: temperature (-120 C to +85 C required, -180 C to +100 C stretch goal); radiation (>20 krad required, >100 krad stretch goal); >360 cycles of operation. Achieving this objective will result in a scalable modular configuration for motor control with enhanced reliability that will greatly lower cost during the design, fabrication and ATLO phases of future missions. Within the heart of the DMC lies a pair of cold-capable Application Specific Integrated Circuits (ASICs) and a Field Programmable Gate Array (FPGA) that enable its miniaturization and operation in extreme environments. The ASICs are fabricated in the IBM 0.5 micron Silicon Germanium (SiGe) BiCMOS process and comprise analog circuitry that provides telemetry information, sensor interfaces, and health and status of the DMC. The FPGA contains logic to provide motor control, status monitoring and spacecraft interface. Testing and characterization of these ASICs have yielded excellent functionality at cold temperatures (-135 C). The DMC module has demonstrated successful operation of a motor at temperature.
NASA Technical Reports Server (NTRS)
Campbell, R. H.; Essick, R. B.; Grass, J.; Johnston, G.; Kenny, K.; Russo, V.
1986-01-01
The EOS project is investigating the design and construction of a family of real-time distributed embedded operating systems for reliable, distributed aerospace applications. Using the real-time programming techniques developed in co-operation with NASA in earlier research, the project staff is building a kernel for a multiple processor networked system. The first six months of the grant included a study of scheduling in an object-oriented system, the design philosophy of the kernel, and the architectural overview of the operating system. In this report, the operating system and kernel concepts are described. An environment for the experiments has been built and several of the key concepts of the system have been prototyped. The kernel and operating system are intended to support future experimental studies in multiprocessing, load-balancing, routing, software fault-tolerance, distributed data base design, and real-time processing.
GLobal Integrated Design Environment (GLIDE): A Concurrent Engineering Application
NASA Technical Reports Server (NTRS)
McGuire, Melissa L.; Kunkel, Matthew R.; Smith, David A.
2010-01-01
The GLobal Integrated Design Environment (GLIDE) is a client-server software application purpose-built to mitigate issues associated with real time data sharing in concurrent engineering environments and to facilitate discipline-to-discipline interaction between multiple engineers and researchers. GLIDE is implemented in multiple programming languages utilizing standardized web protocols to enable secure parameter data sharing between engineers and researchers across the Internet in closed and/or widely distributed working environments. A well-defined, HyperText Transfer Protocol (HTTP)-based Application Programming Interface (API) to the GLIDE client/server environment enables users to interact with GLIDE, and each other, within common and familiar tools. One such common tool, Microsoft Excel (Microsoft Corporation), paired with its add-in API for GLIDE, is discussed in this paper. The top-level examples given demonstrate how this interface improves the efficiency of the design process of a concurrent engineering study while reducing potential errors associated with manually sharing information between study participants.
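A hypothetical sketch of what such HTTP-based parameter sharing could look like from a client; the host, URL paths, and JSON fields below are invented for illustration, since the abstract only states that GLIDE exposes an HTTP API for reading and writing shared parameters.

```python
# Invented endpoints and payload shape, in the spirit of the GLIDE API
# described above; not GLIDE's actual interface.
import requests

BASE = "https://glide.example/api"      # assumed endpoint

def publish_parameter(session_id, name, value, units):
    # One discipline writes a parameter for the rest of the study team.
    r = requests.put(f"{BASE}/sessions/{session_id}/parameters/{name}",
                     json={"value": value, "units": units})
    r.raise_for_status()

def read_parameter(session_id, name):
    # Another discipline polls the shared value.
    r = requests.get(f"{BASE}/sessions/{session_id}/parameters/{name}")
    r.raise_for_status()
    return r.json()["value"]
```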
Integrating Computing Resources: A Shared Distributed Architecture for Academics and Administrators.
ERIC Educational Resources Information Center
Beltrametti, Monica; English, Will
1994-01-01
Development and implementation of a shared distributed computing architecture at the University of Alberta (Canada) are described. Aspects discussed include design of the architecture, users' views of the electronic environment, technical and managerial challenges, and the campuswide human infrastructures needed to manage such an integrated…
Distributed Emotions in the Design of Learning Technologies
ERIC Educational Resources Information Center
Kim, Beaumie; Kim, Mi Song
2010-01-01
Learning is a social activity, which requires interactions with the environment, tools, people, and also ourselves (e.g., our previous experiences). Each interaction provides different meanings to learners, and the associated emotion affects their learning and performance. With the premise that emotion and cognition are distributed, the authors…
Education of Engineering Students within a Multimedia/Hypermedia Environment--A Review.
ERIC Educational Resources Information Center
Anderl, R.; Vogel, U. R.
This paper summarizes the activities of the Darmstadt University Department of Computer Integrated Design (Germany) related to: (1) distributed lectures (i.e., lectures distributed online through computer networks), including equipment used and ensuring sound and video quality; (2) lectures on demand, including providing access through the World…
PERTS: A Prototyping Environment for Real-Time Systems
NASA Technical Reports Server (NTRS)
Liu, Jane W. S.; Lin, Kwei-Jay; Liu, C. L.
1991-01-01
We discuss an ongoing project to build a Prototyping Environment for Real-Time Systems, called PERTS. PERTS is a unique prototyping environment in that it has (1) tools and performance models for the analysis and evaluation of real-time prototype systems, (2) building blocks for flexible real-time programs and the support system software, (3) basic building blocks of distributed and intelligent real time applications, and (4) an execution environment. PERTS will make the recent and future theoretical advances in real-time system design and engineering readily usable to practitioners. In particular, it will provide an environment for the use and evaluation of new design approaches, for experimentation with alternative system building blocks and for the analysis and performance profiling of prototype real-time systems.
Knebel, Harley J.; Circe, Ronald C.
1995-01-01
This report illustrates, describes, and briefly discusses the acoustic and textural characteristics and the distribution of bottom sedimentary environments in Boston Harbor and Massachusetts Bay. The study is an outgrowth of a larger research program designed to understand the regional processes that distribute sediments and related contaminants in the area. The report highlights the major findings presented in recent papers by Knebel and others (1991), Knebel, (1993), and Knebel and Circe (1995). The reader is urged to consult the full text of these earlier papers for a more definitive treatment of the data and for appropriate supporting references.
Architecture for distributed design and fabrication
NASA Astrophysics Data System (ADS)
McIlrath, Michael B.; Boning, Duane S.; Troxel, Donald E.
1997-01-01
We describe a flexible, distributed system architecture capable of supporting collaborative design and fabrication of semiconductor devices and integrated circuits. Such capabilities are of particular importance in the development of new technologies, where both equipment and expertise are limited. Distributed fabrication enables direct, remote, physical experimentation in the development of leading edge technology, where the necessary manufacturing resources are new, expensive, and scarce. Computational resources, software, processing equipment, and people may all be widely distributed; their effective integration is essential in order to achieve the realization of new technologies for specific product requirements. Our architecture leverages current vendor and consortia developments to define software interfaces and infrastructure based on existing and emerging networking, CIM, and CAD standards. Process engineers and product designers access processing and simulation results through a common interface and collaborate across the distributed manufacturing environment.
NASA Technical Reports Server (NTRS)
Jovic, Srboljub
2015-01-01
This document provides the software design description for the two core software components, the LVC Gateway and the LVC Gateway Toolbox, and two participants, the LVC Gateway Data Logger and the SAA Processor (SaaProc).
Teachers as Designers of Collaborative Distance Learning.
ERIC Educational Resources Information Center
Spector, J. Michael
There is an obvious growth in the use of distributed and online learning environments. There is some evidence to believe that collaborative learning environments can be effective, especially when using advanced technology to support learning in and about complex domains. There is also an extensive body of research literature in the areas of…
PROVIDE: A Pedagogical Reference Oracle for Virtual IntegrateD E-ducation
ERIC Educational Resources Information Center
Narasimhan, V. Lakshmi; Zhao, Shuxin; Liang, Hailong; Zhang, Shuangyi
2006-01-01
This paper presents an interactive educational environment for use over both "in situ" and distance-based modalities of teaching. Several technological issues relating to the design and development of the distributed virtual learning environment have also been raised. The PROVIDE framework proposed in this paper is a seamless distributed…
Virtual Collaborative Simulation Environment for Integrated Product and Process Development
NASA Technical Reports Server (NTRS)
Gulli, Michael A.
1997-01-01
Deneb Robotics is a leader in the development of commercially available, leading edge three-dimensional simulation software tools for virtual prototyping, simulation-based design, manufacturing process simulation, and factory floor simulation and training applications. Deneb has developed and commercially released a preliminary Virtual Collaborative Engineering (VCE) capability for Integrated Product and Process Development (IPPD). This capability allows distributed, real-time visualization and evaluation of design concepts, manufacturing processes, and total factory and enterprises in one seamless simulation environment.
NASA Technical Reports Server (NTRS)
Davarian, Faramaz; Bishop, Dennis
1993-01-01
Propagation models that can be used for the design of earth-space land mobile-satellite telecommunications systems are presented. These models include: empirical roadside shadowing, attenuation frequency scaling, fade and non-fade duration distribution, multipath in a mountain environment, and multipath in a roadside tree environment. Propagation data from helicopter-mobile and satellite-mobile measurements in Australia and the United States were used to develop the models.
NASA Technical Reports Server (NTRS)
Davarian, F.; Bishop, D.
1993-01-01
Propagation models that can be used for the design of Earth-space land mobile-satellite telecommunications systems are presented. These models include: empirical roadside shadowing, attenuation frequency scaling, fade and non-fade duration distribution, multipath in a mountain environment, and multipath in a roadside tree environment. Propagation data from helicopter-mobile and satellite-mobile measurements in Australia and the United States were used to develop the models.
A development framework for distributed artificial intelligence
NASA Technical Reports Server (NTRS)
Adler, Richard M.; Cottman, Bruce H.
1989-01-01
The authors describe distributed artificial intelligence (DAI) applications in which multiple organizations of agents solve multiple domain problems. They then describe work in progress on a DAI system development environment, called SOCIAL, which consists of three primary language-based components. The Knowledge Object Language defines models of knowledge representation and reasoning. The metaCourier language supplies the underlying functionality for interprocess communication and control access across heterogeneous computing environments. The metaAgents language defines models for agent organization coordination, control, and resource management. Application agents and agent organizations will be constructed by combining metaAgents and metaCourier building blocks with task-specific functionality such as diagnostic or planning reasoning. This architecture hides implementation details of communications, control, and integration in distributed processing environments, enabling application developers to concentrate on the design and functionality of the intelligent agents and agent networks themselves.
A Virtual Hosting Environment for Distributed Online Gaming
NASA Astrophysics Data System (ADS)
Brossard, David; Prieto Martinez, Juan Luis
With enterprise boundaries becoming fuzzier, it’s become clear that businesses need to share resources, expose services, and interact in many different ways. In order to achieve such distribution in a dynamic, flexible, and secure way, we have designed and implemented a virtual hosting environment (VHE) which aims at integrating business services across enterprise boundaries and virtualising the ICT environment within which these services operate, in order to exploit economies of scale for the businesses as well as achieve shorter concept-to-market time scales. To illustrate the relevance of the VHE, we have applied it to the online gaming world. Online gaming is an early adopter of distributed computing, and more than 30% of game developer companies, aware of the shift, are focusing on developing high performance platforms for the new online trend.
Wang, Y.; Boyd, E.; Crane, S.; Lu-Irving, P.; Krabbenhoft, D.; King, S.; Dighton, J.; Geesey, G.; Barkay, T.
2011-01-01
The distribution and phylogeny of extant protein-encoding genes recovered from geochemically diverse environments can provide insight into the physical and chemical parameters that led to the origin and which constrained the evolution of a functional process. Mercuric reductase (MerA) plays an integral role in mercury (Hg) biogeochemistry by catalyzing the transformation of Hg(II) to Hg(0). Putative merA sequences were amplified from DNA extracts of microbial communities associated with mats and sulfur precipitates from physicochemically diverse Hg-containing springs in Yellowstone National Park, Wyoming, using four PCR primer sets that were designed to capture the known diversity of merA. The recovery of novel and deeply rooted MerA lineages from these habitats supports previous evidence that indicates merA originated in a thermophilic environment. Generalized linear models indicate that the distribution of putative archaeal merA lineages was constrained by a combination of pH, dissolved organic carbon, dissolved total mercury and sulfide. The models failed to identify statistically well supported trends for the distribution of putative bacterial merA lineages as a function of these or other measured environmental variables, suggesting that these lineages were either influenced by environmental parameters not considered in the present study, or the bacterial primer sets were designed to target too broad of a class of genes which may have responded differently to environmental stimuli. The widespread occurrence of merA in the geothermal environments implies a prominent role for Hg detoxification in these environments. Moreover, the differences in the distribution of the merA genes amplified with the four merA primer sets suggests that the organisms putatively engaged in this activity have evolved to occupy different ecological niches within the geothermal gradient. © 2011 Springer Science+Business Media, LLC.
Wang, Yanping; Boyd, Eric; Crane, Sharron; Lu-Irving, Patricia; Krabbenhoft, David; King, Susan; Dighton, John; Geesey, Gill; Barkay, Tamar
2011-11-01
The distribution and phylogeny of extant protein-encoding genes recovered from geochemically diverse environments can provide insight into the physical and chemical parameters that led to the origin and which constrained the evolution of a functional process. Mercuric reductase (MerA) plays an integral role in mercury (Hg) biogeochemistry by catalyzing the transformation of Hg(II) to Hg(0). Putative merA sequences were amplified from DNA extracts of microbial communities associated with mats and sulfur precipitates from physicochemically diverse Hg-containing springs in Yellowstone National Park, Wyoming, using four PCR primer sets that were designed to capture the known diversity of merA. The recovery of novel and deeply rooted MerA lineages from these habitats supports previous evidence that indicates merA originated in a thermophilic environment. Generalized linear models indicate that the distribution of putative archaeal merA lineages was constrained by a combination of pH, dissolved organic carbon, dissolved total mercury and sulfide. The models failed to identify statistically well supported trends for the distribution of putative bacterial merA lineages as a function of these or other measured environmental variables, suggesting that these lineages were either influenced by environmental parameters not considered in the present study, or the bacterial primer sets were designed to target too broad of a class of genes which may have responded differently to environmental stimuli. The widespread occurrence of merA in the geothermal environments implies a prominent role for Hg detoxification in these environments. Moreover, the differences in the distribution of the merA genes amplified with the four merA primer sets suggests that the organisms putatively engaged in this activity have evolved to occupy different ecological niches within the geothermal gradient.
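A hedged sketch of the kind of generalized linear model the authors describe, regressing lineage presence/absence on spring geochemistry; the column names, data file, and binomial family are assumptions, not the authors' published model.

```python
# Logistic GLM relating (hypothetical) merA lineage presence/absence to the
# geochemical predictors named in the abstract: pH, dissolved organic carbon,
# dissolved total mercury, and sulfide.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

springs = pd.read_csv("springs.csv")    # hypothetical file, one row per site

model = smf.glm(
    "merA_archaeal_present ~ pH + doc + total_hg + sulfide",
    data=springs,
    family=sm.families.Binomial(),      # presence/absence response
).fit()
print(model.summary())                  # well-supported terms show small p-values
```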
NASA Astrophysics Data System (ADS)
van Rooij, Michael P. C.
Current turbomachinery design systems increasingly rely on multistage Computational Fluid Dynamics (CFD) as a means to assess performance of designs. However, design weaknesses attributed to improper stage matching are addressed using often ineffective strategies involving a costly iterative loop between blading modification, revision of design intent, and evaluation of aerodynamic performance. A design methodology is presented which greatly improves the process of achieving design-point aerodynamic matching. It is based on a three-dimensional viscous inverse design method which generates the blade camber surface based on prescribed pressure loading, thickness distribution and stacking line. This inverse design method has been extended to allow blading analysis and design in a multi-blade row environment. Blade row coupling was achieved through a mixing plane approximation. Parallel computing capability in the form of MPI has been implemented to reduce the computational time for multistage calculations. Improvements have been made to the flow solver to reach the level of accuracy required for multistage calculations. These include inclusion of heat flux, temperature-dependent treatment of viscosity, and improved calculation of stress components and artificial dissipation near solid walls. A validation study confirmed that the obtained accuracy is satisfactory at design point conditions. Improvements have also been made to the inverse method to increase robustness and design fidelity. These include the possibility to exclude spanwise sections of the blade near the endwalls from the design process, and a scheme that adjusts the specified loading area for changes resulting from the leading and trailing edge treatment. Furthermore, a pressure loading manager has been developed. Its function is to automatically adjust the pressure loading area distribution during the design calculation in order to achieve a specified design objective. Possible objectives are overall mass flow and compression ratio, and radial distribution of exit flow angle. To supplement the loading manager, mass flow inlet and exit boundary conditions have been implemented. Through appropriate combination of pressure or mass flow inflow/outflow boundary conditions and loading manager objectives, increased control over the design intent can be obtained. The three-dimensional multistage inverse design method with pressure loading manager was demonstrated to offer greatly enhanced blade row matching capabilities. Multistage design allows for simultaneous design of blade rows in a mutually interacting environment, which permits the redesigned blading to adapt to changing aerodynamic conditions resulting from the redesign. This ensures that the obtained blading geometry and performance implied by the prescribed pressure loading distribution are consistent with operation in the multi-blade row environment. The developed methodology offers high aerodynamic design quality and productivity, and constitutes a significant improvement over existing approaches used to address design-point aerodynamic matching.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fadika, Zacharia; Dede, Elif; Govindaraju, Madhusudhan
MapReduce is increasingly becoming a popular framework, and a potent programming model. The most popular open source implementation of MapReduce, Hadoop, is based on the Hadoop Distributed File System (HDFS). However, as HDFS is not POSIX compliant, it cannot be fully leveraged by applications running on a majority of existing HPC environments such as Teragrid and NERSC. These HPC environments typically support globally shared file systems such as NFS and GPFS. On such resourceful HPC infrastructures, the use of Hadoop not only creates compatibility issues, but also affects overall performance due to the added overhead of the HDFS. This paper not only presents a MapReduce implementation directly suitable for HPC environments, but also exposes the design choices for better performance gains in those settings. By leveraging inherent distributed file systems' functions, and abstracting them away from its MapReduce framework, MARIANE (MApReduce Implementation Adapted for HPC Environments) not only allows for the use of the model in an expanding number of HPC environments, but also allows for better performance in such settings. This paper shows the applicability and high performance of the MapReduce paradigm through MARIANE, an implementation designed for clustered and shared-disk file systems and as such not dedicated to a specific MapReduce solution. The paper identifies the components and trade-offs necessary for this model, and quantifies the performance gains exhibited by our approach in distributed environments over Apache Hadoop in a data intensive setting, on the Magellan testbed at the National Energy Research Scientific Computing Center (NERSC).
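A minimal sketch of the design idea: when every node already mounts a POSIX shared file system, map and reduce tasks can exchange partitions through ordinary files, with no HDFS layer in between. The file layout and names here are illustrative, not MARIANE's own.

```python
# Map tasks write one partition file per reducer onto the shared mount;
# reduce tasks read the partitions addressed to them. The shared file system
# (e.g. NFS or GPFS) replaces HDFS as the exchange medium.
import os, json
from collections import defaultdict

SHARED = "/shared/job1"                  # assumed cluster-wide mount

def map_task(task_id, records, n_reducers):
    parts = defaultdict(list)
    for key, value in records:
        parts[hash(key) % n_reducers].append((key, value))
    for r, kvs in parts.items():
        with open(os.path.join(SHARED, f"map{task_id}-part{r}.json"), "w") as f:
            json.dump(kvs, f)

def reduce_task(r, n_maps):
    groups = defaultdict(list)
    for m in range(n_maps):
        path = os.path.join(SHARED, f"map{m}-part{r}.json")
        if os.path.exists(path):
            for key, value in json.load(open(path)):
                groups[key].append(value)
    return {k: sum(v) for k, v in groups.items()}   # e.g. word count
```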
Parallel processing for scientific computations
NASA Technical Reports Server (NTRS)
Alkhatib, Hasan S.
1995-01-01
The scope of this project dealt with the investigation of the requirements to support distributed computing of scientific computations over a cluster of cooperative workstations. Various experiments on computations for the solution of simultaneous linear equations were performed in the early phase of the project to gain experience in the general nature and requirements of scientific applications. A specification of a distributed integrated computing environment, DICE, based on a distributed shared memory communication paradigm has been developed and evaluated. The distributed shared memory model facilitates porting existing parallel algorithms that have been designed for shared memory multiprocessor systems to the new environment. The potential of this new environment is to provide supercomputing capability through the utilization of the aggregate power of workstations cooperating in a cluster interconnected via a local area network. Workstations, generally, do not have the computing power to tackle complex scientific applications, making them primarily useful for visualization, data reduction, and filtering as far as complex scientific applications are concerned. There is a tremendous amount of computing power that is left unused in a network of workstations. Very often a workstation is simply sitting idle on a desk. A set of tools can be developed to take advantage of this potential computing power to create a platform suitable for large scientific computations. The integration of several workstations into a logical cluster of distributed, cooperative, computing stations presents an alternative to shared memory multiprocessor systems. In this project we designed and evaluated such a system.
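A hedged sketch of the kind of experiment described, a Jacobi sweep for simultaneous linear equations with row blocks farmed out to cooperating workers; a process pool on one machine stands in here for a cluster of workstations on a local area network.

```python
# Row-partitioned Jacobi iteration for A x = b: each worker updates its block
# of unknowns from the shared previous iterate, mimicking cooperating
# workstations reading a distributed shared memory.
import numpy as np
from multiprocessing import Pool

def jacobi_rows(args):
    A, b, x, rows = args
    out = np.empty(len(rows))
    for k, i in enumerate(rows):
        sigma = A[i] @ x - A[i, i] * x[i]   # off-diagonal contribution
        out[k] = (b[i] - sigma) / A[i, i]
    return rows, out

def parallel_jacobi(A, b, sweeps=50, workers=4):
    x = np.zeros(len(b))
    blocks = np.array_split(np.arange(len(b)), workers)
    with Pool(workers) as pool:
        for _ in range(sweeps):
            new_x = np.empty_like(x)
            for rows, vals in pool.map(jacobi_rows,
                                       [(A, b, x, blk) for blk in blocks]):
                new_x[rows] = vals
            x = new_x
    return x
```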
7 CFR 1780.57 - Design policies.
Code of Federal Regulations, 2014 CFR
2014-01-01
... et seq.). (c) Energy/environment. Facility design should consider cost effective energy-efficient and... distribution system water losses do not exceed reasonable levels. (g) Conformity with State drinking water... title XIV of the Public Health Service Act (commonly known as the ‘Safe Drinking Water Act’) (42 U.S.C...
7 CFR 1780.57 - Design policies.
Code of Federal Regulations, 2010 CFR
2010-01-01
... et seq.). (c) Energy/environment. Facility design should consider cost effective energy-efficient and... distribution system water losses do not exceed reasonable levels. (g) Conformity with State drinking water... title XIV of the Public Health Service Act (commonly known as the ‘Safe Drinking Water Act’) (42 U.S.C...
7 CFR 1780.57 - Design policies.
Code of Federal Regulations, 2011 CFR
2011-01-01
... et seq.). (c) Energy/environment. Facility design should consider cost effective energy-efficient and... distribution system water losses do not exceed reasonable levels. (g) Conformity with State drinking water... title XIV of the Public Health Service Act (commonly known as the ‘Safe Drinking Water Act’) (42 U.S.C...
7 CFR 1780.57 - Design policies.
Code of Federal Regulations, 2013 CFR
2013-01-01
... et seq.). (c) Energy/environment. Facility design should consider cost effective energy-efficient and... distribution system water losses do not exceed reasonable levels. (g) Conformity with State drinking water... title XIV of the Public Health Service Act (commonly known as the ‘Safe Drinking Water Act’) (42 U.S.C...
7 CFR 1780.57 - Design policies.
Code of Federal Regulations, 2012 CFR
2012-01-01
... et seq.). (c) Energy/environment. Facility design should consider cost effective energy-efficient and... distribution system water losses do not exceed reasonable levels. (g) Conformity with State drinking water... title XIV of the Public Health Service Act (commonly known as the ‘Safe Drinking Water Act’) (42 U.S.C...
Toolpack mathematical software development environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osterweil, L.
1982-07-21
The purpose of this research project was to produce a well integrated set of tools for the support of numerical computation. The project entailed the specification, design and implementation of both a diversity of tools and an innovative tool integration mechanism. This large configuration of tightly integrated tools comprises an environment for numerical software development, and has been named Toolpack/IST (Integrated System of Tools). Following the creation of this environment in prototype form, the environment software was readied for widespread distribution by transitioning it to a development organization for systematization, documentation and distribution. It is expected that public release of Toolpack/IST will begin imminently and will provide a basis for evaluation of the innovative software approaches taken as well as a uniform set of development tools for the numerical software community.
The development of a collaborative virtual environment for finite element simulation
NASA Astrophysics Data System (ADS)
Abdul-Jalil, Mohamad Kasim
Communication between geographically distributed designers has been a major hurdle in traditional engineering design. Conventional methods of communication, such as video conferencing, telephone, and email, are less efficient, especially when dealing with complex design models. Complex shapes, intricate features and hidden parts are often difficult to describe verbally or even using traditional 2-D or 3-D visual representations. Virtual Reality (VR) and Internet technologies have provided a substantial potential to bridge the present communication barrier. VR technology allows designers to immerse themselves in a virtual environment to view and manipulate a model just as in real life. Fast Internet connectivity has enabled fast data transfer between remote locations. Although various collaborative virtual environment (CVE) systems have been developed in the past decade, they are limited to high-end technology that is not accessible to typical designers. The objective of this dissertation is to discover and develop a new approach to increase the efficiency of the design process, particularly for large-scale applications wherein participants are geographically distributed. A multi-platform and easily accessible collaborative virtual environment (CVRoom) is developed to accomplish the stated research objective. Geographically dispersed designers can meet in a single shared virtual environment to discuss issues pertaining to the engineering design process and to make trade-off decisions more quickly than before, thereby speeding the entire process. This faster design process is achieved through the development of capabilities that better enable the multidisciplinary modeling and trade-off decisions that are so critical before launching into a formal detailed design. The features of the environment developed as a result of this research include the ability to view design models, use voice interaction, and link engineering analysis modules (such as the Finite Element Analysis module demonstrated in this work). One of the major issues in developing a CVE system for engineering design purposes is obtaining pertinent simulation results in real time, which is critical so that designers can make decisions based on these results quickly. For example, in a finite element analysis, if a design model is changed or perturbed, the analysis results must be obtained in real time or near real time to make the virtual meeting environment realistic. In this research, the finite difference-based Design Sensitivity Analysis (DSA) approach is employed to approximate structural responses (i.e., stress, displacement, etc.), so as to demonstrate the applicability of CVRoom for engineering design trade-offs. This DSA approach provides fast approximation and is well-suited for the virtual meeting environment where fast response time is required. The DSA-based approach is tested on several example problems to show its applicability and limitations. This dissertation demonstrates that an increase in efficiency and a reduction of the time required for a complex design process can be accomplished using the approach developed in this dissertation research. Several implementations of CVRoom by students working on common design tasks were investigated. All participants confirmed a preference for using the collaborative virtual environment developed in this dissertation work (CVRoom) over other modes of interaction.
It is proposed here that CVRoom is representative of the type of collaborative virtual environment that will be used by most designers in the future to reduce the time required in a design cycle and thereby reduce the associated cost.
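A hedged sketch of the finite-difference DSA idea described above: sensitivities are computed once from full finite element analyses, after which perturbed responses can be approximated in real time during the virtual meeting. The response s, design variables x_i, and step h are generic notation, not the dissertation's symbols.

```latex
% Finite-difference design sensitivities, then a first-order approximation of
% the perturbed structural response (generic notation).
\frac{\partial s}{\partial x_i} \approx \frac{s(x + h\,e_i) - s(x)}{h},
\qquad
s(x + \Delta x) \;\approx\; s(x) + \sum_i \frac{\partial s}{\partial x_i}\,\Delta x_i
```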
Design & implementation of distributed spatial computing node based on WPS
NASA Astrophysics Data System (ADS)
Liu, Liping; Li, Guoqing; Xie, Jibo
2014-03-01
Currently, research on SIG (Spatial Information Grid) technology mostly emphasizes spatial data sharing in grid environments, while the importance of spatial computing resources is ignored. In order to implement the sharing and cooperation of spatial computing resources in a grid environment, this paper systematically investigates the key technologies for constructing a Spatial Computing Node based on the WPS (Web Processing Service) specification from the OGC (Open Geospatial Consortium). A framework for the Spatial Computing Node is designed according to the features of spatial computing resources. Finally, a prototype of the Spatial Computing Node is implemented and the relevant verification work is completed in this environment.
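As an illustration, a client can probe such a node with the standard OGC WPS 1.0.0 key-value-pair operations; the URL and the process identifier below are placeholders, and only GetCapabilities and DescribeProcess from the WPS specification itself are assumed.

```python
# Query a (hypothetical) Spatial Computing Node's WPS endpoint using the
# standard WPS 1.0.0 KVP operations.
import requests

WPS_URL = "http://node.example/wps"     # assumed node endpoint

caps = requests.get(WPS_URL, params={
    "service": "WPS", "version": "1.0.0", "request": "GetCapabilities",
})
print(caps.text[:200])                  # XML listing the node's spatial processes

desc = requests.get(WPS_URL, params={
    "service": "WPS", "version": "1.0.0", "request": "DescribeProcess",
    "identifier": "buffer",             # hypothetical process name
})
```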
Modeling knee joint endoprosthesis mode of deformation
NASA Astrophysics Data System (ADS)
Skeeba, V. Yu; Ivancivsky, V. V.
2018-03-01
The purpose of the work was to define an efficient design for an endoprosthesis working in a multiple-cycle loading environment. Methodology and methods: triangulated surfaces of the base contact surfaces of the endoprosthesis butt elements were created using the PowerShape and SolidWorks software, and assemblies of the possible combinations of the knee joint prosthetic designs were prepared. Deformation modeling was performed in the multipurpose ANSYS program complex. Results and discussion: as a result of the numerical modeling, the following data were obtained for each of the developed knee joint versions: the distribution fields of absolute (total) and relative deformations; equivalent stress distribution fields; and fatigue strength coefficient distribution fields. In the course of the studies, the following efficient design assembly was established: 1) a Ti-Al-V alloy composite femoral component with polymer inserts; 2) ceramic liners of the compound separator; 3) a Ti-Al-V alloy composite tibial component. The fatigue strength coefficient is 4.2 for the femoral component, 1.2 for the femoral component polymer inserts, 3.1 for the ceramic liners of the compound separator, and 2.7 for the tibial component. This promising endoprosthesis structure is recommended for further design and technological development.
NASA Technical Reports Server (NTRS)
Townsend, James C.; Weston, Robert P.; Eidson, Thomas M.
1993-01-01
The Framework for Interdisciplinary Design Optimization (FIDO) is a general programming environment for automating the distribution of complex computing tasks over a networked system of heterogeneous computers. For example, instead of manually passing a complex design problem between its diverse specialty disciplines, the FIDO system provides for automatic interactions between the discipline tasks and facilitates their communications. The FIDO system networks all the computers involved into a distributed heterogeneous computing system, so they have access to centralized data and can work on their parts of the total computation simultaneously in parallel whenever possible. Thus, each computational task can be done by the most appropriate computer. Results can be viewed as they are produced and variables changed manually for steering the process. The software is modular in order to ease migration to new problems: different codes can be substituted for each of the current code modules with little or no effect on the others. The potential for commercial use of FIDO rests in the capability it provides for automatically coordinating diverse computations on a networked system of workstations and computers. For example, FIDO could provide the coordination required for the design of vehicles or electronics or for modeling complex systems.
NASA Technical Reports Server (NTRS)
Allard, R.; Mack, B.; Bayoumi, M. M.
1989-01-01
Most robot systems lack a suitable hardware and software environment for the efficient research of new control and sensing schemes. Typically, engineers and researchers need to be experts in control, sensing, programming, communication and robotics in order to implement, integrate and test new ideas in a robot system. In order to reduce this time, the Robot Controller Test Station (RCTS) has been developed. It uses a modular hardware and software architecture allowing easy physical and functional reconfiguration of a robot. This is accomplished by emphasizing four major design goals: flexibility, portability, ease of use, and ease of modification. An enhanced distributed processing version of RCTS is described. It features an expanded and more flexible communication system design. Distributed processing results in the availability of more local computing power and retains the low cost of microprocessors. A large number of possible communication, control and sensing schemes can therefore be easily introduced and tested, using the same basic software structure.
Contingency theoretic methodology for agent-based web-oriented manufacturing systems
NASA Astrophysics Data System (ADS)
Durrett, John R.; Burnell, Lisa J.; Priest, John W.
2000-12-01
The development of distributed, agent-based, web-oriented, N-tier Information Systems (IS) must be supported by a design methodology capable of responding to the convergence of shifts in business process design, organizational structure, computing, and telecommunications infrastructures. We introduce a contingency theoretic model for the use of open, ubiquitous software infrastructure in the design of flexible organizational IS. Our basic premise is that developers should change in the way they view the software design process from a view toward the solution of a problem to one of the dynamic creation of teams of software components. We postulate that developing effective, efficient, flexible, component-based distributed software requires reconceptualizing the current development model. The basic concepts of distributed software design are merged with the environment-causes-structure relationship from contingency theory; the task-uncertainty of organizational- information-processing relationships from information processing theory; and the concept of inter-process dependencies from coordination theory. Software processes are considered as employees, groups of processes as software teams, and distributed systems as software organizations. Design techniques already used in the design of flexible business processes and well researched in the domain of the organizational sciences are presented. Guidelines that can be utilized in the creation of component-based distributed software will be discussed.
The WorkPlace distributed processing environment
NASA Technical Reports Server (NTRS)
Ames, Troy; Henderson, Scott
1993-01-01
Real time control problems require robust, high performance solutions. Distributed computing can offer high performance through parallelism and robustness through redundancy. Unfortunately, implementing distributed systems with these characteristics places a significant burden on the applications programmers. Goddard Code 522 has developed WorkPlace to alleviate this burden. WorkPlace is a small, portable, embeddable network interface which automates message routing, failure detection, and re-configuration in response to failures in distributed systems. This paper describes the design and use of WorkPlace, and its application in the construction of a distributed blackboard system.
XML-Based Visual Specification of Multidisciplinary Applications
NASA Technical Reports Server (NTRS)
Al-Theneyan, Ahmed; Jakatdar, Amol; Mehrotra, Piyush; Zubair, Mohammad
2001-01-01
The advancements in the Internet and Web technologies have fueled a growing interest in developing a web-based distributed computing environment. We have designed and developed Arcade, a web-based environment for designing, executing, monitoring, and controlling distributed heterogeneous applications, which is easy to use and access, portable, and provides support through all phases of the application development and execution. A major focus of the environment is the specification of heterogeneous, multidisciplinary applications. In this paper we focus on the visual and script-based specification interface of Arcade. The web/browser-based visual interface is designed to be intuitive to use and can also be used for visual monitoring during execution. The script specification is based on XML to: (1) make it portable across different frameworks, and (2) make the development of our tools easier by using the existing freely available XML parsers and editors. There is a one-to-one correspondence between the visual and script-based interfaces allowing users to go back and forth between the two. To support this we have developed translators that translate a script-based specification to a visual-based specification, and vice-versa. These translators are integrated with our tools and are transparent to users.
Hierarchical resilience with lightweight threads.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wheeler, Kyle Bruce
2011-10-01
This paper proposes methodology for providing robustness and resilience for a highly threaded distributed- and shared-memory environment based on well-defined inputs and outputs to lightweight tasks. These inputs and outputs form a failure 'barrier', allowing tasks to be restarted or duplicated as necessary. These barriers must be expanded based on task behavior, such as communication between tasks, but do not prohibit any given behavior. One of the trends in high-performance computing codes seems to be a trend toward self-contained functions that mimic functional programming. Software designers are trending toward a model of software design where their core functions are specified in side-effect free or low-side-effect ways, wherein the inputs and outputs of the functions are well-defined. This provides the ability to copy the inputs to wherever they need to be - whether that's the other side of the PCI bus or the other side of the network - do work on that input using local memory, and then copy the outputs back (as needed). This design pattern is popular among new distributed threading environment designs. Such designs include the Barcelona STARS system, distributed OpenMP systems, the Habanero-C and Habanero-Java systems from Vivek Sarkar at Rice University, the HPX/ParalleX model from LSU, as well as our own Scalable Parallel Runtime effort (SPR) and the Trilinos stateless kernels. This design pattern is also shared by CUDA and several OpenMP extensions for GPU-type accelerators (e.g. the PGI OpenMP extensions).
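A minimal sketch of the failure-barrier idea: because a lightweight task touches nothing but its declared inputs and outputs, it can simply be re-run (or duplicated on another node) after a failure. The task and wrapper names below are illustrative, not from the paper.

```python
# A side-effect-free task can be retried safely: its inputs are copied in,
# work happens on local memory, and only the returned outputs escape.
import copy

def run_with_restart(task, inputs, retries=3):
    for attempt in range(retries):
        try:
            # Copy inputs so a failed attempt cannot corrupt shared state;
            # this containment is what the input/output barrier provides.
            return task(copy.deepcopy(inputs))
        except Exception:
            if attempt == retries - 1:
                raise

def stencil_step(inputs):               # example "lightweight task"
    grid = inputs["grid"]
    return [(grid[max(i - 1, 0)] + grid[i] + grid[min(i + 1, len(grid) - 1)]) / 3
            for i in range(len(grid))]

print(run_with_restart(stencil_step, {"grid": [1.0, 4.0, 7.0]}))
```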
ERIC Educational Resources Information Center
Broyles, Iris A.
2011-01-01
This study evaluated the effectiveness of a distributed cognition hybrid minicourse in increasing a 1-day camp's ability to effect long-term knowledge retention and pro-environment attitudes and behaviors in sixth graders. The preevent-postevent minicourse was designed to reduce cognitive overload generated by an intense immersion into the…
Reproducible Large-Scale Neuroimaging Studies with the OpenMOLE Workflow Management System.
Passerat-Palmbach, Jonathan; Reuillon, Romain; Leclaire, Mathieu; Makropoulos, Antonios; Robinson, Emma C; Parisot, Sarah; Rueckert, Daniel
2017-01-01
OpenMOLE is a scientific workflow engine with a strong emphasis on workload distribution. Workflows are designed using a high level Domain Specific Language (DSL) built on top of Scala. It exposes natural parallelism constructs to easily delegate the workload resulting from a workflow to a wide range of distributed computing environments. OpenMOLE hides the complexity of designing complex experiments thanks to its DSL. Users can embed their own applications and scale their pipelines from a small prototype running on their desktop computer to a large-scale study harnessing distributed computing infrastructures, simply by changing a single line in the pipeline definition. The construction of the pipeline itself is decoupled from the execution context. The high-level DSL abstracts the underlying execution environment, contrary to classic shell-script based pipelines. These two aspects allow pipelines to be shared and studies to be replicated across different computing environments. Workflows can be run as traditional batch pipelines or coupled with OpenMOLE's advanced exploration methods in order to study the behavior of an application, or perform automatic parameter tuning. In this work, we briefly present the strong assets of OpenMOLE and detail recent improvements targeting re-executability of workflows across various Linux platforms. We have tightly coupled OpenMOLE with CARE, a standalone containerization solution that allows re-executing on a Linux host any application that has been packaged on another Linux host previously. The solution is evaluated against a Python-based pipeline involving packages such as scikit-learn as well as binary dependencies. All were packaged and re-executed successfully on various HPC environments, with identical numerical results (here prediction scores) obtained on each environment. Our results show that the pair formed by OpenMOLE and CARE is a reliable solution to generate reproducible results and re-executable pipelines. A demonstration of the flexibility of our solution showcases three neuroimaging pipelines harnessing distributed computing environments as heterogeneous as local clusters or the European Grid Infrastructure (EGI).
Reproducible Large-Scale Neuroimaging Studies with the OpenMOLE Workflow Management System
Passerat-Palmbach, Jonathan; Reuillon, Romain; Leclaire, Mathieu; Makropoulos, Antonios; Robinson, Emma C.; Parisot, Sarah; Rueckert, Daniel
2017-01-01
OpenMOLE is a scientific workflow engine with a strong emphasis on workload distribution. Workflows are designed using a high level Domain Specific Language (DSL) built on top of Scala. It exposes natural parallelism constructs to easily delegate the workload resulting from a workflow to a wide range of distributed computing environments. OpenMOLE hides the complexity of designing complex experiments thanks to its DSL. Users can embed their own applications and scale their pipelines from a small prototype running on their desktop computer to a large-scale study harnessing distributed computing infrastructures, simply by changing a single line in the pipeline definition. The construction of the pipeline itself is decoupled from the execution context. The high-level DSL abstracts the underlying execution environment, contrary to classic shell-script based pipelines. These two aspects allow pipelines to be shared and studies to be replicated across different computing environments. Workflows can be run as traditional batch pipelines or coupled with OpenMOLE's advanced exploration methods in order to study the behavior of an application, or perform automatic parameter tuning. In this work, we briefly present the strong assets of OpenMOLE and detail recent improvements targeting re-executability of workflows across various Linux platforms. We have tightly coupled OpenMOLE with CARE, a standalone containerization solution that allows re-executing on a Linux host any application that has been packaged on another Linux host previously. The solution is evaluated against a Python-based pipeline involving packages such as scikit-learn as well as binary dependencies. All were packaged and re-executed successfully on various HPC environments, with identical numerical results (here prediction scores) obtained on each environment. Our results show that the pair formed by OpenMOLE and CARE is a reliable solution to generate reproducible results and re-executable pipelines. A demonstration of the flexibility of our solution showcases three neuroimaging pipelines harnessing distributed computing environments as heterogeneous as local clusters or the European Grid Infrastructure (EGI). PMID:28381997
NASA Technical Reports Server (NTRS)
Yates, Amy M.; Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Gonzalez, Oscar R.; Gray, W. Steven
2010-01-01
Safety-critical distributed flight control systems require robustness in the presence of faults. In general, these systems consist of a number of input/output (I/O) and computation nodes interacting through a fault-tolerant data communication system. The communication system transfers sensor data and control commands and can handle most faults under typical operating conditions. However, the performance of the closed-loop system can be adversely affected as a result of operating in harsh environments. In particular, High-Intensity Radiated Field (HIRF) environments have the potential to cause random fault manifestations in individual avionic components and to generate simultaneous system-wide communication faults that overwhelm existing fault management mechanisms. This paper presents the design of an experiment conducted at the NASA Langley Research Center's HIRF Laboratory to statistically characterize the faults that a HIRF environment can trigger on a single node of a distributed flight control system.
Distributed computing testbed for a remote experimental environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butner, D.N.; Casper, T.A.; Howard, B.C.
1995-09-18
Collaboration is increasing as physics research becomes concentrated on a few large, expensive facilities, particularly in magnetic fusion energy research, with national and international participation. These facilities are designed for steady state operation and interactive, real-time experimentation. We are developing tools to provide for the establishment of geographically distant centers for interactive operations; such centers would allow scientists to participate in experiments from their home institutions. A testbed is being developed for a Remote Experimental Environment (REE), a "Collaboratory." The testbed will be used to evaluate the ability of a remotely located group of scientists to conduct research on the DIII-D Tokamak at General Atomics. The REE will serve as a testing environment for advanced control and collaboration concepts applicable to future experiments. Process-to-process communications over high speed wide area networks provide real-time synchronization and exchange of data among multiple computer networks, while the ability to conduct research is enhanced by adding audio/video communication capabilities. The Open Software Foundation's Distributed Computing Environment is being used to test concepts in distributed control, security, naming, remote procedure calls and distributed file access using the Distributed File Services. We are exploring the technology and sociology of remotely participating in the operation of a large scale experimental facility.
Guidelines for developing distributed virtual environment applications
NASA Astrophysics Data System (ADS)
Stytz, Martin R.; Banks, Sheila B.
1998-08-01
We have conducted a variety of projects that served to investigate the limits of virtual environments and distributed virtual environment (DVE) technology for the military and medical professions. The projects include an application that allows the user to interactively explore a high-fidelity, dynamic scale model of the Solar System and a high-fidelity, photorealistic, rapidly reconfigurable aircraft simulator. Additional projects are a project for observing, analyzing, and understanding the activity in a military distributed virtual environment, a project to develop a distributed threat simulator for training Air Force pilots, a virtual spaceplane to determine user interface requirements for a planned military spaceplane system, and an automated wingman for use in supplementing or replacing human-controlled systems in a DVE. The final two projects are a virtual environment user interface framework and a project for training hospital emergency department personnel. In the process of designing and assembling the DVE applications in support of these projects, we have developed rules of thumb and insights into assembling DVE applications and the environment itself. In this paper, we open with a brief review of the applications that were the source for our insights and then present the lessons learned as a result of these projects. The lessons fall primarily into five areas: requirements development, software architecture, human-computer interaction, graphical database modeling, and construction of computer-generated forces.
Immersive Simulations for Smart Classrooms: Exploring Evolutionary Concepts in Secondary Science
ERIC Educational Resources Information Center
Lui, Michelle; Slotta, James D.
2014-01-01
This article presents the design of an immersive simulation and inquiry activity for technology-enhanced classrooms. Using a co-design method, researchers worked with a high school biology teacher to create a rainforest simulation, distributed across several large displays in the room to immerse students in the environment. The authors created and…
ERIC Educational Resources Information Center
Lee, Fong-Lok; Liang, Steven; Chan, Tak-Wai
1999-01-01
Describes the design, implementation, and preliminary evaluation of three synchronous distributed learning prototype systems: Co-Working System, Working Along System, and Hybrid System. Each supports a particular style of interaction, referred to as a socio-activity learning model, between members of student dyads (pairs). All systems were…
Design and Pedagogical Issues in the Development of the InSight Series of Instructional Software.
ERIC Educational Resources Information Center
Baro, John A.; Lehmkulke, Stephen
1993-01-01
Design issues in the development of InSight software for optometric education include choice of hardware, identification of audience, definition of scope and limitations of content, selection of user interface and programming environment, obtaining user feedback, and software distribution. Pedagogical issues include practicality and improvement on…
Overview of the LINCS architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fletcher, J.G.; Watson, R.W.
1982-01-13
Computing at the Lawrence Livermore National Laboratory (LLNL) has evolved over the past 15 years into a computer-network-based, resource-sharing environment. The increasing use of low cost and high performance micro, mini and midi computers and commercially available local networking systems will accelerate this trend. Further, even the large scale computer systems, on which much of the LLNL scientific computing depends, are evolving into multiprocessor systems. It is our belief that the most cost effective use of this environment will depend on the development of application systems structured into cooperating concurrent program modules (processes) distributed appropriately over different nodes of the environment. A node is defined as one or more processors with a local (shared) high speed memory. Given the latter view, the environment can be characterized as consisting of: multiple nodes communicating over noisy channels with arbitrary delays and throughput, heterogeneous base resources and information encodings, no single administration controlling all resources, distributed system state, and no uniform time base. The system design problem is how to turn the heterogeneous base hardware/firmware/software resources of this environment into a coherent set of resources that facilitate development of cost effective, reliable, and human engineered applications. We believe the answer lies in developing a layered, communication-oriented distributed system architecture; layered and modular to support ease of understanding, reconfiguration, extensibility, and hiding of implementation or nonessential local details; communication oriented because that is a central feature of the environment. The Livermore Interactive Network Communication System (LINCS) is a hierarchical architecture designed to meet the above needs. While having characteristics in common with other architectures, it differs in several respects.
Pressure Distribution and Air Data System for the Aeroassist Flight Experiment
NASA Technical Reports Server (NTRS)
Gibson, Lorelei S.; Siemers, Paul M., III; Kern, Frederick A.
1989-01-01
The Aeroassist Flight Experiment (AFE) is designed to provide critical flight data necessary for the design of future Aeroassist Space Transfer Vehicles (ASTV). This flight experiment will provide aerodynamic, aerothermodynamic, and environmental data for verification of experimental and computational flow field techniques. The Pressure Distribution and Air Data System (PD/ADS), one of the measurement systems incorporated into the AFE spacecraft, is designed to provide accurate pressure measurements on the windward surface of the vehicle. These measurements will be used to determine the pressure distribution and air data parameters (angle of attack, angle of sideslip, and free-stream dynamic pressure) encountered by the blunt-bodied vehicle over an altitude range of 76.2 km to 94.5 km. Design and development data are presented and include: measurement requirements, measurement heritage, theoretical studies to define the vehicle environment, flush-mounted orifice configuration, pressure transducer selection and performance evaluation data, and pressure tubing response analysis.
Neuroimaging Study Designs, Computational Analyses and Data Provenance Using the LONI Pipeline
Dinov, Ivo; Lozev, Kamen; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Pierce, Jonathan; Zamanyan, Alen; Chakrapani, Shruthi; Van Horn, John; Parker, D. Stott; Magsipoc, Rico; Leung, Kelvin; Gutman, Boris; Woods, Roger; Toga, Arthur
2010-01-01
Modern computational neuroscience employs diverse software tools and multidisciplinary expertise to analyze heterogeneous brain data. The classical problems of gathering meaningful data, fitting specific models, and discovering appropriate analysis and visualization tools give way to a new class of computational challenges—management of large and incongruous data, integration and interoperability of computational resources, and data provenance. We designed, implemented and validated a new paradigm for addressing these challenges in the neuroimaging field. Our solution is based on the LONI Pipeline environment [3], [4], a graphical workflow environment for constructing and executing complex data processing protocols. We developed study-design, database and visual language programming functionalities within the LONI Pipeline that enable the construction of complete, elaborate and robust graphical workflows for analyzing neuroimaging and other data. These workflows facilitate open sharing and communication of data and metadata, concrete processing protocols, result validation, and study replication among different investigators and research groups. The LONI Pipeline features include distributed grid-enabled infrastructure, virtualized execution environment, efficient integration, data provenance, validation and distribution of new computational tools, automated data format conversion, and an intuitive graphical user interface. We demonstrate the new LONI Pipeline features using large scale neuroimaging studies based on data from the International Consortium for Brain Mapping [5] and the Alzheimer's Disease Neuroimaging Initiative [6]. User guides, forums, instructions and downloads of the LONI Pipeline environment are available at http://pipeline.loni.ucla.edu. PMID:20927408
System For Research On Multiple-Arm Robots
NASA Technical Reports Server (NTRS)
Backes, Paul G.; Hayati, Samad; Tso, Kam S.; Hayward, Vincent
1991-01-01
The Kali system of computer programs and equipment provides an environment for research on distributed programming and distributed control of coordinated multiple-arm robots. It is suitable for telerobotics research involving sensing and execution of low-level tasks. The software and hardware configuration are designed to be flexible so that the system can easily be modified to test various concepts in the control and programming of robots, including multiple-arm control, redundant-arm control, shared control, traded control, force control, force/position hybrid control, design and integration of sensors, teleoperation, task-space description and control, methods of adaptive control, control of flexible arms, and human factors.
Electro-Mechanical Systems for Extreme Space Environments
NASA Technical Reports Server (NTRS)
Mojarradi, Mohammad M.; Tyler, Tony R.; Abel, Phillip B.; Levanas, Greg
2011-01-01
Exploration beyond low Earth orbit presents challenges for hardware that must operate in extreme environments. The current state of the art is to isolate and provide heating for sensitive hardware in order to survive. However, this protection results in penalties of weight and power for the spacecraft. This is particularly true for electro-mechanical based technology such as electronics, actuators and sensors. Especially when considering distributed electronics, many electro-mechanical systems need to be located in appendage-type locations, making it much harder to protect them from the extreme environments. The purpose of this paper is to describe the advances made in developing electro-mechanical technology that survives these environments with minimal protection. Over the last few years, the Jet Propulsion Laboratory (JPL), the Glenn Research Center (GRC), the Langley Research Center (LaRC), and Aeroflex, Inc. have worked to develop and test electro-mechanical hardware that will meet the stringent environmental demands of the Moon, and which can also be leveraged for other challenging space exploration missions. Prototype actuators and electronics have been built and tested. Brushless DC actuators designed by Aeroflex, Inc. have been tested with interface temperatures as low as 14 K. Testing of the Aeroflex design has shown that a brushless DC motor with a single-stage planetary gearbox can operate in low temperature environments for at least 120 million cycles (measured at the motor) if long life is considered as part of the design. A motor control distributed electronics concept developed by JPL was built and operated at temperatures as low as -160 C, with many components still operational down to -245 C. Testing identified the components not capable of meeting the low temperature goal of -230 C. This distributed controller is universal in design, with the ability to control different types of motors and read many different types of sensors. The controller form factor was designed to surround or be at the actuator. Communication with the slave controllers is accomplished by a bus, thus limiting the number of wires that must be routed to the extremity locations. Efforts have also been made to increase the power capability of these electronics to power and control actuators up to 2.5 kW while still meeting the environmental challenges. For commutation and control of the actuator, a resolver was integrated and tested with the actuator. Testing of this resolver demonstrated temperature limitations. Subsequent failure analysis isolated the low temperature failure mechanism, and a design solution was negotiated with the manufacturer. Several years of work have resulted in specialized electro-mechanical hardware that meets extreme space exploration environments, a test history that verifies and finds limitations of the designs, and a growing knowledge base that can be leveraged by future space exploration missions.
Live Virtual Constructive Distributed Test Environment Characterization Report
NASA Technical Reports Server (NTRS)
Murphy, Jim; Kim, Sam K.
2013-01-01
This report documents message latencies observed over various Live, Virtual, Constructive (LVC) simulation environment configurations designed to emulate possible system architectures for the Unmanned Aircraft Systems (UAS) Integration in the National Airspace System (NAS) Project integrated tests. For each configuration, four scenarios with progressively increasing air traffic loads were used to determine system throughput and bandwidth impacts on message latency.
Combatting Inherent Vulnerabilities of CFAR Algorithms and a New Robust CFAR Design
1993-09-01
…elements of any automatic radar system. Unfortunately, CFAR systems are inherently vulnerable to degradation caused by large clutter edges, multiple targets, and electronic countermeasures (ECM) environments. This thesis presents eight popular and studied…
Dynamic Distribution and Layouting of Model-Based User Interfaces in Smart Environments
NASA Astrophysics Data System (ADS)
Roscher, Dirk; Lehmann, Grzegorz; Schwartze, Veit; Blumendorf, Marco; Albayrak, Sahin
Developments in computer technology over the last decade have changed the ways computers are used. The emerging smart environments make it possible to build ubiquitous applications that assist users during their everyday life, at any time, in any context. But the variety of contexts-of-use (user, platform and environment) makes the development of such ubiquitous applications for smart environments, and especially their user interfaces, a challenging and time-consuming task. We propose a model-based approach which allows adapting the user interface at runtime to numerous (also unknown) contexts-of-use. Based on a user interface modelling language defining the fundamentals and constraints of the user interface, a runtime architecture exploits the description to adapt the user interface to the current context-of-use. The architecture provides automatic distribution and layout algorithms for adapting the applications also to contexts unforeseen at design time. Designers do not specify predefined adaptations for each specific situation, but adaptation constraints and guidelines. Furthermore, users are provided with a meta user interface to influence the adaptations according to their needs. A smart home energy management system serves as a running example to illustrate the approach.
An approach to a real-time distribution system
NASA Technical Reports Server (NTRS)
Kittle, Frank P., Jr.; Paddock, Eddie J.; Pocklington, Tony; Wang, Lui
1990-01-01
The requirements of a real-time data distribution system are to provide fast, reliable delivery of data from source to destination with little or no impact on the data source. In this particular case, the data sources are inside an operational environment, the Mission Control Center (MCC), and any workstation receiving data directly from the operational computer must conform to the software standards of the MCC. In order to supply data to development workstations outside of the MCC, it is necessary to use gateway computers that prevent unauthorized data transfer back to the operational computers. Many software programs produced on the development workstations are targeted for real-time operation. Therefore, these programs must migrate from the development workstation to the operational workstation. Another requirement of the data distribution system is to ensure a smooth transition of the data interfaces for application developers. A standard data interface model has already been set up for the operational environment, so the interface between the distribution system and the application software was developed to match that model as closely as possible. The system as a whole therefore allows the rapid development of real-time applications without impacting the data sources. In summary, this approach to a real-time data distribution system provides development users outside of the MCC with an interface to MCC real-time data sources. In addition, the data interface was developed with a flexible and portable software design. This design allows for the smooth transition of new real-time applications to the MCC operational environment.
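As a rough illustration of the gateway idea (strictly one-way forwarding, with no reverse path back to the operational side), consider this minimal sketch. The addresses, ports, and UDP transport are assumptions made for illustration, not details taken from the MCC system:

    # Minimal one-way gateway sketch: relay datagrams from an operational
    # source to development hosts; nothing is ever sent back upstream.
    import socket

    SOURCE_ADDR = ("127.0.0.1", 9001)   # operational data source (assumed)
    DEV_ADDRS = [("127.0.0.1", 9002)]   # development workstations (assumed)

    inbound = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    inbound.bind(SOURCE_ADDR)
    outbound = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    while True:
        packet, _ = inbound.recvfrom(65535)  # receive telemetry
        for dev in DEV_ADDRS:
            outbound.sendto(packet, dev)     # forward; no reverse path exists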
NASA Technical Reports Server (NTRS)
Kamhawi, Hilmi N.
2013-01-01
This report documents the work performed during the period from May 2011 - October 2012 on the Integrated Design and Engineering Analysis (IDEA) environment. IDEA is a collaborative environment based on an object-oriented, multidisciplinary, distributed framework using the Adaptive Modeling Language (AML). This report focuses on describing the work done in the areas of: (1) integrating propulsion data (turbines, rockets, and scramjets) into the system and using the data to perform trajectory analysis; (2) developing a parametric packaging strategy for hypersonic air-breathing vehicles, allowing for tank resizing when multiple fuels and/or oxidizers are part of the configuration; and (3) vehicle scaling and closure strategies.
Mars mission science operations facilities design
NASA Technical Reports Server (NTRS)
Norris, Jeffrey S.; Wales, Roxana; Powell, Mark W.; Backes, Paul G.; Steinke, Robert C.
2002-01-01
A variety of designs for Mars rover and lander science operations centers are discussed in this paper, beginning with a brief description of the Pathfinder science operations facility and its strengths and limitations. Particular attention is then paid to lessons learned in the design and use of operations facilities for a series of mission-like field tests of the FIDO prototype Mars rover. These lessons are then applied to a proposed science operations facilities design for the 2003 Mars Exploration Rover (MER) mission. Issues discussed include equipment selection, facilities layout, collaborative interfaces, scalability, and dual-purpose environments. The paper concludes with a discussion of advanced concepts for future mission operations centers, including collaborative immersive interfaces and distributed operations. This paper's intended audience includes operations facility and situation room designers and the users of these environments.
Statistical, Graphical, and Learning Methods for Sensing, Surveillance, and Navigation Systems
2016-06-28
harsh propagation environments. Conventional filtering techniques fail to provide satisfactory performance in many important nonlinear or non...Gaussian scenarios. In addition, there is a lack of a unified methodology for the design and analysis of different filtering techniques. To address...these problems, we have proposed a new filtering methodology called belief condensation (BC) DISTRIBUTION A: Distribution approved for public release
A novel resource sharing algorithm based on distributed construction for radiant enclosure problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finzell, Peter; Bryden, Kenneth M.
2017-03-06
This study demonstrates a novel approach to solving inverse radiant enclosure problems based on distributed construction. Specifically, the problem of determining the temperature distribution needed on the heater surfaces to achieve a desired design surface temperature profile is recast as a distributed construction problem in which a shared resource, temperature, is distributed by computational agents moving blocks. The sharing of blocks between agents enables them to achieve their desired local state, which in turn achieves the desired global state. Each agent uses the current state of its local environment and a simple set of rules to determine when to exchange blocks, each block representing a discrete unit of temperature change. This algorithm is demonstrated using the established two-dimensional inverse radiation enclosure problem. The temperature profile on the heater surfaces is adjusted to achieve a desired temperature profile on the design surfaces. The resource sharing algorithm was able to determine the needed temperatures on the heater surfaces to obtain the desired temperature distribution on the design surfaces in the nine cases examined.
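A toy version of the block-exchange rule can make the idea concrete. The sketch below is an illustration under simplifying assumptions (a one-dimensional chain of agents, a fixed block size, purely local trades); it is not the paper's algorithm, whose agents act on a radiant enclosure model:

    # Each agent compares its local excess (temperature minus target) with its
    # neighbor's and passes one discrete block toward the larger deficit.
    BLOCK = 1.0  # one block = one discrete unit of temperature change

    def share(temps, desired, max_iters=1000):
        temps = list(temps)
        for _ in range(max_iters):
            moved = False
            for i in range(len(temps) - 1):
                left = temps[i] - desired[i]          # local excess of agent i
                right = temps[i + 1] - desired[i + 1]
                if left - right >= 2 * BLOCK:         # pass a block rightward
                    temps[i] -= BLOCK; temps[i + 1] += BLOCK; moved = True
                elif right - left >= 2 * BLOCK:       # pass a block leftward
                    temps[i + 1] -= BLOCK; temps[i] += BLOCK; moved = True
            if not moved:                             # no agent wants to trade
                break
        return temps

    # -> [5.0, 4.0, 3.0]: within one block per neighbor pair of the target
    print(share([10.0, 0.0, 2.0], [4.0, 4.0, 4.0]))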
NASA Astrophysics Data System (ADS)
Gerber, S.; Holsman, J. P.
1981-02-01
A design analysis is presented of a proposed passive solar, energy-efficient system for a typical three-level, three-bedroom, two-story, garage-under townhouse. The design incorporates the best, most performance-proven and cost-effective products, materials, processes, technologies, and subsystems which are available today. Seven distinct categories are identified for analysis: the exterior environment; the interior environment; conservation of energy; natural energy utilization; auxiliary energy utilization; control and distribution systems; and occupant adaptation. Preliminary design features, fenestration systems, the plenum supply system, the thermal storage party fire walls, direct gain storage, the radiant comfort system, and direct passive cooling systems are briefly described.
A Distributed Simulation Software System for Multi-Spacecraft Missions
NASA Technical Reports Server (NTRS)
Burns, Richard; Davis, George; Cary, Everett
2003-01-01
The paper will provide an overview of the web-based distributed simulation software system developed for end-to-end, multi-spacecraft mission design, analysis, and test at the NASA Goddard Space Flight Center (GSFC). This software system was developed for an internal research and development (IR&D) activity at GSFC called the Distributed Space Systems (DSS) Distributed Synthesis Environment (DSE). The long-term goal of the DSS-DSE is to integrate existing GSFC stand-alone test beds, models, and simulation systems to create a "hands on", end-to-end simulation environment for mission design, trade studies and simulations. The short-term goal of the DSE was therefore to develop the system architecture, and then to prototype the core software simulation capability based on a distributed computing approach, with demonstrations of some key capabilities by the end of Fiscal Year 2002 (FY02). To achieve the DSS-DSE IR&D objective, the team adopted a reference model and mission upon which FY02 capabilities were developed. The software was prototyped according to the reference model, and demonstrations were conducted for the reference mission to validate interfaces, concepts, etc. The reference model, illustrated in Fig. 1, included both space and ground elements, with functional capabilities such as spacecraft dynamics and control, science data collection, space-to-space and space-to-ground communications, mission operations, science operations, and data processing, archival and distribution addressed.
A Distributed Control System Prototyping Environment to Support Control Room Modernization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lew, Roger Thomas; Boring, Ronald Laurids; Ulrich, Thomas Anthony
Operators of critical processes, such as nuclear power production, must contend with highly complex systems, procedures, and regulations. Developing human-machine interfaces (HMIs) that better support operators is a high priority for ensuring the safe and reliable operation of critical processes. Human factors engineering (HFE) provides a rich and mature set of tools for evaluating the performance of HMIs; however, the set of tools for developing and designing HMIs is still in its infancy. Here we propose a rapid prototyping approach for integrating proposed HMIs into their native environments before a design is finalized. This approach allows researchers and developers to test design ideas and eliminate design flaws prior to fully developing the new system. We illustrate this approach with four prototype designs developed using Microsoft's Windows Presentation Foundation (WPF). One example is integrated into a microworld environment to test the functionality of the design and identify the optimal level of automation for a new system in a nuclear power plant. The other three examples are integrated into a full-scale, glasstop digital simulator of a nuclear power plant. One example demonstrates the capabilities of next generation control concepts; another aims to expand the current state of the art; lastly, an HMI prototype was developed as a test platform for a new control system currently in development at U.S. nuclear power plants. WPF possesses several characteristics that make it well suited to HMI design. It provides a tremendous amount of flexibility, agility, robustness, and extensibility. Distributed control system (DCS) specific environments tend to focus on the safety and reliability requirements for real-world interfaces and consequently have less emphasis on providing functionality to support novel interaction paradigms. Because of WPF's large user base, Microsoft can provide an extremely mature tool. Within process control applications, WPF is platform independent and can communicate with popular full-scope process control simulator vendor plant models and DCS platforms.
Engineering High Assurance Distributed Cyber Physical Systems
2015-01-15
…decisions: number of interacting agents and co-dependent decisions made in real time without causing interference. To engineer a high assurance DART… environment specification, architecture definition, domain-specific languages, design patterns, code generation, analysis, test generation, and simulation… include synchronization between the models and source code, debugging at the model level, expression of the design intent, and quality of service…
ERIC Educational Resources Information Center
Lin, Zhiang; Carley, Kathleen
How should organizations of intelligent agents be designed so that they exhibit high performance even during periods of stress? A formal model of organizational performance given a distributed decision-making environment in which agents encounter a radar detection task is presented. Using this model the performance of organizations with various…
ERIC Educational Resources Information Center
McKenna, Ann F.; Hynes, Morgan M.; Johnson, Amy M.; Carberry, Adam R.
2016-01-01
Product archaeology as an educational approach asks engineering students to consider and explore the broader societal and global impacts of a product's manufacturing, distribution, use, and disposal on people, economics, and the environment. This study examined the impact of product archaeology in a project-based engineering design course on…
NASA Technical Reports Server (NTRS)
Edwards, David L.; Cooke, William; Suggs, Rob; Moser, Danielle E.
2008-01-01
The National Aeronautics and Space Administration (NASA) is progressing toward long-term lunar habitation. Critical to the design of a lunar habitat is an understanding of the lunar surface environment; of specific importance is the primary meteoroid and subsequent ejecta environment. The document NASA SP-8013 was developed for the Apollo program and is the latest definition of the ejecta environment. There is concern that NASA SP-8013 may over-estimate the lunar ejecta environment. NASA's Meteoroid Environment Office (MEO) has initiated several tasks to improve the accuracy of our understanding of the lunar surface ejecta environment. This paper reports the results of experiments on projectile impact into powdered pumice and unconsolidated JSC-1A Lunar Mare Regolith simulant (JSC-1A) targets. The Ames Vertical Gun Range (AVGR) was used to accelerate projectiles to velocities in excess of 5 km/s and impact the targets at normal incidence. The ejected particles were detected by thin aluminum foil targets placed around the impact site, and angular distributions were determined for the ejecta. A comparison of the ejecta angular distribution with previous works will be presented. A simple technique to characterize the ejected particles was formulated, and improvements to this technique will be discussed for implementation in future tests.
Multidisciplinary High-Fidelity Analysis and Optimization of Aerospace Vehicles. Part 1; Formulation
NASA Technical Reports Server (NTRS)
Walsh, J. L.; Townsend, J. C.; Salas, A. O.; Samareh, J. A.; Mukhopadhyay, V.; Barthelemy, J.-F.
2000-01-01
An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity, finite element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes the engineering aspects of formulating the optimization by integrating these analysis codes and associated interface codes into the system. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture (CORBA) compliant software product. A companion paper presents currently available results.
NASA Technical Reports Server (NTRS)
Walsh, J. L.; Weston, R. P.; Samareh, J. A.; Mason, B. H.; Green, L. L.; Biedron, R. T.
2000-01-01
An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity finite-element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes both the preliminary results from implementing and validating the multidisciplinary analysis and the results from an aerodynamic optimization. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture compliant software product. A companion paper describes the formulation of the multidisciplinary analysis and optimization system.
An Integrated Modeling and Simulation Methodology for Intelligent Systems Design and Testing
2002-08-01
…simulation and actual execution. KEYWORDS: Model Continuity, Modeling, Simulation, Experimental Frame, Real Time Systems, Intelligent Systems… the methodology for a stand-alone real time system. Then it will scale up to distributed real time systems. For both systems, step-wise simulation… MODEL CONTINUITY: Intelligent real time systems monitor, respond to, or control an external environment. This environment is connected to the digital…
NASA Technical Reports Server (NTRS)
Blumberg, F. C.; Reedy, A.; Yodis, E.
1986-01-01
For the past two years, PRC has been transporting and installing a software engineering environment framework, the Automated Product Control Environment (APCE), at a number of PRC and government sites on a variety of different hardware. The APCE was designed using a layered architecture which is based on a standardized set of interfaces to host system services. This interface set, called the APCE Interface Set (AIS), was designed to support many of the same goals as the Common Ada Programming Support Environment (APSE) Interface Set (CAIS). The APCE was developed to provide support for the full software lifecycle. Specific requirements of the APCE design included: automation of labor-intensive administrative and logistical tasks; freedom for project team members to use existing tools; maximum transportability for APCE programs, interoperability of APCE database data, and distributability of both processes and data; and maximum performance on a wide variety of operating systems. This paper gives a brief description of the APCE and AIS, a comparison of the AIS and CAIS in terms of both functionality and philosophy and approach, and a presentation of PRC's experience in rehosting the AIS and transporting APCE programs and project data. Conclusions are drawn from this experience with respect to both the CAIS efforts and Space Station plans.
The structure of the clouds distributed operating system
NASA Technical Reports Server (NTRS)
Dasgupta, Partha; Leblanc, Richard J., Jr.
1989-01-01
A novel system architecture, based on the object model, is the central structuring concept used in the Clouds distributed operating system. This architecture makes Clouds attractive over a wide class of machines and environments. Clouds is a native operating system, designed and implemented at Georgia Tech, and runs on a set of general-purpose computers connected via a local area network. The system architecture of Clouds is composed of a system-wide global set of persistent (long-lived) virtual address spaces, called objects, that contain persistent data and code. The object concept is implemented at the operating system level, thus presenting a single-level storage view to the user. Lightweight threads carry computational activity through the code stored in the objects. The persistent objects and threads give rise to a programming environment composed of shared permanent memory, dispensing with the need for hardware-derived concepts such as file systems and message systems. Though the hardware may be distributed and may have disks and networks, Clouds provides applications with a logically centralized system based on a shared, structured, single-level store. The current design of Clouds uses a minimalist philosophy with respect to both the kernel and the operating system. That is, the kernel and the operating system support a bare minimum of functionality. Clouds also adheres to the concept of separation of policy and mechanism. Most low-level operating system services are implemented above the kernel, and most high-level services are implemented at the user level. From the measured performance of the kernel mechanisms, we are able to demonstrate that efficient implementations of the object model are feasible on commercially available hardware. Clouds provides a rich environment for conducting research in distributed systems. Some of the topics addressed in this paper include distributed programming environments, consistency of persistent data, and fault tolerance.
A Latency-Tolerant Partitioner for Distributed Computing on the Information Power Grid
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biwas, Rupak; Kwak, Dochan (Technical Monitor)
2001-01-01
NASA's Information Power Grid (IPG) is an infrastructure designed to harness the power of geographically distributed computers, databases, and human expertise in order to solve large-scale realistic computational problems. This type of meta-computing environment is necessary to present a unified virtual machine to application developers that hides the intricacies of a highly heterogeneous environment and yet maintains adequate security. In this paper, we present a novel partitioning scheme, called MinEX, that dynamically balances processor workloads while minimizing data movement and runtime communication for applications that are executed in a parallel distributed fashion on the IPG. We also analyze the conditions that are required for the IPG to be an effective tool for such distributed computations. Our results show that MinEX is a viable load balancer provided the nodes of the IPG are connected by a high-speed asynchronous interconnection network.
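To make the load-balancing objective concrete, here is a deliberately simple greedy sketch (a longest-processing-time assignment). It is not the MinEX algorithm, which additionally accounts for data movement and communication latency; the weights and node count are made up:

    # Greedily assign weighted tasks, heaviest first, to the currently
    # least-loaded node; a classic baseline for workload balancing.
    import heapq

    def balance(task_weights, n_nodes):
        heap = [(0.0, node) for node in range(n_nodes)]  # (load, node id)
        heapq.heapify(heap)
        assignment = {}
        for task, w in sorted(enumerate(task_weights), key=lambda t: -t[1]):
            load, node = heapq.heappop(heap)   # least-loaded node so far
            assignment[task] = node
            heapq.heappush(heap, (load + w, node))
        return assignment

    print(balance([5, 3, 8, 2, 7, 1], n_nodes=3))  # loads end up 9, 9, 8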
Pigeon interaction mode switch-based UAV distributed flocking control under obstacle environments.
Qiu, Huaxin; Duan, Haibin
2017-11-01
Unmanned aerial vehicle (UAV) flocking control is a serious and challenging problem due to local interactions and changing environments. In this paper, a pigeon flocking model and a pigeon coordinated obstacle-avoiding model are proposed, based on the observation that pigeon flocks switch between hierarchical and egalitarian interaction modes at different flight phases. Owing to the essential similarity between bird flocks and UAV swarms, a distributed flocking control algorithm based on the proposed pigeon flocking and coordinated obstacle-avoiding models is designed to coordinate a heterogeneous UAV swarm to fly through obstacle environments with few informed individuals. Comparative simulation results are elaborated to show the feasibility, validity and superiority of the proposed algorithm.
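A minimal sketch of the interaction-mode switch follows; it is an illustration only (global neighborhood, a single blend step, hypothetical weights) and does not reproduce the paper's models, which also cover coordinated obstacle avoidance:

    # Each UAV blends its own velocity with a consensus vector. In egalitarian
    # mode all flockmates weigh equally; in hierarchical mode a leader dominates.
    def step(velocities, mode, leader=0, w_leader=4.0, blend=0.5):
        n = len(velocities)
        if mode == "hierarchical":
            weights = [w_leader if i == leader else 1.0 for i in range(n)]
        else:  # egalitarian: equal weights
            weights = [1.0] * n
        total = sum(weights)
        cx = sum(w * v[0] for w, v in zip(weights, velocities)) / total
        cy = sum(w * v[1] for w, v in zip(weights, velocities)) / total
        return [((1 - blend) * vx + blend * cx, (1 - blend) * vy + blend * cy)
                for vx, vy in velocities]

    v = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
    print(step(v, "egalitarian"))   # everyone drifts toward the flock average
    print(step(v, "hierarchical"))  # everyone drifts toward the leader's heading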
Design of supply chain in fuzzy environment
NASA Astrophysics Data System (ADS)
Rao, Kandukuri Narayana; Subbaiah, Kambagowni Venkata; Singh, Ganja Veera Pratap
2013-05-01
Nowadays, customer expectations are increasing and organizations are prone to operate in an uncertain environment. Under this uncertain environment, the ultimate success of the firm depends on its ability to integrate business processes among supply chain partners. Supply chain management emphasizes cross-functional links to improve the competitive strategy of organizations. Now, companies are moving from decoupled decision processes towards more integrated design and control of their components to achieve the strategic fit. In this paper, a new approach is developed to design a multi-echelon, multi-facility, and multi-product supply chain in a fuzzy environment. A mixed-integer programming problem is formulated through fuzzy goal programming at the strategic level, with supply chain cost and volume flexibility as fuzzy goals. These fuzzy goals are aggregated using the minimum operator. At the tactical level, a continuous review policy for controlling raw material inventories in the supplier echelon and finished product inventories in the plant as well as distribution center echelons is considered through fuzzy goals. A non-linear programming model is formulated through fuzzy goal programming using the minimum operator at the tactical level. The proposed approach is illustrated with a numerical example.
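For reference, the minimum-operator aggregation used in fuzzy goal programming is conventionally written as the max-min problem below (a textbook formulation, not an equation reproduced from the paper):

    \max \; \lambda \quad \text{subject to} \quad \mu_k(x) \ge \lambda, \quad k = 1, \dots, K, \qquad 0 \le \lambda \le 1, \quad x \in X,

where $\mu_k(x)$ is the membership (satisfaction) degree of the $k$-th fuzzy goal, such as supply chain cost or volume flexibility, and $X$ is the crisp constraint set; maximizing $\lambda$ maximizes the satisfaction of the least-satisfied goal.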
On the Design of Smart Homes: A Framework for Activity Recognition in Home Environment.
Cicirelli, Franco; Fortino, Giancarlo; Giordano, Andrea; Guerrieri, Antonio; Spezzano, Giandomenico; Vinci, Andrea
2016-09-01
A smart home is a home environment enriched with sensing, actuation, communication and computation capabilities, which permits adapting it to inhabitants' preferences and requirements. Establishing a proper strategy of actuation on the home environment can require complex computational tasks on the sensed data. This is the case of activity recognition, which consists in retrieving high-level knowledge about what occurs in the home environment and about the behaviour of the inhabitants. The inherent complexity of this application domain calls for tools able to properly support the design and implementation phases. This paper proposes a framework for the design and implementation of smart home applications focused on activity recognition in home environments. The framework mainly relies on the Cloud-assisted Agent-based Smart home Environment (CASE) architecture, which offers basic abstraction entities that allow smart home applications to be easily designed and implemented. CASE is a three-layered architecture which exploits the distributed multi-agent paradigm and cloud technology for offering analytics services. Details are supplied about how to implement activity recognition on the CASE architecture, focusing on the low-level technological issues as well as the algorithms and methodologies useful for activity recognition. The effectiveness of the framework is shown through a case study consisting of daily activity recognition of a person in a home environment.
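As a toy illustration of what an activity-recognition analytics service consumes and produces, consider the sliding-window rule matcher below. The sensor names and rules are invented for illustration; CASE's actual algorithms are not reproduced here:

    # Map the most recent sensor events to a coarse activity label.
    def recognize(events):
        window = set(events[-3:])  # last three sensor events
        if {"stove_on", "kitchen_motion"} <= window:
            return "cooking"
        if {"bed_pressure", "lights_off"} <= window:
            return "sleeping"
        return "unknown"

    print(recognize(["door_open", "kitchen_motion", "stove_on"]))  # cooking
    print(recognize(["lights_off", "tv_off", "bed_pressure"]))     # sleeping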
Analysis of Regolith Simulant Ejecta Distributions from Normal Incident Hypervelocity Impact
NASA Technical Reports Server (NTRS)
Edwards, David L.; Cooke, William; Suggs, Rob; Moser, Danielle E.
2008-01-01
The National Aeronautics and Space Administration (NASA) has established the Constellation Program. The Constellation Program has defined one of its many goals as long-term lunar habitation. Critical to the design of a lunar habitat is an understanding of the lunar surface environment; of specific importance is the primary meteoroid and subsequent ejecta environment. The document NASA SP-8013, 'Meteoroid Environment Model Near Earth to Lunar Surface', was developed for the Apollo program in 1969 and contains the latest definition of the lunar ejecta environment. There is concern that NASA SP-8013 may over-estimate the lunar ejecta environment. NASA's Meteoroid Environment Office (MEO) has initiated several tasks to improve the accuracy of our understanding of the lunar surface ejecta environment. This paper reports the results of experiments on projectile impact into powdered pumice and unconsolidated JSC-1A Lunar Mare Regolith simulant targets. Projectiles were accelerated to velocities between 2.45 and 5.18 km/s at normal incidence using the Ames Vertical Gun Range (AVGR). The ejected particles were detected by thin aluminum foil targets strategically placed around the impact site, and angular ejecta distributions were determined. The analysis assumes spherical symmetry of the ejecta resulting from normal impact and that all ejecta particles are of the mean target particle size. This analysis produces a hemispherical flux density distribution of ejecta with sufficient velocity to penetrate the aluminum foil detectors.
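Under those two assumptions (azimuthal symmetry about the normal impact and a single mean particle size), reducing foil counts to a per-steradian density amounts to dividing each elevation band's count by its solid angle. A minimal sketch, with made-up counts and band edges, follows:

    # Convert penetration counts per elevation band into counts per steradian.
    import math

    def per_steradian(counts_by_band):
        # counts_by_band: (theta_lo_deg, theta_hi_deg, count), angles from normal
        out = []
        for lo, hi, n in counts_by_band:
            # solid angle of the annular band between the two elevation angles
            omega = 2 * math.pi * (math.cos(math.radians(lo))
                                   - math.cos(math.radians(hi)))
            out.append((lo, hi, n / omega))
        return out

    print(per_steradian([(0, 30, 12), (30, 60, 40), (60, 85, 95)]))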
Distributed fiber optic system for oil pipeline leakage detection
NASA Astrophysics Data System (ADS)
Paranjape, R.; Liu, N.; Rumple, C.; Hara, Elmer H.
2003-02-01
We present a novel approach for the detection of leakage in oil pipelines using fiber optic distributed sensors, a presence-of-oil based actuator, and Optical Time Domain Reflectometry (OTDR). While the basic concepts of our approach are well understood, the integration of the components into a complete system is a real-world engineering design problem. Our focus has been on the development and testing of the actuator design using installed dark fiber. Initial results are promising; however, studies of the long-term effects of exposure to the environment are still pending.
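For context, an OTDR locates a loss event along the fiber from the round-trip time of the backscattered light; the standard relation (general OTDR background, not a result from this paper) is:

    z = \frac{c \, \Delta t}{2 n_g}

where $z$ is the distance to the event, $c$ the vacuum speed of light, $\Delta t$ the time between pulse launch and the returned signature, and $n_g$ the fiber's group index; the factor of 2 accounts for the round trip. An oil-triggered actuator that introduces a localized bend loss would thus appear as a step in the OTDR trace at the leak location.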
Design and fabrication of Brayton cycle solar heat receiver
NASA Technical Reports Server (NTRS)
Mendelson, I.
1971-01-01
A detailed design and fabrication of a solar heat receiver using lithium fluoride as the heat storage material was completed. A gas flow analysis was performed to achieve uniform flow distribution within overall pressure drop limitations. Structural analyses and allowable design criteria were developed for anticipated environments such as launch, pressure containment, and thermal cycling. A complete heat receiver assembly was fabricated almost entirely from the refractory alloy niobium-1% zirconium.
Integration of a CAD System Into an MDO Framework
NASA Technical Reports Server (NTRS)
Townsend, J. C.; Samareh, J. A.; Weston, R. P.; Zorumski, W. E.
1998-01-01
NASA Langley has developed a heterogeneous distributed computing environment, called the Framework for Inter-disciplinary Design Optimization, or FIDO. Its purpose has been to demonstrate framework technical feasibility and usefulness for optimizing the preliminary design of complex systems and to provide a working environment for testing optimization schemes. Its initial implementation has been for a simplified model of preliminary design of a high-speed civil transport. Upgrades being considered for the FIDO system include a more complete geometry description, required by high-fidelity aerodynamics and structures codes and based on a commercial Computer Aided Design (CAD) system. This report presents the philosophy behind some of the decisions that have shaped the FIDO system and gives a brief case study of the problems and successes encountered in integrating a CAD system into the FIDO framework.
ERIC Educational Resources Information Center
O'Connor, Eileen
2013-01-01
With the advent of web 2.0 and virtual technologies and new understandings about learning within a global, networked environment, online course design has moved beyond the constraints of text readings, papers, and discussion boards. This next generation of online courses needs to dynamically and actively integrate the wide-ranging distribution of…
Performance prediction evaluation of ceramic materials in point-focusing solar receivers
NASA Technical Reports Server (NTRS)
Ewing, J.; Zwissler, J.
1979-01-01
A performance prediction methodology was adapted to evaluate the use of ceramic materials in solar receivers for point-focusing distributed applications. System requirements were determined, including the receiver operating environment and system operating parameters for various engine types. Preliminary receiver designs were evolved from these system requirements. Specific receiver designs were then evaluated to determine material functional requirements.
NASA Technical Reports Server (NTRS)
Braun, R. D.; Kroo, I. M.
1995-01-01
Collaborative optimization is a design architecture applicable in any multidisciplinary analysis environment but specifically intended for large-scale distributed analysis applications. In this approach, a complex problem is hierarchically decomposed along disciplinary boundaries into a number of subproblems which are brought into multidisciplinary agreement by a system-level coordination process. When applied to problems in a multidisciplinary design environment, this scheme has several advantages over traditional solution strategies. These advantageous features include a reduction in the amount of information transferred between disciplines, the removal of large iteration loops, the ability to use different subspace optimizers among the various analysis groups, an analysis framework that is easily parallelized and can operate on heterogeneous equipment, and a structural framework that is well-suited to conventional disciplinary organizations. In this article, the collaborative architecture is developed and its mathematical foundation is presented. An example application is also presented which highlights the potential of this method for use in large-scale design applications.
NASA Technical Reports Server (NTRS)
Davis, George; Cary, Everett; Higinbotham, John; Burns, Richard; Hogie, Keith; Hallahan, Francis
2003-01-01
The paper will provide an overview of the web-based distributed simulation software system developed for end-to-end, multi-spacecraft mission design, analysis, and test at the NASA Goddard Space Flight Center (GSFC). This software system was developed for an internal research and development (IR&D) activity at GSFC called the Distributed Space Systems (DSS) Distributed Synthesis Environment (DSE). The long-term goal of the DSS-DSE is to integrate existing GSFC stand-alone test beds, models, and simulation systems to create a "hands on", end-to-end simulation environment for mission design, trade studies and simulations. The short-term goal of the DSE was therefore to develop the system architecture, and then to prototype the core software simulation capability based on a distributed computing approach, with demonstrations of some key capabilities by the end of Fiscal Year 2002 (FY02). To achieve the DSS-DSE IR&D objective, the team adopted a reference model and mission upon which FY02 capabilities were developed. The software was prototyped according to the reference model, and demonstrations were conducted for the reference mission to validate interfaces, concepts, etc. The reference model, illustrated in Fig. 1, included both space and ground elements, with functional capabilities such as spacecraft dynamics and control, science data collection, space-to-space and space-to-ground communications, mission operations, science operations, and data processing, archival and distribution addressed.
Executing CLIPS expert systems in a distributed environment
NASA Technical Reports Server (NTRS)
Taylor, James; Myers, Leonard
1990-01-01
This paper describes a framework for running cooperating agents in a distributed environment to support the Intelligent Computer Aided Design System (ICADS), a project in progress at the CAD Research Unit of the Design Institute at the California Polytechnic State University. Currently, the system aids an architectural designer in creating a floor plan that satisfies some general architectural constraints and project-specific requirements. At the core of ICADS is the Blackboard Control System. Connected to the blackboard are any number of domain experts called Intelligent Design Tools (IDT). The Blackboard Control System monitors the evolving design as it is being drawn and helps resolve conflicts among the domain experts. The user serves as a partner in this system by manipulating the floor plan in the CAD system and validating recommendations made by the domain experts. The primary components of the Blackboard Control System are two expert systems executed by a modified CLIPS shell. The first is the Message Handler. The second is the Conflict Resolver, which synthesizes the suggestions made by the domain experts; these can be either CLIPS expert systems or compiled C programs. In DEMO1, the current ICADS prototype, the CLIPS domain expert systems are Acoustics, Lighting, Structural, and Thermal; the compiled C domain experts are the CAD system and the User Interface.
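The blackboard pattern itself is easy to sketch. The following toy version, written in Python rather than CLIPS and using hypothetical expert rules invented for illustration, shows the shape of the expert/resolver loop; it is not ICADS code:

    # Domain experts post suggestions about a shared design; a resolver
    # applies them with a simple last-writer-wins rule.
    def acoustics(design):
        if design["wall_thickness"] < 0.2:
            return ("wall_thickness", 0.2)   # suggest a thicker wall

    def thermal(design):
        if design["window_area"] > 4.0:
            return ("window_area", 4.0)      # suggest smaller windows

    EXPERTS = [acoustics, thermal]

    def resolve(design):
        blackboard = [s for expert in EXPERTS if (s := expert(design))]
        for key, value in blackboard:        # naive conflict resolution
            design[key] = value
        return design

    print(resolve({"wall_thickness": 0.1, "window_area": 6.0}))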
The geomagnetically trapped radiation environment: A radiological point of view
NASA Technical Reports Server (NTRS)
Holly, F. E.
1972-01-01
The regions of naturally occurring, geomagnetically trapped radiation are briefly reviewed in terms of physical parameters such as particle types, fluxes, spectra, and spatial distributions. The major emphasis is placed upon a description of this environment in terms of the radiobiologically relevant parameters of absorbed dose and dose rate, and a discussion of the radiological implications in terms of the possible impact on space vehicle design and mission planning.
Dynamic Task Assignment of Autonomous Distributed AGV in an Intelligent FMS Environment
NASA Astrophysics Data System (ADS)
Fauadi, Muhammad Hafidz Fazli Bin Md; Lin, Hao Wen; Murata, Tomohiro
The need to implement distributed systems is growing significantly, as they have proven effective in keeping organizations flexible in a highly demanding market. Nevertheless, there are still large technical gaps that need to be addressed before significant achievements can be gained. We propose a distributed architecture for controlling Automated Guided Vehicle (AGV) operation based on a multi-agent architecture. System architectures and agents' functions have been designed to support distributed control of AGVs. Furthermore, an enhanced agent communication protocol has been configured to accommodate the dynamic attributes of the AGV task assignment procedure. Results proved that the technique successfully provides a better solution.
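A minimal sketch of one dynamic task-assignment round helps fix ideas: each AGV submits a travel-cost bid and the cheapest bidder wins the task. This illustrates the general bidding pattern only, with made-up positions and a Manhattan-distance cost; the paper's agent protocol is richer:

    # Award a transport task to the AGV with the lowest travel-cost bid.
    def award(task_pos, agv_positions):
        def cost(pos):  # Manhattan travel distance as a simple bid
            return abs(pos[0] - task_pos[0]) + abs(pos[1] - task_pos[1])
        bids = {agv: cost(p) for agv, p in agv_positions.items()}
        winner = min(bids, key=bids.get)
        return winner, bids

    # agv3 wins: it is one grid cell away from the task
    print(award((3, 4), {"agv1": (0, 0), "agv2": (5, 5), "agv3": (2, 4)}))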
Studying distributed cognition of simulation-based team training with DiCoT.
Rybing, Jonas; Nilsson, Heléne; Jonson, Carl-Oscar; Bang, Magnus
2016-03-01
Health care organizations employ simulation-based team training (SBTT) to improve skill, communication and coordination in a broad range of critical care contexts. Quantitative approaches, such as team performance measurements, are predominantly used to measure SBTT's effectiveness. However, a practical evaluation method that examines how this approach supports cognition and teamwork is missing. We have applied Distributed Cognition for Teamwork (DiCoT), a method for analysing cognition and collaboration aspects of work settings, with the purpose of assessing the methodology's usefulness for evaluating SBTTs. In a case study, we observed and analysed four Emergo Train System® simulation exercises where medical professionals trained emergency response routines. The study suggests that DiCoT is an applicable and learnable tool for determining key distributed cognition attributes of SBTTs that are of importance for the simulation validity of training environments. Moreover, we discuss and exemplify how DiCoT supports the design of SBTTs with a focus on transfer and validity characteristics. Practitioner Summary: In this study, we have evaluated a method to assess simulation-based team training environments from a cognitive ergonomics perspective. Using a case study, we analysed Distributed Cognition for Teamwork (DiCoT) by applying it to the Emergo Train System®. We conclude that DiCoT is useful for SBTT evaluation and simulator (re)design.
Electrical properties study under radiation of the 3D-open-shell-electrode detector
NASA Astrophysics Data System (ADS)
Liu, Manwen; Li, Zheng
2018-05-01
Since the 3D-Open-Shell-Electrode Detector (3DOSED) has been proposed and its structure optimized, it is important to study the 3DOSED's electrical properties to determine the detector's working performance, especially in heavy radiation environments such as the Large Hadron Collider (LHC) and its upgrade, the High Luminosity LHC (HL-LHC) at CERN. In this work, full 3D technology computer-aided design (TCAD) simulations have been performed on this novel silicon detector structure. Simulated detector properties include the electric field distribution, the electric potential distribution, current-voltage (I-V) characteristics, capacitance-voltage (C-V) characteristics, charge collection, and full depletion voltage. Through analysis of the calculations and simulation results, we find that the 3DOSED's electric field and potential distributions are very uniform, with only small perturbations even in the tiny region near the shell openings. The novel detector fits its design purpose of collecting charges generated by particles/light, with a well-defined funnel-shaped electric potential distribution that makes these charges drift towards the central collection electrode. Furthermore, by analyzing the I-V, C-V, charge collection, and full depletion voltage results, we can expect that the novel detector will perform well, even in heavy radiation environments.
Thermal insulating concrete wall panel design for sustainable built environment.
Zhou, Ao; Wong, Kwun-Wah; Lau, Denvid
2014-01-01
Air-conditioning system plays a significant role in providing users a thermally comfortable indoor environment, which is a necessity in modern buildings. In order to save the vast energy consumed by air-conditioning system, the building envelopes in envelope-load dominated buildings should be well designed such that the unwanted heat gain and loss with environment can be minimized. In this paper, a new design of concrete wall panel that enhances thermal insulation of buildings by adding a gypsum layer inside concrete is presented. Experiments have been conducted for monitoring the temperature variation in both proposed sandwich wall panel and conventional concrete wall panel under a heat radiation source. For further understanding the thermal effect of such sandwich wall panel design from building scale, two three-story building models adopting different wall panel designs are constructed for evaluating the temperature distribution of entire buildings using finite element method. Both the experimental and simulation results have shown that the gypsum layer improves the thermal insulation performance by retarding the heat transfer across the building envelopes.
Vaginal drug distribution modeling.
Katz, David F; Yuan, Andrew; Gao, Yajing
2015-09-15
This review presents and applies fundamental mass transport theory describing the diffusion and convection driven mass transport of drugs to the vaginal environment. It considers sources of variability in the predictions of the models. It illustrates use of model predictions of microbicide drug concentration distribution (pharmacokinetics) to gain insights about drug effectiveness in preventing HIV infection (pharmacodynamics). The modeling compares vaginal drug distributions after different gel dosage regimens, and it evaluates consequences of changes in gel viscosity due to aging. It compares vaginal mucosal concentration distributions of drugs delivered by gels vs. intravaginal rings. Finally, the modeling approach is used to compare vaginal drug distributions across species with differing vaginal dimensions. Deterministic models of drug mass transport into and throughout the vaginal environment can provide critical insights about the mechanisms and determinants of such transport. This knowledge, and the methodology that obtains it, can be applied and translated to multiple applications, involving the scientific underpinnings of vaginal drug distribution and the performance evaluation and design of products, and their dosage regimens, that achieve it. Copyright © 2015 Elsevier B.V. All rights reserved.
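As a pointer to the kind of model the review builds on, the generic one-dimensional convection-diffusion equation is shown below. This is the standard textbook form under simplifying assumptions, not the authors' specific vaginal-geometry model, which adds geometry, boundary conditions, and dosage-form behavior.

```latex
% C(x,t): drug concentration; D: diffusion coefficient; u: convective velocity.
% Generic 1-D form only; the review's models add geometry and boundary conditions.
\frac{\partial C}{\partial t} = D\,\frac{\partial^{2} C}{\partial x^{2}} - u\,\frac{\partial C}{\partial x}
```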
Influence of operating conditions on the optimum design of electric vehicle battery cooling plates
NASA Astrophysics Data System (ADS)
Jarrett, Anthony; Kim, Il Yong
2014-01-01
The efficiency of cooling plates for electric vehicle batteries can be improved by optimizing the geometry of internal fluid channels. In practical operation, a cooling plate is exposed to a range of operating conditions dictated by the battery, environment, and driving behaviour. To formulate an efficient cooling plate design process, the sensitivity of the optimum design with respect to each boundary condition is needed. This determines which operating conditions must be represented in the design process, and therefore the complexity of designing for multiple operating conditions. The objective of this study is to determine the influence of different operating conditions on the optimum cooling plate design. Three important performance measures were considered: temperature uniformity, mean temperature, and pressure drop. It was found that of these three, temperature uniformity was most sensitive to the operating conditions, especially to the distribution of the input heat flux and to the coolant flow rate. An additional focus of the study was the distribution of heat generated by the battery cell: while it is easier to assume that heat is generated uniformly, this study found that using an accurate heat distribution in the design optimization could significantly improve cooling plate performance.
A Prototype Decision Support System for the Location of Military Water Points.
1980-06-01
create an environment which is conducive to an efficient man/machine decision making system. This could be accomplished by designing the operating... Figure 12. Flowchart of Program COMPUTE... Procedure: This Decision Support System was designed to be interactive. That is, it requests data from the user... Pg. 82-114, 1974. 24. Geoffrion, A.M. and G.W. Graves, "Multicommodity Distribution System Design by Benders Partition", Management Science, Vol. 20, Pg
NASA Technical Reports Server (NTRS)
Benford, Steve; Bowers, John; Fahlen, Lennart E.; Greenhalgh, Chris; Snowdon, Dave
1994-01-01
This paper explores the issue of user embodiment within collaborative virtual environments. By user embodiment we mean the provision of users with appropriate body images so as to represent them to others and also to themselves. By collaborative virtual environments we mean multi-user virtual reality systems which support cooperative work (although we argue that the results of our exploration may also be applied to other kinds of collaborative systems). The main part of the paper identifies a list of embodiment design issues including: presence, location, identity, activity, availability, history of activity, viewpoint, action point, gesture, facial expression, voluntary versus involuntary expression, degree of presence, reflecting capabilities, manipulating the user's view of others, representation across multiple media, autonomous and distributed body parts, truthfulness and efficiency. Following this, we show how these issues are reflected in our own DIVE and MASSIVE prototype collaborative virtual environments.
Research and Development of Collaborative Environments for Command and Control
2011-05-01
at any state of building. The viewer tool presents the designed model with 360-degree perspective views even after regeneration of the design, which... and it shows the following prompt: GUM > ... First initialize the microSD card by typing GUM > mmcinit, then erase the old Linux kernel and the root file system on the flash memory
NASA Technical Reports Server (NTRS)
1980-01-01
The requirements implementation strategy for first-level development of the Integrated Programs for Aerospace Vehicle Design (IPAD) computing system is presented. The capabilities of first-level IPAD are sufficient to demonstrate management of engineering data on two computers (CDC CYBER 170/720 and DEC VAX 11/780) using the IPAD system in a distributed network environment.
Design and implementation of a CORBA-based genome mapping system prototype.
Hu, J; Mungall, C; Nicholson, D; Archibald, A L
1998-01-01
CORBA (Common Object Request Broker Architecture), as an open standard, is considered to be a good solution for the development and deployment of applications in distributed heterogeneous environments. This technology can be applied in the bioinformatics area to enhance utilization, management and interoperation between biological resources. This paper investigates issues in developing CORBA applications for genome mapping information systems in the Internet environment with emphasis on database connectivity and graphical user interfaces. The design and implementation of a CORBA prototype for an animal genome mapping database are described. The prototype demonstration is available via: http://www.ri.bbsrc.ac.uk/ark_corba/. jian.hu@bbsrc.ac.uk
Al Nozha, Omar Mansour; Fadel, Hani T
2017-01-01
Taibah University offers regular nursing (RNP) and nursing bridging (NBP) bachelor programs. We evaluated student perception of the learning environment as one means of quality assurance. To assess nursing student perception of their educational environment, to compare the perceptions of regular and bridging students, and to compare the perceptions of students in the old and new curricula. Cross-sectional survey. College of Nursing at Taibah University, Madinah, Saudi Arabia. The Dundee Ready Educational Environment Measure (DREEM) instrument was distributed to 714 nursing students to assess perception of the educational environment. The independent samples t test and Pearson's chi-square test were used to compare the programs and curricula. The DREEM inventory score. Of the 714 students, 271 (38%) were RNP students and 443 (62%) were NBP students. The mean (standard deviation) DREEM score was 111 (25). No significant differences were observed between the programs except for the domain "academic self-perceptions", which was higher in RNP students (P < .001). Higher mean DREEM scores were observed among students studying the new curriculum in both the RNP (P < .001) and the NBP (P > .05). Nursing students generally perceived their learning environment as more positive than negative. Regular students were more positive than bridging students. Students who experienced the new curriculum were more positive towards learning. The cross-sectional design and unequal gender and study level distributions may limit the generalizability of the results. Longitudinal, large-scale studies with more even distributions of participant characteristics are needed.
Hyperswitch communication network
NASA Technical Reports Server (NTRS)
Peterson, J.; Pniel, M.; Upchurch, E.
1991-01-01
The Hyperswitch Communication Network (HCN) is a large-scale parallel computer prototype being developed at JPL; commercial versions of the HCN computer are planned. The HCN computer being designed is a message-passing, multiple instruction multiple data (MIMD) computer and offers many advantages in price-performance ratio, reliability and availability, and manufacturing over traditional uniprocessors and bus-based multiprocessors. The HCN operating system provides a uniquely flexible environment that combines both parallel processing and distributed processing. This programming paradigm can achieve a balance among the following competing factors: performance in processing and communications, user friendliness, and fault tolerance. The prototype is being designed to accommodate a maximum of 64 state-of-the-art microprocessors. The HCN is classified as a distributed supercomputer. The HCN system is described, and the performance/cost analysis and other competing factors within the system design are reviewed.
Xu, Zhenqiang; Yao, Maosheng
2013-05-01
Increasing evidence shows that inhalation of indoor bioaerosols can cause numerous adverse health effects and diseases. However, the bioaerosol size distribution, composition, and concentration level, representing different inhalation risks, can vary with different living environments. The six-stage Andersen sampler is designed to simulate particle sampling in different regions of the human lung. Here, the sampler was used to investigate bioaerosol exposure in six different environments (student dorm, hospital, laboratory, hotel room, dining hall, and outdoor environment) in Beijing. During the sampling, the Andersen sampler was operated for 30 min for each sample, and three independent experiments were performed for each environment. The air samples collected onto each of the six stages of the sampler were incubated on agar plates directly at 26 °C, and the colony forming units (CFU) were manually counted and statistically corrected. In addition, the developed CFUs were washed off the agar plates and subjected to polymerase chain reaction (PCR)-denaturing gradient gel electrophoresis (DGGE) for diversity analysis. Results revealed that for most environments investigated, the culturable bacterial aerosol concentrations were higher than those of culturable fungal aerosols. The culturable bacterial and fungal aerosol fractions, concentration, size distribution, and diversity were shown to vary significantly with the sampling environments. PCR-DGGE analysis indicated that different environments had different culturable bacterial aerosol compositions, as revealed by distinct gel band patterns. For most environments tested, larger (>3 μm) culturable bacterial aerosols with a skewed size distribution prevailed, accounting for more than 60%, while for culturable fungal aerosols, which had a normal size distribution, those 2.1-4.7 μm dominated, accounting for 20-40%. Alternaria, Cladosporium, Chaetomium, and Aspergillus were found to be abundant in most environments studied here. Viable microbial load per unit of particulate matter was also shown to vary significantly with the sampling environments. The results from this study suggest that different environments, even with similar levels of total culturable microbial aerosol concentrations, can present different inhalation risks due to different bioaerosol particle size distributions and compositions. This work fills literature gaps regarding bioaerosol size- and composition-based exposure risks in different human dwellings, in contrast to the vast body of work on total bioaerosol levels.
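The abstract notes that CFU counts were "statistically corrected" without naming the method; for multi-jet impactors such as the Andersen sampler, the standard choice is the positive-hole correction (Macher, 1989), which accounts for multiple particles passing through the same jet hole. A sketch, assuming a 400-hole stage:

```python
# Positive-hole correction sketch; the 400-hole stage count is an assumption
# (the abstract does not state which correction or stage geometry was used).
def positive_hole_correction(r, n=400):
    """Expected number of viable particles given r positive holes out of n jet holes."""
    # Pr = N * (1/N + 1/(N-1) + ... + 1/(N-r+1)) for an N-hole stage.
    return n * sum(1.0 / (n - i) for i in range(r))

print(round(positive_hole_correction(100)))  # ~115 particles for 100 colonies on a 400-hole stage
```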
Lunar soils grain size catalog
NASA Technical Reports Server (NTRS)
Graf, John C.
1993-01-01
This catalog compiles every available grain size distribution for Apollo surface soils, trench samples, cores, and Luna 24 soils. Original laboratory data are tabulated, and cumulative weight distribution curves and histograms are plotted. Standard statistical parameters are calculated using the method of moments. Photos and location comments describe the sample environment and geological setting. This catalog can help researchers describe the geotechnical conditions and site variability of the lunar surface, information essential to the design of a lunar base.
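The "method of moments" statistics the catalog tabulates can be computed directly from binned weight fractions. A sketch in Python, with phi-unit bin midpoints and weights as illustrative assumptions rather than catalog data:

```python
# Moment statistics of a grain-size distribution in phi units (illustrative data only).
import math

def moment_stats(phi_mid, weight_frac):
    """Moment mean, sorting (std dev), and skewness from a binned weight distribution."""
    total = sum(weight_frac)
    f = [w / total for w in weight_frac]                       # normalize weights
    mean = sum(p * fi for p, fi in zip(phi_mid, f))            # 1st moment
    var = sum(fi * (p - mean) ** 2 for p, fi in zip(phi_mid, f))  # 2nd central moment
    sd = math.sqrt(var)
    skew = sum(fi * (p - mean) ** 3 for p, fi in zip(phi_mid, f)) / sd ** 3
    return mean, sd, skew

print(moment_stats([1.0, 2.0, 3.0, 4.0, 5.0], [5, 20, 40, 25, 10]))
```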
Methodology for CFD Design Analysis of National Launch System Nozzle Manifold
NASA Technical Reports Server (NTRS)
Haire, Scot L.
1993-01-01
The current design environment dictates that high-technology CFD (Computational Fluid Dynamics) analysis produce quality results in a timely manner if it is to be integrated into the design process. The design methodology outlined describes the CFD analysis of an NLS (National Launch System) nozzle film cooling manifold. The objective of the analysis was to obtain a qualitative estimate of the flow distribution within the manifold. A complex, 3D, multiple-zone, structured grid was generated from a 3D CAD file of the geometry. An Euler solution was computed with a fully implicit compressible flow solver. Post-processing consisted of full 3D color graphics and mass-averaged performance. The result was a qualitative CFD solution that provided the design team with relevant information concerning the flow distribution in, and performance characteristics of, the film cooling manifold within an effective time frame. This design methodology was also the foundation for a quick-turnaround CFD analysis of the next iteration of the manifold design.
System Level Uncertainty Assessment for Collaborative RLV Design
NASA Technical Reports Server (NTRS)
Charania, A. C.; Bradford, John E.; Olds, John R.; Graham, Matthew
2002-01-01
A collaborative design process utilizing Probabilistic Data Assessment (PDA) is showcased. Given the limited financial resources of both government and industry, strategic decision makers need more than just traditional point designs; they need to be aware of the likelihood that these future designs will meet their objectives. This uncertainty, an ever-present character in the design process, can be embraced through a probabilistic design environment. A conceptual design process is presented that encapsulates the major engineering disciplines for a Third Generation Reusable Launch Vehicle (RLV). Toolsets consist of aerospace-industry-standard tools in disciplines such as trajectory, propulsion, mass properties, cost, operations, safety, and economics. Variations of the design process are presented that use different fidelities of tools. The disciplinary engineering models are used in a collaborative engineering framework utilizing Phoenix Integration's ModelCenter and AnalysisServer environment. These tools allow the designer to join disparate models and simulations together in a unified environment wherein each discipline can interact with any other discipline. The design process also uses probabilistic methods to generate the system-level output metrics of interest for an RLV conceptual design. The specific system being examined is the Advanced Concept Rocket Engine 92 (ACRE-92) RLV. Previous experience and knowledge (in terms of input uncertainty distributions from experts and modeling and simulation codes) can be coupled with Monte Carlo processes to best predict the chances of program success.
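The probabilistic step is conceptually simple: sample the expert-supplied input distributions and push each sample through the disciplinary tool chain, then read program-success probabilities off the output distribution. The sketch below uses a toy dry-mass surrogate and made-up distributions, not ACRE-92 data or the actual ModelCenter tool chain:

```python
# Monte Carlo propagation of input uncertainty through a toy system model.
# The distributions and the mass model are illustrative assumptions only.
import random

def vehicle_dry_mass(isp, structural_fraction):
    # Toy surrogate standing in for the disciplinary tools (trajectory, propulsion, ...).
    return 1.0e5 * structural_fraction * (450.0 / isp)   # kg

samples = []
for _ in range(10_000):
    isp = random.gauss(440.0, 10.0)             # specific impulse uncertainty, s
    sf = random.triangular(0.10, 0.16, 0.12)    # structural fraction: (low, high, mode)
    samples.append(vehicle_dry_mass(isp, sf))

samples.sort()
print("median dry mass (kg):", round(samples[len(samples) // 2]))
print("P(dry mass < 13,000 kg):", sum(m < 13_000 for m in samples) / len(samples))
```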
A distributed computing model for telemetry data processing
NASA Astrophysics Data System (ADS)
Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.
1994-05-01
We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.
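A minimal sketch of the hybrid idea — each node acts as a server for the parameters it owns and as a peer subscribing to others — is shown below. The names and API are hypothetical; the abstract does not publish the information sharing protocol itself.

```python
# Conceptual pub/sub sketch of a hybrid client-server / peer-to-peer model
# (hypothetical names; not the paper's actual protocol).
class Node:
    def __init__(self, name):
        self.name = name
        self.values = {}        # parameters this node serves (server role)
        self.subscribers = {}   # parameter -> list of peer callbacks

    def publish(self, parameter, value):
        self.values[parameter] = value
        for callback in self.subscribers.get(parameter, []):
            callback(parameter, value)      # push updates to peers (peer role)

    def subscribe(self, peer, parameter, callback):
        peer.subscribers.setdefault(parameter, []).append(callback)  # client role

telemetry = Node("telemetry-server")
console = Node("flight-controller-console")
console.subscribe(telemetry, "cabin_pressure_kpa",
                  lambda p, v: print(f"{console.name} got {p} = {v}"))
telemetry.publish("cabin_pressure_kpa", 101.3)
```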
A concurrent distributed system for aircraft tactical decision generation
NASA Technical Reports Server (NTRS)
Mcmanus, John W.
1990-01-01
A research program investigating the use of AI techniques to aid in the development of a tactical decision generator (TDG) for within-visual-range (WVR) air combat engagements is discussed. The application of AI programming and problem-solving methods in the development and implementation of a concurrent version of the Computerized Logic for Air-to-Air Warfare Simulations (CLAWS) program, a second-generation TDG, is presented. Concurrent computing environments and programming approaches are discussed, and the design and performance of a prototype concurrent TDG system (Cube CLAWS) are presented. It is concluded that the Cube CLAWS has provided a useful testbed to evaluate the development of a distributed blackboard system. The project has shown that the complexity of developing specialized software on a distributed, message-passing architecture such as the Hypercube is not overwhelming, and that reasonable speedups and processor efficiency can be achieved by a distributed blackboard system. The project has also highlighted some of the costs of using a distributed approach to designing a blackboard system.
A distributed version of the NASA Engine Performance Program
NASA Technical Reports Server (NTRS)
Cours, Jeffrey T.; Curlett, Brian P.
1993-01-01
Distributed NEPP, a version of the NASA Engine Performance Program, uses the original NEPP code but executes it in a distributed computing environment. Multiple workstations connected by a network increase the program's speed and, more importantly, the complexity of the cases it can handle in a reasonable time. Distributed NEPP uses the public-domain software package Parallel Virtual Machine (PVM), allowing it to execute on clusters of machines containing many different architectures. It includes the capability to link with other computers, allowing them to process NEPP jobs in parallel. This paper discusses the design issues and granularity considerations that entered into programming Distributed NEPP and presents the results of timing runs.
PISCES: An environment for parallel scientific computation
NASA Technical Reports Server (NTRS)
Pratt, T. W.
1985-01-01
The Parallel Implementation of Scientific Computing Environment (PISCES) is a project to provide high-level programming environments for parallel MIMD computers. Pisces 1, the first of these environments, is a FORTRAN 77-based environment which runs under the UNIX operating system. Pisces 1 users program in Pisces FORTRAN, an extension of FORTRAN 77 for parallel processing. The major emphasis in the Pisces 1 design is on providing a carefully specified virtual machine that defines the run-time environment within which Pisces FORTRAN programs are executed. Each implementation then provides the same virtual machine, regardless of differences in the underlying architecture, so the design is intended to be portable to a variety of architectures. Currently, Pisces 1 is implemented on a network of Apollo workstations and on a DEC VAX uniprocessor via simulation of the task-level parallelism. An implementation for the Flexible Computing Corp. FLEX/32 is under construction. An introduction to the Pisces 1 virtual computer and the FORTRAN 77 extensions is presented, along with an example of an algorithm for the iterative solution of a system of equations. The most notable features of the design are the provision for several granularities of parallelism in programs and the provision of a window mechanism for distributed access to large arrays of data.
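The window mechanism for distributed arrays can be illustrated compactly: a window over global indices resolves each element to its owning task and local offset. Pisces 1 is FORTRAN 77-based; this Python rendering of the idea is purely conceptual, with all names hypothetical.

```python
# Conceptual sketch of a "window" onto a block-distributed array (not the Pisces API).
class DistributedArray:
    def __init__(self, n, n_tasks):
        self.block = -(-n // n_tasks)  # ceiling division: elements per task
        self.blocks = [[0.0] * self.block for _ in range(n_tasks)]

    def owner(self, i):
        return i // self.block, i % self.block   # (task id, local index)

    def window(self, lo, hi):
        """Iterate a global index window, yielding (task, local index, value)."""
        for i in range(lo, hi):
            t, j = self.owner(i)
            yield t, j, self.blocks[t][j]

a = DistributedArray(n=100, n_tasks=4)
print([(t, j) for t, j, _ in a.window(23, 27)])  # window spans the task-0/task-1 boundary
```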
BRIGHTEN YOUR CORNER, BROADCAST NO. 7464. UNIVERSITY EXPLORER.
ERIC Educational Resources Information Center
Levy, Charles; Howe, William
Lighting recommendations along with design implications are discussed. Comments of several leading lighting authorities are included. A series of lighting considerations recommends: (1) children can acquire an awareness of their luminous environment through early training, (2) intensity, distribution, horizontal or vertical polarization and the…
USDA-ARS?s Scientific Manuscript database
Information on the physical properties of munitions compounds is necessary for assessing their environmental distribution and transport and for predicting potential hazards. This information is also needed for the selection and design of successful physical, chemical, or biological environmental remediation proces...
NASA Technical Reports Server (NTRS)
2008-01-01
NASA's advanced visual simulations are essential for analyses associated with life cycle planning, design, training, testing, operations, and evaluation. Kennedy Space Center, in particular, uses simulations for ground services and space exploration planning in an effort to reduce risk and costs while improving safety and performance. However, it has been difficult to circulate and share the results of simulation tools among the field centers, and distance and travel expenses have made timely collaboration even harder. In response, NASA joined with Valador Inc. to develop the Distributed Observer Network (DON), a collaborative environment that leverages game technology to bring 3-D simulations to conventional desktop and laptop computers. DON enables teams of engineers working on design and operations to view and collaborate on 3-D representations of data generated by authoritative tools. DON takes models and telemetry from these sources and, using commercial game engine technology, displays the simulation results in a 3-D visual environment. Multiple widely dispersed users, working individually or in groups, can view and analyze simulation results on desktop and laptop computers in real time.
Aerospace Systems Design in NASA's Collaborative Engineering Environment
NASA Technical Reports Server (NTRS)
Monell, Donald W.; Piland, William M.
1999-01-01
Past designs of complex aerospace systems involved an environment consisting of collocated design teams with project managers, technical discipline experts, and other experts (e.g., manufacturing and systems operations). These experts were generally qualified only on the basis of past design experience and typically had access to a limited set of integrated analysis tools. These environments provided less than desirable design fidelity, often led to the inability to assess critical programmatic and technical issues (e.g., cost, risk, technical impacts), and generally derived a design that was not necessarily optimized across the entire system. The continually changing, modern aerospace industry demands systems design processes that involve the best talent available (no matter where it resides) and access to the best design and analysis tools. A solution to these demands involves a design environment referred to as collaborative engineering. The collaborative engineering environment evolving within the National Aeronautics and Space Administration (NASA) is a capability that enables the Agency's engineering infrastructure to interact and use the best state-of-the-art tools and data across organizational boundaries. Using collaborative engineering, the collocated team is replaced with an interactive team structure where the team members are geographically distributed and the best engineering talent can be applied to the design effort regardless of physical location. In addition, a more efficient, higher quality design product is delivered by bringing together the best engineering talent with more up-to-date design and analysis tools. These tools are focused on interactive, multidisciplinary design and analysis with emphasis on the complete life cycle of the system, and they include nontraditional, integrated tools for life cycle cost estimation and risk assessment. NASA has made substantial progress during the last two years in developing a collaborative engineering environment. NASA is planning to use this collaborative engineering infrastructure to provide better aerospace systems life cycle design and analysis, which includes analytical assessment of the technical and programmatic aspects of a system from "cradle to grave." This paper describes the recent NASA developments in the area of collaborative engineering, the benefits (realized and anticipated) of using the developed capability, and the long-term plans for implementing this capability across the Agency.
Aerospace Systems Design in NASA's Collaborative Engineering Environment
NASA Astrophysics Data System (ADS)
Monell, Donald W.; Piland, William M.
2000-07-01
Past designs of complex aerospace systems involved an environment consisting of collocated design teams with project managers, technical discipline experts, and other experts (e.g., manufacturing and systems operations). These experts were generally qualified only on the basis of past design experience and typically had access to a limited set of integrated analysis tools. These environments provided less than desirable design fidelity, often led to the inability of assessing critical programmatic and technical issues (e.g., cost, risk, technical impacts), and generally derived a design that was not necessarily optimized across the entire system. The continually changing, modern aerospace industry demands systems design processes that involve the best talent available (no matter where it resides) and access to the best design and analysis tools. A solution to these demands involves a design environment referred to as collaborative engineering. The collaborative engineering environment evolving within the National Aeronautics and Space Administration (NASA) is a capability that enables the Agency's engineering infrastructure to interact and use the best state-of-the-art tools and data across organizational boundaries. Using collaborative engineering, the collocated team is replaced with an interactive team structure where the team members are geographically distributed and the best engineering talent can be applied to the design effort regardless of physical location. In addition, a more efficient, higher quality design product is delivered by bringing together the best engineering talent with more up-to-date design and analysis tools. These tools are focused on interactive, multidisciplinary design and analysis with emphasis on the complete life cycle of the system, and they include nontraditional, integrated tools for life cycle cost estimation and risk assessment. NASA has made substantial progress during the last two years in developing a collaborative engineering environment. NASA is planning to use this collaborative engineering infrastructure to provide better aerospace systems life cycle design and analysis, which includes analytical assessment of the technical and programmatic aspects of a system from "cradle to grave." This paper describes the recent NASA developments in the area of collaborative engineering, the benefits (realized and anticipated) of using the developed capability, and the long-term plans for implementing this capability across the Agency.
Shoukourian, S K; Vasilyan, A M; Avagyan, A A; Shukurian, A K
1999-01-01
A formalized "top to bottom" design approach was described in [1] for distributed applications built on databases, which were considered as a medium between virtual and real user environments for a specific medical application. Merging different components within a unified distributed application posits new essential problems for software. Particularly protection tools, which are sufficient separately, become deficient during the integration due to specific additional links and relationships not considered formerly. E.g., it is impossible to protect a shared object in the virtual operating room using only DBMS protection tools, if the object is stored as a record in DB tables. The solution of the problem should be found only within the more general application framework. Appropriate tools are absent or unavailable. The present paper suggests a detailed outline of a design and testing toolset for access differentiation systems (ADS) in distributed medical applications which use databases. The appropriate formal model as well as tools for its mapping to a DMBS are suggested. Remote users connected via global networks are considered too.
CHIME: A Metadata-Based Distributed Software Development Environment
2005-01-01
structures by using typography, graphics, and animation. The Software Immersion in our conceptual model for CHIME can be seen as a form of Software... Even small- to medium-sized development efforts may involve hundreds of artifacts -- design documents, change requests, test cases and results, code... for managing and organizing information from all phases of the software lifecycle. CHIME is designed around an XML-based metadata architecture, in...
Zuo, Wangda; Wetter, Michael; Tian, Wei; ...
2015-07-13
This paper describes a coupled dynamic simulation of an indoor environment with heating, ventilation, and air conditioning (HVAC) systems, controls, and building envelope heat transfer. The coupled simulation can be used for the design and control of ventilation systems with stratified air distributions. Such systems are commonly used to reduce building energy consumption while improving indoor environment quality. The indoor environment was simulated using the fast fluid dynamics (FFD) simulation programme. The building fabric heat transfer, HVAC, and control system were modelled using the Modelica Buildings library. After presenting the concept, the mathematical algorithm and the implementation of the coupled simulation are introduced. The coupled FFD–Modelica simulation was then evaluated using three examples of room ventilation with complex flow distributions, with and without feedback control. Lastly, further research and development needs are discussed.
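The coupling loop itself is a simple data exchange at synchronization points: the airflow model consumes wall temperatures and returns heat fluxes, which the envelope/HVAC model integrates. The toy functions below are stand-ins with made-up physics, not the FFD or Modelica Buildings APIs:

```python
# Toy fixed-step explicit co-simulation loop (illustrative only; not the paper's code).
def ffd_step(t_wall, dt):
    t_air = 0.8 * t_wall + 4.0            # pretend airflow solve
    q_to_wall = 10.0 * (t_air - t_wall)   # convective flux at the wall, W/m^2
    return t_air, q_to_wall

def hvac_step(t_wall, q_from_air, dt):
    # Lumped wall node with a crude HVAC term pulling toward a 20 C setpoint.
    return t_wall + dt * (q_from_air - 2.0 * (t_wall - 20.0)) / 500.0

t_wall, dt = 25.0, 60.0
for step in range(5):                      # exchange data once per synchronization step
    t_air, q = ffd_step(t_wall, dt)
    t_wall = hvac_step(t_wall, q, dt)
    print(f"step {step}: air {t_air:.2f} C, wall {t_wall:.2f} C")
```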
Shielding Effectiveness in a Two-Dimensional Reverberation Chamber Using Finite-Element Techniques
NASA Technical Reports Server (NTRS)
Bunting, Charles F.
2006-01-01
Reverberation chambers are attaining increased importance in the determination of electromagnetic susceptibility of avionics equipment. Given the nature of the variable boundary condition, the ability of a given source to couple energy into certain modes, and the passband characteristic due to the chamber Q, the fields are typically characterized by statistical means. The emphasis of this work is to apply finite-element techniques at cutoff to the analysis of a two-dimensional structure to examine shielding-effectiveness issues in a reverberating environment. Simulated mechanical stirring is used to obtain the appropriate statistical field distribution. The shielding effectiveness (SE) in a simulated reverberating environment is compared to measurements in a reverberation chamber. A log-normal distribution for the SE is observed, with implications for system designers. The work is intended to provide further refinement in the consideration of SE in a complex electromagnetic environment.
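The log-normal observation has a convenient practical form: if SE is log-normal in linear units, it is normal in dB, so a designer can summarize stirred measurements with a mean and standard deviation in dB directly. A sketch with made-up sample values, not the paper's data:

```python
# If SE (linear) is log-normal, SE in dB is normal; summarize in the dB domain.
# The sample values below are synthetic placeholders.
import math, random

random.seed(1)
se_db = [random.gauss(30.0, 4.0) for _ in range(1000)]   # stand-in measured SE, dB

mean_db = sum(se_db) / len(se_db)
std_db = math.sqrt(sum((x - mean_db) ** 2 for x in se_db) / (len(se_db) - 1))
median_linear = 10 ** (mean_db / 10)   # dB mean maps to the linear-domain median
print(f"SE ~ Normal({mean_db:.1f} dB, {std_db:.1f} dB) in dB; "
      f"linear-domain median ratio ~{median_linear:.0f}:1")
```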
NASA Technical Reports Server (NTRS)
Drusano, George L.
1991-01-01
Optimal sampling theory is evaluated in applications to studies of the distribution and elimination of several drugs (including ceftazidime, piperacillin, and ciprofloxacin), using the SAMPLE module of the ADAPT II package of programs developed by D'Argenio and Schumitzky (1979, 1988) and comparing the pharmacokinetic parameter values with results obtained by a traditional ten-sample design. The impact of optimal sampling was demonstrated in conjunction with the NONMEM approach (Sheiner et al., 1977), in which the population is taken as the unit of analysis, allowing even fragmentary patient data sets to contribute to population parameter estimates. It is shown that this technique is applicable in both single-dose and multiple-dose environments. The ability to study real patients made it possible to show that there was a bimodal distribution in ciprofloxacin nonrenal clearance.
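Optimal sampling design of this kind typically maximizes the determinant of the Fisher information matrix over candidate sampling times (D-optimality). The sketch below does this by brute force for a one-compartment model with illustrative parameter values; ADAPT II's SAMPLE module implements the rigorous version.

```python
# D-optimal choice of two sampling times for C(t) = (D/V) * exp(-k*t).
# Parameter values are illustrative assumptions, not drawn from the study.
import math
from itertools import combinations

D, V, k = 1000.0, 50.0, 0.2     # dose (mg), volume (L), elimination rate (1/h)

def sensitivities(t):
    c = (D / V) * math.exp(-k * t)
    return [-c / V, -t * c]      # [dC/dV, dC/dk]

def det_fisher(times):
    rows = [sensitivities(t) for t in times]
    a = sum(r[0] * r[0] for r in rows)   # F = J^T J for the two parameters (V, k)
    b = sum(r[0] * r[1] for r in rows)
    d = sum(r[1] * r[1] for r in rows)
    return a * d - b * b

grid = [0.5 * i for i in range(1, 49)]   # candidate times, 0.5..24 h
best = max(combinations(grid, 2), key=det_fisher)
print("D-optimal 2-sample times (h):", best)  # -> (0.5, 5.5): one early, one late sample
```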
NASA Astrophysics Data System (ADS)
Xu, Li; Liu, Lanlan; Niu, Jie; Tang, Li; Li, Jinliang; Zhou, Zhanfan; Long, Chenhai; Yang, Qi; Yi, Ziqi; Guo, Hao; Long, Yang; Fu, Yanyi
2017-05-01
As society's requirements for power supply reliability keep rising, work on distribution networks with the power uninterrupted (live-line working) has been widely carried out, but the high-temperature operating environment in summer can easily cause physical discomfort for operators and lead to safety incidents. To address this problem, an air-conditioning suit for uninterrupted-power work on distribution networks is put forward in this paper; its structural composition and cooling principle are explained, and it was ultimately put to on-site application. The results showed that the cooling effect of the air-conditioning suit was remarkable and that it effectively improved the operators' working environment, which is of great significance for improving China's level of live-line working, reducing the probability of accidents, and enhancing the reliability of power supply.
A salient region detection model combining background distribution measure for indoor robots.
Li, Na; Xu, Hui; Wang, Zhenhua; Sun, Lining; Chen, Guodong
2017-01-01
Vision systems play an important role in the field of indoor robotics. Saliency detection methods, which capture regions perceived as important, are used to improve the performance of visual perception systems. Most state-of-the-art saliency detection methods perform outstandingly on natural images but cannot work in complicated indoor environments. Therefore, we propose a new method comprising graph-based RGB-D segmentation, a primary saliency measure, a background distribution measure, and their combination. In addition, region roundness is proposed to describe the compactness of a region, in order to measure background distribution more robustly. To validate the proposed approach, eleven influential methods are compared on the DSD and ECSSD datasets. Moreover, we build a mobile robot platform for application in an actual environment and design three kinds of experiments: different viewpoints, illumination variations, and partial occlusions. Experimental results demonstrate that our model outperforms existing methods and is useful for indoor mobile robots.
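The paper's exact region roundness definition is in the source; a common compactness measure with the same intent is the isoperimetric quotient, sketched here for reference only:

```python
# Isoperimetric quotient 4*pi*A / P^2 as a generic compactness measure
# (equals 1.0 for a disk; not necessarily the paper's exact definition).
import math

def roundness(area, perimeter):
    return 4.0 * math.pi * area / perimeter ** 2

r = 10.0
print(roundness(math.pi * r * r, 2 * math.pi * r))  # 1.0 for a perfect disk
print(roundness(100.0, 40.0))                       # ~0.785 for a 10x10 square
```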
Raju, Leo; Milton, R. S.; Mahadevan, Senthilkumaran
2016-01-01
The objective of this paper is the implementation of a multiagent system (MAS) for advanced distributed energy management and demand side management of a solar microgrid. Initially, the Java agent development environment (JADE) framework is used to implement MAS-based dynamic energy management of the solar microgrid. Due to the unstable nature of MATLAB when dealing with multithreaded environments, the MAS operating in JADE is linked with MATLAB using middleware called Multiagent Control Using Simulink with Jade Extension (MACSimJX). MACSimJX allows the solar microgrid components designed with MATLAB to be controlled by the corresponding agents of the MAS. The microgrid environment variables are captured through sensors and given to the agents through MATLAB/Simulink; after the agent operations in JADE, the results are given to the actuators through MATLAB for the implementation of dynamic operation in the solar microgrid. The MAS operating in JADE maximizes the operational efficiency of the solar microgrid through a decentralized approach and increases runtime efficiency. Autonomous demand side management is implemented to optimize the power exchange between the main grid and the microgrid given the intermittent nature of solar power, randomness of load, and variation of noncritical load and grid price. These dynamics are considered at every time step, and a complex environment simulation is designed to emulate the distributed microgrid operations and evaluate the impact of agent operations. PMID:27127802
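The agent logic described — price-driven shedding of non-critical load and buy/sell decisions against the main grid — can be sketched independently of JADE. All numbers and rules below are illustrative assumptions, not the paper's controller:

```python
# Toy time-stepped stand-in for the JADE/MACSimJX agent loop (illustrative only).
def microgrid_step(solar_kw, critical_kw, noncritical_kw, grid_price):
    # Demand-side management: shed non-critical load when the grid price is high.
    load = critical_kw + (0.0 if grid_price > 0.30 else noncritical_kw)
    net = solar_kw - load
    action = "sell" if net > 0 else "buy"   # exchange the imbalance with the main grid
    return action, abs(net), load

for hour, (sun, price) in enumerate([(2.0, 0.20), (6.0, 0.20), (1.0, 0.35)]):
    action, kw, load = microgrid_step(sun, critical_kw=2.5, noncritical_kw=1.5,
                                      grid_price=price)
    print(f"hour {hour}: load {load} kW -> {action} {kw} kW at ${price}/kWh")
```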
New frontier, new power: the retail environment in Australia's dark market
Carter, S
2003-01-01
Objective: To investigate the role of the retail environment in cigarette marketing in Australia, one of the "darkest" markets in the world. Design: Analysis of 172 tobacco industry documents; and articles and advertisements found by hand searching Australia's three leading retail trade journals. Results: As Australian cigarette marketing was increasingly restricted, the retail environment became the primary communication vehicle for building cigarette brands. When retail marketing was restricted, the industry conceded only incrementally and under duress, and at times continues to break the law. The tobacco industry targets retailers via trade promotional expenditure, financial and practical assistance with point of sale marketing, alliance building, brand advertising, and distribution. Cigarette brand advertising in retail magazines are designed to build brand identities. Philip Morris and British American Tobacco are now competing to control distribution of all products to retailers, placing themselves at the heart of retail business. Conclusions: Cigarette companies prize retail marketing in Australia's dark market. Stringent point of sale marketing restrictions should be included in any comprehensive tobacco control measures. Relationships between retailers and the industry will be more difficult to regulate. Retail press advertising and trade promotional expenditure could be banned. In-store marketing assistance, retail–tobacco industry alliance building, and new electronic retail distribution systems may be less amenable to regulation. Alliances between the health and retail sectors and financial support for a move away from retail dependence on tobacco may be necessary to effect cultural change. PMID:14645954
A Distributed Snow Evolution Modeling System (SnowModel)
NASA Astrophysics Data System (ADS)
Liston, G. E.; Elder, K.
2004-12-01
A spatially distributed snow-evolution modeling system (SnowModel) has been specifically designed to be applicable over a wide range of snow landscapes, climates, and conditions. To reach this goal, SnowModel is composed of four sub-models: MicroMet defines the meteorological forcing conditions, EnBal calculates surface energy exchanges, SnowMass simulates snow depth and water-equivalent evolution, and SnowTran-3D accounts for snow redistribution by wind. While other distributed snow models exist, SnowModel is unique in that it includes a well-tested blowing-snow sub-model (SnowTran-3D) for application in windy arctic, alpine, and prairie environments where snowdrifts are common. These environments comprise 68% of the seasonally snow-covered Northern Hemisphere land surface. SnowModel also accounts for snow processes occurring in forested environments (e.g., canopy interception related processes). SnowModel is designed to simulate snow-related physical processes occurring at spatial scales of 5-m and greater, and temporal scales of 1-hour and greater. These include: accumulation from precipitation; wind redistribution and sublimation; loading, unloading, and sublimation within forest canopies; snow-density evolution; and snowpack ripening and melt. To enhance its wide applicability, SnowModel includes the physical calculations required to simulate snow evolution within each of the global snow classes defined by Sturm et al. (1995), e.g., tundra, taiga, alpine, prairie, maritime, and ephemeral snow covers. The three, 25-km by 25-km, Cold Land Processes Experiment (CLPX) mesoscale study areas (MSAs: Fraser, North Park, and Rabbit Ears) are used as SnowModel simulation examples to highlight model strengths, weaknesses, and features in forested, semi-forested, alpine, and shrubland environments.
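The four-sub-model structure suggests a simple driver loop. The function names below are hypothetical stand-ins for MicroMet, EnBal, SnowMass, and SnowTran-3D, with toy physics; they only illustrate how the sub-models chain together each time step.

```python
# Illustrative driver for the four-sub-model structure (not the SnowModel API).
def micromet(t):       # meteorological forcing for hour t
    return {"air_temp_c": -5.0 + 0.1 * t, "precip_mm": 1.0, "wind_mps": 8.0}

def enbal(forcing):    # surface energy exchange -> melt flux (mm/h), toy relation
    return max(0.0, 0.5 * forcing["air_temp_c"])

def snowmass(swe_mm, forcing, melt_mm):   # accumulation minus melt
    return swe_mm + forcing["precip_mm"] - melt_mm

def snowtran3d(swe_mm, forcing):   # wind redistribution loss above a threshold speed
    return swe_mm - (0.2 if forcing["wind_mps"] > 5.0 else 0.0)

swe = 100.0
for t in range(24):                # hourly time stepping over one day
    f = micromet(t)
    swe = snowtran3d(snowmass(swe, f, enbal(f)), f)
print(f"SWE after 24 h: {swe:.1f} mm")
```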
NASA Technical Reports Server (NTRS)
Lawrence, Charles; Putt, Charles W.
1997-01-01
The Visual Computing Environment (VCE) is a NASA Lewis Research Center project to develop a framework for intercomponent and multidisciplinary computational simulations. Many current engineering analysis codes simulate various aspects of aircraft engine operation. For example, existing computational fluid dynamics (CFD) codes can model the airflow through individual engine components such as the inlet, compressor, combustor, turbine, or nozzle. Currently, these codes are run in isolation, making intercomponent and complete system simulations very difficult to perform. In addition, management and utilization of these engineering codes for coupled component simulations is a complex, laborious task, requiring substantial experience and effort. To facilitate multicomponent aircraft engine analysis, the CFD Research Corporation (CFDRC) is developing the VCE system. This system, which is part of NASA's Numerical Propulsion Simulation System (NPSS) program, can couple various engineering disciplines, such as CFD, structural analysis, and thermal analysis. The objectives of VCE are to (1) develop a visual computing environment for controlling the execution of individual simulation codes that are running in parallel and are distributed on heterogeneous host machines in a networked environment, (2) develop numerical coupling algorithms for interchanging boundary conditions between codes with arbitrary grid matching and different levels of dimensionality, (3) provide a graphical interface for simulation setup and control, and (4) provide tools for online visualization and plotting. VCE was designed to provide a distributed, object-oriented environment. Mechanisms are provided for creating and manipulating objects, such as grids, boundary conditions, and solution data. This environment includes parallel virtual machine (PVM) for distributed processing. Users can interactively select and couple any set of codes that have been modified to run in a parallel distributed fashion on a cluster of heterogeneous workstations. A scripting facility allows users to dictate the sequence of events that make up the particular simulation.
NASA Technical Reports Server (NTRS)
Afjeh, Abdollah A.; Reed, John A.
2003-01-01
This research is aimed at developing a new and advanced simulation framework that will significantly improve the overall efficiency of aerospace systems design and development. This objective will be accomplished through an innovative integration of object-oriented and Web-based technologies with both new and proven simulation methodologies. The basic approach involves three major areas of research: aerospace system and component representation using a hierarchical object-oriented component model, which enables the use of multimodels and enforces component interoperability; a collaborative software environment that streamlines the process of developing, sharing, and integrating aerospace design and analysis models; and development of a distributed infrastructure which enables Web-based exchange of models to simplify the collaborative design process and to support computationally intensive aerospace design and analysis processes. Research for the first year dealt with the design of the basic architecture and supporting infrastructure, an initial implementation of that design, and a demonstration of its application to an example aircraft engine system simulation.
Analysis of Work Design in Rubber Processing Plant
NASA Astrophysics Data System (ADS)
Wahyuni, Dini; Nasution, Harmein; Budiman, Irwan; Wijaya, Khairini
2018-02-01
The work design illustrates how structured jobs, tasks, and roles are defined and modified, and their impact on individuals, groups, and organizations. If work is not designed well, the company must pay greater costs for workers' health, longer production processes, or even penalties for not being able to meet the delivery schedule. This is visible in the conditions at a rubber processing plant in North Sumatra. Work design aspects such as layout, machinery and equipment, the workers' physical working environment, work methods, and organizational policies have not been well organized. Machines that grind coagulum into sheets are often damaged, resulting in four product delivery delays in 2016; complaints of heat exposure submitted by workers and improperly arranged workstations are further indications of the need for work design. The research data were collected through field observation and the distribution of questionnaires on aspects of work design. The analysis is based on respondents' answers to the distributed questionnaire regarding the six aspects studied.
Flight Plasma Diagnostics for High-Power, Solar-Electric Deep-Space Spacecraft
NASA Technical Reports Server (NTRS)
Johnson, Lee; De Soria-Santacruz Pich, Maria; Conroy, David; Lobbia, Robert; Huang, Wensheng; Choi, Maria; Sekerak, Michael J.
2018-01-01
NASA's Asteroid Redirect Robotic Mission (ARRM) project plans included a set of plasma and space environment instruments, the Plasma Diagnostic Package (PDP), to fulfill ARRM requirements for technology extensibility to future missions. The PDP objectives were divided into the classes of 1) Plasma thruster dynamics, 2) Solar array-specific environmental effects, 3) Plasma environmental spacecraft effects, and 4) Energetic particle spacecraft environment. A reference design approach and interface requirements for ARRM's PDP was generated by the PDP team at JPL and GRC. The reference design consisted of redundant single-string avionics located on the ARRM spacecraft bus as well as solar array, driving and processing signals from multiple copies of several types of plasma, effects, and environments sensors distributed over the spacecraft and array. The reference design sensor types were derived in part from sensors previously developed for USAF Research Laboratory (AFRL) plasma effects campaigns such as those aboard TacSat-2 in 2007 and AEHF-2 in 2012.
Designing Multimedia for Meaningful Online Teaching and Learning
ERIC Educational Resources Information Center
Terry, Krista P.; Doolittle, Peter E.; Scheer, Stephanie B.; McNeill, Andrea
2004-01-01
The development of distance and distributed learning environments on college campuses has created a need to reconsider traditional approaches to teaching and learning by integrating research and theories in human learning, pedagogy, and instructional technology. Creating effective and efficient multimedia for Web-based instruction requires a…
Dimensions Driving Business Student Satisfaction in Higher Education
ERIC Educational Resources Information Center
Yusoff, Mazirah; McLeay, Fraser; Woodruffe-Burton, Helen
2015-01-01
Purpose: This study aims to identify the dimensions of business student satisfaction in the Malaysian private higher educational environment and evaluate the influence that demographic factors have on satisfaction. Design/Methodology/Approach: A questionnaire was developed and distributed to 1,200 undergraduate business students at four private…
Pressure And Thermal Modeling Of Rocket Launches
NASA Technical Reports Server (NTRS)
Smith, Sheldon D.; Myruski, Brian L.; Farmer, Richard C.; Freeman, Jon A.
1995-01-01
Report presents mathematical model for use in designing rocket-launching stand. Predicts pressure and thermal environment, as well as thermal responses of structures to impinging rocket-exhaust plumes. Enables relatively inexperienced analyst to determine time-varying distributions and absolute levels of pressure and heat loads on structures.
Distributed Leadership in Online Groups
ERIC Educational Resources Information Center
Gressick, Julia; Derry, Sharon J.
2010-01-01
We conducted research within a program serving future mathematics and science teachers. Groups of teachers worked primarily online in an asynchronous discussion environment on a 6-week task in which they applied learning-science ideas acquired from an educational psychology course to design interdisciplinary instructional units. We employed an…
Vimalchand, Pannalal; Liu, Guohai; Peng, Wan Wang
2015-02-24
The improvements proposed in this invention provide a reliable apparatus and method to gasify low rank coals in a class of pressurized circulating fluidized bed reactors termed "transport gasifier." The embodiments overcome a number of operability and reliability problems with existing gasifiers. The systems and methods address issues related to distribution of the gasification agent without the use of internals, management of heat release to avoid any agglomeration and clinker formation, specific design of bends to withstand the highly erosive environment due to high solid particle circulation rates, design of a standpipe cyclone to withstand the high temperature gasification environment, compact design of a seal-leg that can handle high mass solids flux, design of nozzles that eliminate plugging, uniform aeration of the large-diameter standpipe, oxidant injection at the cyclone exits to effectively modulate gasifier exit temperature, and reduction in overall height of the gasifier with a modified non-mechanical valve.
Theory and experimental validation of SPLASH (Single Panel Lamp and Shroud Helper).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larsen, Marvin Elwood; Porter, Jason M.
2005-06-01
The radiant heat test facility develops test sets providing well-characterized thermal environments, often representing fires. Many of the components and procedures have become standardized to such an extent that the development of a specialized design tool was appropriate. SPLASH (Single Panel Lamp and Shroud Helper) is that tool. SPLASH is implemented as a user-friendly program that allows a designer to describe a test setup in terms of parameters such as lamp number, power, position, and separation distance. Thermal radiation is the dominant mechanism of heat transfer and the SPLASH model solves a radiation enclosure problem to estimate temperature distributions in a shroud providing the boundary condition of interest. Irradiance distribution on a specified viewing plane is also estimated. This document provides the theoretical development for the underlying model. A series of tests were conducted to characterize SPLASH's ability to analyze lamp and shroud systems. The comparison suggests that SPLASH succeeds as a design tool. Simplifications made to keep the model tractable are demonstrated to result in estimates that are only approximately as uncertain as many of the properties and characteristics of the operating environment.
Feedback brake distribution control for minimum pitch
NASA Astrophysics Data System (ADS)
Tavernini, Davide; Velenis, Efstathios; Longo, Stefano
2017-06-01
The distribution of brake forces between front and rear axles of a vehicle is typically specified such that the same level of brake force coefficient is imposed at both front and rear wheels. This condition is known as 'ideal' distribution and it is required to deliver the maximum vehicle deceleration and minimum braking distance. For subcritical braking conditions, the deceleration demand may be delivered by different distributions between front and rear braking forces. In this research we show how to obtain the optimal distribution which minimises the pitch angle of a vehicle and hence enhances driver subjective feel during braking. A vehicle model including suspension geometry features is adopted. The problem of the minimum pitch brake distribution for a varying deceleration level demand is solved by means of a model predictive control (MPC) technique. To address the problem of the undesirable pitch rebound caused by a full-stop of the vehicle, a second controller is designed and implemented independently from the braking distribution in use. An extended Kalman filter is designed for state estimation and implemented in a high fidelity environment together with the MPC strategy. The proposed solution is compared with the reference 'ideal' distribution as well as another previous feed-forward solution.
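The 'ideal' distribution described above reduces to a short calculation once quasi-static load transfer is written out. A minimal sketch, assuming a rigid two-axle vehicle and illustrative parameter values (none are taken from the paper):

```python
# Hedged sketch: 'ideal' front/rear brake force distribution for a demanded
# deceleration, assuming quasi-static longitudinal load transfer. Parameter
# values are illustrative placeholders, not the paper's.

G = 9.81  # gravitational acceleration [m/s^2]

def ideal_brake_distribution(m, lf, lr, h, ax):
    """Return (F_front, F_rear) brake forces [N] that impose the same
    brake force coefficient (F/Fz) on both axles.

    m  : vehicle mass [kg]
    lf : CoG-to-front-axle distance [m]
    lr : CoG-to-rear-axle distance [m]
    h  : CoG height [m]
    ax : demanded deceleration [m/s^2], positive when braking
    """
    L = lf + lr
    # Axle vertical loads, including load transfer toward the front axle
    Fz_front = m * (G * lr + ax * h) / L
    Fz_rear = m * (G * lf - ax * h) / L
    # 'Ideal' distribution: both axles use the same force coefficient ax/G
    mu_used = ax / G
    return mu_used * Fz_front, mu_used * Fz_rear

if __name__ == "__main__":
    Ff, Fr = ideal_brake_distribution(m=1500, lf=1.2, lr=1.5, h=0.55, ax=5.0)
    print(f"front {Ff:.0f} N, rear {Fr:.0f} N")  # front axle takes the larger share
```

Because both axles use the same force coefficient ax/G, the two forces sum exactly to the demanded total m*ax.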
NASA Technical Reports Server (NTRS)
Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Ackley, Keith A.; Crump, John W., IV; Henderson, Richard; Futrell, Michael T.
1991-01-01
The Framework Programmable Software Development Platform (FPP) is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software environment. Guided by the model, this system development framework will take advantage of an integrated operating environment to automate effectively the management of the software development process so that costly mistakes during the development phase can be eliminated. The focus here is on the design of components that make up the FPP. These components serve as supporting systems for the Integration Mechanism and the Framework Processor and provide the 'glue' that ties the FPP together. Also discussed are the components that allow the platform to operate in a distributed, heterogeneous environment and to manage the development and evolution of software system artifacts.
Rio: a dynamic self-healing services architecture using Jini networking technology
NASA Astrophysics Data System (ADS)
Clarke, James B.
2002-06-01
Current mainstream distributed Java architectures offer great capabilities embracing conventional enterprise architecture patterns and designs. These traditional systems provide robust transaction-oriented environments that are in large part focused on data and host processors. Typically, these implementations require that an entire application be deployed on every machine that will be used as a compute resource. In order for this to happen, the application is usually taken down, installed and started with all systems in sync and knowing about each other. Static environments such as these are extremely difficult to set up, deploy and administer.
NASA Astrophysics Data System (ADS)
Desjardins, Elia Nelson
2011-12-01
This dissertation examines the ways children use language to construct scientific knowledge in designed informal learning environments such as museums, aquariums, and zoos, with particular attention to autobiographical storytelling. This study takes as its foundation cultural-historical activity theory, defining learning as increased participation in meaningful, knowledge-based activity. It aims to improve experience design in informal learning environments by facilitating and building upon language interactions that are already in use by learners in these contexts. Fieldwork consists of audio recordings of individual children aged 4-12 as they explored a museum of science and technology with their families. Recordings were transcribed and coded according to the activity (task) and context (artifact/exhibit) in which the child was participating during each sequence of utterances. Additional evidence is provided by supplemental interviews with museum educators. Analysis suggests that short autobiographical stories can provide opportunities for learners to access metacognitive knowledge, for educators to assess learners' prior experience and knowledge, and for designers to engage affective pathways in order to increase participation that is both active and contemplative. Design implications are discussed and a design proposal for a distributed informal learning environment is presented.
Incremental learning of concept drift in nonstationary environments.
Elwell, Ryan; Polikar, Robi
2011-10-01
We introduce an ensemble of classifiers-based approach for incremental learning of concept drift, characterized by nonstationary environments (NSEs), where the underlying data distributions change over time. The proposed algorithm, named Learn++.NSE, learns from consecutive batches of data without making any assumptions on the nature or rate of drift; it can learn from such environments that experience constant or variable rate of drift, addition or deletion of concept classes, as well as cyclical drift. The algorithm learns incrementally, as other members of the Learn++ family of algorithms, that is, without requiring access to previously seen data. Learn++.NSE trains one new classifier for each batch of data it receives, and combines these classifiers using a dynamically weighted majority voting. The novelty of the approach is in determining the voting weights, based on each classifier's time-adjusted accuracy on current and past environments. This approach allows the algorithm to recognize, and act accordingly to, changes in the underlying data distributions, as well as to a possible reoccurrence of an earlier distribution. We evaluate the algorithm on several synthetic datasets designed to simulate a variety of nonstationary environments, as well as a real-world weather prediction dataset. Comparisons with several other approaches are also included. Results indicate that Learn++.NSE can track the changing environments very closely, regardless of the type of concept drift. To allow future use, comparison and benchmarking by interested researchers, we also release the data used in this paper. © 2011 IEEE
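The combination rule lends itself to a compact sketch. The following simplified ensemble trains one classifier per batch and votes with time-decayed error weights; it illustrates the idea, but it is not the published Learn++.NSE algorithm (which uses sigmoid-weighted error averaging), and the decay factor is an assumption:

```python
# Hedged sketch of batch-incremental learning with dynamically weighted
# majority voting, in the spirit of Learn++.NSE. Simplified illustration only.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

class SimpleNSEEnsemble:
    """One classifier per batch, combined by time-weighted majority voting."""

    def __init__(self, decay=0.5):
        self.members, self.histories, self.decay = [], [], decay

    def partial_fit(self, X, y):
        # Score existing members on the newest batch before training on it
        for clf, hist in zip(self.members, self.histories):
            err = float(np.mean(clf.predict(X) != y))
            hist.append(min(max(err, 1e-6), 1 - 1e-6))
        # Train one new classifier on the current batch
        clf = DecisionTreeClassifier(max_depth=5).fit(X, y)
        self.members.append(clf)
        self.histories.append([1e-6])  # assume near-zero error on its own batch

    def predict(self, X):
        labels = np.unique(np.concatenate([m.classes_ for m in self.members]))
        votes = np.zeros((len(X), len(labels)))
        for clf, hist in zip(self.members, self.histories):
            # Time-adjusted error: recent batches weigh more (newest last)
            w_time = self.decay ** np.arange(len(hist))[::-1]
            err = float(np.average(hist, weights=w_time))
            weight = np.log((1 - err) / err) if err < 0.5 else 0.0
            pred = clf.predict(X)
            for j, lab in enumerate(labels):
                votes[:, j] += weight * (pred == lab)
        return labels[votes.argmax(axis=1)]

# Usage on drifting batches:
#   ens = SimpleNSEEnsemble(); ens.partial_fit(X1, y1); ens.partial_fit(X2, y2)
#   y_hat = ens.predict(X_new)
```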
High Level Analysis, Design and Validation of Distributed Mobile Systems with CoreASM
NASA Astrophysics Data System (ADS)
Farahbod, R.; Glässer, U.; Jackson, P. J.; Vajihollahi, M.
System design is a creative activity calling for abstract models that facilitate reasoning about the key system attributes (desired requirements and resulting properties) so as to ensure these attributes are properly established prior to actually building a system. We explore here the practical side of using the abstract state machine (ASM) formalism in combination with the CoreASM open source tool environment for high-level design and experimental validation of complex distributed systems. Emphasizing the early phases of the design process, a guiding principle is to support freedom of experimentation by minimizing the need for encoding. CoreASM has been developed and tested building on a broad scope of applications, spanning computational criminology, maritime surveillance and situation analysis. We critically reexamine here the CoreASM project in light of three different application scenarios.
NASA Astrophysics Data System (ADS)
Mo, Hong-yuan; Wang, Ying-jie; Yu, Zhuo-yuan
2009-07-01
The Poverty Alleviation Monitoring and Evaluation System (PAMES) is introduced in this paper. The authors present the environment platform selection and details of system design and realization. Unlike traditional poverty alleviation research, this paper develops a new analytical geo-visualization approach to study the distribution and causes of poverty phenomena within a Geographic Information System (GIS). Based on the most detailed poverty population data, the spatial location and population statistical indicators of poverty villages in Jiangxi province, the distribution characteristics of the poverty population are detailed. The research results can provide poverty alleviation decision support from a spatial-temporal view. The results suggest that the administrative unit for poverty-stricken areas should be changed from county to village according to the spatial distribution pattern of poverty.
Object-oriented Tools for Distributed Computing
NASA Technical Reports Server (NTRS)
Adler, Richard M.
1993-01-01
Distributed computing systems are proliferating, owing to the availability of powerful, affordable microcomputers and inexpensive communication networks. A critical problem in developing such systems is getting application programs to interact with one another across a computer network. Remote interprogram connectivity is particularly challenging across heterogeneous environments, where applications run on different kinds of computers and operating systems. NetWorks! (trademark) is an innovative software product that provides an object-oriented messaging solution to these problems. This paper describes the design and functionality of NetWorks! and illustrates how it is being used to build complex distributed applications for NASA and in the commercial sector.
PhyLIS: a simple GNU/Linux distribution for phylogenetics and phyloinformatics.
Thomson, Robert C
2009-07-30
PhyLIS is a free GNU/Linux distribution that is designed to provide a simple, standardized platform for phylogenetic and phyloinformatic analysis. The operating system incorporates most commonly used phylogenetic software, which has been pre-compiled and pre-configured, allowing for straightforward application of phylogenetic methods and development of phyloinformatic pipelines in a stable Linux environment. The software is distributed as a live CD and can be installed directly or run from the CD without making changes to the computer. PhyLIS is available for free at http://www.eve.ucdavis.edu/rcthomson/phylis/.
The implementation and use of Ada on distributed systems with high reliability requirements
NASA Technical Reports Server (NTRS)
Knight, J. C.
1986-01-01
The use and implementation of Ada in distributed environments in which reliability is the primary concern were investigated. A distributed system, programmed entirely in Ada, was studied to assess the use of individual tasks without concern for the processor used. Continued development and testing of the fault-tolerant Ada testbed; development of suggested changes to Ada to cope with the failures of interest; design of approaches to fault-tolerant software in real-time systems, and the integration of these ideas into Ada; and the preparation of various papers and presentations were discussed.
Using Dedal to share and reuse distributed engineering design information
NASA Technical Reports Server (NTRS)
Baya, Vinod; Baudin, Catherine; Mabogunje, Ade; Das, Aseem; Cannon, David M.; Leifer, Larry J.
1994-01-01
The overall goal of the project is to facilitate the reuse of previous design experience for the maintenance, repair and redesign of artifacts in the electromechanical engineering domain. An engineering team creates information in the form of meeting summaries, project memos, progress reports, engineering notes, spreadsheet calculations and CAD drawings. Design information captured in these media is difficult to reuse because the way design concepts are referred to evolves over the life of a project and because decisions, requirements and structure are interrelated but rarely explicitly linked. Based on protocol analysis of the information seeking behavior of designers, we defined a language to describe the content and the form of design records and implemented this language in Dedal, a tool for indexing, modeling and retrieving design information. We first describe the approach to indexing and retrieval in Dedal. Next we describe ongoing work in extending Dedal's capabilities to a distributed environment by integrating it with the World Wide Web. This will enable members of a design team who are not co-located to share and reuse information.
Efficient Software Systems for Cardio Surgical Departments
NASA Astrophysics Data System (ADS)
Fountoukis, S. G.; Diomidous, M. J.
2009-08-01
Herein, the design, implementation and deployment of an object oriented software system, suitable for the monitoring of cardio surgical departments, is investigated. Distributed design architectures are applied and the implemented software system can be deployed on distributed infrastructures. The software is flexible and adaptable to any cardio surgical environment regardless of the department resources used. The system exploits the relations and the interdependency of the successive bed positions that the patients occupy at the different health care units during their stay in a cardio surgical department, to determine bed availabilities and to perform patient scheduling and instant rescheduling whenever necessary. It also aims at the successful and efficient monitoring of the workings of cardio surgical departments.
ERIC Educational Resources Information Center
Khan, Badrul
2005-01-01
"E-Learning QUICK Checklist" walks readers through the various factors important to developing, evaluating and implementing an open, flexible and distributed learning environment. This book is designed as a quick checklist for e-learning. It contains many practical items that the reader can use as review criteria to check if e-learning modules,…
Distributed Learning Environment: Major Functions, Implementation, and Continuous Improvement.
ERIC Educational Resources Information Center
Converso, Judith A.; Schaffer, Scott P.; Guerra, Ingrid J.
The content of this paper is based on a development plan currently in design for the U.S. Navy in conjunction with the Learning Systems Institute at Florida State University. Leading research (literature review) references and case study ("best practice") references are presented as supporting evidence for the results-oriented…
Temporal Withdrawal Behaviors in an Educational Policy Context
ERIC Educational Resources Information Center
Rosenblatt, Zehava; Shapira-Lishchinsky, Orly
2017-01-01
Purpose: The purpose of this paper is to investigate the differential relations between two teacher withdrawal behaviors: work absence and lateness, and two types of school ethics: organizational justice (distributive, procedural) and ethical climate (formal, caring), all in the context of school turbulent environment. Design/methodology/approach:…
NASA Astrophysics Data System (ADS)
Schubert, Oliver J.; Tolle, Charles R.
2004-09-01
Over the last decade the world has seen numerous autonomous vehicle programs. Wheels and track designs are the basis for many of these vehicles. This is primarily due to four main reasons: a vast preexisting knowledge base for these designs, energy efficiency of power sources, scalability of actuators, and the lack of control systems technologies for handling alternate highly complex distributed systems. Though large efforts seek to improve the mobility of these vehicles, many limitations still exist for these systems within unstructured environments, e.g. limited mobility within industrial and nuclear accident sites where existing plant configurations have been extensively changed. These unstructured operational environments include missions for exploration, reconnaissance, and emergency recovery of objects within reconfigured or collapsed structures, e.g. bombed buildings. More importantly, these environments present a clear and present danger for direct human interactions during the initial phases of recovery operations. Clearly, the current classes of autonomous vehicles are incapable of performing in these environments. Thus the next generation of designs must include highly reconfigurable and flexible autonomous robotic platforms. This new breed of autonomous vehicles will be both highly flexible and environmentally adaptable. Presented in this paper is one of the most successful designs from nature, the snake-eel-worm (SEW). This design implements shape memory alloy (SMA) actuators which allow for scaling of the robotic SEW designs from sub-micron scale to heavy industrial implementations without major conceptual redesigns as required in traditional hydraulic, pneumatic, or motor driven systems. Autonomous vehicles based on the SEW design possess the ability to easily move between air based environments and fluid based environments with limited or no reconfiguration. Under a SEW designed vehicle, one not only achieves vastly improved maneuverability within a highly unstructured environment, but also gains robotic manipulation abilities, normally relegated as secondary add-ons within existing vehicles, all within one small condensed package. The prototype design presented includes a Beowulf style computing system for advanced guidance calculations and visualization computations. All of the design and implementation pertaining to the SEW robot discussed in this paper is the product of a student team under the summer fellowship program at the DOE's INEEL.
Environments for online maritime simulators with cloud computing capabilities
NASA Astrophysics Data System (ADS)
Raicu, Gabriel; Raicu, Alexandra
2016-12-01
This paper presents the cloud computing environments, network principles and methods for graphical development in realistic naval simulation, naval robotics and virtual interactions. The aim of this approach is to achieve good simulation quality in large networked environments using open source solutions designed for educational purposes. Realistic rendering of maritime environments requires near real-time frameworks with enhanced computing capabilities during distance interactions. E-Navigation concepts coupled with the latest achievements in virtual and augmented reality will enhance the overall experience, leading to new developments and innovations. We deal with a multiprocessing situation using advanced technologies and distributed applications, using a remote ship scenario and automation of ship operations.
Rapid Analysis of Mass Distribution of Radiation Shielding
NASA Technical Reports Server (NTRS)
Zapp, Edward
2007-01-01
Radiation Shielding Evaluation Toolset (RADSET) is a computer program that rapidly calculates the spatial distribution of mass of an arbitrary structure for use in ray-tracing analysis of the radiation-shielding properties of the structure. RADSET was written to be used in conjunction with unmodified commercial computer-aided design (CAD) software that provides access to data on the structure and generates selected three-dimensional-appearing views of the structure. RADSET obtains raw geometric, material, and mass data on the structure from the CAD software. From these data, RADSET calculates the distribution(s) of the masses of specific materials about any user-specified point(s). The results of these mass-distribution calculations are imported back into the CAD computing environment, wherein the radiation-shielding calculations are performed.
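At its simplest, the mass-distribution step RADSET performs can be approximated as binning component masses by distance from a user-specified dose point. A minimal sketch, with toy component data standing in for the CAD export (names and values are illustrative placeholders):

```python
# Hedged sketch: cumulative mass about a user-specified point, the kind of
# mass-distribution summary a ray-tracing shielding analysis consumes.
# Point-mass components and the dose point are illustrative placeholders.

import numpy as np

def cumulative_mass(masses, positions, point, radii):
    """Cumulative mass [kg] of components lying within each radius [m]
    of a user-specified dose point."""
    d = np.linalg.norm(np.asarray(positions, float) - np.asarray(point, float), axis=1)
    m = np.asarray(masses, float)
    return np.array([m[d <= r].sum() for r in radii])

# Toy "CAD export": (mass, position) of structural components
masses = [120.0, 45.0, 300.0, 10.0]
positions = [(0.5, 0.0, 0.0), (1.2, 0.3, 0.1), (2.0, 2.0, 0.0), (0.1, 0.1, 0.1)]
print(cumulative_mass(masses, positions, point=(0, 0, 0), radii=[0.5, 1.0, 2.0, 3.0]))
```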
Systems Issues In Terrestrial Fiber Optic Link Reliability
NASA Astrophysics Data System (ADS)
Spencer, James L.; Lewin, Barry R.; Lee, T. Frank S.
1990-01-01
This paper reviews fiber optic system reliability issues from three different viewpoints - availability, operating environment, and evolving technologies. Present availability objectives for interoffice links and for the distribution loop must be re-examined for applications such as the Synchronous Optical Network (SONET), Fiber-to-the-Home (FTTH), and analog services. The hostile operating environments of emerging applications (such as FTTH) must be carefully considered in system design as well as reliability assessments. Finally, evolving technologies might require the development of new reliability testing strategies.
Basic avionics module design for general aviation aircraft
NASA Technical Reports Server (NTRS)
Smyth, R. K.; Smyth, D. E.
1978-01-01
The design of an advanced digital avionics system (basic avionics module) for general aviation aircraft operated with a single pilot under IFR conditions is described. The microprocessor-based system provided all avionic functions, including flight management, navigation, and lateral flight control. The mode selection was interactive with the pilot. The system used a navigation map data base to provide operation in the current and planned air traffic control environment. The system design included software design listings for some of the required modules. The distributed microcomputer system uses the IEEE 488 bus to interconnect the microcomputers and sensors.
Control and performance of the AGS and AGS Booster Main Magnet Power Supplies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reece, R.K.; Casella, R.; Culwick, B.
1993-06-01
Techniques for precision control of the main magnet power supplies for the AGS and AGS Booster synchrotron will be discussed. Both synchrotrons are designed to operate in a Pulse-to-Pulse Modulation (PPM) environment with a Supercycle Generator defining and distributing global timing events for the AGS Facility. Details of modelling, real-time feedback and feedforward systems, generation and distribution of real time field data, operational parameters and an overview of performance for both machines are included.
Competitive-Cooperative Automated Reasoning from Distributed and Multiple Source of Data
NASA Astrophysics Data System (ADS)
Fard, Amin Milani
Knowledge extraction from distributed database systems has been investigated during the past decade in order to analyze billions of information records. In this work, a competitive deduction approach in a heterogeneous data grid environment is proposed using classic data mining and statistical methods. By applying a game theory concept in a multi-agent model, we tried to design a policy for hierarchical knowledge discovery and inference fusion. To demonstrate the system in operation, a sample multi-expert system has also been developed.
Hagen, R. W.; Ambos, H. D.; Browder, M. W.; Roloff, W. R.; Thomas, L. J.
1979-01-01
The Clinical Physiologic Research System (CPRS) developed from our experience in applying computers to medical instrumentation problems. This experience revealed a set of applications with a commonality in data acquisition, analysis, input/output, and control needs that could be met by a portable system. The CPRS demonstrates a practical methodology for integrating commercial instruments with distributed modular elements of local design in order to make facile responses to changing instrumentation needs in clinical environments.
BioNetFit: a fitting tool compatible with BioNetGen, NFsim and distributed computing environments
Thomas, Brandon R.; Chylek, Lily A.; Colvin, Joshua; ...
2015-11-09
Rule-based models are analyzed with specialized simulators, such as those provided by the BioNetGen and NFsim open-source software packages. In this paper, we present BioNetFit, a general-purpose fitting tool that is compatible with BioNetGen and NFsim. BioNetFit is designed to take advantage of distributed computing resources. This feature facilitates fitting (i.e., optimization of parameter values for consistency with data) when simulations are computationally expensive.
Space station WP-04 power system. Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
Hallinan, G. J.
1987-01-01
Major study activities and results of the phase B study contract for the preliminary design of the space station Electrical Power System (EPS) are summarized. The areas addressed include the general system design, man-tended option, automation and robotics, evolutionary growth, software development environment, advanced development, customer accommodations, operations planning, product assurance, and design and development phase planning. The EPS consists of a combination photovoltaic and solar dynamic power generation subsystem and a power management and distribution (PMAD) subsystem. System trade studies and costing activities are also summarized.
Storage system software solutions for high-end user needs
NASA Technical Reports Server (NTRS)
Hogan, Carole B.
1992-01-01
Today's high-end storage user is one that requires rapid access to a reliable terabyte-capacity storage system running in a distributed environment. This paper discusses conventional storage system software and concludes that this software, designed for other purposes, cannot meet high-end storage requirements. The paper also reviews the philosophy and design of evolving storage system software. It concludes that this new software, designed with high-end requirements in mind, provides the potential for solving not only the storage needs of today but those of the foreseeable future as well.
Bi-Level Integrated System Synthesis (BLISS) for Concurrent and Distributed Processing
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Altus, Troy D.; Phillips, Matthew; Sandusky, Robert
2002-01-01
The paper introduces a new version of the Bi-Level Integrated System Synthesis (BLISS) methods intended for optimization of engineering systems conducted by distributed specialty groups working concurrently and using a multiprocessor computing environment. The method decomposes the overall optimization task into subtasks associated with disciplines or subsystems where the local design variables are numerous and a single, system-level optimization whose design variables are relatively few. The subtasks are fully autonomous as to their inner operations and decision making. Their purpose is to eliminate the local design variables and generate a wide spectrum of feasible designs whose behavior is represented by Response Surfaces to be accessed by a system-level optimization. It is shown that, if the problem is convex, the solution of the decomposed problem is the same as that obtained without decomposition. A simplified example of an aircraft design shows the method working as intended. The paper includes a discussion of the method merits and demerits and recommendations for further research.
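The decomposition pattern can be shown in miniature: a subtask optimizes away its local variables for sampled values of a shared variable and publishes a cheap response surface, which the system level then optimizes alone. A toy quadratic sketch under these assumptions (not the BLISS formulation itself; the objective and variables are invented for illustration):

```python
# Hedged sketch of the decomposition pattern: subsystem eliminates local
# variables, publishes a response surface; system level optimizes over it.

import numpy as np
from scipy.optimize import minimize, minimize_scalar

def subsystem_optimum(z):
    """For a given shared variable z, optimize away the local variables x
    and return only the achieved objective value."""
    obj = lambda x: (x[0] - z) ** 2 + (x[1] + 0.5 * z) ** 2 + 0.5 * (z - 2.0) ** 2
    return minimize(obj, x0=np.zeros(2)).fun

# Build a cheap quadratic response surface over samples of the shared variable
zs = np.linspace(-5.0, 5.0, 11)
surface = np.poly1d(np.polyfit(zs, [subsystem_optimum(z) for z in zs], deg=2))

# System level: optimize the shared variable against the surface only
res = minimize_scalar(lambda z: surface(z) + 0.1 * z ** 2,
                      bounds=(-5, 5), method="bounded")
print(f"system-level optimum z* = {res.x:.3f}")
```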
Sharing Data and Analytical Resources Securely in a Biomedical Research Grid Environment
Langella, Stephen; Hastings, Shannon; Oster, Scott; Pan, Tony; Sharma, Ashish; Permar, Justin; Ervin, David; Cambazoglu, B. Barla; Kurc, Tahsin; Saltz, Joel
2008-01-01
Objectives: To develop a security infrastructure to support controlled and secure access to data and analytical resources in a biomedical research Grid environment, while facilitating resource sharing among collaborators. Design: A Grid security infrastructure, called Grid Authentication and Authorization with Reliably Distributed Services (GAARDS), is developed as a key architecture component of the NCI-funded cancer Biomedical Informatics Grid (caBIG™). The GAARDS is designed to support in a distributed environment 1) efficient provisioning and federation of user identities and credentials; 2) group-based access control support with which resource providers can enforce policies based on community accepted groups and local groups; and 3) management of a trust fabric so that policies can be enforced based on required levels of assurance. Measurements: GAARDS is implemented as a suite of Grid services and administrative tools. It provides three core services: Dorian for management and federation of user identities, Grid Trust Service for maintaining and provisioning a federated trust fabric within the Grid environment, and Grid Grouper for enforcing authorization policies based on both local and Grid-level groups. Results: The GAARDS infrastructure is available as a stand-alone system and as a component of the caGrid infrastructure. More information about GAARDS can be accessed at http://www.cagrid.org. Conclusions: GAARDS provides a comprehensive system to address the security challenges associated with environments in which resources may be located at different sites, requests to access the resources may cross institutional boundaries, and user credentials are created, managed, and revoked dynamically in a de-centralized manner. PMID:18308979
NASA Astrophysics Data System (ADS)
Ren, Lei; Zhang, Lin; Tao, Fei; Zhang, Xiaolong (Luke); Luo, Yongliang; Zhang, Yabin
2012-08-01
Multidisciplinary design of complex products leads to an increasing demand for high performance simulation (HPS) platforms. One great challenge is how to achieve highly efficient utilisation of large-scale simulation resources in distributed and heterogeneous environments. This article reports a virtualisation-based methodology to realise an HPS platform. This research is driven by the issues concerning large-scale simulation resource deployment and complex simulation environment construction, efficient and transparent utilisation of fine-grained simulation resources and highly reliable simulation with fault tolerance. A framework of virtualisation-based simulation platform (VSIM) is first proposed. Then the article investigates and discusses key approaches in VSIM, including simulation resource modelling, a method for automatically deploying simulation resources for dynamic construction of the system environment, and a live migration mechanism in case of faults in run-time simulation. Furthermore, the proposed methodology is applied to a multidisciplinary design system for aircraft virtual prototyping and some experiments are conducted. The experimental results show that the proposed methodology can (1) significantly improve the utilisation of fine-grained simulation resources, (2) result in a great reduction in deployment time and an increased flexibility for simulation environment construction, and (3) achieve fault-tolerant simulation.
Development of Charge to Mass Ratio Microdetector for Future Mars Mission
NASA Technical Reports Server (NTRS)
Chen, Yuan-Lian Albert
2003-01-01
The Mars environment comprises a dry, cold and low air pressure atmosphere with low gravity (0.38 g) and high resistivity soil. The global dust storms that cover a large portion of Mars are observed often from Earth. This environment provides ideal conditions for triboelectric charging. The extremely dry conditions on the Martian surface have raised concerns that electrostatic charge buildup will not be dissipated easily. If triboelectrically generated charge cannot be dissipated or avoided, then dust will accumulate on charged surfaces and electrostatic discharge may cause hazards for future exploration missions. The low surface conductivity on Mars helps to prolong the charge decay on dust particles and soil. A better understanding of the physics of charged Martian dust particles is essential to future Mars missions. We researched and designed two sensors, a velocity/charge sensor and a PZT momentum sensor, to measure the velocity, charge and mass distributions of Martian dust particles. These sensors were fabricated at the NASA Kennedy Space Center Electrostatic and Surface Physics Laboratory and calibrated. The momentum sensor is capable of measuring 45 μm size particles. The designed detector is very simple, robust, without moving parts, and does not require a high voltage power supply. The two sensors are combined to form the Dust Microdetector - CHAL.
Brewin, James; Tang, Jessica; Dasgupta, Prokar; Khan, Muhammad S; Ahmed, Kamran; Bello, Fernando; Kneebone, Roger; Jaye, Peter
2015-07-01
To evaluate the face, content and construct validity of the distributed simulation (DS) environment for technical and non-technical skills training in endourology. To evaluate the educational impact of DS for urology training. DS offers a portable, low-cost simulated operating room environment that can be set up in any open space. A prospective mixed methods design using established validation methodology was conducted in this simulated environment with 10 experienced and 10 trainee urologists. All participants performed a simulated prostate resection in the DS environment. Outcome measures included surveys to evaluate the DS, as well as comparative analyses of experienced and trainee urologist's performance using real-time and 'blinded' video analysis and validated performance metrics. Non-parametric statistical methods were used to compare differences between groups. The DS environment demonstrated face, content and construct validity for both non-technical and technical skills. Kirkpatrick level 1 evidence for the educational impact of the DS environment was shown. Further studies are needed to evaluate the effect of simulated operating room training on real operating room performance. This study has shown the validity of the DS environment for non-technical, as well as technical skills training. DS-based simulation appears to be a valuable addition to traditional classroom-based simulation training. © 2014 The Authors BJU International © 2014 BJU International Published by John Wiley & Sons Ltd.
A method of distributed avionics data processing based on SVM classifier
NASA Astrophysics Data System (ADS)
Guo, Hangyu; Wang, Jinyan; Kang, Minyang; Xu, Guojing
2018-03-01
Under the environment of system combat, in order to solve the problem of managing and analyzing the massive heterogeneous data on a multi-platform avionics system, this paper proposes a management solution called an avionics "resource cloud" based on big data technology, and designs an aided-decision classifier based on the SVM algorithm. We design an experiment with an STK simulation; the results show that this method has high accuracy and broad application prospects.
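As a sketch of the classifier side, a minimal SVM-based aided-decision pipeline might look as follows; the feature vectors and labels are synthetic placeholders, since the paper's avionics data and features are not described here:

```python
# Hedged sketch: a minimal SVM aided-decision classifier of the kind described
# above. Features, labels and hyperparameters are illustrative placeholders.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy "platform status" feature vectors and binary decision labels
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X[:150], y[:150])           # train on the first 150 samples
print("holdout accuracy:", clf.score(X[150:], y[150:]))
```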
Research into display sharing techniques for distributed computing environments
NASA Technical Reports Server (NTRS)
Hugg, Steven B.; Fitzgerald, Paul F., Jr.; Rosson, Nina Y.; Johns, Stephen R.
1990-01-01
The X-based Display Sharing solution for distributed computing environments is described. The Display Sharing prototype includes the base functionality for telecast and display copy requirements. Since the prototype implementation is modular and the system design provides flexibility for Mission Control Center Upgrade (MCCU) operational considerations, the prototype implementation can be the baseline for a production Display Sharing implementation. To facilitate the process, the following discussions are presented: theory of operation; system architecture; using the prototype; software description; research tools; prototype evaluation; and outstanding issues. The prototype is based on the concept of a dedicated central host performing the majority of the Display Sharing processing, allowing minimal impact on each individual workstation. Each workstation participating in Display Sharing hosts programs that facilitate the user's access to the Display Sharing host machine.
Random heteropolymers preserve protein function in foreign environments
NASA Astrophysics Data System (ADS)
Panganiban, Brian; Qiao, Baofu; Jiang, Tao; DelRe, Christopher; Obadia, Mona M.; Nguyen, Trung Dac; Smith, Anton A. A.; Hall, Aaron; Sit, Izaac; Crosby, Marquise G.; Dennis, Patrick B.; Drockenmuller, Eric; Olvera de la Cruz, Monica; Xu, Ting
2018-03-01
The successful incorporation of active proteins into synthetic polymers could lead to a new class of materials with functions found only in living systems. However, proteins rarely function under the conditions suitable for polymer processing. On the basis of an analysis of trends in protein sequences and characteristic chemical patterns on protein surfaces, we designed four-monomer random heteropolymers to mimic intrinsically disordered proteins for protein solubilization and stabilization in non-native environments. The heteropolymers, with optimized composition and statistical monomer distribution, enable cell-free synthesis of membrane proteins with proper protein folding for transport and enzyme-containing plastics for toxin bioremediation. Controlling the statistical monomer distribution in a heteropolymer, rather than the specific monomer sequence, affords a new strategy to interface with biological systems for protein-based biomaterials.
Modeling the human body/seat system in a vibration environment.
Rosen, Jacob; Arcan, Mircea
2003-04-01
The vibration environment is a common man-made artificial surrounding with which humans have a limited tolerance to cope due to their body dynamics. This research studied the dynamic characteristics of a seated human body/seat system in a vibration environment. The main result is a multi-degree-of-freedom lumped parameter model that synthesizes two basic dynamics: (i) global human dynamics, the apparent mass phenomenon, including a systematic set of model parameters for simulating various conditions like body posture, backrest, footrest, muscle tension, and vibration directions, and (ii) local human dynamics, represented by the human pelvis/vibrating seat contact, using a cushioning interface. The model and its selected parameters successfully described the main effects of the apparent mass phenomenon compared to experimental data documented in the literature. The model provided an analytical tool for human body dynamics research. It also enabled a primary tool for seat and cushioning design. The model was further used to develop design guidelines for a composite cushion using the principle of quasi-uniform body/seat contact force distribution. In terms of evenly distributing the contact forces, the best result for the different materials and cushion geometries simulated in the current study was achieved using a two-layer shaped geometry cushion built from three materials. Combining the geometry and the mechanical characteristics of a structure under large deformation into a lumped parameter model enables successful analysis of the human/seat interface system and provides practical results for body protection in a dynamic environment.
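A single degree-of-freedom reduction of such a lumped parameter model already reproduces the apparent mass resonance of the seated body. A minimal sketch with illustrative parameter values (the paper's model has more degrees of freedom and posture-dependent parameters):

```python
# Hedged sketch: frequency response of a 1-DOF mass-spring-damper
# approximation of the seated body's apparent mass M(w) = F_seat / a_seat.
# Parameter values are illustrative, not taken from the paper.

import numpy as np

def apparent_mass(freq_hz, m=55.0, k=4.4e4, c=1.3e3):
    """Complex apparent mass of a base-excited 1-DOF system.

    m : moving body mass [kg], k : stiffness [N/m], c : damping [N s/m]
    """
    w = 2 * np.pi * np.asarray(freq_hz)
    return m * (k + 1j * w * c) / (k - m * w**2 + 1j * w * c)

freqs = np.linspace(0.5, 20, 200)
M = np.abs(apparent_mass(freqs))
print(f"peak |M| = {M.max():.1f} kg near {freqs[M.argmax()]:.1f} Hz")
# With these values the resonance falls near 4-5 Hz, the range commonly
# reported for seated humans under vertical vibration.
```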
The Sound-Amplified Environment and Reading Achievement in Elementary Students
ERIC Educational Resources Information Center
Betebenner, Elizabeth Whytlaw
2011-01-01
This study was designed to address the results of using sound enhancement technology in classrooms as a method of enhancing the auditory experience for students seated in the rear sections of classrooms. Previous research demonstrated the efficacy of using sound distribution systems (SDS) in real time to enhance speech perception (Anderson &…
Planetary quarantine. Space research and technology
NASA Technical Reports Server (NTRS)
1973-01-01
The impact of satisfying satellite quarantine constraints on outer planet missions and spacecraft design is studied by considering the effects of planetary radiation belts, solar wind radiation, and space vacuum on microorganism survival. Post-launch recontamination studies evaluate the effects of mission environments on particle distributions on spacecraft surfaces and effective cleaning and decontamination techniques.
ERIC Educational Resources Information Center
Deek, Fadi; Espinosa, Idania
2005-01-01
Traditionally, novice programmers have had difficulties in three distinct areas: breaking down a given problem, designing a workable solution, and debugging the resulting program. Many programming environments, software applications, and teaching tools have been developed to address the difficulties faced by these novices. Along with advancements…
Spiral Arm Morphology in Cluster Environment
NASA Astrophysics Data System (ADS)
Choi, Isaac Yeoun-Gyu; Ann, Hong Bae
2011-10-01
We examine the dependence of the morphology of spiral galaxies on the environment using the KIAS Value Added Galaxy Catalog (VAGC) which is derived from the Sloan Digital Sky Survey (SDSS) DR7. Our goal is to understand whether the local environment or global conditions dominate in determining the morphology of spiral galaxies. For the analysis, we conduct a morphological classification of galaxies in 20 X-ray selected Abell clusters up to z˜0.06, using SDSS color images and the X-ray data from the Northern ROSAT All-Sky (NORAS) catalog. We analyze the distribution of arm classes along the clustercentric radius as well as that of Hubble types. To segregate the effect of the local environment from the global environment, we compare the morphological distribution of galaxies in two X-ray luminosity groups, the low-Lx clusters (Lx < 0.15×10^44 erg/s) and the high-Lx clusters (Lx > 1.8×10^44 erg/s). We find that the morphology-clustercentric radius relation prevails in the cluster environment although there is a break near the cluster virial radius. The grand design arms comprise about 40% of the cluster spiral galaxies, with a weak morphology-clustercentric radius relation for the arm classes, in the sense that flocculent galaxies tend to increase outward, regardless of the X-ray luminosity. From the cumulative radial distribution of cluster galaxies, we find that the low-Lx clusters are fully virialized while the high-Lx clusters are not.
45 CFR 153.700 - Distributed data environment.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 45 Public Welfare 1 2013-10-01 2013-10-01 false Distributed data environment. 153.700 Section 153... Distributed Data Collection for HHS-Operated Programs § 153.700 Distributed data environment. (a) Dedicated distributed data environments. For each benefit year in which HHS operates the risk adjustment or reinsurance...
45 CFR 153.700 - Distributed data environment.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 45 Public Welfare 1 2014-10-01 2014-10-01 false Distributed data environment. 153.700 Section 153... Distributed Data Collection for HHS-Operated Programs § 153.700 Distributed data environment. (a) Dedicated distributed data environments. For each benefit year in which HHS operates the risk adjustment or reinsurance...
Construction of Green Tide Monitoring System and Research on its Key Techniques
NASA Astrophysics Data System (ADS)
Xing, B.; Li, J.; Zhu, H.; Wei, P.; Zhao, Y.
2018-04-01
As a kind of marine natural disaster, green tide has appeared every year along the Qingdao coast since the large-scale bloom in 2008, bringing great losses to the region. Therefore, it is of great value to obtain real-time dynamic information about green tide distribution. In this study, methods of optical remote sensing and microwave remote sensing are employed in green tide monitoring research. A specific remote sensing data processing flow and a green tide information extraction algorithm are designed according to the different characteristics of the optical and microwave data. For the extraction of green tide spatial distribution information, an automatic extraction algorithm for green tide distribution boundaries is designed based on the principle of mathematical morphology dilation/erosion. Key issues in information extraction, including the division of green tide regions, the obtaining of basic distributions, the delimitation of distribution boundaries, and the elimination of islands, have been solved. The automatic generation of green tide distribution boundaries from the results of remote sensing information extraction is realized. Finally, a green tide monitoring system is built based on IDL/GIS secondary development in the integrated environment of RS and GIS, achieving the integration of RS monitoring and information extraction.
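The dilation/erosion principle behind the boundary extraction can be sketched directly. The following uses scipy as a stand-in for the IDL/GIS implementation; the mask and structuring choices are illustrative assumptions:

```python
# Hedged sketch of boundary extraction by mathematical morphology: the
# boundary of a binary green-tide mask is the (cleaned) mask minus its
# erosion. Illustrative only; the paper's system is built with IDL/GIS.

import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation, binary_fill_holes

def tide_boundary(mask, iterations=1):
    """Return the one-pixel boundary of a binary distribution mask."""
    filled = binary_fill_holes(mask)                         # remove interior "islands"
    grown = binary_dilation(filled, iterations=iterations)   # close small gaps
    eroded = binary_erosion(grown, iterations=iterations)
    return eroded & ~binary_erosion(eroded)                  # morphological gradient

mask = np.zeros((12, 12), dtype=bool)
mask[3:9, 3:9] = True
mask[5, 5] = False                                           # a hole to be filled
print(tide_boundary(mask).astype(int))
```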
Singularity: Scientific containers for mobility of compute.
Kurtzer, Gregory M; Sochat, Vanessa; Bauer, Michael W
2017-01-01
Here we present Singularity, software developed to bring containers and reproducibility to scientific computing. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms. Singularity is an open source initiative that harnesses the expertise of system and software engineers and researchers alike, and integrates seamlessly into common workflows for both of these groups. As its primary use case, Singularity brings mobility of computing to both users and HPC centers, providing a secure means to capture and distribute software and compute environments. This ability to create and deploy reproducible environments across these centers, a previously unmet need, makes Singularity a game changing development for computational science.
Modeling Gene-Environment Interactions With Quasi-Natural Experiments.
Schmitz, Lauren; Conley, Dalton
2017-02-01
This overview develops new empirical models that can effectively document Gene × Environment (G×E) interactions in observational data. Current G×E studies are often unable to support causal inference because they use endogenous measures of the environment or fail to adequately address the nonrandom distribution of genes across environments, confounding estimates. Comprehensive measures of genetic variation are incorporated into quasi-natural experimental designs to exploit exogenous environmental shocks or isolate variation in environmental exposure to avoid potential confounders. In addition, we offer insights from population genetics that improve upon extant approaches to address problems from population stratification. Together, these tools offer a powerful way forward for G×E research on the origin and development of social inequality across the life course. © 2015 Wiley Periodicals, Inc.
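The basic estimating equation behind such designs is an interaction model in which the environmental term is an exogenous shock. A minimal sketch on simulated data (variable names, effect sizes, and the shock itself are illustrative placeholders, not drawn from the paper):

```python
# Hedged sketch: estimating a GxE interaction with an exogenous environmental
# shock, in the spirit of the quasi-natural experimental designs discussed
# above. All data here are simulated for illustration.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
pgs = rng.normal(size=n)                # polygenic score (standardized)
shock = rng.integers(0, 2, size=n)      # exogenous shock, e.g. a policy change
y = 0.2 * pgs + 0.3 * shock + 0.15 * pgs * shock + rng.normal(size=n)

# OLS with robust standard errors; the GxE effect is the interaction term
X = sm.add_constant(np.column_stack([pgs, shock, pgs * shock]))
fit = sm.OLS(y, X).fit(cov_type="HC1")
print(fit.summary(xname=["const", "pgs", "shock", "pgs_x_shock"]))
```

Because the shock is randomly assigned here, the interaction coefficient is identified without the gene-environment correlation that confounds observational G×E estimates.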
Investigations into the design principles in the chemotactic behavior of Escherichia coli.
Kim, Tae-Hwan; Jung, Sung Hoon; Cho, Kwang-Hyun
2008-01-01
Inspired by recent studies on the analysis of the biased random walk behavior of Escherichia coli [Passino, K.M., 2002. Biomimicry of bacterial foraging for distributed optimization and control. IEEE Control Syst. Mag. 22 (3), 52-67; Passino, K.M., 2005. Biomimicry for Optimization, Control and Automation. Springer-Verlag, pp. 768-798; Liu, Y., Passino, K.M., 2002. Biomimicry of social foraging bacteria for distributed optimization: models, principles, and emergent behaviors. J. Optim. Theory Appl. 115 (3), 603-628], we have developed a model describing the motile behavior of E. coli by specifying some simple rules for chemotaxis. Based on this model, we have analyzed the role of some key parameters involved in the chemotactic behavior to unravel the underlying design principles. By investigating the target tracking capability of E. coli in a maze through computer simulations, we found that E. coli clusters can be controlled as target trackers in a complex micro-scale environment. In addition, we have explored the dynamical characteristics of this target tracking mechanism through perturbation of parameters under noisy environments. It turns out that the E. coli chemotaxis mechanism might be designed such that it is sensitive enough to efficiently track the target and also robust enough to overcome environmental noise.
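The "simple rules" approach can be illustrated with a run-and-tumble biased random walk: keep running while the attractant concentration increases, tumble otherwise. A minimal sketch with illustrative parameters, not the paper's model:

```python
# Hedged sketch: run-and-tumble biased random walk toward a concentration
# peak, the basic mechanism of E. coli chemotaxis. Parameters are illustrative.

import numpy as np

rng = np.random.default_rng(2)

def concentration(p):
    """Attractant concentration, peaking at the 'target' (5, 5)."""
    return -np.linalg.norm(p - np.array([5.0, 5.0]))

pos = np.zeros(2)
heading = rng.uniform(0, 2 * np.pi)
last_c = concentration(pos)
step = 0.1

for _ in range(2000):
    pos = pos + step * np.array([np.cos(heading), np.sin(heading)])
    c = concentration(pos)
    # Biased tumble rule: tumble rarely when moving up the gradient,
    # almost surely when moving down it
    p_tumble = 0.1 if c > last_c else 0.9
    if rng.random() < p_tumble:
        heading = rng.uniform(0, 2 * np.pi)
    last_c = c

print("final distance to target:", -concentration(pos))
```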
An analysis of the orbital distribution of solid rocket motor slag
NASA Astrophysics Data System (ADS)
Horstman, Matthew F.; Mulrooney, Mark
2009-01-01
The contribution by solid rocket motors (SRMs) to the orbital debris environment is potentially significant and insufficiently studied. Design and combustion processes can lead to the emission of enough by-products to warrant assessment of their contribution to orbital debris. These particles are formed during SRM tail-off, or burn termination, by the rapid solidification of molten Al2O3 slag accumulated during the burn. The propensity of SRMs to generate particles larger than 100 μm raises concerns regarding the debris environment. Sizes as large as 1 cm have been witnessed in ground tests, and comparable sizes have been estimated via observations of sub-orbital tail-off events. Utilizing previous research, we have developed more sophisticated size distributions and modeled the time evolution of resultant orbital populations using a historical database of SRM launches, propellant, and likely location and time of tail-off. This analysis indicates that SRM ejecta is a significant component of the debris environment.
Separation and reconstruction of high pressure water-jet reflective sound signal based on ICA
NASA Astrophysics Data System (ADS)
Yang, Hongtao; Sun, Yuling; Li, Meng; Zhang, Dongsu; Wu, Tianfeng
2011-12-01
The impact of a high-pressure water-jet on targets of different materials produces different mixtures of reflected sound. In order to accurately reconstruct the distribution of reflected sound signals along the linear detecting line and to effectively separate the environment noise, the mixed sound signals acquired by a linear microphone array were processed by ICA. The basic principle of ICA and the FastICA algorithm are described in detail. A simulation experiment was designed: the environment noise was simulated using band-limited white noise, and the reflected sound signal was simulated using a pulse signal. The attenuation of the reflected sound signal over different transmission distances was simulated by weighting the signal with different coefficients. The mixed sound signals acquired by the linear microphone array were synthesized from the above simulated signals and were whitened and separated by ICA. The final results verified that separation of the environment noise and reconstruction of the sound distribution along the detecting line can be realized effectively.
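As a concrete illustration of the separation step, the following sketch unmixes a simulated pulse train (the reflected sound) from band-limited white noise (the environment noise) using scikit-learn's FastICA; the signal shapes, two-microphone mixing matrix, and all parameters are illustrative assumptions, not the paper's data.

```python
import numpy as np
from scipy.signal import butter, lfilter
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 2000)

# Simulated reflected sound: a sparse pulse train near the sine peaks.
pulse = (np.sin(2 * np.pi * 5 * t) > 0.99).astype(float)
# Simulated environment noise: band-limited (low-pass filtered) white noise.
b, a = butter(4, 0.2)
noise = lfilter(b, a, rng.standard_normal(t.size))

S = np.c_[pulse, noise]                  # true sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])   # assumed mixing at two microphones
X = S @ A.T                              # mixed signals from the mike array

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)             # separated source estimates
print(S_est.shape)                       # (2000, 2)
```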
Cetinceviz, Yucel; Bayindir, Ramazan
2012-05-01
The network requirements of control systems in industrial applications increase day by day. Internet-based control systems and various fieldbus systems have been designed to meet these requirements. This paper describes an Internet-based control system with wireless fieldbus communication designed for distributed processes, implemented as an experimental setup in a laboratory. In industrial facilities, fieldbus networks provide the process control layer and the remote connection of distributed control devices at the lowest levels of the industrial production environment. The Internet-based control system presented here meets these requirements with a new-generation communication structure, a wired/wireless hybrid system, designed at the field level to cover all sectors of distributed automation, from process control to distributed input/output (I/O). The hardware comprises a programmable logic controller (PLC), a communication processor (CP) module, two industrial wireless modules, a distributed I/O module, and a Motor Protection Package (MPP); the software comprises the WinCC flexible program for the SCADA (Supervisory Control and Data Acquisition) screens and the SIMATIC MANAGER package ("STEP7") for hardware and network configuration and for downloading the control program to the PLC. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
Energy consciousness in the design of lighting for people
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halldane, J.F.
1975-01-01
A comprehensive overview of energy and power distribution in the environment is presented as it relates to lighting. The objectives are to develop a consciousness of the effects of light and vision in order to utilize them more effectively. Notes are made of the physical effects of radiant power on living things and materials including thermal absorption, reflection, transmission, refraction, spectral conversion, interference, diffraction, polarization, phototropy, luminescence, photochemical changes, and photoelectric effects. Environmental issues are stressed. The evaluation process in design is briefly discussed. Reference is made to the goal, parameter, synthesis, and criterion specification as a checklist for evaluation. Particular concern is raised for the occupants who experience the constructed environment, since their interests do not appear to be sufficiently represented in the present day design process. Meaningfulness of measurement is emphasized and some anomalies illustrated.
Support design and practice for floor heave of deeply buried roadway
NASA Astrophysics Data System (ADS)
Liu, Chaoke; Ren, Jianxi; Gao, Bingli; Song, Yongjun
2017-05-01
Addressing the severe floor heave of the auxiliary haulage roadway in Jianzhuang Coal Mine, this paper analyses the mechanical environment and failure characteristics of the roadway's surrounding rock through a combination of mechanical testing, theoretical analysis, and industrial trials. The mechanical mechanism of deformation and failure of weak rock roadways in Jianzhuang Coal Mine was disclosed by establishing a roadway mechanical model under an evenly distributed load, which provided a basis for the design of an inverted concrete arch. Given the complex failure mechanism of the roadway, a support method combining an inverted concrete arch with floor anchors was proposed. The results show that the ground stress environment has an extremely adverse influence on the roadway, and practice indicates that the proposed countermeasures can effectively control the floor heave. The conclusions provide a reference for future research and design of floor heave control technology.
Real-Time Hardware-in-the-Loop Simulation of Ares I Launch Vehicle
NASA Technical Reports Server (NTRS)
Tobbe, Patrick; Matras, Alex; Walker, David; Wilson, Heath; Fulton, Chris; Alday, Nathan; Betts, Kevin; Hughes, Ryan; Turbe, Michael
2009-01-01
The Ares Real-Time Environment for Modeling, Integration, and Simulation (ARTEMIS) has been developed for use by the Ares I launch vehicle System Integration Laboratory at the Marshall Space Flight Center. The primary purpose of the Ares System Integration Laboratory is to test the vehicle avionics hardware and software in a hardware-in-the-loop environment to certify that the integrated system is prepared for flight. ARTEMIS has been designed to be the real-time simulation backbone that stimulates all required Ares components for verification testing. ARTEMIS provides high-fidelity dynamics, actuator, and sensor models to simulate an accurate flight trajectory in order to ensure realistic test conditions. ARTEMIS has been designed to take advantage of the advances in underlying computational power now available to support hardware-in-the-loop testing and to achieve real-time simulation with unprecedented model fidelity. A modular real-time design relying on a fully distributed computing architecture has been implemented.
A formation control strategy with coupling weights for the multi-robot system
NASA Astrophysics Data System (ADS)
Liang, Xudong; Wang, Siming; Li, Weijie
2017-12-01
The distributed formation problem of a multi-robot system with general linear dynamics and directed communication topology is discussed. To prevent the multi-robot system from losing the desired formation in a complex communication environment, a distributed cooperative algorithm with coupling weights based on the Zipf distribution is designed. An asymptotic stability condition for the formation is given, and graph theory and Lyapunov theory are used to prove that, under this condition, the formation converges to the desired geometry and to the desired motion of the virtual leader. Nontrivial simulations are performed to validate the effectiveness of the distributed cooperative algorithm with coupling weights.
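A minimal sketch of the kind of weighted consensus update the abstract describes is given below, with follower robots tracking offsets around a constant-velocity virtual leader and coupling weights drawn from a Zipf-like (1/rank) law; the gains, topology, and formation geometry are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5                                          # follower robots
offsets = np.array([[np.cos(a), np.sin(a)]     # desired ring formation
                    for a in np.linspace(0, 2 * np.pi, n, endpoint=False)])

# Zipf-like coupling weights: the robot ranked k gets weight ~ 1/k.
w = 1.0 / np.arange(1, n + 1)
w /= w.sum()

x = rng.uniform(-5, 5, (n, 2))                 # initial positions
leader = np.zeros(2)
leader_vel = np.array([0.05, 0.02])            # constant-velocity virtual leader
gain, dt = 1.5, 0.1

for _ in range(500):
    leader = leader + leader_vel * dt
    err = x - (leader + offsets)               # formation error per robot
    # Weighted coupling term pulls each robot toward the group consensus.
    coupling = w[:, None] * (err - err.mean(axis=0))
    x = x + (-gain * err - coupling) * dt + leader_vel * dt

print("max formation error:", np.abs(x - (leader + offsets)).max())
```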
Aerothermodynamic Design of the Mars Science Laboratory Heatshield
NASA Technical Reports Server (NTRS)
Edquist, Karl T.; Dyakonov, Artem A.; Wright, Michael J.; Tang, Chun Y.
2009-01-01
Aerothermodynamic design environments are presented for the Mars Science Laboratory entry capsule heatshield. The design conditions are based on Navier-Stokes flowfield simulations on shallow (maximum total heat load) and steep (maximum heat flux, shear stress, and pressure) entry trajectories from a 2009 launch. Boundary layer transition is expected prior to peak heat flux, a first for Mars entry, and the heatshield environments were defined for a fully-turbulent heat pulse. The effects of distributed surface roughness on turbulent heat flux and shear stress peaks are included using empirical correlations. Additional biases and uncertainties are based on computational model comparisons with experimental data and sensitivity studies. The peak design conditions are 197 W/sq cm for heat flux, 471 Pa for shear stress, 0.371 Earth atm for pressure, and 5477 J/sq cm for total heat load. Time-varying conditions at fixed heatshield locations were generated for thermal protection system analysis and flight instrumentation development. Finally, the aerothermodynamic effects of delaying launch until 2011 are previewed.
Responsive systems - The challenge for the nineties
NASA Technical Reports Server (NTRS)
Malek, Miroslaw
1990-01-01
A concept of responsive computer systems will be introduced. The emerging responsive systems demand fault-tolerant and real-time performance in parallel and distributed computing environments. The design methodologies for fault-tolerant, real-time, and responsive systems will be presented. Novel techniques of introducing redundancy for improved performance and dependability will be illustrated. Methods of evaluating system responsiveness will be proposed. The issues of determinism and of closed and open systems will also be discussed from the perspective of responsive systems design.
NASA Astrophysics Data System (ADS)
Ying, Shen; Li, Lin; Gao, Yurong
2009-10-01
Spatial visibility analysis is an important approach to pedestrian behavior because visual perception of space is the most direct way people acquire environmental information and navigate their actions. Based on agent modeling and a top-down method, this paper develops a framework for analyzing visibility-dependent pedestrian flow. We use viewsheds in the visibility analysis and impose the resulting parameters on the agent simulation to direct agent motion in urban space. We analyze pedestrian behavior at both the micro-scale and macro-scale of urban open space. Individual agents use visual affordance to determine their direction of motion in micro-scale urban streets and districts. At the macro-scale, we compare the distribution of pedestrian flow with the urban configuration, and mine the relationship between pedestrian flow and the distribution of urban facilities and functions. The paper first computes visibility at vantage points in urban open space, such as the street network, and quantifies the visibility parameters. Multiple agents then use these parameters to decide their direction of motion, and the pedestrian flow eventually reaches a stable state in the simulated urban environment. Finally, we compare the morphology of the visibility parameters and the pedestrian distribution with the layout of urban functions and facilities to confirm their consistency, which can be used for decision support in urban design.
Pan, Tony; Flick, Patrick; Jain, Chirag; Liu, Yongchao; Aluru, Srinivas
2017-10-09
Counting and indexing fixed length substrings, or k-mers, in biological sequences is a key step in many bioinformatics tasks including genome alignment and mapping, genome assembly, and error correction. While advances in next generation sequencing technologies have dramatically reduced the cost and improved latency and throughput, few bioinformatics tools can efficiently process datasets at the current generation rate of 1.8 terabases every 3 days. We present Kmerind, a high performance parallel k-mer indexing library for distributed memory environments. The Kmerind library provides a set of simple and consistent APIs with sequential semantics and parallel implementations that are designed to be flexible and extensible. Kmerind's k-mer counter performs as well as or better than the best existing k-mer counting tools, even on shared memory systems. In a distributed memory environment, Kmerind counts k-mers in a 120 GB sequence read dataset in less than 13 seconds on 1024 Xeon CPU cores, and fully indexes their positions in approximately 17 seconds. Querying for 1% of the k-mers in these indices can be completed in 0.23 seconds and 28 seconds, respectively. Kmerind is the first k-mer indexing library for distributed memory environments, and the first extensible library for general k-mer indexing and counting. Kmerind is available at https://github.com/ParBLiSS/kmerind.
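Kmerind itself is a C++ library; the sketch below only illustrates the core operation it parallelizes, hash-based counting of overlapping k-mers, in a few lines of Python.

```python
from collections import Counter

def count_kmers(seq: str, k: int) -> Counter:
    """Count all overlapping k-mers in a DNA sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

counts = count_kmers("ACGTACGTGACG", 3)
print(counts.most_common(2))   # [('ACG', 3), ('CGT', 2)]
```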
NASA Astrophysics Data System (ADS)
Park, Bumjin; Kim, Dongwook; Park, Jaehyoung; Kim, Kibeom; Koo, Jay; Park, HyunHo; Ahn, Seungyoung
2018-05-01
Recently, magnetic energy harvesting technologies have been actively studied for the self-sustained operation of devices installed around power lines. However, magnetic energy harvesting near power lines suffers from magnetic saturation, which can degrade the power performance of the harvester. In this paper, an optimal design of a toroidal core for magnetic energy harvesters is proposed that accounts for magnetic saturation near power lines. Using the permeability-H curve and Ampere's circuital law, the optimal dimensional parameters needed to generate the induced voltage were analyzed via calculation and simulation. To reflect a real environment, we consider the nonlinear characteristic of the magnetic core material and supply current through a 3-phase distribution panel used in industry. The effectiveness of the proposed design methodology is verified by experiments in a power distribution panel, harvesting 60.9 V from a power line current of 60 A at 60 Hz.
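The calculation the abstract refers to follows from Ampère's circuital law and Faraday's law. A hedged sketch of the standard relations for a toroidal core (inner radius r_1, outer radius r_2, height h, N-turn winding) clamped around a conductor carrying current I(t) is given below; the symbols are our own assumptions, not notation taken from the paper.

```latex
H(r) = \frac{I(t)}{2\pi r}, \qquad
\Phi(t) = \frac{\mu_0\,\mu_r(H)\,h}{2\pi}\,\ln\!\frac{r_2}{r_1}\; I(t), \qquad
e(t) = -N\,\frac{d\Phi}{dt}
```

Writing the relative permeability as a function of field strength, μ_r(H), is what lets the permeability-H curve capture the saturation nonlinearity that the core dimensions must be designed around.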
Outlier Responses Reflect Sensitivity to Statistical Structure in the Human Brain
Garrido, Marta I.
2013-01-01
We constantly look for patterns in the environment that allow us to learn its key regularities. These regularities are fundamental in enabling us to make predictions about what is likely to happen next. The physiological study of regularity extraction has focused primarily on repetitive sequence-based rules within the sensory environment, or on stimulus-outcome associations in the context of reward-based decision-making. Here we ask whether we implicitly encode non-sequential stochastic regularities, and detect violations therein. We addressed this question using a novel experimental design and both behavioural and magnetoencephalographic (MEG) metrics associated with responses to pure-tone sounds with frequencies sampled from a Gaussian distribution. We observed that sounds in the tail of the distribution evoked a larger response than those that fell at the centre. This response resembled the mismatch negativity (MMN) evoked by surprising or unlikely events in traditional oddball paradigms. Crucially, responses to physically identical outliers were greater when the distribution was narrower. These results show that humans implicitly keep track of the uncertainty induced by apparently random distributions of sensory events. Source reconstruction suggested that the statistical-context-sensitive responses arose in a temporo-parietal network, areas that have been associated with attention orientation to unexpected events. Our results demonstrate a very early neurophysiological marker of the brain's ability to implicitly encode complex statistical structure in the environment. We suggest that this sensitivity provides a computational basis for our ability to make perceptual inferences in noisy environments and to make decisions in an uncertain world. PMID:23555230
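The logic of the stimulus design can be made concrete with a short sketch: tone frequencies are sampled from a Gaussian, and the surprisal of a physically fixed outlier depends on the width of that distribution; the centre frequency, widths, and outlier value below are illustrative assumptions, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
mu = 500.0                          # centre frequency in Hz (illustrative)

def surprisal(f, sigma):
    """Negative log-likelihood of a tone under N(mu, sigma^2)."""
    return 0.5 * ((f - mu) / sigma) ** 2 + np.log(sigma * np.sqrt(2 * np.pi))

outlier = 800.0
for sigma in (50.0, 150.0):         # narrow vs. wide distribution
    tones = rng.normal(mu, sigma, 1000)   # the random stimulus sequence
    print(f"sigma={sigma:5.1f}  surprisal of {outlier} Hz tone: "
          f"{surprisal(outlier, sigma):6.2f}")
# The identical outlier is more surprising under the narrow distribution,
# mirroring the larger MEG response reported above.
```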
Distributed intelligent urban environment monitoring system
NASA Astrophysics Data System (ADS)
Du, Jinsong; Wang, Wei; Gao, Jie; Cong, Rigang
2018-02-01
Current environmental pollution and destruction have developed into a major worldwide social problem that threatens human survival and development. Environmental monitoring is the prerequisite and basis of environmental governance, but overall, the current environmental monitoring system faces a series of problems. Based on electrochemical sensors, this paper designs a small, low-cost, easy-to-deploy urban environmental quality monitoring terminal; multiple terminals constitute a distributed network. The system has been used in small-scale demonstration applications, which have confirmed that it is suitable for large-scale deployment.
High aspect reactor vessel and method of use
NASA Technical Reports Server (NTRS)
Wolf, David A. (Inventor); Sams, Clarence F. (Inventor); Schwarz, Ray P. (Inventor)
1992-01-01
An improved bio-reactor vessel and system useful for carrying out mammalian cell growth in suspension in a culture media are presented. The main goal of the invention is to grow and maintain cells under a homogeneous distribution under acceptable biochemical environment of gas partial pressures and nutrient levels without introducing direct agitation mechanisms or associated disruptive mechanical forces. The culture chamber rotates to maintain an even distribution of cells in suspension and minimizes the length of a gas diffusion path. The culture chamber design is presented and discussed.
A Method for Designing Conforming Folding Propellers
NASA Technical Reports Server (NTRS)
Litherland, Brandon L.; Patterson, Michael D.; Derlaga, Joseph M.; Borer, Nicholas K.
2017-01-01
As the aviation vehicle design environment expands due to the influx of new technologies, new methods of conceptual design and modeling are required in order to meet the customer's needs. In the case of distributed electric propulsion (DEP), the use of high-lift propellers upstream of the wing leading edge augments lift at low speeds, enabling smaller wings with sufficient takeoff and landing performance. During cruise, however, these devices would normally contribute significant drag if left in a fixed or windmilling arrangement. Therefore, a design that stows the propeller blades is desirable. In this paper, we present a method for designing folding-blade configurations that conform to the nacelle surface when stowed. These folded designs maintain performance nearly identical to their straight, non-folding blade counterparts.
Meteoroids and Orbital Debris: Effects on Spacecraft
NASA Technical Reports Server (NTRS)
Belk, Cynthia A.; Robinson, Jennifer H.; Alexander, Margaret B.; Cooke, William J.; Pavelitz, Steven D.
1997-01-01
The natural space environment is characterized by many complex and subtle phenomena hostile to spacecraft. The effects of these phenomena impact spacecraft design, development, and operations. Space systems become increasingly susceptible to the space environment as use of composite materials and smaller, faster electronics increases. This trend makes an understanding of the natural space environment essential to accomplish overall mission objectives, especially in the current climate of better/cheaper/faster. Meteoroids are naturally occurring phenomena in the natural space environment. Orbital debris is manmade space litter accumulated in Earth orbit from the exploration of space. Descriptions are presented of orbital debris source, distribution, size, lifetime, and mitigation measures. This primer is one in a series of NASA Reference Publications currently being developed by the Electromagnetics and Aerospace Environments Branch, Systems Analysis and Integration Laboratory, Marshall Space Flight Center, National Aeronautics and Space Administration.
NASA Technical Reports Server (NTRS)
Twombly, I. Alexander; Smith, Jeffrey; Bruyns, Cynthia; Montgomery, Kevin; Boyle, Richard
2003-01-01
The International Space Station will soon provide an unparalleled research facility for studying the near- and longer-term effects of microgravity on living systems. Using the Space Station Glovebox Facility - a compact, fully contained reach-in environment - astronauts will conduct technically challenging life sciences experiments. Virtual environment technologies are being developed at NASA Ames Research Center to help realize the scientific potential of this unique resource by facilitating the experimental hardware and protocol designs and by assisting the astronauts in training. The Virtual GloveboX (VGX) integrates high-fidelity graphics, force-feedback devices and real- time computer simulation engines to achieve an immersive training environment. Here, we describe the prototype VGX system, the distributed processing architecture used in the simulation environment, and modifications to the visualization pipeline required to accommodate the display configuration.
Developing an Environment for Exploring Distributed Operations: A Wargaming Example
2005-05-01
a basis for performance standards. At the same time, the design tried to provide an acceptable mix of structured versus free-play activity in... participants' free-play discussion and collaboration during Counteraction. Scripting allowed the research team to embed potential problems or measurement... Learned - Structured Exercises; Scripted and Free-Play Wargaming Phases
Apollo 17 ultraviolet spectrometer experiment (S-169)
NASA Technical Reports Server (NTRS)
Fastie, W. G.
1974-01-01
The scientific objectives of the ultraviolet spectrometer experiment are discussed, along with design and operational details, instrument preparation and performance, and scientific results. Information gained from the experiment is given concerning the lunar atmosphere and albedo, zodiacal light, astronomical observations, spacecraft environment, and the distribution of atomic hydrogen in the solar system and in the earth's atmosphere.
NASA Astrophysics Data System (ADS)
Mauk, B.; Haggerty, D. K.; Paranicas, C.; Clark, G. B.; Kollmann, P.; Rymer, A. M.; Brown, L. E.; Jaskulek, S. E.; Schlemm, C. E.; Kim, C. K.; Nelson, K.; Bolton, S. J.; Bagenal, F.; Connerney, J. E. P.; Gladstone, R.; Kurth, W. S.; Levin, S.; McComas, D. J.; Valek, P. W.
2016-12-01
The Juno spacecraft first entered Jupiter's magnetosphere on 25 June 2016, but evidence for Jupiter's magnetospheric environment was first observed by the Jupiter Energetic Particle Detector Instrument (JEDI) as early as January 2016, in the form of leaking energetic particles observed over 1200 RJ away from Jupiter. JEDI is an energetic particle instrument designed to measure the energy, angular, and compositional distribution of energetic electrons (25 to >700 keV) and ions (protons: 10 keV to >1.5 MeV). A special set of channels for oxygen and sulfur extends up in energy to >10 MeV. The JEDI instrument comprises three separate sensor heads, each with six telescopes, in order to capture angular distributions of energetic particles over the poles of Jupiter as Juno rushes over auroral forms as narrow as <80 km at a speed of up to 55 km/s. Since entering Jupiter's magnetosphere, JEDI has observed both familiar and some unfamiliar structures, including: 1) undulations along the dawn flank of Jupiter's magnetosphere, possibly signaling the occurrence of Kelvin-Helmholtz instability structures thought to play a role in coupling solar wind energetics to the dynamics of Jupiter's magnetosphere, and 2) spiky electron transients with magnetic field-aligned angular distributions within the distant magnetodisc plasmas, conjectured to be related to transient auroral forms observed at other times by the Hubble Space Telescope poleward of Jupiter's main aurora. A principal target of JEDI and the other fields and particles instruments on Juno is the near-planet polar region of Jupiter's space environment, never before visited by spacecraft. These instruments were designed to determine the physics of auroral acceleration at Jupiter and the role that those processes play in enabling Jupiter to spin up and energize its vast magnetospheric space environment. The first polar pass is scheduled for 27 August 2016. In this report we present the first results from the JEDI instrument after making measurements in this novel polar environment.
NASA Astrophysics Data System (ADS)
Rong, W. N.; Alila, Y.
2017-12-01
Using nine pairs of control-treatment watersheds with varying climate, physiography, and harvesting practices in the rain-on-snow (ROS) environment of the Pacific Northwest, we explore the linkage between environmental controls and the sensitivity of peakflow response to harvesting. Compared with previous paired-watershed studies in ROS environments, we employ a frequency-pairing experimental design to isolate the effects of disturbance on the system's response; by contrast, changes in frequency distributions are rarely invoked in the previous literature on forests and floods. Our results show how harvesting can dramatically increase the magnitude of all peakflows on record, and how such effects can grow with increasing return period as a consequence of substantial increases in the mean and variance of the peakflow frequency distribution. Most critically, peakflows with return periods larger than 10 years can increase in frequency, and the larger the peakflow event, the more frequent it may become. The sensitivity of the upper tail of the peakflow frequency distribution was found to be linked to physiographic and climatic characteristics via a unifying synchronization/desynchronization spatial scaling mechanism that controls the generation of rain-on-snow runoff. This physically based stochastic hydrology understanding of watershed response in ROS environments runs counter to the prevailing deterministic wisdom of forest hydrology, which presumes a limited and diminishing role of forest cover as peakflow magnitude increases. By demonstrating the need to invoke the dimension of frequency in understanding and predicting the effects of harvesting on peakflows, the findings suggest that purely deterministic hypotheses and experimental designs focused solely on the changing magnitude of peakflows have misguided forest hydrology research on this topic for over a century.
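The frequency-pairing idea can be illustrated with a hedged sketch: fit a peakflow frequency distribution to the control and treated annual peak series and compare quantiles by return period. The Gumbel distribution below is a common flood-frequency choice, not necessarily the paper's, and the flow numbers are synthetic.

```python
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(4)
# Synthetic annual peak flows (m^3/s): harvesting is assumed to shift both
# the mean and the variance of the peakflow distribution (illustrative).
control = rng.gumbel(40, 10, 60)
treated = rng.gumbel(48, 14, 60)

for T in (2, 10, 50):                    # return periods in years
    p = 1 - 1 / T                        # non-exceedance probability
    qc = gumbel_r(*gumbel_r.fit(control)).ppf(p)
    qt = gumbel_r(*gumbel_r.fit(treated)).ppf(p)
    print(f"T={T:3d} yr  control={qc:6.1f}  treated={qt:6.1f}  "
          f"increase={100 * (qt / qc - 1):5.1f}%")
```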
Design and Control of Compliant Tensegrity Robots Through Simulation and Hardware Validation
NASA Technical Reports Server (NTRS)
Caluwaerts, Ken; Despraz, Jeremie; Iscen, Atil; Sabelhaus, Andrew P.; Bruce, Jonathan; Schrauwen, Benjamin; Sunspiral, Vytas
2014-01-01
To better understand the role of tensegrity structures in biological systems and their application to robotics, the Dynamic Tensegrity Robotics Lab at NASA Ames Research Center has developed and validated two different software environments for the analysis, simulation, and design of tensegrity robots. These tools, along with new control methodologies and the modular hardware components developed to validate them, are presented as a system for the design of actuated tensegrity structures. As evidenced by their appearance in many biological systems, tensegrity ("tensile-integrity") structures have unique physical properties which make them ideal for interaction with uncertain environments. Yet these characteristics, such as variable structural compliance and global multi-path load distribution through the tension network, make the design and control of bio-inspired tensegrity robots extremely challenging. This work presents the progress made in using these two tools to tackle the design and control challenges. The results of this analysis include multiple novel control approaches for mobility and terrain interaction of spherical tensegrity structures. The current hardware prototype of a six-bar tensegrity, code-named ReCTeR, is presented in the context of this validation.
I want what you've got: Cross platform portability and human-robot interaction assessment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Julie L. Marble; Douglas A. Few; David J. Bruemmer
2005-08-01
Human-robot interaction is a subtle, yet critical aspect of design that must be assessed during the development of both the human-robot interface and robot behaviors if the human-robot team is to effectively meet the complexities of the task environment. Testing not only ensures that the system can successfully achieve the tasks for which it was designed, but more importantly, usability testing allows the designers to understand how humans and robots can, will, and should work together to optimize workload distribution. A lack of human-centered robot interface design, the rigidity of sensor configuration, and the platform-specific nature of research robot development environments are a few factors preventing robotic solutions from reaching functional utility in real-world environments. Often the difficult engineering challenge of implementing adroit reactive behavior, reliable communication, and trustworthy autonomy combined with system transparency and usable interfaces is overlooked in favor of other research aims. The result is that many robotic systems never reach a level of functional utility necessary even to evaluate the efficacy of the basic system, much less result in a system that can be used in a critical, real-world environment. Further, because control architectures and interfaces are often platform specific, it is difficult or even impossible to make usability comparisons between them. This paper discusses the challenges inherent in the conduct of human factors testing of variable autonomy control architectures across platforms within a complex, real-world environment. It discusses the need to compare behaviors, architectures, and interfaces within a structured environment that contains challenging real-world tasks, and the implications for system acceptance and trust of autonomous robotic systems and for how humans and robots interact in true interactive teams.
DEPEND - A design environment for prediction and evaluation of system dependability
NASA Technical Reports Server (NTRS)
Goswami, Kumar K.; Iyer, Ravishankar K.
1990-01-01
The development of DEPEND, an integrated simulation environment for the design and dependability analysis of fault-tolerant systems, is described. DEPEND models both hardware and software components at a functional level, and allows automatic failure injection to assess system performance and reliability. It relieves the user of the work needed to inject failures, maintain statistics, and output reports. The automatic failure injection scheme is geared toward evaluating a system under high stress (workload) conditions. The failures that are injected can affect both hardware and software components. To illustrate the capability of the simulator, a distributed system which employs a prediction-based, dynamic load-balancing heuristic is evaluated. Experiments were conducted to determine the impact of failures on system performance and to identify the failures to which the system is especially susceptible.
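A minimal sketch of workload-dependent failure injection, the mechanism DEPEND automates, is shown below; the component model, rates, and stress rule are illustrative assumptions of ours, not DEPEND's API.

```python
import random

# Toy workload-driven failure injection: the probability that a simulated
# server fails in a given tick grows with its current stress (queue length).
random.seed(5)

class Server:
    def __init__(self, name):
        self.name, self.alive, self.queue = name, True, 0

servers = [Server(f"s{i}") for i in range(4)]
BASE_FAIL_RATE = 1e-4          # illustrative per-tick failure rate

for tick in range(10_000):
    for s in servers:
        if not s.alive:
            continue
        s.queue += random.random() < 0.3    # job arrival (bool adds as 0/1)
        s.queue -= s.queue > 0              # serve at most one job per tick
        # Injected failure probability increases under high stress.
        if random.random() < BASE_FAIL_RATE * (1 + s.queue):
            s.alive = False
    # A load-balancing heuristic would route work to live servers here.

print("failed servers:", [s.name for s in servers if not s.alive])
```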
Novel designs for application specific MEMS pressure sensors.
Fragiacomo, Giulio; Reck, Kasper; Lorenzen, Lasse; Thomsen, Erik V
2010-01-01
In the framework of developing innovative microfabricated pressure sensors, we present here three designs based on different readout principles, each one tailored for a specific application. A touch-mode capacitive pressure sensor with high sensitivity (14 pF/bar), low temperature dependence, and high capacitive output signal (more than 100 pF) is described. An optical pressure sensor intrinsically immune to electromagnetic interference, with a large pressure range (0-350 bar) and a sensitivity of 1 pm/bar, is presented. Finally, a power-source-free resonating wireless pressure sensor with a sensitivity of 650 kHz/mmHg is described. These sensors are discussed in relation to their applications in harsh environments, distributed systems, and medical environments, respectively. In many respects, commercially available sensors, the vast majority of which are piezoresistive, are not suited to the proposed applications.
A comparison of queueing, cluster and distributed computing systems
NASA Technical Reports Server (NTRS)
Kaplan, Joseph A.; Nelson, Michael L.
1993-01-01
Using workstation clusters for distributed computing has become popular with the proliferation of inexpensive, powerful workstations. Workstation clusters offer both a cost-effective alternative to batch processing and an easy entry into parallel computing. However, a number of workstations on a network does not constitute a cluster; cluster management software is necessary to harness the collective computing power. A variety of cluster management and queuing systems are compared: Distributed Queueing System (DQS), Condor, Load Leveler, Load Balancer, Load Sharing Facility (LSF - formerly Utopia), Distributed Job Manager (DJM), Computing in Distributed Networked Environments (CODINE), and NQS/Exec. The systems differ in their design philosophy and implementation. Based on published reports on the different systems and conversations with the systems' developers and vendors, a comparison of the systems is made on the integral issues of clustered computing.
NASA Astrophysics Data System (ADS)
Wang, Rui
It is known that high intensity radiated fields (HIRF) can produce upsets in digital electronics, and thereby degrade the performance of digital flight control systems. Such upsets, either from natural or man-made sources, can change data values on digital buses and memory and affect CPU instruction execution. HIRF environments are also known to trigger common-mode faults, affecting nearly-simultaneously multiple fault containment regions, and hence reducing the benefits of n-modular redundancy and other fault-tolerant computing techniques. Thus, it is important to develop models which describe the integration of the embedded digital system, where the control law is implemented, as well as the dynamics of the closed-loop system. In this dissertation, theoretical tools are presented to analyze the relationship between the design choices for a class of distributed recoverable computing platforms and the tracking performance degradation of a digital flight control system implemented on such a platform while operating in a HIRF environment. Specifically, a tractable hybrid performance model is developed for a digital flight control system implemented on a computing platform inspired largely by the NASA family of fault-tolerant, reconfigurable computer architectures known as SPIDER (scalable processor-independent design for enhanced reliability). The focus will be on the SPIDER implementation, which uses the computer communication system known as ROBUS-2 (reliable optical bus). A physical HIRF experiment was conducted at the NASA Langley Research Center in order to validate the theoretical tracking performance degradation predictions for a distributed Boeing 747 flight control system subject to a HIRF environment. An extrapolation of these results for scenarios that could not be physically tested is also presented.
GEANT4 distributed computing for compact clusters
NASA Astrophysics Data System (ADS)
Harrawood, Brian P.; Agasthya, Greeshma A.; Lakshmanan, Manu N.; Raterman, Gretchen; Kapadia, Anuj J.
2014-11-01
A new technique for distribution of GEANT4 processes is introduced to simplify running a simulation in a parallel environment such as a tightly coupled computer cluster. Using a new C++ class derived from the GEANT4 toolkit, multiple runs forming a single simulation are managed across a local network of computers with a simple inter-node communication protocol. The class is integrated with the GEANT4 toolkit and is designed to scale from a single symmetric multiprocessing (SMP) machine to compact clusters ranging in size from tens to thousands of nodes. User designed 'work tickets' are distributed to clients using a client-server work flow model to specify the parameters for each individual run of the simulation. The new g4DistributedRunManager class was developed and well tested in the course of our Neutron Stimulated Emission Computed Tomography (NSECT) experiments. It will be useful for anyone running GEANT4 for large discrete data sets such as covering a range of angles in computed tomography, calculating dose delivery with multiple fractions or simply speeding the through-put of a single model.
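GEANT4 and the g4DistributedRunManager class described above are C++; the Python sketch below only illustrates the client-server "work ticket" coordination pattern, with one ticket per run (here, per tomography angle). The ticket contents and the worker body are illustrative assumptions, not the GEANT4 API.

```python
import multiprocessing as mp

def worker(tickets: "mp.Queue", results: "mp.Queue") -> None:
    """Client loop: pull tickets until a sentinel signals the queue is drained."""
    while True:
        ticket = tickets.get()
        if ticket is None:                 # sentinel: no more work
            break
        angle = ticket["angle_deg"]
        # ... a real client would configure and execute one simulation run ...
        results.put(f"run at {angle:3d} deg complete")

if __name__ == "__main__":
    tickets, results = mp.Queue(), mp.Queue()
    angles = list(range(0, 180, 15))       # one ticket per projection angle
    for angle in angles:
        tickets.put({"angle_deg": angle})
    procs = [mp.Process(target=worker, args=(tickets, results))
             for _ in range(4)]            # four client nodes
    for _ in procs:
        tickets.put(None)                  # one sentinel per worker
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    for _ in angles:
        print(results.get())
```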
Vascular system modeling in parallel environment - distributed and shared memory approaches
Jurczuk, Krzysztof; Kretowski, Marek; Bezy-Wendling, Johanne
2011-01-01
The paper presents two approaches in parallel modeling of vascular system development in internal organs. In the first approach, new parts of tissue are distributed among processors and each processor is responsible for perfusing its assigned parts of tissue to all vascular trees. Communication between processors is accomplished by passing messages and therefore this algorithm is perfectly suited for distributed memory architectures. The second approach is designed for shared memory machines. It parallelizes the perfusion process during which individual processing units perform calculations concerning different vascular trees. The experimental results, performed on a computing cluster and multi-core machines, show that both algorithms provide a significant speedup. PMID:21550891
3D Visualization for Phoenix Mars Lander Science Operations
NASA Technical Reports Server (NTRS)
Edwards, Laurence; Keely, Leslie; Lees, David; Stoker, Carol
2012-01-01
Planetary surface exploration missions present considerable operational challenges in the form of substantial communication delays, limited communication windows, and limited communication bandwidth. 3D visualization software was developed and delivered to the 2008 Phoenix Mars Lander (PML) mission. The components of the system include an interactive 3D visualization environment called Mercator, terrain reconstruction software called the Ames Stereo Pipeline, and a server providing distributed access to terrain models. The software was successfully utilized during the mission for science analysis, site understanding, and science operations activity planning. A terrain server was implemented that distributes terrain models from a central repository to clients running the Mercator software. The Ames Stereo Pipeline generates accurate, high-resolution, texture-mapped, 3D terrain models from stereo image pairs, which can then be visualized within the Mercator environment. The central cross-cutting goal for these tools is to provide an easy-to-use, high-quality, full-featured visualization environment that enhances the mission science team's ability to develop low-risk, productive science activity plans. In addition, for the Mercator and Viz visualization environments, extensibility and adaptability to different missions and application areas are key design goals.
Sánchez-Álvarez, David; Rodríguez-Pérez, Francisco-Javier
2018-01-01
In this paper, we present a work based on the distribution of computational load among the homogeneous nodes and the Hub/Sink of Wireless Sensor Networks (WSNs). The main contribution of the paper is an early decision support framework helping WSN designers decide how to distribute computational load in WSNs where power consumption is a key issue. (By "framework" we mean a support tool for making decisions, in which executive judgment can be combined with the WSN designer's mathematical tools; this work shows the need to treat load distribution as an integral component of the WSN system when making early decisions about energy consumption.) The framework takes advantage of the idea that balancing the computational load between sensor nodes and the Hub/Sink can improve the energy consumption of the whole WSN, or at least of its battery-powered nodes. The approach is not trivial: it takes into account related issues such as the required data distribution and the connectivity and availability of nodes and Hub/Sink given their connectivity features and duty cycle. For a practical demonstration, the proposed framework is applied to an agriculture case study, a sector very relevant in our region. In this kind of rural context, distances, the low prices of vegetables, and the lack of continuous power supplies can make sensing solutions viable or inviable for farmers. The proposed framework systematizes the required complex calculations for WSN designers, taking into account the most relevant variables regarding power consumption and avoiding full, partial, or prototype implementations and measurements of the different candidate computational load distributions for a specific WSN. PMID:29570645
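The kind of early trade-off such a framework systematizes can be shown with a toy energy model comparing "forward raw samples to the Hub" against "aggregate locally and send a summary"; all constants below are illustrative assumptions, not the paper's measurements.

```python
# Toy per-period energy model for deciding where computation runs
# (illustrative constants; a real design would measure these per platform).
E_TX_PER_BYTE = 0.6e-6      # J per byte transmitted by the node radio
E_CPU_PER_OP = 5e-9         # J per arithmetic operation on the node MCU

samples_per_period = 600    # e.g. one sample every 6 s for an hour
bytes_per_sample = 4

# Option A: the node forwards every raw sample; the Hub aggregates.
e_raw = samples_per_period * bytes_per_sample * E_TX_PER_BYTE

# Option B: the node averages locally (~1 op per sample), sends one value.
e_agg = samples_per_period * E_CPU_PER_OP + bytes_per_sample * E_TX_PER_BYTE

print(f"raw forwarding : {e_raw * 1e3:.3f} mJ per period")
print(f"local aggregate: {e_agg * 1e3:.3f} mJ per period")
```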
Distributed agile software development for the SKA
NASA Astrophysics Data System (ADS)
Wicenec, Andreas; Parsons, Rebecca; Kitaeff, Slava; Vinsen, Kevin; Wu, Chen; Nelson, Paul; Reed, David
2012-09-01
The SKA software will most probably be developed by many groups distributed across the globe and coming from different backgrounds, such as industry and research institutions. The SKA software subsystems will have to cover a very wide range of different areas, but they still have to react and work together like a single system to achieve the scientific goals and satisfy the challenging data flow requirements. Designing and developing such a system in a distributed fashion requires proper tools and the setup of an environment that allows efficient detection and tracking of interface and integration issues, in particular in a timely way. Agile development can provide much faster feedback mechanisms and also much tighter collaboration between the customer (scientist) and the developer. Continuous integration and continuous deployment, on the other hand, can provide much faster feedback on integration issues from the system level to the subsystem developers. This paper describes the results obtained from trialing a potential SKA development environment based on existing science software development processes like ALMA, the expected distribution of the groups potentially involved in SKA development, and experience gained in the development of large-scale commercial software projects.
Buoyancy driven acceleration in a hospital operating room indoor environment
NASA Astrophysics Data System (ADS)
McNeill, James; Hertzberg, Jean; Zhai, John
2011-11-01
In hospital operating rooms, centrally located non-isothermal ceiling jets provide sterile air for protecting the surgical site from infectious particles in the room air, as well as room cooling. Modern operating rooms require larger temperature differences to accommodate the increasing cooling loads from heat gains of medical equipment. This trend may lead to significant changes in the room air distribution patterns that may sacrifice the sterile air field across the surgical table. Quantitative flow visualization experiments using laser sheet illumination and RANS modeling of the indoor environment were conducted to demonstrate the impact of the indoor thermal conditions on the room air distribution. The angle of the jet shear layer was studied as a function of the area of the vena contracta of the jet, which is in turn dependent upon the Archimedes number of the jet. Increases in the buoyancy forces cause greater air velocities in the vicinity of the surgical site, increasing the likelihood of deposition of contaminants in the flow field. The outcome of this study shows that the Archimedes number should be used as the design parameter for hospital operating room air distribution in order to maintain a proper supply air jet covering the sterile region. This work is supported by ASHRAE.
Exemplary Design Envelope Specification for Standard Modular Hydropower Technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Witt, Adam M.; Smith, Brennan T.; Tsakiris, Achilleas
Hydropower is an established, affordable renewable energy generation technology supplying nearly 18% of the electricity consumed globally. A hydropower facility interacts continuously with the surrounding water resource environment, causing alterations of varying magnitude in the natural flow of water, energy, fish, sediment, and recreation upstream and downstream. A universal challenge in facility design is balancing the extraction of useful energy and power system services from a stream with the need to maintain ecosystem processes and natural environmental function. On one hand, hydroelectric power is a carbon-free, renewable, and flexible asset to the power system. On the other, the disruption of longitudinal connectivity and the artificial barrier to aquatic movement created by hydraulic structures can produce negative impacts that stress fresh water environments. The growing need for carbon-free, reliable, efficient distributed energy sources suggests there is significant potential for hydropower projects that can deploy with low installed costs, enhanced ecosystem service offerings, and minimal disruptions of the stream environment.
Advanced Engineering Environments: Implications for Aerospace Manufacturing
NASA Technical Reports Server (NTRS)
Thomas, D.
2001-01-01
There are significant challenges facing today's aerospace industry. The developer of aerospace systems faces global competition, more complex products, geographically distributed design teams, demands for lower cost, higher reliability, and safer vehicles, and the need to incorporate the latest technologies more quickly. New information technologies offer promising opportunities to develop advanced engineering environments (AEEs) to meet these challenges. Significant advances in the state of the art of aerospace engineering practice are envisioned in the areas of engineering design and analytical tools, cost and risk tools, collaborative engineering, and high-fidelity simulations early in the development cycle. These advances will enable modeling and simulation of manufacturing methods, which will in turn allow manufacturing considerations to be included much earlier in the system development cycle. Significant cost savings, increased quality, and decreased manufacturing cycle time are expected to result. This paper gives an overview of NASA's Intelligent Synthesis Environment, the agency initiative to develop an AEE, with a focus on the anticipated benefits in aerospace manufacturing.
CONFIG: Integrated engineering of systems and their operation
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Ryan, Dan; Fleming, Land
1994-01-01
This article discusses CONFIG 3, a prototype software tool that supports integrated conceptual design evaluation from early in the product life cycle, by supporting isolated or integrated modeling, simulation, and analysis of the function, structure, behavior, failures and operations of system designs. Integration and reuse of models is supported in an object-oriented environment providing capabilities for graph analysis and discrete event simulation. CONFIG supports integration among diverse modeling approaches (component view, configuration or flow path view, and procedure view) and diverse simulation and analysis approaches. CONFIG is designed to support integrated engineering in diverse design domains, including mechanical and electro-mechanical systems, distributed computer systems, and chemical processing and transport systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Timothy M.; Palmintier, Bryan; Suryanarayanan, Siddharth
As more Smart Grid technologies (e.g., distributed photovoltaic, spatially distributed electric vehicle charging) are integrated into distribution grids, static distribution simulations are no longer sufficient for performing modeling and analysis. GridLAB-D is an agent-based distribution system simulation environment that allows fine-grained end-user models, including geospatial and network topology detail. A problem exists in that, without outside intervention, once the GridLAB-D simulation begins execution, it will run to completion without allowing the real-time interaction of Smart Grid controls, such as home energy management systems and aggregator control. We address this lack of runtime interaction by designing a flexible communication interface, Bus.py (pronounced bus-dot-pie), that uses Python to pass messages between one or more GridLAB-D instances and a Smart Grid simulator. This work describes the design and implementation of Bus.py, discusses its usefulness in terms of some Smart Grid scenarios, and provides an example of an aggregator-based residential demand response system interacting with GridLAB-D through Bus.py. The small scale example demonstrates the validity of the interface and shows that an aggregator using said interface is able to control residential loads in GridLAB-D during runtime to cause a reduction in the peak load on the distribution system in (a) peak reduction and (b) time-of-use pricing cases.
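A hedged sketch of such a runtime interaction follows; the class and method names below are invented for illustration and are not the actual Bus.py API.

```python
# Hypothetical aggregator step coupled to GridLAB-D through a message-passing
# interface in the spirit of Bus.py (all names here are illustrative).

class FakeBus:
    """Stand-in for the GridLAB-D side of the message interface."""
    def __init__(self):
        self.setpoints = {f"house_{i}": 72.0 for i in range(3)}
    def read(self, obj, prop):
        return 4.2 if prop == "measured_power_kw" else self.setpoints[obj]
    def write(self, obj, prop, value):
        self.setpoints[obj] = value
    def advance(self, seconds):
        pass  # a real interface would let GridLAB-D simulate `seconds` onward

def demand_response_step(bus, homes, feeder_limit_kw):
    total = sum(bus.read(h, "measured_power_kw") for h in homes)
    if total > feeder_limit_kw:
        for h in homes:  # relax cooling setpoints to shed peak load
            bus.write(h, "cooling_setpoint_f",
                      bus.read(h, "cooling_setpoint_f") + 2.0)
    bus.advance(60)      # hand control back to the grid simulation

bus = FakeBus()
demand_response_step(bus, list(bus.setpoints), feeder_limit_kw=10.0)
print(bus.setpoints)     # setpoints raised: load shed this interval
```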
ERIC Educational Resources Information Center
Denham, A. R.
2015-01-01
There has been a steady rise in the support for games as learning environments. This support is largely based on the strong levels of engagement and motivation observed during gameplay. What has proven difficult is the ability to consistently design and develop learning games that are both engaging and educationally viable. Those in the game-based…
The United States Army Functional Concept for Intelligence, 2016-2028
2010-10-13
Intelligence improvement strategies historically addressed the changing operational environment by creating sensors and analytical systems designed to locate... hierarchical, centrally-directed combat formations and predict their actions in high-intensity conflict. These strategies assumed that intelligence... (4) U.S. operations can be derailed over time through a strategy of exhaustion. (5) U.S. forces distributed over wide areas can be
NASA Technical Reports Server (NTRS)
Mojarradi, Mohammad M.; Kolawa, Elizabeth; Blalock, Benjamin; Johnson, R. Wayne
2005-01-01
Next generation space-based robotics systems will be constructed using distributed architectures where electronics capable of working in the extreme environments of the planets of the solar system are integrated with the sensors and actuators in plug-and-play modules and are connected through common multiple redundant data and power buses.
DOS Design/Application Tools System/Segment Specification. Volume 3
1990-09-01
consume the same information to obtain that information without "manual" translation by people. Solving the information management problem effectively... and consumes even more information than centralized development. Distributed systems cannot be developed successfully by experiment without... human intervention because all tools consume input from and produce output to the same repository. New tools are easily absorbed into the environment
Privacy enhanced group communication in clinical environment
NASA Astrophysics Data System (ADS)
Li, Mingyan; Narayanan, Sreeram; Poovendran, Radha
2005-04-01
Privacy protection of medical records has always been an important issue and is mandated by the recent Health Insurance Portability and Accountability Act (HIPAA) standards. In this paper, we propose security architectures for a tele-referring system that allows electronic group communication among professionals for better quality treatment, while protecting patient privacy against unauthorized access. Although DICOM defines the much-needed guidelines for confidentiality of medical data during transmission, existing medical security systems make no provision for guaranteeing patient privacy once the data has been received. In our design, we address this issue by using a watermarking technique to enable tracing back to any recipient whose received data is disclosed to outsiders. We present the security architecture design of a tele-referring system using both a distributed approach and a centralized web-based approach. The resulting tele-referring system (i) provides confidentiality during transmission and ensures integrity and authenticity of the received data, (ii) allows tracing of the recipient who has either distributed the data to outsiders or whose system has been compromised, (iii) provides proof of receipt or origin, and (iv) is easy to use and low-cost to deploy in a clinical environment.
Space Station Module Power Management and Distribution System (SSM/PMAD)
NASA Technical Reports Server (NTRS)
Miller, William (Compiler); Britt, Daniel (Compiler); Elges, Michael (Compiler); Myers, Chris (Compiler)
1994-01-01
This report provides an overview of the Space Station Module Power Management and Distribution (SSM/PMAD) testbed system and describes recent enhancements to that system. Four tasks made up the original contract: (1) common module power management and distribution system automation plan definition; (2) definition of hardware and software elements of automation; (3) design, implementation, and delivery of the hardware and software making up the SSM/PMAD system; and (4) definition and development of the host breadboard computer environment. Additions and/or enhancements to the SSM/PMAD testbed that have occurred since July 1990 are reported. These include: (1) rehosting the MAESTRO scheduler; (2) reorganization of the automation software internals; (3) a more robust communications package; (4) the activity editor to the MAESTRO scheduler; (5) rehosting the LPLMS to execute under KNOMAD; (6) completion of the KNOMAD knowledge management facility; (7) significant improvement of the user interface; (8) soft and incipient fault handling design; (9) implementation of intermediate levels of autonomy; and (10) switch maintenance.
The AI Bus architecture for distributed knowledge-based systems
NASA Technical Reports Server (NTRS)
Schultz, Roger D.; Stobie, Iain
1991-01-01
The AI Bus architecture is a layered, distributed, object-oriented framework developed to support the requirements of advanced technology programs for an order-of-magnitude improvement in software costs. The consequent need for highly autonomous computer systems, adaptable to new technology advances over a long lifespan, led to the design of an open architecture and toolbox for building large-scale, robust, production-quality systems. The AI Bus accommodates a mix of knowledge-based and conventional components, running in heterogeneous, distributed real-world and testbed environments. The concepts and design of the AI Bus architecture are described, along with its current implementation status as a Unix C++ library of reusable objects. Each high-level semiautonomous agent process consists of a number of knowledge sources together with inter-agent communication mechanisms based on shared blackboards and message-passing acquaintances. Standard interfaces and protocols are followed for combining and validating subsystems. Dynamic probes or demons provide an event-driven means for giving active objects shared access to resources, and to each other, while not violating their security.
An educational distributed Cosmic Ray detector network based on ArduSiPM
NASA Astrophysics Data System (ADS)
Bocci, V.; Chiodi, G.; Fresch, P.; Iacoangeli, F.; Recchia, L.
2017-10-01
The advent of high-performance microcontrollers equipped with analog and digital peripherals makes it possible to design a complete particle detector, and its acquisition system, on a single microcontroller chip. The existence of a worldwide data infrastructure such as the internet allows for the conception of a distributed network of cheap detectors able to elaborate and send data as well as to respond to configuration commands. The internet infrastructure enables the distribution of absolute time, with a precision of a few milliseconds, to all devices independently of their physical location; where a sky view is accessible, a GPS module can be used to reach synchronization of tens of nanoseconds. These devices can be far apart from each other, with relative distances ranging from a few meters to thousands of kilometers. This allows for the design of a crowdsourced citizen-science experiment based on the use of many small scintillation-based particle detectors to monitor high-energy cosmic rays and the radiation environment.
Macro-/micro-environment-sensitive chemosensing and biological imaging.
Yang, Zhigang; Cao, Jianfang; He, Yanxia; Yang, Jung Ho; Kim, Taeyoung; Peng, Xiaojun; Kim, Jong Seung
2014-07-07
Environment-related parameters, including viscosity, polarity, temperature, hypoxia, and pH, play pivotal roles in controlling the physical or chemical behavior of local molecules. In particular, in a biological environment, such factors predominantly determine the biological properties of the local environment or reflect corresponding status alterations. Abnormal changes in these factors can cause cellular malfunction or become a hallmark of the occurrence of severe diseases. Therefore, in recent years, they have increasingly attracted research interest from the fields of chemistry and biological chemistry. With the emergence of fluorescence sensing and imaging technology, a number of fluorescent chemosensors have been designed to respond to such parameters and to map their distributions and variations in vitro/in vivo. In this work, we review the environment-responsive chemosensors reported thus far for the fluorescent recognition of viscosity, polarity, temperature, hypoxia, and pH.
Defining fire environment zones in the boreal forests of northeastern China.
Wu, Zhiwei; He, Hong S; Yang, Jian; Liang, Yu
2015-06-15
Fire activity in boreal forests will substantially increase with prolonged growing seasons under a warming climate. This trend poses challenges to managing fires in boreal forest landscapes. A fire environment zone map offers a basis for evaluating these fire-related problems and designing more effective fire management plans to improve the allocation of management resources across a landscape. Toward that goal, we identified three fire environment zones across boreal forest landscapes in northeastern China using analytical methods to identify spatial clustering of the environmental variables of climate, vegetation, topography, and human activity. The three fire environment zones were found to be in strong agreement with the spatial distributions of the historical fire data (occurrence, size, and frequency) for 1966-2005. This paper discusses how the resulting fire environment zone map can be used to guide forest fire management and fire regime prediction. Copyright © 2015 Elsevier B.V. All rights reserved.
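The abstract above does not name its clustering algorithm beyond "analytical methods." As one concrete stand-in, the sketch below groups standardized environmental variables into three zones with k-means; the data, the choice of k-means, and the variable set are all illustrative assumptions, not the paper's method.

```python
# Illustrative k-means clustering of environmental variables into three
# zones; the paper's actual analytical method may differ.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical rows: (climate index, vegetation index, slope, human activity)
X = rng.random((200, 4))
X = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize variables

k = 3
centers = X[rng.choice(len(X), k, replace=False)]
for _ in range(20):                            # Lloyd's algorithm iterations
    labels = np.argmin(((X[:, None] - centers) ** 2).sum(axis=2), axis=1)
    centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])

print(np.bincount(labels))                     # cell counts per fire zone
```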
Intelligent computer-aided training authoring environment
NASA Technical Reports Server (NTRS)
Way, Robert D.
1994-01-01
Although there has been much research into intelligent tutoring systems (ITS), few authoring systems are available that support ITS metaphors. Instructional developers are generally obliged to use tools designed for creating on-line books. We are currently developing an authoring environment derived from NASA's research on intelligent computer-aided training (ICAT). The ICAT metaphor, currently in use at NASA, has proven effective in disciplines from satellite deployment to high school physics. This technique provides a personal trainer (PT) who instructs the student in a simulated work environment (SWE). The PT acts as a tutor, providing individualized instruction and assistance to each student. Teaching in an SWE allows the student to learn tasks by doing them, rather than by reading about them. This authoring environment will expedite ICAT development by providing a tool set that guides the trainer modeling process. Additionally, this environment provides a vehicle for distributing NASA's ICAT technology to the private sector.
EPICS as a MARTe Configuration Environment
NASA Astrophysics Data System (ADS)
Valcarcel, Daniel F.; Barbalace, Antonio; Neto, André; Duarte, André S.; Alves, Diogo; Carvalho, Bernardo B.; Carvalho, Pedro J.; Sousa, Jorge; Fernandes, Horácio; Goncalves, Bruno; Sartori, Filippo; Manduchi, Gabriele
2011-08-01
The Multithreaded Application Real-Time executor (MARTe) software provides an environment for the hard real-time execution of codes while leveraging a standardized algorithm development process. The Experimental Physics and Industrial Control System (EPICS) software allows the deployment and remote monitoring of networked control systems. Channel Access (CA) is the protocol that enables communication between distributed EPICS components; it allows process variables belonging to different systems to be set and monitored across the network. The COntrol and Data Acquisition and Communication (CODAC) system for the ITER Tokamak will be EPICS-based and will be used to monitor and live-configure the plant controllers. Reconfiguration capability in a hard real-time system requires strict latencies from request to actuation and is a key element in the design of the distributed control algorithm. Presently, MARTe and its objects are configured using a well-defined structured language; after each configuration, all objects are destroyed and the system rebuilt, following the strong hard real-time rule that a real-time system in online mode must behave in a strictly deterministic fashion. This paper presents the design and considerations for using MARTe as a plant controller that is EPICS-monitorable and configurable without disturbing execution at any time, in particular during a plasma discharge. The solutions designed for this are presented and discussed.
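To make the Channel Access workflow concrete, here is a minimal sketch of setting and monitoring a process variable, assuming the widely used pyepics Python bindings; the PV name below is hypothetical and nothing here is MARTe- or CODAC-specific.

```python
# Sketch of reading, writing, and monitoring an EPICS process variable
# over Channel Access, assuming pyepics is installed and an IOC serves
# the (hypothetical) PV name below.
import epics

value = epics.caget("MARTE:GAM1:GAIN")          # read a remote PV
print("current gain:", value)
epics.caput("MARTE:GAM1:GAIN", 2.5, wait=True)  # reconfigure remotely

# A monitor callback mirrors the live-configuration use case:
def on_change(pvname=None, value=None, **kw):
    print(f"{pvname} changed to {value}")

pv = epics.PV("MARTE:GAM1:GAIN", callback=on_change)
```

The monitor pattern is the relevant one for the paper's goal: an external client observes and adjusts parameters while the real-time executor keeps running deterministically.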
Propagation model for the Land Mobile Satellite channel in urban environments
NASA Technical Reports Server (NTRS)
Sforza, M.; Dibernardo, G.; Cioni, R.
1993-01-01
This paper presents the major characteristics of a simulation package capable of performing a complete narrow- and wideband analysis of the mobile satellite communication channel in urban environments for any given orbital configuration. The wavelength-to-average urban geometrical dimension ratio has required the use of the Geometrical Theory of Diffraction (GTD). The model has been designed for the RF frequency range of 1 up to 60 GHz and extended to include the effects of non-perfect conductivity and surface roughness. Taking advantage of the inherent capabilities of such a high-frequency method, we are able to provide a complete description of the electromagnetic field at the mobile terminal. Using the information made available at the ray-tracer and GTD solver outputs, the Land Mobile Satellite (LMS) urban model can also give a detailed description of the communication channel in terms of power delay profiles, Doppler spectra, channel scattering functions, and so forth. Statistical data, e.g. cumulative distribution functions, level crossing rates, or distributions of fades, are also provided. The user can access the simulation tool through a user-friendly Design-CAD interface, by means of which she can design her own urban layout and then run all the envisaged routines. The software is optimized for execution time, so that numerous runs can be completed in a considerably short time.
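The kind of fade statistics the model outputs can be illustrated with a textbook Rayleigh (dense urban, non-line-of-sight) channel; this stand-in is an assumption for illustration only and is not the paper's GTD-based model.

```python
# Empirical fade-depth CDF for a Rayleigh channel, illustrating the
# statistical outputs (CDFs of fades) the LMS model provides.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
envelope = np.abs(rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
fade_db = 20 * np.log10(envelope)              # fade relative to RMS level

for threshold in (-20, -10, -3, 0):
    p = np.mean(fade_db < threshold)           # empirical CDF
    print(f"P(fade < {threshold:+d} dB) = {p:.4f}")
```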
Proceedings of the Workshop on software tools for distributed intelligent control systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herget, C.J.
1990-09-01
The Workshop on Software Tools for Distributed Intelligent Control Systems was organized by Lawrence Livermore National Laboratory for the United States Army Headquarters Training and Doctrine Command and the Defense Advanced Research Projects Agency. The goals of the workshop were to identify the current state of the art in tools which support control systems engineering design and implementation, identify research issues associated with writing software tools which would provide a design environment to assist engineers in multidisciplinary control design and implementation, formulate a potential investment strategy to resolve the research issues and develop public domain code which can form the core of more powerful engineering design tools, and recommend test cases to focus the software development process and test associated performance metrics. Recognizing that the development of software tools for distributed intelligent control systems will require a multidisciplinary effort, experts in systems engineering, control systems engineering, and computer science were invited to participate in the workshop. In particular, experts who could address the following topics were selected: operating systems, engineering data representation and manipulation, emerging standards for manufacturing data, mathematical foundations, coupling of symbolic and numerical computation, user interface, system identification, system representation at different levels of abstraction, system specification, system design, verification and validation, automatic code generation, and integration of modular, reusable code.
1992-03-17
[Report front-matter fragment; approved for public release, distribution unlimited; Phillips Laboratory, Air Force Systems Command, Hanscom Air Force Base, Massachusetts 01731. Recoverable content: validation of the SWOE thermal models and the design of a new Command Interface System and User Interface System, including operation of the 3-D tree model via the SWOE Command Interface System and the addition of radiation exchange to the environment.]
NASA Technical Reports Server (NTRS)
Ortega, J. M.
1984-01-01
Several short summaries of the work performed during this reporting period are presented. Topics discussed in this document include: (1) resilient seeded errors via simple techniques; (2) knowledge representation for engineering design; (3) analysis of faults in a multiversion software experiment; (4) implementation of a parallel programming environment; (5) symbolic execution of concurrent programs; (6) two computer graphics systems for visualization of pressure distributions and convective density particles; (7) design of a source code management system; (8) vectorizing the incomplete conjugate gradient method on the Cyber 203/205; (9) extensions of domain testing theory; and (10) a performance analyzer for the PISCES system.
1994-08-01
[Report fragment: table-of-contents residue from a study of the dynamics of a rotor-driven S&A (safety and arming) mechanism with a two-pass clock gear train and a verge runaway escapement operating in an aeroballistic environment. Recoverable headings cover momentum and its derivatives in various coordinate systems, the absolute acceleration of the geometric center of the S&A plane, a coordinate system fixed to the underside of the mechanism plane (applicable to the M577 S&A), and the program Aercloc.]
NASA Technical Reports Server (NTRS)
Cohen, M. M.
1985-01-01
The space station program is based on a set of premises about mission requirements and the operational capabilities of the space shuttle. These premises will influence the human behavioral factors and conditions on board the space station. They include: launch in the STS Orbiter payload bay, orbital characteristics, power supply, microgravity environment, autonomy from the ground, crew make-up and organization, distributed command and control, safety, and logistics resupply. The most immediate design impacts of these premises will be upon the architectural organization and internal environment of the space station.
Terrestrial environment (climatic) criteria guidelines for use in aerospace vehicle development
NASA Technical Reports Server (NTRS)
Turner, R. E. (Compiler); Hill, C. K. (Compiler)
1982-01-01
Guidelines on terrestrial environment data specifically applicable to NASA aerospace vehicle and associated equipment development are provided. The general distribution of natural environmental extremes in the conterminous United States that may be needed to specify design criteria for the transportation of space vehicle subsystems and components is considered. Atmospheric attenuation is included, since certain Earth orbital experiment missions are influenced by the Earth's atmosphere. Climatic extremes for worldwide operational needs are also included. Atmospheric chemistry, seismic criteria, and a mathematical model to predict the atmospheric dispersion of aerospace engine exhaust cloud rise and growth are discussed. Atmospheric cloud phenomena are also considered.
Development of a COTS-Based Computing Environment Blueprint Application at KSC
NASA Technical Reports Server (NTRS)
Ghansah, Isaac; Boatright, Bryan
1996-01-01
This paper describes a blueprint that can be used for developing a distributed computing environment (DCE) for NASA in general, and the Kennedy Space Center (KSC) in particular. A comprehensive, open, secure, integrated, multi-vendor DCE such as OSF DCE is suggested. Design issues, as well as recommendations for each component, are given. Where necessary, modifications are suggested to fit the needs of KSC; this was done in the areas of security and directory services. Readers requiring more comprehensive coverage are encouraged to refer to the eight-chapter document prepared for this work.
NASA Technical Reports Server (NTRS)
Reuther, James; Alonso, Juan Jose; Rimlinger, Mark J.; Jameson, Antony
1996-01-01
This work describes the application of a control theory-based aerodynamic shape optimization method to the problem of supersonic aircraft design. The design process is greatly accelerated through the use of both control theory and a parallel implementation on distributed memory computers. Control theory is employed to derive the adjoint differential equations whose solution allows for the evaluation of design gradient information at a fraction of the computational cost required by previous design methods. The resulting problem is then implemented on parallel distributed memory architectures using a domain decomposition approach, an optimized communication schedule, and the MPI (Message Passing Interface) Standard for portability and efficiency. The final result achieves very rapid aerodynamic design based on higher order computational fluid dynamics methods (CFD). In our earlier studies, the serial implementation of this design method was shown to be effective for the optimization of airfoils, wings, wing-bodies, and complex aircraft configurations using both the potential equation and the Euler equations. In our most recent paper, the Euler method was extended to treat complete aircraft configurations via a new multiblock implementation. Furthermore, during the same conference, we also presented preliminary results demonstrating that this basic methodology could be ported to distributed memory parallel computing architectures. In this paper, our concern will be to demonstrate that the combined power of these new technologies can be used routinely in an industrial design environment by applying it to the case study of the design of typical supersonic transport configurations. A particular difficulty of this test case is posed by the propulsion/airframe integration.
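The cost advantage of the adjoint approach described above can be shown with a toy linear example: for a discrete state equation A y = b(x) and a cost J = c·y, a single adjoint solve yields the gradient with respect to every design variable at once. All matrices and sizes below are illustrative, and the linear model is a deliberate simplification of the paper's Euler/potential-equation setting.

```python
# Toy illustration of the adjoint idea: one adjoint solve gives dJ/dx
# for all design variables, instead of one state solve per variable.
import numpy as np

rng = np.random.default_rng(2)
n, m = 50, 10                       # state size, number of design variables
A = np.eye(n) + 0.1 * rng.random((n, n))
B = rng.random((n, m))              # db/dx: how design variables enter b
c = rng.random(n)

x = rng.random(m)
y = np.linalg.solve(A, B @ x)       # state solve
J = c @ y

lam = np.linalg.solve(A.T, c)       # single adjoint solve
grad = B.T @ lam                    # dJ/dx, all m components at once

# Check one component by finite differences:
eps = 1e-6
x2 = x.copy(); x2[0] += eps
J2 = c @ np.linalg.solve(A, B @ x2)
print(grad[0], (J2 - J) / eps)      # should agree closely
```

In the paper's setting the state solve is a CFD computation, so replacing m forward solves with one adjoint solve is what makes gradient-based design with many shape variables affordable.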
NASA Technical Reports Server (NTRS)
Conroy, Michael; Mazzone, Rebecca; Little, William; Elfrey, Priscilla; Mann, David; Mabie, Kevin; Cuddy, Thomas; Loundermon, Mario; Spiker, Stephen; McArthur, Frank;
2010-01-01
The Distributed Observer Network (DON) is a NASA collaborative environment that leverages game technology to bring three-dimensional simulations to conventional desktop and laptop computers, allowing teams of engineers working on design and operations, either individually or in groups, to view and collaborate on 3D representations of data generated by authoritative tools such as Delmia Envision, Pro/Engineer, or Maya. DON takes models and telemetry from these sources and, using commercial game engine technology, displays the simulation results in a 3D visual environment. DON has been designed to enhance accessibility and the user's ability to observe and analyze visual simulations in real time. The team experimented with a variety of NASA mission segment simulations [Synergistic Engineering Environment (SEE) data, NASA Enterprise Visualization Analysis (NEVA) ground processing simulations, the DSS simulation for lunar operations, and the Johnson Space Center (JSC) TRICK tool for guidance, navigation, and control analysis] and targeted desired functionalities, i.e., TiVo-like playback, the capability to communicate textually or via Voice-over-Internet Protocol (VoIP) among team members, and the ability to write and save notes for later access. The resulting DON application was slated for early 2008 release to support simulation use for the Constellation Program and its teams. Users connect to DON through a client that runs on their PC or Mac, enabling them to observe and analyze the simulation data as their schedules allow and to review it as frequently as desired. DON team members can move freely within the virtual world, and preset camera points can be established, enabling team members to jump to specific views. This improves opportunities for shared analysis of options, design reviews, tests, operations, training, and evaluations, and improves prospects for verification of requirements, issues, and approaches among dispersed teams.
A cloud-based X73 ubiquitous mobile healthcare system: design and implementation.
Ji, Zhanlin; Ganchev, Ivan; O'Droma, Máirtín; Zhang, Xin; Zhang, Xueji
2014-01-01
Based on the user-centric paradigm for next-generation networks, this paper describes a ubiquitous mobile healthcare (uHealth) system based on the ISO/IEEE 11073 personal health data (PHD) standards (X73) and cloud computing techniques. A number of design issues associated with the system implementation are outlined. The system includes a middleware on the user side, providing a plug-and-play environment for heterogeneous wireless sensors and mobile terminals utilizing different communication protocols, and a distributed "big data" processing subsystem in the cloud. The design and implementation of this system are envisaged as an efficient solution for the next generation of uHealth systems.
Noise and autism spectrum disorder in children: An exploratory survey.
Kanakri, Shireen M; Shepley, Mardelle; Varni, James W; Tassinary, Louis G
2017-04-01
With more students being educated in schools for Autism Spectrum Disorder (ASD) than ever before, architects and interior designers need to consider the environmental features that may be modified to enhance the academic and social success of autistic students in school. This study explored existing empirical research on the impact of noise on children with ASD and provides recommendations regarding design features that can contribute to noise reduction. A survey, which addressed the impact of architectural design elements on autism-related behavior, was developed for teachers of children with ASD and distributed to three schools. Most teachers found noise control to be an important issue for students with autism and many observed children using ear defenders. In terms of managing issues related to noise, most teachers agreed that thick or soundproof walls and carpet in the classroom were the most important issues for children with ASD. Suggested future research should address architectural considerations for building an acoustically friendly environment for children with autism, identifying patterns of problematic behaviors in response to acoustical features of the built environment of the classroom setting, and ways to manage maladaptive behaviors in acoustically unfriendly environments. Copyright © 2017 Elsevier Ltd. All rights reserved.
ScyFlow: An Environment for the Visual Specification and Execution of Scientific Workflows
NASA Technical Reports Server (NTRS)
McCann, Karen M.; Yarrow, Maurice; DeVivo, Adrian; Mehrotra, Piyush
2004-01-01
With the advent of grid technologies, scientists and engineers are building more and more complex applications to utilize distributed grid resources. The core grid services provide a path for accessing and utilizing these resources in a secure and seamless fashion. However, what scientists need is an environment that will allow them to specify their application runs at a high organizational level and then support efficient execution across any given set or sets of resources. We have been designing and implementing ScyFlow, a dual-interface architecture (both GUI and API) that addresses this problem. The scientist/user specifies the application tasks along with the necessary control and data flow, and monitors and manages the execution of the resulting workflow across the distributed resources. In this paper, we utilize two scenarios to provide the details of the two modules of the project, the visual editor and the runtime workflow engine.
An authentication infrastructure for today and tomorrow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engert, D.E.
1996-06-01
The Open Software Foundation's Distributed Computing Environment (OSF/DCE) was originally designed to provide a secure environment for distributed applications. By combining it with Kerberos Version 5 from MIT, it can be extended to provide network security as well. This combination can be used to build both an inter- and intra-organizational infrastructure while providing single sign-on for the user with overall improved security. The ESnet community of the Department of Energy is building just such an infrastructure. ESnet has modified these systems to improve their interoperability, while encouraging the developers to incorporate these changes and work more closely together to continue to improve the interoperability. The success of this infrastructure depends on its flexibility to meet the needs of many applications and network security requirements. The open nature of Kerberos, combined with the vendor support of OSF/DCE, provides the infrastructure for today and tomorrow.
Chaotic mixing by microswimmers moving on quasiperiodic orbits
NASA Astrophysics Data System (ADS)
Jalali, Mir Abbas; Khoshnood, Atefeh; Alam, Mohammad-Reza
2015-11-01
Life on the Earth is strongly dependent upon mixing across a vast range of scales. For example, mixing distributes nutrients for microorganisms in aquatic environments and balances the spatial energy distribution in the oceans and the atmosphere. From an industrial point of view, mixing is essential in many microfluidic processes and lab-on-a-chip operations, polymer engineering, pharmaceutics, food engineering, petroleum engineering, and biotechnology. Efficient mixing, typically characterized by chaotic advection, is hard to achieve in low-Reynolds-number conditions because of the linear nature of the Stokes equation that governs the motion. We report the first demonstration of chaotic mixing induced by a microswimmer that strokes on quasiperiodic orbits with multi-loop turning paths. Our findings can be utilized to understand the interactions of microorganisms with their environments, and to design autonomous robotic mixers that can sweep and mix an entire volume of complex-geometry containers.
Service Discovery Oriented Management System Construction Method
NASA Astrophysics Data System (ADS)
Li, Huawei; Ren, Ying
2017-10-01
To address the lack of a uniform method for designing service quality management systems in large-scale, complex service environments, this paper proposes a construction method for a distributed, service-discovery-oriented management system. Three measurement functions are proposed to compute nearest-neighbor user similarity at different levels. In view of the low efficiency of current service quality management systems, three solutions are proposed to improve system efficiency. Finally, the key technologies of a distributed service quality management system based on service discovery are summarized through quantitative experiments using factor addition and subtraction.
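The paper's three measurement functions are not given in the abstract; as a stand-in, the sketch below computes nearest-neighbor user similarity over quality-of-service rating vectors with cosine similarity. The function, user names, and ratings are all illustrative assumptions.

```python
# Sketch of one plausible nearest-neighbor user similarity measure for
# service quality records; cosine similarity stands in for the paper's
# (unspecified) measurement functions.
import numpy as np

def cosine_similarity(u, v):
    """Similarity of two users' quality-of-service rating vectors."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

ratings = {                      # hypothetical per-service QoS ratings
    "user_a": [4.0, 3.5, 5.0, 2.0],
    "user_b": [4.5, 3.0, 4.5, 2.5],
    "user_c": [1.0, 5.0, 2.0, 4.0],
}
target = ratings["user_a"]
neighbors = sorted(ratings, key=lambda u: -cosine_similarity(target, ratings[u]))
print(neighbors)                 # user_a itself first, then nearest users
```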
CAD/CAE Integration Enhanced by New CAD Services Standard
NASA Technical Reports Server (NTRS)
Claus, Russell W.
2002-01-01
A Government-industry team led by the NASA Glenn Research Center has developed a computer interface standard for accessing data from computer-aided design (CAD) systems. The Object Management Group, an international computer standards organization, has adopted this CAD Services standard. The new standard allows software (e.g., computer-aided engineering (CAE) and computer-aided manufacturing (CAM) software) to access multiple CAD systems through one programming interface. The interface is built on top of a distributed computing system called the Common Object Request Broker Architecture (CORBA). CORBA allows the CAD Services software to operate in a distributed, heterogeneous computing environment.
Field test of classical symmetric encryption with continuous variables quantum key distribution.
Jouguet, Paul; Kunz-Jacques, Sébastien; Debuisschert, Thierry; Fossier, Simon; Diamanti, Eleni; Alléaume, Romain; Tualle-Brouri, Rosa; Grangier, Philippe; Leverrier, Anthony; Pache, Philippe; Painchault, Philippe
2012-06-18
We report on the design and performance of a point-to-point classical symmetric encryption link with fast key renewal provided by a Continuous Variable Quantum Key Distribution (CVQKD) system. Our system was operational and able to encrypt point-to-point communications for more than six months, from the end of July 2010 until the beginning of February 2011. This field test was the first demonstration of the reliability of a CVQKD system over a long period of time in a server-room environment, strengthening the potential of CVQKD for information technology security infrastructure deployments.
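A minimal sketch of the classical side of such a link, symmetric encryption with fast key renewal, follows. It assumes the Python cryptography package's AES-GCM primitive; os.urandom stands in for the fresh key material a QKD layer would deliver, and the class name and key length are illustrative.

```python
# Sketch of a symmetric link with fast key renewal; os.urandom stands in
# for QKD-delivered key material. AES-GCM is from the 'cryptography' pkg.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class RenewedLink:
    def __init__(self):
        self.renew_key()

    def renew_key(self):
        """Called whenever the key-distribution layer delivers fresh keys."""
        self.key = os.urandom(32)          # 256-bit session key
        self.aead = AESGCM(self.key)

    def encrypt(self, plaintext: bytes) -> bytes:
        nonce = os.urandom(12)
        return nonce + self.aead.encrypt(nonce, plaintext, None)

    def decrypt(self, blob: bytes) -> bytes:
        return self.aead.decrypt(blob[:12], blob[12:], None)

link = RenewedLink()
msg = link.encrypt(b"point-to-point traffic")
print(link.decrypt(msg))
link.renew_key()                            # fast key renewal between messages
```

The security argument in such systems rests on how often `renew_key` is fed genuinely fresh key material, which is exactly what the CVQKD layer provides.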
Coordinated control of micro-grid based on distributed moving horizon control.
Ma, Miaomiao; Shao, Liyang; Liu, Xiangjie
2018-05-01
This paper proposes a distributed moving-horizon coordinated control scheme for the power balance and economic dispatch problems of a micro-grid based on distributed generation. We design the power coordination controller for each subsystem via moving-horizon control by minimizing a suitable objective function. The objective function of the distributed moving-horizon coordinated controller is chosen on the principle that the wind power subsystem has priority to generate electricity, the photovoltaic subsystem coordinates with the wind subsystem, and the battery is activated only when necessary to meet the load demand. The simulation results illustrate that the proposed controller can allocate the output power of the two generation subsystems reasonably under varying environmental conditions, which not only satisfies the load demand but also limits excessive fluctuations of output power to protect the power generation equipment. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
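The dispatch priority stated in the abstract (wind first, photovoltaic second, battery only when needed) can be written as a plain rule, sketched below; the surrounding moving-horizon optimization is omitted, and the function name and kilowatt figures are illustrative.

```python
# Sketch of the stated dispatch priority: wind first, PV second, battery
# only for the residual. The moving-horizon optimization is omitted.
def dispatch(load, wind_avail, pv_avail, battery_max):
    wind = min(load, wind_avail)                  # wind has priority
    pv = min(load - wind, pv_avail)               # PV coordinates with wind
    battery = min(load - wind - pv, battery_max)  # battery fills the residual
    unserved = load - wind - pv - battery
    return {"wind": wind, "pv": pv, "battery": battery, "unserved": unserved}

print(dispatch(load=120.0, wind_avail=80.0, pv_avail=30.0, battery_max=25.0))
```

In the paper's scheme this rule is embedded in each subsystem's objective function rather than applied greedily, which is what allows the controller to also damp output-power fluctuations over the horizon.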
Distributed computing environments for future space control systems
NASA Technical Reports Server (NTRS)
Viallefont, Pierre
1993-01-01
The aim of this paper is to present the results of a CNES research project on distributed computing systems. The purpose of this research was to study the impact of the use of new computer technologies in the design and development of future space applications. The first part of this study was a state-of-the-art review of distributed computing systems. One of the interesting ideas arising from this review is the concept of a 'virtual computer' allowing the distributed hardware architecture to be hidden from a software application. The 'virtual computer' can improve system performance by adapting the best architecture (addition of computers) to the software application without having to modify its source code. This concept can also decrease the cost and obsolescence of the hardware architecture. In order to verify the feasibility of the 'virtual computer' concept, a prototype representative of a distributed space application is being developed independently of the hardware architecture.
Singh, Dadabhai T; Trehan, Rahul; Schmidt, Bertil; Bretschneider, Timo
2008-01-01
Preparedness for a possible global pandemic caused by viruses such as the highly pathogenic influenza A subtype H5N1 has become a global priority. In particular, it is critical to monitor the appearance of any new emerging subtypes. Comparative phyloinformatics can be used to monitor, analyze, and possibly predict the evolution of viruses. However, in order to utilize the full functionality of available analysis packages for large-scale phyloinformatics studies, a team of computer scientists, biostatisticians, and virologists is needed--a requirement which cannot be fulfilled in many cases. Furthermore, the time complexities of many of the algorithms involved lead to prohibitive runtimes on sequential computer platforms. This has so far hindered the use of comparative phyloinformatics as a commonly applied tool in this area. In this paper the graphically oriented workflow design system called Quascade and its efficient usage for comparative phyloinformatics are presented, with a focus on how this task can be performed effectively in a distributed computing environment. As a proof of concept, the designed workflows are used for the phylogenetic analysis of the neuraminidase of H5N1 isolates (micro level) and influenza viruses (macro level). The results are hence twofold. Firstly, the study demonstrates the usefulness of a graphical user interface system, providing a biologist-friendly approach to designing and executing complex distributed workflows for large-scale phyloinformatics studies of virus genes, and in particular the utility of the Quascade platform for deploying distributed and parallelized versions of a variety of computationally intensive phylogenetic algorithms. Secondly, the analysis of the utilized H5N1 neuraminidase datasets at macro and micro levels clearly indicates a pattern of spatial clustering of the H5N1 viral isolates based on geographical distribution rather than temporal or host-range-based clustering, and shows the importance of glycan sites in the molecular evolution of this virus.
[Development of a microenvironment test chamber for airborne microbe research].
Zhan, Ningbo; Chen, Feng; Du, Yaohua; Cheng, Zhi; Li, Chenyu; Wu, Jinlong; Wu, Taihu
2017-10-01
One of the most important indicators of environmental cleanliness is the airborne microbe count. However, the particularity of clean operating environments and controlled experimental environments often limits airborne microbe research. This paper presents the design and implementation of a microenvironment test chamber for airborne microbe research under normal test conditions. Numerical simulation with Fluent showed that airborne microbes were evenly dispersed in the upper part of the test chamber and had a bottom-up concentration growth distribution. According to the simulation results, a verification experiment was carried out by selecting 5 sampling points at different spatial positions in the test chamber. Experimental results showed that the average particle concentrations at all sampling points reached 10^7 counts/m^3 after 5 minutes of dispersing Staphylococcus aureus, and all sampling points showed consistent concentration distributions. The concentration of airborne microbes in the upper chamber was slightly higher than in the middle chamber, which in turn was slightly higher than in the bottom chamber. This is consistent with the results of the numerical simulation and shows that the system can be well used for airborne microbe research.
Cooperative high-performance storage in the accelerated strategic computing initiative
NASA Technical Reports Server (NTRS)
Gary, Mark; Howard, Barry; Louis, Steve; Minuzzo, Kim; Seager, Mark
1996-01-01
The use and acceptance of new high-performance, parallel computing platforms will be impeded by the absence of an infrastructure capable of supporting orders-of-magnitude improvement in hierarchical storage and high-speed I/O (Input/Output). The distribution of these high-performance platforms and supporting infrastructures across a wide-area network further compounds this problem. We describe an architectural design and phased implementation plan for a distributed Cooperative Storage Environment (CSE) to achieve the necessary performance, user transparency, site autonomy, communication, and security features needed to support the Accelerated Strategic Computing Initiative (ASCI). ASCI is a Department of Energy (DOE) program attempting to apply terascale platforms and Problem-Solving Environments (PSEs) toward real-world computational modeling and simulation problems. The ASCI mission must be carried out through a unified, multilaboratory effort and will require highly secure, efficient access to vast amounts of data. The CSE provides a logically simple, geographically distributed storage infrastructure of semi-autonomous cooperating sites to meet the strategic ASCI PSE goal of high-performance data storage and access at the user desktop.
Search strategy in a complex and dynamic environment (the Indian Ocean case)
NASA Astrophysics Data System (ADS)
Loire, Sophie; Arbabi, Hassan; Clary, Patrick; Ivic, Stefan; Crnjaric-Zic, Nelida; Macesic, Senka; Crnkovic, Bojan; Mezic, Igor; UCSB Team; Rijeka Team
2014-11-01
The disappearance of Malaysia Airlines Flight 370 (MH370) in the early morning hours of 8 March 2014 exposed the disconcerting lack of efficient methods for identifying where and how to look for missing objects in a complex and dynamic environment. The search area for plane debris is a remote part of the Indian Ocean, and lawnmower-type searches have been unsuccessful so far. The Lagrangian kinematics of mesoscale features are visible in hypergraph maps of the Indian Ocean surface currents. Without precise knowledge of the crash site, these maps give an estimate of the time evolution of any initial distribution of plane debris and permit the design of a search strategy. The Dynamic Spectral Multiscale Coverage (DSMC) search algorithm is modified to search for a spatial distribution of targets that evolves with time following the dynamics of ocean surface currents. Trajectories are generated for multiple search agents such that their spatial coverage converges to the target distribution. Central to the DSMC algorithm is a metric for ergodicity.
Modeling for IFOG Vibration Error Based on the Strain Distribution of Quadrupolar Fiber Coil
Gao, Zhongxing; Zhang, Yonggang; Zhang, Yunhao
2016-01-01
Improving the performance of the interferometric fiber optic gyroscope (IFOG) in harsh environments, especially vibrational environments, is necessary for its practical application. This paper presents a mathematical model for the IFOG to theoretically compute the short-term rate errors caused by mechanical vibration. The computational procedures are mainly based on the strain distribution of the quadrupolar fiber coil measured by a stress analyzer. The asymmetry of strain distribution (ASD) is defined in the paper to evaluate the winding quality of the coil. The established model reveals that high ASD and the variable fiber elastic modulus in large-strain situations are the two dominant causes of the nonreciprocal phase shift in the IFOG under vibration. Furthermore, theoretical analysis and computational results indicate that the vibration errors of both open-loop and closed-loop IFOGs increase with vibrational amplitude, vibrational frequency, and ASD. Finally, vibration-induced IFOG errors in aircraft are estimated according to the proposed model. Our work is meaningful for designing IFOG coils with better anti-vibration performance. PMID:27455257
Instrumentation and Methodology Development for Mars Mission
NASA Technical Reports Server (NTRS)
Chen, Yuan-Liang Albert
2002-01-01
The Mars environment comprises a dry, cold, low-pressure atmosphere with low gravity (0.38 g) and highly resistive soil. The global dust storms that cover large portions of Mars have often been observed from Earth. This environment provides ideal conditions for triboelectric charging. The extremely dry conditions on the Martian surface have raised concerns that electrostatic charge buildup will not be dissipated easily. If triboelectrically generated charge cannot be dissipated or avoided, then dust will accumulate on charged surfaces and electrostatic discharge may cause hazards for future exploration missions. The low surface temperature on Mars prolongs charge decay on dust particles and soil. A better understanding of the physics of Martian charged dust particles is essential to future Mars missions. We have researched and designed two sensors, a velocity/charge sensor and a PZT momentum sensor, to detect the velocity, charge, and mass distributions of Martian charged dust particles. These sensors are being fabricated at the NASA Kennedy Space Center Electromagnetic Physics Testbed, where they will be tested and calibrated under simulated Mars atmospheric conditions with JSC MARS-1 Martian regolith simulant.
GSHR-Tree: a spatial index tree based on dynamic spatial slot and hash table in grid environments
NASA Astrophysics Data System (ADS)
Chen, Zhanlong; Wu, Xin-cai; Wu, Liang
2008-12-01
Computation Grids enable the coordinated sharing of large-scale, distributed, heterogeneous computing resources that can be used to solve computationally intensive problems in science, engineering, and commerce. Grid spatial applications are made possible by high-speed networks and a new generation of Grid middleware that resides between networks and traditional GIS applications. The integration of multi-source, heterogeneous spatial information, the management of distributed spatial resources, and the sharing and cooperative use of spatial data and Grid services are the key problems to resolve in developing a Grid GIS. The performance of the spatial index mechanism is a key technology of Grid GIS and spatial databases, and it affects the holistic performance of a GIS in Grid environments. In order to improve the efficiency of parallel processing of massive spatial data in a distributed parallel computing grid environment, this paper presents GSHR-Tree, a new grid-slot hash parallel spatial index structure established within a parallel spatial indexing mechanism. Based on a hash table and dynamic spatial slots, GSHR-Tree improves the structure of the classical parallel R-tree index and makes full use of the good qualities of both the R-tree and the hash data structure, yielding a parallel spatial index that meets the needs of parallel grid computing over massive spatial data in a distributed network. The algorithm splits space into multiple slots by multiplying and reverting, and maps these slots to sites in the distributed parallel system; each site constructs the spatial objects in its slot into an R-tree. On the basis of this tree structure, the index data are distributed among multiple nodes in the grid network using a large-node R-tree method, and load imbalance during processing can be quickly corrected by a dynamic adjustment algorithm. The structure accounts for the distribution, replication, and transfer operations of a spatial index in the grid environment; its design ensures load balance in parallel computation, making it well suited to parallel processing of spatial information in distributed network environments. Instead of the recursive comparison of spatial objects used in the original R-tree, the algorithm builds the spatial index using binary code operations, which computers execute more efficiently, with an extended dynamic hash code for bit comparison. In GSHR-Tree, a new server is assigned to the network whenever a split of a full node is required. We describe a flexible allocation protocol that copes with temporary shortages of storage resources: it uses a distributed, balanced binary spatial tree that scales with insertions to potentially any number of storage servers through splits of the overloaded ones. An application manipulates the GSHR-Tree structure from a node in the grid environment and addresses the tree through an image that splits can render outdated; the resulting addressing errors are resolved by forwarding among the servers. Finally, a spatial index data distribution algorithm that limits the number of servers is proposed, improving storage utilization at the cost of additional messages. The GSHR-Tree scheme is intended to fit the needs of new applications using ever-larger sets of spatial data.
Our proposal constitutes a flexible storage allocation method for a distributed spatial index. The insertion policy can be tuned dynamically to cope with periods of storage shortage; in such cases storage balancing should be favored for better space utilization, at the price of extra message exchanges between servers. The structure makes a compromise between updating the duplicated index and transferring the spatial index data. Meeting the needs of grid computing, GSHR-Tree has a flexible structure designed to satisfy new needs in the future. It provides R-tree capabilities for large spatial datasets stored over interconnected servers. The analysis, including the experiments, confirms the efficiency of our design choices, and the scheme should fit the needs of new applications of spatial data using ever-larger datasets. Using the system response time of parallel spatial range query processing as the performance evaluation factor, the simulated experiments demonstrate the sound design and high performance of the indexing structure presented in this paper.
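The slot-to-site mapping at the heart of this design can be illustrated with a toy Z-order (bit-interleaving) encoding of grid cells hashed onto servers. Both the encoding choice and the modulo hash below are illustrative assumptions, not the paper's exact algorithm.

```python
# Sketch of the slot-to-site idea behind GSHR-Tree: encode a spatial
# slot with an interleaved binary code, then hash the code to a server.
def z_order_slot(x, y, bits=4):
    """Interleave the bits of grid coordinates into one slot code."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

def site_for(slot_code, n_sites):
    return slot_code % n_sites      # simple hash of slot -> grid site

# Map a few spatial objects (grid cells) onto 5 distributed sites:
for (x, y) in [(1, 2), (9, 4), (12, 12), (3, 15)]:
    slot = z_order_slot(x, y)
    print((x, y), "-> slot", slot, "-> site", site_for(slot, 5))
```

Because bit operations replace recursive rectangle comparisons, routing an object to its site costs a few integer instructions, which is the efficiency argument the paper makes for binary-code-based indexing.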
Novel Designs for Application Specific MEMS Pressure Sensors
Fragiacomo, Giulio; Reck, Kasper; Lorenzen, Lasse; Thomsen, Erik V.
2010-01-01
In the framework of developing innovative microfabricated pressure sensors, we present here three designs based on different readout principles, each one tailored for a specific application. A touch-mode capacitive pressure sensor with high sensitivity (14 pF/bar), low temperature dependence, and high capacitive output signal (more than 100 pF) is depicted. An optical pressure sensor intrinsically immune to electromagnetic interference, with a large pressure range (0–350 bar) and a sensitivity of 1 pm/bar, is presented. Finally, a power-source-free resonating wireless pressure sensor with a sensitivity of 650 kHz/mmHg is described. These sensors are related to their applications in harsh environments, distributed systems, and medical environments, respectively. For many aspects, commercially available sensors, which in vast majority are piezoresistive, are not suited for the applications proposed. PMID:22163425
NASA Technical Reports Server (NTRS)
Kaufman, J. W. (Editor)
1977-01-01
Guidelines are provided on terrestrial environment data specifically applicable for NASA aerospace vehicles and associated equipment development. Information is included on the general distribution of natural environment extremes in the conterminous United States that may be needed to specify design criteria in the transportation of space vehicle subsystems and components. Atmospheric attenuation was investigated since certain earth orbital experiment missions are influenced by the earth's atmosphere. A summary of climatic extremes for worldwide operational needs is also included. The latest available information on probable climatic extremes is presented with information on atmospheric chemistry, seismic criteria, and on a mathematical model to predict atmospheric dispersion of aerospace engine exhaust cloud rise and growth. Cloud phenomena are also considered.
High-Performance Compute Infrastructure in Astronomy: 2020 Is Only Months Away
NASA Astrophysics Data System (ADS)
Berriman, B.; Deelman, E.; Juve, G.; Rynge, M.; Vöckler, J. S.
2012-09-01
By 2020, astronomy will be awash with as much as 60 PB of public data. Full scientific exploitation of such massive volumes of data will require high-performance computing on server farms co-located with the data. Development of this computing model will be a community-wide enterprise that has profound cultural and technical implications. Astronomers must be prepared to develop environment-agnostic applications that support parallel processing. The community must investigate the applicability and cost-benefit of emerging technologies such as cloud computing to astronomy, and must engage the Computer Science community to develop science-driven cyberinfrastructure such as workflow schedulers and optimizers. We report here the results of collaborations between a science center, IPAC, and a Computer Science research institute, ISI. These collaborations may be considered pathfinders in developing a high-performance compute infrastructure in astronomy. These collaborations investigated two exemplar large-scale science-driver workflow applications: 1) Calculation of an infrared atlas of the Galactic Plane at 18 different wavelengths by placing data from multiple surveys on a common plate scale and co-registering all the pixels; 2) Calculation of an atlas of periodicities present in the public Kepler data sets, which currently contain 380,000 light curves. These products have been generated with two workflow applications, written in C for performance and designed to support parallel processing on multiple environments and platforms, but with different compute resource needs: the Montage image mosaic engine is I/O-bound, and the NASA Star and Exoplanet Database periodogram code is CPU-bound. Our presentation will report cost and performance metrics and lessons-learned for continuing development. Applicability of Cloud Computing: Commercial Cloud providers generally charge for all operations, including processing, transfer of input and output data, and for storage of data, and so the costs of running applications vary widely according to how they use resources. The cloud is well suited to processing CPU-bound (and memory bound) workflows such as the periodogram code, given the relatively low cost of processing in comparison with I/O operations. I/O-bound applications such as Montage perform best on high-performance clusters with fast networks and parallel file-systems. Science-driven Cyberinfrastructure: Montage has been widely used as a driver application to develop workflow management services, such as task scheduling in distributed environments, designing fault tolerance techniques for job schedulers, and developing workflow orchestration techniques. Running Parallel Applications Across Distributed Cloud Environments: Data processing will eventually take place in parallel distributed across cyber infrastructure environments having different architectures. We have used the Pegasus Work Management System (WMS) to successfully run applications across three very different environments: TeraGrid, OSG (Open Science Grid), and FutureGrid. Provisioning resources across different grids and clouds (also referred to as Sky Computing), involves establishing a distributed environment, where issues of, e.g, remote job submission, data management, and security need to be addressed. This environment also requires building virtual machine images that can run in different environments. Usually, each cloud provides basic images that can be customized with additional software and services. 
In most of our work, we provisioned compute resources using a custom application, called Wrangler. Pegasus WMS abstracts the architectures of the compute environments away from the end-user, and can be considered a first-generation tool suitable for scientists to run their applications on disparate environments.
Irvine, Kathryn M.; Thornton, Jamie; Backus, Vickie M.; Hohmann, Matthew G.; Lehnhoff, Erik A.; Maxwell, Bruce D.; Michels, Kurt; Rew, Lisa
2013-01-01
Commonly in environmental and ecological studies, species distribution data are recorded as presence or absence throughout a spatial domain of interest. Field based studies typically collect observations by sampling a subset of the spatial domain. We consider the effects of six different adaptive and two non-adaptive sampling designs and choice of three binary models on both predictions to unsampled locations and parameter estimation of the regression coefficients (species–environment relationships). Our simulation study is unique compared to others to date in that we virtually sample a true known spatial distribution of a nonindigenous plant species, Bromus inermis. The census of B. inermis provides a good example of a species distribution that is both sparsely (1.9 % prevalence) and patchily distributed. We find that modeling the spatial correlation using a random effect with an intrinsic Gaussian conditionally autoregressive prior distribution was equivalent or superior to Bayesian autologistic regression in terms of predicting to un-sampled areas when strip adaptive cluster sampling was used to survey B. inermis. However, inferences about the relationships between B. inermis presence and environmental predictors differed between the two spatial binary models. The strip adaptive cluster designs we investigate provided a significant advantage in terms of Markov chain Monte Carlo chain convergence when trying to model a sparsely distributed species across a large area. In general, there was little difference in the choice of neighborhood, although the adaptive king was preferred when transects were randomly placed throughout the spatial domain.
Core compressor exit stage study, volume 6
NASA Technical Reports Server (NTRS)
Wisler, D. C.
1981-01-01
Rear-stage blading designs that have lower losses in their endwall boundary layer regions were studied. A baseline stage A was designed as a low-speed model of stage 7 of a 10-stage compressor. Candidate rotors and stators were designed which have the potential of reducing endwall losses relative to the baseline. Rotor B uses a type of meanline in the tip region that unloads the leading edge and loads the trailing edge relative to the baseline rotor A design. Rotor C incorporates a more skewed (hub-strong) radial distribution of total pressure and a smoother distribution of static pressure at the rotor tip than those of rotor B. Candidate stator B embodies twist gradients in the endwall region. Stator C embodies airfoil sections near the endwalls that have reduced trailing-edge loading relative to stator A. The baseline and candidate bladings were tested using four identical stages to produce a true multistage environment; single-stage tests were also conducted. The test data were analyzed and performances were compared. Several of the candidate configurations showed a performance improvement relative to the baseline.
NASA Technical Reports Server (NTRS)
Rudoff, R. C.; Bachalo, E. J.; Bachalo, W. D.; Oldenburg, J. R.
1992-01-01
The design, development, and testing of an icing cloud droplet sizing probe based upon the Phase Doppler Particle Analyzer (PDPA) are discussed. This probe is an in-situ, laser-interferometry-based, single-particle measuring device capable of determining size distributions, and it is designed for use in harsh environments such as icing tunnels and natural icing clouds. From the measured size distribution, the Median Volume Diameter (MVD) and Liquid Water Content (LWC) may be determined. Both the theory of measurement and the mechanical aspects of the probe design and development are discussed. The MVD results from the probe are compared to an existing calibration based upon different instruments in a series of tests in the NASA Lewis Icing Research Tunnel. Agreement between the PDPA probe and the existing calibration is close for MVDs between 15 and 30 microns, but the PDPA results are considerably smaller for MVDs under 15 microns.
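How MVD and LWC follow from a measured size distribution can be shown in a few lines: LWC is the total water volume per volume of sampled air, and MVD is the diameter at which the cumulative volume reaches half the total. The bin data and sampled air volume below are made up for illustration.

```python
# Computing MVD and LWC from a binned droplet size distribution;
# the bin counts and sampled air volume are illustrative.
import numpy as np

diam_um = np.array([5.0, 10.0, 15.0, 20.0, 30.0, 40.0])   # bin centers
counts = np.array([500, 800, 600, 300, 100, 20])            # droplets per bin
sampled_air_m3 = 1.0e-3                                     # assumed known

vol_per_drop_m3 = (np.pi / 6.0) * (diam_um * 1e-6) ** 3
bin_vol = counts * vol_per_drop_m3

# LWC: water mass per air volume (water density ~1000 kg/m^3), in g/m^3
lwc_g_m3 = 1000.0 * bin_vol.sum() / sampled_air_m3 * 1e3
# MVD: diameter where cumulative volume crosses half the total
cum = np.cumsum(bin_vol) / bin_vol.sum()
mvd_um = np.interp(0.5, cum, diam_um)
print(f"LWC = {lwc_g_m3:.4f} g/m^3, MVD = {mvd_um:.1f} um")
```

Because volume scales with diameter cubed, the few large droplets dominate both quantities, which is why undercounting large drops shifts the MVD down, the direction of the discrepancy reported above for small-MVD clouds.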
Physics and Engineering Design of the ITER Electron Cyclotron Emission Diagnostic
NASA Astrophysics Data System (ADS)
Rowan, W. L.; Austin, M. E.; Houshmandyar, S.; Phillips, P. E.; Beno, J. H.; Ouroua, A.; Weeks, D. A.; Hubbard, A. E.; Stillerman, J. A.; Feder, R. E.; Khodak, A.; Taylor, G.; Pandya, H. K.; Danani, S.; Kumar, R.
2015-11-01
Electron temperature (Te) measurements and the electron thermal transport inferences that follow from them will be critical to the non-active phases of ITER operation and will take on added importance during the alpha heating phase. Here, we describe our design for the diagnostic that will measure spatial and temporal profiles of Te using electron cyclotron emission (ECE). Other measurement capabilities include high-frequency instabilities (e.g. ELMs, NTMs, and TAEs). Since results from TFTR and JET suggest that Thomson scattering and ECE differ at high Te due to driven non-Maxwellian distributions, non-thermal features of the ITER electron distribution must be documented. The ITER environment presents other challenges, including space limitations, vacuum requirements, and very high neutron fluence. Plasma control in ITER will require real-time Te. The diagnostic design that evolved from these sometimes-conflicting needs and requirements is described component by component, with special emphasis on the integration of the components into a single effective diagnostic system. Supported by PPPL/US-DA via subcontract S013464-C to UT Austin.
Comparing host and target environments for distributed Ada programs
NASA Technical Reports Server (NTRS)
Paulk, Mark C.
1986-01-01
The Ada programming language provides a means of specifying logical concurrency by using multitasking. Extending the Ada multitasking concurrency mechanism into a physically concurrent distributed environment which imposes its own requirements can lead to incompatibilities. These problems are discussed. Using distributed Ada for a target system may be appropriate, but when using the Ada language in a host environment, a multiprocessing model may be more suitable than retargeting an Ada compiler for the distributed environment. The tradeoffs between multitasking on distributed targets and multiprocessing on distributed hosts are discussed. Comparisons of the multitasking and multiprocessing models indicate different areas of application.
The implementation and use of Ada on distributed systems with high reliability requirements
NASA Technical Reports Server (NTRS)
Knight, J. C.; Gregory, S. T.; Urquhart, J. I. A.
1985-01-01
The use and implementation of Ada in distributed environments in which reliability is the primary concern were investigated. In particular, the concept was examined that a distributed system may be programmed entirely in Ada, so that the individual tasks of the system are unconcerned with which processors they are executing on, while failures may occur in the software or underlying hardware. Progress is discussed in the following areas: continued development and testing of the fault-tolerant Ada testbed; development of suggested changes to Ada so that it might more easily cope with the failures of interest; and design of new approaches to fault-tolerant software in real-time systems, with integration of these ideas into Ada.
Design of Magnetic Charged Particle Lens Using Analytical Potential Formula
NASA Astrophysics Data System (ADS)
Al-Batat, A. H.; Yaseen, M. J.; Abbas, S. R.; Al-Amshani, M. S.; Hasan, H. S.
2018-05-01
The aim of the current research was to use the potential of two-cylinder electric lenses to produce a mathematical model from which one can determine the magnetic field distribution of a charged-particle objective lens. With the aid of Simulink in the MATLAB environment, models have been built to determine the distribution of the target function and the related axial functions along the optical axis of the charged-particle lens. The present study showed that the physical parameters (i.e., the maximum value, Bmax, and the half-width, W, of the field distribution) and the objective properties of the charged-particle lens are affected by varying the main geometrical parameter of the lens, namely its bore radius R.
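The physical parameters named above (Bmax and the half-width W) can be illustrated with the classic Glaser bell-shaped field, a common analytical stand-in for axial lens field distributions; the paper's own formula may differ, and the numbers below are illustrative.

```python
# Illustration of Bmax and half-width W using the Glaser bell-shaped
# field B(z) = Bmax / (1 + (z/a)^2); the paper's formula may differ.
import numpy as np

Bmax = 0.5                 # tesla, peak axial field (illustrative)
a = 2.0e-3                 # metres, Glaser half-width parameter

z = np.linspace(-0.01, 0.01, 2001)
B = Bmax / (1.0 + (z / a) ** 2)

# Full width at half maximum: B = Bmax/2 where z = +/- a, so W = 2a
above_half = z[B >= Bmax / 2]
W = above_half[-1] - above_half[0]
print(f"Bmax = {B.max():.3f} T, half-width W = {W * 1e3:.2f} mm "
      f"(analytic: {2 * a * 1e3:.2f} mm)")
```

In such models, widening the bore radius R broadens the field (larger W) and lowers Bmax for a given excitation, which is the dependence the abstract reports.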
Adaptation of Control Center Software to Commerical Real-Time Display Applications
NASA Technical Reports Server (NTRS)
Collier, Mark D.
1994-01-01
NASA-Marshall Space Flight Center (MSFC) is currently developing an enhanced Huntsville Operations Support Center (HOSC) system designed to support multiple spacecraft missions. The Enhanced HOSC is based upon a distributed computing architecture using graphics workstation hardware and industry-standard software including POSIX, X Windows, Motif, TCP/IP, and ANSI C. Southwest Research Institute (SwRI) is currently developing a prototype of the Display Services application for this system. Display Services provides the capability to generate and operate real-time, data-driven graphic displays. This prototype is a highly functional application designed to allow system end users to easily generate complex data-driven displays. The prototype is easy to use, flexible, highly functional, and portable. Although this prototype is being developed for NASA-MSFC, the general-purpose real-time display capability can be reused in similar mission and process control environments, including any environment depending heavily upon real-time data acquisition and display. Reuse of the prototype will be a straightforward transition because the prototype is portable, is designed to make adding new display types easy, has a user interface separated from the application code, and is largely independent of the specifics of NASA-MSFC's system. Reuse of this prototype in other environments is an excellent alternative to creating a new custom application or, for environments with a large number of users, to purchasing a COTS package.
VERSE - Virtual Equivalent Real-time Simulation
NASA Technical Reports Server (NTRS)
Zheng, Yang; Martin, Bryan J.; Villaume, Nathaniel
2005-01-01
Distributed real-time simulations provide important timing validation and hardware-in-the-loop results for the spacecraft flight software development cycle. Occasionally, the need for higher fidelity modeling and more comprehensive debugging capabilities - combined with a limited amount of computational resources - calls for a non-real-time simulation environment that mimics the real-time environment. By creating a non-real-time environment that accommodates simulations and flight software designed for a multi-CPU real-time system, we can save development time, cut mission costs, and reduce the likelihood of errors. This paper presents such a solution: the Virtual Equivalent Real-time Simulation Environment (VERSE). VERSE turns the real-time operating system RTAI (Real-time Application Interface) into an event-driven simulator that runs in virtual real time. Designed to keep the original RTAI architecture as intact as possible, and therefore inheriting RTAI's many capabilities, VERSE was implemented with remarkably little change to the RTAI source code. This small footprint, together with use of the same API, allows users to easily run the same application in both real-time and virtual-time environments. VERSE has been used to build a workstation testbed for NASA's Space Interferometry Mission (SIM PlanetQuest) instrument flight software. With its flexible simulation controls and inexpensive setup and replication costs, VERSE will become an invaluable tool in future mission development.
NASA Technical Reports Server (NTRS)
Johnston, William E.; Gannon, Dennis; Nitzberg, Bill; Feiereisen, William (Technical Monitor)
2000-01-01
The term "Grid" refers to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. The vision for NASN's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks that will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. IPG development and deployment is addressing requirements obtained by analyzing a number of different application areas, in particular from the NASA Aero-Space Technology Enterprise. This analysis has focussed primarily on two types of users: The scientist / design engineer whose primary interest is problem solving (e.g., determining wing aerodynamic characteristics in many different operating environments), and whose primary interface to IPG will be through various sorts of problem solving frameworks. The second type of user if the tool designer: The computational scientists who convert physics and mathematics into code that can simulate the physical world. These are the two primary users of IPG, and they have rather different requirements. This paper describes the current state of IPG (the operational testbed), the set of capabilities being put into place for the operational prototype IPG, as well as some of the longer term R&D tasks.
Distributed Engine Control Empirical/Analytical Verification Tools
NASA Technical Reports Server (NTRS)
DeCastro, Jonathan; Hettler, Eric; Yedavalli, Rama; Mitra, Sayan
2013-01-01
NASA's vision for an intelligent engine will be realized with the development of a truly distributed control system featuring highly reliable, modular, and dependable components capable of both surviving the harsh engine operating environment and decentralized functionality. A set of control system verification tools was developed and applied to a C-MAPSS40K engine model, and metrics were established to assess the stability and performance of these control systems on the same platform. A software tool was developed that allows designers to easily assemble a distributed control system in software and immediately assess the overall impacts of the system on the target (simulated) platform, allowing control system designers to converge rapidly on acceptable architectures with consideration to all required hardware elements. The software developed in this program will be installed on a distributed hardware-in-the-loop (DHIL) simulation tool to assist NASA and the Distributed Engine Control Working Group (DECWG) in integrating distributed engine control system (DCS) components onto existing and next-generation engines. The distributed engine control simulator blockset for MATLAB/Simulink and the hardware simulator provide the capability to simulate virtual subcomponents, as well as swap actual subcomponents for hardware-in-the-loop (HIL) analysis. Subcomponents can be the communication network, smart sensor or actuator nodes, or a centralized control system. The distributed engine control blockset for MATLAB/Simulink is a software development tool. The software includes an engine simulation, a communication network simulation, control algorithms, and analysis algorithms set up in a modular environment for rapid simulation of different network architectures; the hardware consists of an embedded device running parts of the C-MAPSS engine simulator and controlled through Simulink. The distributed engine control simulation, evaluation, and analysis technology provides unique capabilities to study the effects of a given change to the control system in the context of the distributed paradigm. The simulation tool can support treatment of all components within the control system, both virtual and real; these include the communication data network, smart sensor and actuator nodes, the centralized control system (FADEC, full authority digital engine control), and the aircraft engine itself. The DECsim tool allows simulation-based prototyping of control laws, control architectures, and decentralization strategies before hardware is integrated into the system. With the configuration specified, the simulator allows a variety of key factors to be systematically assessed, including control system performance, reliability, weight, and bandwidth utilization.
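As a hedged illustration of why network transport matters in a distributed engine control loop - this is a toy model, not DECsim or the C-MAPSS40K engine - the sketch below inserts a fixed communication delay between a PI controller and a first-order plant and reports how overshoot grows with the delay. All gains and plant constants are invented for the demonstration.

    import numpy as np

    # Toy first-order "engine" plant x[k+1] = a*x[k] + b*u[k-d]; the delay d
    # models network transport between a smart node and the controller.
    a, b, dt = 0.98, 0.02, 0.01
    kp, ki = 2.0, 5.0

    def run(delay_steps, n=2000, setpoint=1.0):
        x, integ = 0.0, 0.0
        u_buf = [0.0] * delay_steps            # network delay line
        history = []
        for _ in range(n):
            err = setpoint - x
            integ += err * dt
            u_buf.append(kp * err + ki * integ)  # command leaves the controller now
            u = u_buf.pop(0)                     # ...and reaches the actuator late
            x = a * x + b * u
            history.append(x)
        return np.array(history)

    for d in (0, 5, 20):
        resp = run(d)
        print(f"delay {d:2d} samples -> overshoot {resp.max() - 1.0:+.3f}")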
ERIC Educational Resources Information Center
Ozmen, Haluk; Karamustafaoglu, Orhan
2006-01-01
Turkey has a central educational system in which all of the programs are designed by the Ministry of National Education and distributed to the implementing institutions. As a part of this system, the textbooks written by different writers need to be approved by the commission called Talim Terbiye Kurulu, and these approved books are chosen as…
Potential distribution of mosquito vector species in a primary malaria endemic region of Colombia
Altamiranda-Saavedra, Mariano; Arboleda, Sair; Parra, Juan L.; Peterson, A. Townsend
2017-01-01
Rapid transformation of natural ecosystems changes ecological conditions for important human disease vector species; therefore, an essential task is to identify and understand the variables that shape distributions of these species to optimize efforts toward control and mitigation. Ecological niche modeling was used to estimate the potential distribution and to assess hypotheses of niche similarity among the three main malaria vector species in northern Colombia: Anopheles nuneztovari, An. albimanus, and An. darlingi. Georeferenced point collection data and remotely sensed, fine-resolution satellite imagery were integrated across the Urabá–Bajo Cauca–Alto Sinú malaria endemic area using a maximum entropy algorithm. Results showed that An. nuneztovari has the widest geographic distribution, occupying almost the entire study region; this niche breadth is probably related to the ability of this species to colonize both natural and disturbed environments. The model for An. darlingi showed that the most suitable localities for this species in Bajo Cauca were along the Cauca and Nechí rivers. The riparian ecosystems in this region, and the potential for rapid adaptation by this species to novel environments, may favor the establishment of its populations. Apparently, the three main Colombian Anopheles vector species in this endemic area do not occupy environments either with high seasonality, or with low seasonality and high NDVI values. Estimated overlap in geographic space between An. nuneztovari and An. albimanus indicated broad spatial and environmental similarity between these species. An. nuneztovari has a broader niche and potential distribution. Dispersal ability of these species and their ability to occupy diverse environmental situations may facilitate sympatry across many environmental and geographic contexts. These model results may be useful for the design and implementation of malaria species-specific vector control interventions optimized for this important malaria region. PMID:28594942
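The abstract reports estimated niche overlap between species. One widely used overlap statistic for suitability grids in niche-modeling studies is Schoener's D; the sketch below computes it on synthetic rasters (the statistic and the data here are illustrative assumptions, not necessarily what the authors used).

    import numpy as np

    def schoener_d(s1, s2):
        """Schoener's D niche overlap between two suitability grids.

        Each grid is normalised to sum to 1 so it behaves like a probability
        distribution over cells; D = 1 - 0.5 * sum(|p1 - p2|), ranging from
        0 (no overlap) to 1 (identical distributions).
        """
        p1 = s1 / s1.sum()
        p2 = s2 / s2.sum()
        return 1.0 - 0.5 * np.abs(p1 - p2).sum()

    # Hypothetical MaxEnt-style suitability rasters for two vector species.
    rng = np.random.default_rng(0)
    suit_nuneztovari = rng.random((100, 100))
    suit_albimanus = suit_nuneztovari * 0.8 + rng.random((100, 100)) * 0.2
    print(f"Schoener's D: {schoener_d(suit_nuneztovari, suit_albimanus):.3f}")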
Cardiological database management system as a mediator to clinical decision support.
Pappas, C; Mavromatis, A; Maglaveras, N; Tsikotis, A; Pangalos, G; Ambrosiadou, V
1996-03-01
An object-oriented medical database management system is presented for a typical cardiologic center, facilitating epidemiological trials. Object-oriented analysis and design were used for the system design, offering advantages for the integrity and extendibility of medical information systems. The system was developed using object-oriented design and programming methodology, the C++ language, and the Borland Paradox Relational Database Management System in an MS-Windows NT environment. Particular attention was paid to system compatibility, portability, ease of use, and the suitable design of the patient record so as to support the decisions of medical personnel in cardiovascular centers. The system was designed to accept complex, heterogeneous, distributed data in various formats and from different kinds of examinations such as Holter, Doppler and electrocardiography.
NASA Astrophysics Data System (ADS)
Frezzo, Dennis C.; Behrens, John T.; Mislevy, Robert J.
2010-04-01
Simulation environments make it possible for science and engineering students to learn to interact with complex systems. Putting these capabilities to effective use for learning, and assessing learning, requires more than a simulation environment alone. It requires a conceptual framework for the knowledge, skills, and ways of thinking that are meant to be developed, in order to design activities that target these capabilities. The challenges of using simulation environments effectively are especially daunting in dispersed social systems. This article describes how these challenges were addressed in the context of the Cisco Networking Academies with a simulation tool for computer networks called Packet Tracer. The focus is on a conceptual support framework for instructors in over 9,000 institutions around the world for using Packet Tracer in instruction and assessment, by learning to create problem-solving scenarios that are at once tuned to the local needs of their students and consistent with the epistemic frame of "thinking like a network engineer." We describe a layered framework of tools and interfaces above the network simulator that supports the use of Packet Tracer in the distributed community of instructors and students.
IpexT: Integrated Planning and Execution for Military Satellite Tele-Communications
NASA Technical Reports Server (NTRS)
Plaunt, Christian; Rajan, Kanna
2004-01-01
The next generation of military communications satellites may be designed as a fast packet-switched constellation of spacecraft able to withstand substantial bandwidth capacity fluctuation in the face of dynamic resource utilization and rapid environmental changes, including jamming of communication frequencies and unstable weather phenomena. We are in the process of designing an integrated scheduling and execution tool which will aid in the analysis of the design parameters needed for building such a distributed system for nominal and battlefield communications. This paper discusses the design of such a system based on a temporal constraint posting planner/scheduler and a smart executive which can cope with a dynamic environment to make better utilization of bandwidth than the current circuit-switched approach.
System Architecture to Investigate the Impact of Integrated Air and Missile Defense in a Distributed Lethality Environment
Davis, Justin K.
2017-12-01
NASA Astrophysics Data System (ADS)
Herbert, B. E.; Schroeder, C.; Brody, S.; Cahill, T.; Kenimer, A.; Loving, C.; Schielack, J.
2003-12-01
The ITS Center for Teaching and Learning is a five-year NSF-funded collaborative effort to engage scientists and university and school or district-based science educators in the use of information technology to improve science teaching and learning at all levels. One assumption is that science and mathematics teaching and learning will be improved when they become more connected to the authentic science research done in field settings or laboratories. The effective use of information technology in science classrooms has been shown to help achieve this objective. As a design study working toward a greater understanding of a "learning ecology", the research related to the creation and refinement of the ITS Center's collaborative environment for professional development is contributing information about an important setting not often included in descriptions of professional development, a setting that incorporates distributed expertise and resulting distributed growth in the various categories of participants: scientists, science graduate students, education researchers, education graduate students, and master teachers. Design-based research is an emerging paradigm for the study of learning in context through the systematic design and study of instructional strategies and tools. This presentation will discuss the results of the formative evaluation process that has moved the ITS Center's collaborative environment for professional development through the iterative process from Phase I (the planned program designed in-house) to Phase II (the experimental program being tested in-house). In particular, we will focus on the development of the ITS Center's Project Teams, which create learning experiences over two summers focused on the exploration of science, technology, engineering or mathematics (STEM) topics through the use of modeling, visualization and complex data sets to explore authentic scientific questions that can be integrated within the K-16 curriculum. Ongoing formative assessment of the Cohort I project teams led to a greater emphasis on participant exploration of authentic scientific questions and tighter integration of scientific explorations and development of participant inquiry projects.
Inflow characteristics associated with high-blade-loading events in a wind farm
NASA Astrophysics Data System (ADS)
Kelley, N. D.
1993-07-01
The stochastic characteristics of the turbulent inflow have been shown to be of major significance in the accumulation of fatigue in wind turbines. Because most of the wind turbine installations in the U.S. have taken place in multi-turbine or windfarm configurations, the fatigue damage associated with the higher turbulence levels within such arrangements must be taken into account when making estimates of component service lifetimes. The simultaneous monitoring of two adjacent wind turbines over a wide range of turbulent inflow conditions has given the authors more confidence in describing the structural load distributions that can be expected in such an environment. The adjacent testing of the two turbines allowed the authors to postulate that observed similarities in the response dynamics and load distributions could be considered quasi-universal, while the dissimilarities could be considered to result from the differing design of the rotors. The format has also allowed them to begin to define appropriate statistical load distribution models for many of the critical components in which fatigue is a major driver of the design. In addition to the adjacent turbine measurements, they also briefly discuss load distributions measured on a teetered-hub turbine.
New frontier, new power: the retail environment in Australia's dark market.
Carter, S M
2003-12-01
To investigate the role of the retail environment in cigarette marketing in Australia, one of the "darkest" markets in the world. Analysis of 172 tobacco industry documents, and of articles and advertisements found by hand searching Australia's three leading retail trade journals. As Australian cigarette marketing was increasingly restricted, the retail environment became the primary communication vehicle for building cigarette brands. When retail marketing was restricted, the industry conceded only incrementally and under duress, and at times continues to break the law. The tobacco industry targets retailers via trade promotional expenditure, financial and practical assistance with point of sale marketing, alliance building, brand advertising, and distribution. Cigarette brand advertising in retail magazines is designed to build brand identities. Philip Morris and British American Tobacco are now competing to control distribution of all products to retailers, placing themselves at the heart of retail business. Cigarette companies prize retail marketing in Australia's dark market. Stringent point of sale marketing restrictions should be included in any comprehensive tobacco control measures. Relationships between retailers and the industry will be more difficult to regulate. Retail press advertising and trade promotional expenditure could be banned. In-store marketing assistance, retail-tobacco industry alliance building, and new electronic retail distribution systems may be less amenable to regulation. Alliances between the health and retail sectors and financial support for a move away from retail dependence on tobacco may be necessary to effect cultural change.
A virtual data language and system for scientific workflow management in data grid environments
NASA Astrophysics Data System (ADS)
Zhao, Yong
With advances in scientific instrumentation and simulation, scientific data is growing fast in both size and analysis complexity. So-called Data Grids aim to provide high performance, distributed data analysis infrastructure for data-intensive sciences, where scientists distributed worldwide need to extract information from large collections of data, and to share both data products and the resources needed to produce and store them. However, the description, composition, and execution of even logically simple scientific workflows are often complicated by the need to deal with "messy" issues like heterogeneous storage formats and ad-hoc file system structures. We show how these difficulties can be overcome via a typed workflow notation called virtual data language, within which issues of physical representation are cleanly separated from logical typing, and by the implementation of this notation within the context of a powerful virtual data system that supports distributed execution. The resulting language and system are capable of expressing complex workflows in a simple compact form, enacting those workflows in distributed environments, monitoring and recording the execution processes, and tracing the derivation history of data products. We describe the motivation, design, implementation, and evaluation of the virtual data language and system, and the application of the virtual data paradigm in various science disciplines, including astronomy and cognitive neuroscience.
A distributed Petri Net controller for a dual arm testbed
NASA Technical Reports Server (NTRS)
Bjanes, Atle
1991-01-01
This thesis describes the design and functionality of a Distributed Petri Net Controller (DPNC). The controller runs under X Windows to provide a graphical interface. The DPNC allows users to distribute a Petri Net across several host computers linked together via a TCP/IP interface. A sub-net executes on each host, interacting with the other sub-nets by passing a token vector from host to host. One host has a command window which monitors and controls the distributed controller. The input to the DPNC is a net definition file generated by Great SPN. Thus, a net may be designed, analyzed and verified using this package before implementation. The net is distributed to the hosts by tagging transitions that are host-critical with the appropriate host number. The controller will then distribute the remaining places and transitions to the hosts by generating the local nets, the local marking vectors and the global marking vector. Each transition can have one or more preconditions which must be fulfilled before the transition can fire, as well as one or more post-processes to be executed after the transition fires. These implement the actual input/output to the environment (machines, signals, etc.). The DPNC may also be used to simulate a Great SPN net since stochastic and deterministic firing rates are implemented in the controller for timed transitions.
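A minimal single-host sketch of the firing rule the DPNC abstract describes - preconditions checked before a transition fires, post-processes run afterward to perform the actual I/O - is given below. It illustrates the execution semantics only; the DPNC's multi-host token-vector passing and Great SPN input format are not modeled.

    # Minimal Petri net executor: a transition fires when every input place
    # holds a token and all of its preconditions hold; post-processes run
    # after firing (the actual I/O to machines, signals, etc.).
    net = {
        "t_start": {"in": ["idle"], "out": ["busy"],
                    "pre": [lambda: True],
                    "post": [lambda: print("actuator: start motion")]},
        "t_done":  {"in": ["busy"], "out": ["idle"],
                    "pre": [lambda: True],
                    "post": [lambda: print("actuator: stop motion")]},
    }
    marking = {"idle": 1, "busy": 0}   # token vector

    def step():
        for name, t in net.items():
            enabled = all(marking[p] > 0 for p in t["in"]) \
                      and all(cond() for cond in t["pre"])
            if enabled:
                for p in t["in"]:
                    marking[p] -= 1
                for p in t["out"]:
                    marking[p] += 1
                for action in t["post"]:
                    action()
                return name
        return None

    for _ in range(4):
        print("fired:", step(), "marking:", marking)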
NASA Technical Reports Server (NTRS)
Reuther, James; Alonso, Juan Jose; Rimlinger, Mark J.; Jameson, Antony
1996-01-01
This work describes the application of a control theory-based aerodynamic shape optimization method to the problem of supersonic aircraft design. The design process is greatly accelerated through the use of both control theory and a parallel implementation on distributed memory computers. Control theory is employed to derive the adjoint differential equations whose solution allows for the evaluation of design gradient information at a fraction of the computational cost required by previous design methods (13, 12, 44, 38). The resulting problem is then implemented on parallel distributed memory architectures using a domain decomposition approach, an optimized communication schedule, and the MPI (Message Passing Interface) Standard for portability and efficiency. The final result achieves very rapid aerodynamic design based on higher order computational fluid dynamics (CFD) methods. In our earlier studies, the serial implementation of this design method (19, 20, 21, 23, 39, 25, 40, 41, 42, 43, 9) was shown to be effective for the optimization of airfoils, wings, wing-bodies, and complex aircraft configurations using both the potential equation and the Euler equations (39, 25). In our most recent paper, the Euler method was extended to treat complete aircraft configurations via a new multiblock implementation. Furthermore, during the same conference, we also presented preliminary results demonstrating that the basic methodology could be ported to distributed memory parallel computing architectures [24]. In this paper, our concern will be to demonstrate that the combined power of these new technologies can be used routinely in an industrial design environment by applying it to the case study of the design of typical supersonic transport configurations. A particular difficulty of this test case is posed by the propulsion/airframe integration.
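The key economy of the adjoint approach is that one extra (adjoint) solve yields the gradient with respect to every design variable at once, instead of one state solve per variable as with finite differences. A toy discrete example under simplifying assumptions (linear state equation A u = f(alpha), linear objective J = g·u; all data synthetic):

    import numpy as np

    rng = np.random.default_rng(1)
    n, m = 50, 10                      # state size, number of design variables
    A = np.eye(n) * 4 + rng.random((n, n)) * 0.1
    F = rng.random((n, m))             # f(alpha) = F @ alpha (linear for simplicity)
    g = rng.random(n)
    alpha = rng.random(m)

    u = np.linalg.solve(A, F @ alpha)  # one state solve
    J = g @ u

    lam = np.linalg.solve(A.T, g)      # one adjoint solve
    grad_adjoint = F.T @ lam           # dJ/dalpha, all m components at once

    # Finite-difference check on the first design variable
    eps = 1e-6
    da = np.zeros(m); da[0] = eps
    J_pert = g @ np.linalg.solve(A, F @ (alpha + da))
    print(grad_adjoint[0], (J_pert - J) / eps)   # should agree closely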
A Cloud-Based X73 Ubiquitous Mobile Healthcare System: Design and Implementation
Ji, Zhanlin; O'Droma, Máirtín; Zhang, Xin; Zhang, Xueji
2014-01-01
Based on the user-centric paradigm for next generation networks, this paper describes a ubiquitous mobile healthcare (uHealth) system based on the ISO/IEEE 11073 personal health data (PHD) standards (X73) and cloud computing techniques. A number of design issues associated with the system implementation are outlined. The system includes a middleware on the user side, providing a plug-and-play environment for heterogeneous wireless sensors and mobile terminals utilizing different communication protocols and a distributed “big data” processing subsystem in the cloud. The design and implementation of this system are envisaged as an efficient solution for the next generation of uHealth systems. PMID:24737958
Design, implementation, and extension of thermal invisibility cloaks
NASA Astrophysics Data System (ADS)
Zhang, Youming; Xu, Hongyi; Zhang, Baile
2015-05-01
A thermal invisibility cloak, as inspired by optical invisibility cloaks, is a device which can steer the conductive heat flux around an isolated object without changing the ambient temperature distribution, so that the object is "invisible" to the external thermal environment. While designs of thermal invisibility cloaks inherit previous theories from optical cloaks, the uniqueness of heat diffusion leads to more achievable implementations. Thermal invisibility cloaks, as well as variations including thermal concentrators, rotators, and illusion devices, have the potential to be applied in thermal management, sensing, and imaging applications. Here, we review the current knowledge of thermal invisibility cloaks in terms of their design and implementation in cloaking studies, and their extension to other functional devices.
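For the canonical 2-D annular cloak obtained by transformation thermodynamics, the required conductivity tensor is anisotropic and singular at the inner boundary, which is why practical designs approximate it with layered materials. A sketch of this standard textbook profile (the radii and background conductivity are assumed values, not from the paper):

    import numpy as np

    # Radial/tangential conductivity for the textbook 2-D annular thermal
    # cloak: k_r = k0*(r - R1)/r, k_theta = k0*r/(r - R1).
    k0 = 1.0              # background conductivity, W/(m.K) (assumed)
    R1, R2 = 0.02, 0.05   # inner and outer cloak radii, metres (assumed)

    r = np.linspace(R1 * 1.01, R2, 8)
    k_r = k0 * (r - R1) / r        # -> 0 at the inner boundary
    k_t = k0 * r / (r - R1)        # -> infinity at the inner boundary

    for ri, kr, kt in zip(r, k_r, k_t):
        print(f"r = {ri:.4f} m   k_r = {kr:6.3f}   k_theta = {kt:8.3f}")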
UBioLab: a web-LABoratory for Ubiquitous in-silico experiments.
Bartocci, E; Di Berardini, M R; Merelli, E; Vito, L
2012-03-01
The huge and dynamic amount of bioinformatic resources (e.g., data and tools) available nowadays on the Internet represents a big challenge for biologists, as concerns their management and visualization, and for bioinformaticians, as concerns the possibility of rapidly creating and executing in-silico experiments involving resources and activities spread over the WWW hyperspace. Any framework aiming at integrating such resources as in a physical laboratory must tackle, and possibly handle in a transparent and uniform way, aspects concerning physical distribution, semantic heterogeneity, and the co-existence of different computational paradigms and, as a consequence, of different invocation interfaces (i.e., OGSA for Grid nodes, SOAP for Web Services, Java RMI for Java objects, etc.). The UBioLab framework has been designed and developed as a prototype following the above objective. Several architectural features, such as being fully Web-based and combining domain ontologies, Semantic Web and workflow techniques, give evidence of an effort in such a direction. The integration of a semantic knowledge management system for distributed (bioinformatic) resources, a semantic-driven graphic environment for defining and monitoring ubiquitous workflows, and an intelligent agent-based technology for their distributed execution allows UBioLab to be a semantic guide for bioinformaticians and biologists, providing (i) a flexible environment for visualizing, organizing and inferring any (semantic and computational) "type" of domain knowledge (e.g., resources and activities, expressed in a declarative form), (ii) a powerful engine for defining and storing semantic-driven ubiquitous in-silico experiments on the domain hyperspace, as well as (iii) a transparent, automatic and distributed environment for correct experiment executions.
Technical and Economic Evaluation of Advanced Air Cargo Systems
NASA Technical Reports Server (NTRS)
Whitehead, A. H., Jr.
1978-01-01
The current air cargo environment and the relevance of advanced technology aircraft in enhancing the efficiency of the 1990 air cargo system are discussed. NASA preliminary design studies are shown to indicate significant potential gains in aircraft efficiency and operational economics for future freighter concepts. Required research and technology elements are outlined to develop a better base for evaluating advanced design concepts. Current studies of the market operation are reviewed which will develop design criteria for a future dedicated cargo transport. Design features desirable in an all-freighter design are reviewed. NASA-sponsored studies of large, distributed-load freighters are reviewed and these designs are compared to current wide-body aircraft. These concepts vary in gross takeoff weight from 0.5 Gg (one million lbs.) to 1.5 Gg (three million lbs.) and are found to exhibit economic advantages over conventional design concepts.
Terrestrial Applications of Extreme Environment Stirling Space Power Systems
NASA Technical Reports Server (NTRS)
Dyson, Rodger. W.
2012-01-01
NASA has been developing power systems capable of long-term operation in extreme environments such as the surface of Venus. This technology can use any external heat source to efficiently provide electrical power and cooling; and it is designed to be extremely efficient and reliable for extended space missions. Terrestrial applications include: use in electric hybrid vehicles; distributed home co-generation/cooling; and quiet recreational vehicle power generation. This technology can reduce environmental emissions, petroleum consumption, and noise while eliminating maintenance and environmental damage from automotive fluids such as oil lubricants and air conditioning coolant. This report will provide an overview of this new technology and its applications.
Requirements for migration of NSSD code systems from LTSS to NLTSS
NASA Technical Reports Server (NTRS)
Pratt, M.
1984-01-01
The purpose of this document is to address the requirements necessary for a successful conversion of the Nuclear Design (ND) application code systems to the NLTSS environment. The ND application code system community can be characterized as large-scale scientific computation carried out on supercomputers. NLTSS is a distributed operating system being developed at LLNL to replace the LTSS system currently in use. The implications of change are examined including a description of the computational environment and users in ND. The discussion then turns to requirements, first in a general way, followed by specific requirements, including a proposal for managing the transition.
Grid-wide neuroimaging data federation in the context of the NeuroLOG project
Michel, Franck; Gaignard, Alban; Ahmad, Farooq; Barillot, Christian; Batrancourt, Bénédicte; Dojat, Michel; Gibaud, Bernard; Girard, Pascal; Godard, David; Kassel, Gilles; Lingrand, Diane; Malandain, Grégoire; Montagnat, Johan; Pélégrini-Issac, Mélanie; Pennec, Xavier; Rojas Balderrama, Javier; Wali, Bacem
2010-01-01
Grid technologies are appealing to deal with the challenges raised by computational neurosciences and to support multi-centric brain studies. However, core grid middleware hardly copes with the complex neuroimaging data representation and multi-layer data federation needs. Moreover, legacy neuroscience environments need to be preserved and cannot be simply superseded by grid services. This paper describes the NeuroLOG platform design and implementation, shedding light on its Data Management Layer. It addresses the integration of brain image files, associated relational metadata and neuroscience semantic data in a heterogeneous distributed environment, integrating legacy data managers through a mediation layer. PMID:20543431
NASA Technical Reports Server (NTRS)
Hale, Mark A.; Craig, James I.; Mistree, Farrokh; Schrage, Daniel P.
1995-01-01
Computing architectures are being assembled that extend concurrent engineering practices by providing more efficient execution and collaboration on distributed, heterogeneous computing networks. Built on the successes of initial architectures, requirements for a next-generation design computing infrastructure can be developed. These requirements concentrate on those needed by a designer in decision-making processes from product conception to recycling and can be categorized in two areas: design process and design information management. A designer both designs and executes design processes throughout design time to achieve better product and process capabilities while expending fewer resources. In order to accomplish this, information, or more appropriately design knowledge, needs to be adequately managed during product and process decomposition as well as recomposition. A foundation has been laid that captures these requirements in a design architecture called DREAMS (Developing Robust Engineering Analysis Models and Specifications). In addition, a computing infrastructure, called IMAGE (Intelligent Multidisciplinary Aircraft Generation Environment), is being developed that satisfies design requirements defined in DREAMS and incorporates enabling computational technologies.
Trust Model to Enhance Security and Interoperability of Cloud Environment
NASA Astrophysics Data System (ADS)
Li, Wenjuan; Ping, Lingdi
Trust is one of the most important means to improve security and enable interoperability of current heterogeneous independent cloud platforms. This paper first analyzes several trust models used in large and distributed environments and then introduces a novel cloud trust model to solve security issues in cross-cloud environments, in which cloud customers can choose different providers' services and resources in heterogeneous domains can cooperate. The model is domain-based. It divides one cloud provider's resource nodes into the same domain and sets a trust agent. It distinguishes two different roles, cloud customer and cloud server, and designs different strategies for them. In our model, trust recommendation is treated as one type of cloud service, just like computation or storage. The model achieves both identity authentication and behavior authentication. The results of emulation experiments show that the proposed model can efficiently and safely construct trust relationships in cross-cloud environments.
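As an illustrative sketch only - the paper's exact update rules are not reproduced here - a domain trust agent might blend direct interaction experience with credibility-weighted recommendations from other domains' agents:

    # Hypothetical domain-based trust aggregation; alpha weights direct
    # experience against recommendations (all values are assumptions).
    def aggregate_trust(direct, recommendations, alpha=0.7):
        """direct: local interaction score in [0, 1];
        recommendations: {recommender: (score, recommender_credibility)}."""
        if recommendations:
            rec = sum(s * c for s, c in recommendations.values()) \
                  / sum(c for _, c in recommendations.values())
        else:
            rec, alpha = 0.0, 1.0   # nothing to blend: rely on direct only
        return alpha * direct + (1 - alpha) * rec

    recs = {"domainB": (0.9, 0.8), "domainC": (0.6, 0.5)}
    print(f"trust(customer -> provider) = {aggregate_trust(0.85, recs):.3f}")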
The Comet Halley dust and gas environment
NASA Technical Reports Server (NTRS)
Divine, N.; Hanner, M. S.; Newburn, R. L., Jr.; Sekanina, Z.; Yeomans, D. K.
1986-01-01
Quantitative descriptions of environments near the nucleus of comet P/Halley have been developed to support spacecraft and mission design for the flyby encounters in March, 1986. To summarize these models as they exist just before the encounters, the relevant data from prior Halley apparitions and from recent cometary research are reviewed. Orbital elements, visual magnitudes, and parameter values and analysis for the nucleus, gas and dust are combined to predict Halley's position, production rates, gas and dust distributions, and electromagnetic radiation field for the current perihelion passage. The predicted numerical results have been useful for estimating likely spacecraft effects, such as impact damage and attitude perturbations. Sample applications are cited, including design of a dust shield for spacecraft structure, and threshold and dynamic range selection for flight experiments. It is expected that the comet's activity may be more irregular than these smoothly varying models predict, and that comparison with the flyby data will be instructive.
Capstone Depleted Uranium Aerosols: Generation and Characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parkhurst, MaryAnn; Szrom, Fran; Guilmette, Ray
2004-10-19
In a study designed to provide an improved scientific basis for assessing possible health effects from inhaling depleted uranium (DU) aerosols, a series of DU penetrators was fired at an Abrams tank and a Bradley fighting vehicle. A robust sampling system was designed to collect aerosols in this difficult environment and continuously monitor the sampler flow rates. Aerosols collected were analyzed for uranium concentration and particle size distribution as a function of time. They were also analyzed for uranium oxide phases, particle morphology, and dissolution in vitro. The resulting data provide input useful in human health risk assessments.
NASA Astrophysics Data System (ADS)
ter Kuile, Willem M.; van Veen, J. J.; Knoll, Bas
1995-02-01
Usual sampling methods and instruments for checking compliance with 'threshold limit values' (TLV) of gaseous components do not provide much information on the mechanism which caused the measured workday average concentration. In the case of noncompliance this information is indispensable for the design of cost-effective measures. The infrared gas cloud (IGC) scanner visualizes the spatial distribution of specific gases at a workplace in a quantitative image with a calibrated gray-value scale. This helps to find the cause of an over-exposure, and so it permits effective abatement of high exposures in the working environment. This paper deals with the technical design of the IGC scanner. Its use is illustrated by some real-world problems. The measuring principle and the technical operation of the IGC scanner are described. Special attention is given to the pros and cons of retro-reflector screens, the noise reduction methods, and image presentation and interpretation. The latter is illustrated by the images produced by the measurements. Essentially the IGC scanner can be used for selective open-path measurement of all gases with a concentration in the ppm range and sufficiently strong, distinct absorption lines in the infrared region between 2.5 µm and 14.0 µm. Further, it could be useful for testing the efficiency of ventilation systems and the remote detection of gas leaks. We conclude that a new powerful technique has been added to the industrial hygiene facilities for controlling and improving the work environment.
Design of Distortion-Invariant Optical ID Tags for Remote Identification and Verification of Objects
NASA Astrophysics Data System (ADS)
Pérez-Cabré, Elisabet; Millán, María Sagrario; Javidi, Bahram
Optical identification (ID) tags [1] have a promising future in a number of applications such as the surveillance of vehicles in transportation, control of restricted areas for homeland security, item tracking on conveyor belts or in other industrial environments, etc. More specifically, the passive optical ID tag [1] was introduced as an optical code containing a signature (that is, a characteristic image or other relevant information of the object), which permits its real-time remote detection and identification. Since their introduction in the literature [1], some contributions have been proposed to increase their usefulness and robustness. To increase security and avoid counterfeiting, the signature was introduced in the optical code as an encrypted function [2-5] following the double-phase encryption technique [6]. Moreover, the design of the optical ID tag was done in such a way that tolerance to variations in scale and rotation was achieved [2-5]. To do that, the encrypted information was multiplexed and distributed in the optical code following an appropriate topology. Further studies were carried out to analyze the influence of different sources of noise. In some proposals [5, 7], the designed ID tag consists of two optical codes in which the complex-valued encrypted signature is separately introduced in two real-valued functions according to its magnitude and phase distributions. This solution was introduced to overcome some difficulties in the readout of complex values in outdoor environments. Recently, the fully phase encryption technique [8] has been proposed to increase the noise robustness of the authentication system.
Analyzing Study of Path loss Propagation Models in Wireless Communications at 0.8 GHz
NASA Astrophysics Data System (ADS)
Kadhim Hoomod, Haider; Al-Mejibli, Intisar; Issa Jabboory, Abbas
2018-05-01
The path loss propagation model is an important tool in wireless network planning, allowing the network planner to optimize the cell tower distribution and meet expected service level requirements. However, each type of path loss propagation model is designed to predict path loss in a particular environment, and may be inaccurate in a different environment. In this research, different propagation models (the Hata model, the ECC-33 model, the Ericsson model and the COST-231 model) have been analyzed and compared based on measured data. The measured data represent the signal strength of two cell towers placed in two different environments, obtained by a drive test. The first, in AL-Habebea, represents an urban environment (high-density region); the second, in AL-Hindea district, represents a rural environment (low-density region), with an operating frequency of 0.8 GHz. The analysis and comparison conclude that the Hata model and the Ericsson model show small deviations from real measurements in the urban environment, and that the Hata model generally gives better predictions in the rural environment.
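The Hata model used in the comparison has a standard closed form valid for roughly 150-1500 MHz, so it covers the 0.8 GHz band studied here. A sketch implementing the median path loss with an open-area correction (the 30 m base-station and 1.5 m mobile antenna heights are assumptions for illustration, not the paper's drive-test parameters):

    import math

    def hata_path_loss(f_mhz, h_base, h_mobile, d_km, area="urban"):
        """Okumura-Hata median path loss in dB (valid ~150-1500 MHz)."""
        a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_mobile \
               - (1.56 * math.log10(f_mhz) - 0.8)     # small/medium city
        loss = (69.55 + 26.16 * math.log10(f_mhz)
                - 13.82 * math.log10(h_base) - a_hm
                + (44.9 - 6.55 * math.log10(h_base)) * math.log10(d_km))
        if area == "open":   # open/rural area correction
            loss += (-4.78 * math.log10(f_mhz) ** 2
                     + 18.33 * math.log10(f_mhz) - 40.94)
        return loss

    # Hypothetical tower parameters: 30 m base station, 1.5 m handset.
    for d in (0.5, 1, 2, 5):
        urban = hata_path_loss(800, 30, 1.5, d, "urban")
        rural = hata_path_loss(800, 30, 1.5, d, "open")
        print(f"d = {d:>3} km  urban {urban:6.1f} dB  rural {rural:6.1f} dB")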
Copilot: Monitoring Embedded Systems
NASA Technical Reports Server (NTRS)
Pike, Lee; Wegmann, Nis; Niller, Sebastian; Goodloe, Alwyn
2012-01-01
Runtime verification (RV) is a natural fit for ultra-critical systems, where correctness is imperative. In ultra-critical systems, even if the software is fault-free, because of the inherent unreliability of commodity hardware and the adversity of operational environments, processing units (and their hosted software) are replicated, and fault-tolerant algorithms are used to compare the outputs. We investigate both software monitoring in distributed fault-tolerant systems and the implementation of fault-tolerance mechanisms using RV techniques. We describe the Copilot language and compiler, specifically designed for generating monitors for distributed, hard real-time systems. We also describe two case studies in which we generated Copilot monitors in avionics systems.
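Copilot itself is a Haskell stream DSL that compiles to monitors in C; the Python sketch below is only a loose analogue of the monitoring idea the abstract describes - compare replicated outputs each step and flag any replica that disagrees with the majority beyond a tolerance. The tolerance and sample values are invented.

    from collections import Counter

    TOL = 0.01

    def monitor(samples):
        """samples: one reading per replicated processing unit."""
        # Vote by bucketing readings to the tolerance grid.
        buckets = Counter(round(s / TOL) for s in samples)
        majority_bucket, _ = buckets.most_common(1)[0]
        majority = majority_bucket * TOL
        faulty = [i for i, s in enumerate(samples) if abs(s - majority) > TOL]
        return majority, faulty

    stream = [(1.000, 1.001, 0.999),
              (1.002, 1.001, 1.250)]   # replica outputs per time step
    for step, samples in enumerate(stream):
        value, faulty = monitor(samples)
        if faulty:
            print(f"step {step}: replicas {faulty} disagree with majority {value:.3f}")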
Opto-mechanical design of optical window for aero-optics effect simulation instruments
NASA Astrophysics Data System (ADS)
Wang, Guo-ming; Dong, Dengfeng; Zhou, Weihu; Ming, Xing; Zhang, Yan
2016-10-01
A complete theory is established for the opto-mechanical design of the window in this paper, which makes the design more rigorous. There are three steps in the design. First, a universal model of the aerodynamic environment is established based on the theory of Computational Fluid Dynamics, and the pneumatic pressure distribution and temperature data on the optical window surface are obtained for an aircraft flying at 5-30 km altitude, Mach 0.5-3 speed, and 0-30° angle of attack. The temperature and pressure distribution values for the maximum constraint are selected as the initial external conditions on the optical window surface. Then, the optical window and mechanical structure are designed; in particular, a mechanical structure is designed which meets the requirements of security and tightness. Finally, rigorous analysis and evaluation of the designed opto-mechanical structure are given, in two parts. First, a Fluid-Solid-Heat coupled model is built based on finite element analysis, from which the deformation of the glass and structure can be obtained to assess the feasibility of the designed optical window and its ancillary structure. Second, the new optical surface is fitted by Zernike polynomials according to the deformation of the optical window surface, to evaluate the impact of the window deformation on the imaging quality of the spectral camera.
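The final fitting step has a standard least-squares form: sample a few Zernike terms on the pupil and solve for the coefficients that best reproduce the deformed surface. A sketch with unnormalised low-order terms and a synthetic deformation standing in for the FEA result:

    import numpy as np

    n = 64
    y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
    r, th = np.hypot(x, y), np.arctan2(y, x)
    pupil = r <= 1.0

    basis = np.stack([np.ones_like(r),         # piston
                      r * np.cos(th),          # tilt x
                      r * np.sin(th),          # tilt y
                      2 * r**2 - 1,            # defocus
                      r**2 * np.cos(2 * th),   # astigmatism 0/90
                      r**2 * np.sin(2 * th)])  # astigmatism 45

    # Synthetic "deformation": mostly defocus plus a little astigmatism + noise.
    w = (0.8 * basis[3] + 0.15 * basis[4]
         + 0.01 * np.random.default_rng(2).standard_normal(r.shape))

    A = basis[:, pupil].T                      # samples x terms
    coeffs, *_ = np.linalg.lstsq(A, w[pupil], rcond=None)
    print("fitted Zernike coefficients:", np.round(coeffs, 3))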
Robust Feedback Control of Reconfigurable Multi-Agent Systems in Uncertain Adversarial Environments
2015-07-09
…R. G., Optimal Lunar Landing and Retargeting Using a Hybrid Control Strategy. Proceedings of the 2013 AAS/AIAA Space Flight Mechanics Meeting (AAS… Furfaro, R. & Sanfelice, R. G., Switching System Model for Pinpoint Lunar Landing Guidance Using a Hybrid Control Strategy. Proceedings of the AIAA… …methods in distributed settings and the design of numerical methods to properly compute their trajectories. We have generated results showing that…
Coordinated scheduling for dynamic real-time systems
NASA Technical Reports Server (NTRS)
Natarajan, Swaminathan; Zhao, Wei
1994-01-01
In this project, we addressed issues in coordinated scheduling for dynamic real-time systems. In particular, we concentrated on the design and implementation of a new distributed real-time system called R-Shell. The design objective of R-Shell is to provide computing support for space programs that have large, complex, fault-tolerant distributed real-time applications. In R-Shell, the approach is based on the concept of scheduling agents, which reside in the application run-time environment and are customized to provide just those resource management functions which are needed by the specific application. With this approach, we avoid the need for a sophisticated OS that provides a variety of generalized functionality, while still not burdening application programmers with heavy responsibility for resource management. In this report, we discuss the R-Shell approach, summarize the achievements of the project, and describe a preliminary prototype of the R-Shell system.
Integrated Solar-Energy-Harvesting and -Storage Device
NASA Technical Reports Server (NTRS)
Whitacre, Jay; Fleurial, Jean-Pierre; Mojarradi, Mohammed; Johnson, Travis; Ryan, Margaret Amy; Bugga, Ratnakumar; West, William; Surampudi, Subbarao; Blosiu, Julian
2004-01-01
A modular, integrated, completely solid-state system designed to harvest and store solar energy is under development. Called the power tile, the hybrid device consists of a photovoltaic cell, a battery, a thermoelectric device, and a charge-control circuit that are heterogeneously integrated to maximize specific energy capacity and efficiency. Power tiles could be used in a variety of space and terrestrial environments and would be designed to function with maximum efficiency in the presence of anticipated temperatures, temperature gradients, and cycles of sunlight and shadow. Because they are modular in nature, one could use a single power tile or could construct an array of as many tiles as needed. If multiple tiles are used in an array, the distributed and redundant nature of the charge control and distribution hardware provides an extremely fault-tolerant system. The figure presents a schematic view of the device.
Development of a Temperature Sensor for Jet Engine and Space Mission Applications
NASA Technical Reports Server (NTRS)
Patterson, Richard L.; Hammoud, Ahmad; Elbuluk, Malik; Culley, Dennis
2008-01-01
Electronics for Distributed Turbine Engine Control and Space Exploration Missions are expected to encounter extreme temperatures and wide thermal swings. In particular, circuits deployed in a jet engine compartment are likely to be exposed to temperatures well exceeding 150 °C. To meet this requirement, efforts exist at the NASA Glenn Research Center (GRC), in support of the Fundamental Aeronautics Program/Subsonic Fixed Wing Project, to develop temperature sensors geared for use in high temperature environments. The sensor and associated circuitry need to be located in the engine compartment under a distributed control architecture to simplify system design, improve reliability, and ease signal multiplexing. Several circuits were designed using commercial-off-the-shelf as well as newly-developed components to perform temperature sensing at high temperatures. The temperature-sensing circuits are described along with results pertaining to their performance under extreme temperatures.
Biomedical innovation in the era of health care spending constraints.
Robinson, James C
2015-02-01
Insurers, hospitals, physicians, and consumers are increasingly weighing price against performance in their decisions to purchase and use new drugs, devices, and other medical technologies. This approach will tend to affect biomedical innovation adversely by reducing the revenues available for research and development. However, a more constrained funding environment may also have positive impacts. The passing era of largely cost-unconscious demand fostered the development of incremental innovations priced at premium levels. The new constrained-funding era will require medical technology firms to design their products with the features most valued by payers and patients, price them at levels justified by clinical performance, and manage distribution through organizations rather than to individual physicians. The emerging era has the potential to increase the social value of innovation by focusing industry on design, pricing, and distribution principles that are more closely aligned with the preferences, and pocketbooks, of its customers.
Specification and Design of a Fault Recovery Model for the Reliable Multicast Protocol
NASA Technical Reports Server (NTRS)
Montgomery, Todd; Callahan, John R.; Whetten, Brian
1996-01-01
The Reliable Multicast Protocol (RMP) provides a unique, group-based model for distributed programs that need to handle reconfiguration events at the application layer. This model, called membership views, provides an abstraction in which events such as site failures, network partitions, and normal join-leave events are viewed as group reformations. RMP provides access to this model through an application programming interface (API) that notifies an application when a group is reformed as the result of some event. RMP provides applications with reliable delivery of messages, using an underlying IP Multicast medium, to other group members in a distributed environment even in the case of reformations. A distributed application can use various Quality of Service (QoS) levels provided by RMP to tolerate group reformations. This paper explores the implementation details of the mechanisms in RMP that provide distributed applications with membership view information and fault recovery capabilities.
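RMP's actual API is not reproduced in the abstract; as a purely hypothetical sketch of the membership-view pattern it describes, an application registers a handler and every failure, partition, or join/leave event surfaces as a single kind of notification - a group reformation:

    # Hypothetical sketch (not RMP's real API): the application sees all
    # reconfiguration events uniformly, as new membership views.
    class Group:
        def __init__(self, name, on_view_change):
            self.name, self.members, self.view_id = name, set(), 0
            self.on_view_change = on_view_change

        def reform(self, members, reason):
            # Site failures, partitions, and join/leave events all surface
            # to the application the same way: a new membership view.
            self.view_id += 1
            self.members = set(members)
            self.on_view_change(self.view_id, self.members, reason)

    def handler(view_id, members, reason):
        print(f"view {view_id} ({reason}): members = {sorted(members)}")

    g = Group("telemetry", handler)
    g.reform({"a", "b", "c"}, "join")
    g.reform({"a", "c"}, "site failure")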
The Researches on Food Traceability System of University takeout
NASA Astrophysics Data System (ADS)
lu, Jia xin; zhao, Ce; li, Zhuang zhuang; shao, Zi rong; pi, Kun yi
2018-06-01
In recent years, campus takeout has developed rapidly, and many kinds of online ordering platforms are in operation. Solving the distribution problem on campus can not only save time and cost for the businesses but also guarantee effective management by the school, which benefits the construction of a standard health system for takeout food. However, distribution under the existing mode introduces certain safety and health risks. Establishing a university takeout food traceability system can solve this problem. This paper first analyzes the sharing mode and distribution process of campus takeout, and then designs an intelligent tracing system for campus takeout, covering the construction of a food distribution information platform and the environmentally friendly recycling of meal boxes. Finally, the intelligent tracing system is analyzed with braised chicken as an example.
The implementation and use of Ada on distributed systems with high reliability requirements
NASA Technical Reports Server (NTRS)
Knight, J. C.
1988-01-01
The use and implementation of Ada were investigated in distributed environments in which reliability is the primary concern. In particular, the focus was on the possibility that a distributed system may be programmed entirely in Ada, so that the individual tasks of the system are unconcerned with which processors they are executing on, and that failures may occur in the software and underlying hardware. A secondary interest is in the performance of Ada systems and how that performance can be gauged reliably. Primary activities included: analysis of the original approach to recovery in distributed Ada programs using the Advanced Transport Operating System (ATOPS) example; review and assessment of the original approach, which was found to be capable of improvement; development of a refined approach to recovery that was applied to the ATOPS example; and design and development of a performance assessment scheme for Ada programs based on a flexible user-driven benchmarking system.
NASA Astrophysics Data System (ADS)
Marlowe, Ashley E.; Singh, Abhishek; Semichaevsky, Andrey V.; Yingling, Yaroslava G.
2009-03-01
Nucleic acid nanoparticles can self-assemble through the formation of complementary loop-loop interactions or stem-stem interactions. The presence and concentration of ions can significantly affect the self-assembly process and the stability of the nanostructure. In this presentation we use explicit molecular dynamics simulations to examine the variations in cationic distributions and the hydration environment around DNA and RNA helices and loop-loop interactions. Our simulations show that the potassium and sodium ionic distributions are different around RNA and DNA motifs, which could be indicative of ion-mediated relative stability of loop-loop complexes. Moreover, in RNA loop-loop motifs, ions are consistently present and exchanged through a distinct electronegative channel. We will also show how we used the specific RNA loop-loop motif to design an RNA hexagonal nanoparticle.
NASA Astrophysics Data System (ADS)
Manfredi, Sabato
2016-06-01
Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology, environment monitoring, and sensor networks to power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics/interactions, and they require increasingly computationally demanding methods for their analysis and control design as the network size and the node system/interaction complexity increase. It is therefore a challenging problem to find scalable computational methods for the distributed control design of large-scale networks. In this paper, we investigate the robust distributed stabilisation problem of large-scale nonlinear multi-agent systems (briefly, MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved by the toolbox of MATLAB. The stabilisability of each node dynamic is a sufficient assumption to design a globally stabilising distributed control. The proposed approach improves some of the existing LMI-based results on MASs by both overcoming their computational limits and extending the applicative scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in terms of computational requirement in the case of weakly heterogeneous MASs, which is a common scenario in real applications where the network nodes and links are affected by parameter uncertainties. One of the main advantages of the proposed approach is that it allows a move from a centralised towards a distributed computing architecture, so that the expensive computational workload spent to solve LMIs may be shared among processors located at the networked nodes, thus increasing the scalability of the approach with the network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in terms of computational complexity when compared with the existing approaches.
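The node-level certificates in such LMI approaches reduce to standard semidefinite feasibility problems. A minimal example of the kind of LMI involved - find P > 0 with A^T P + P A < 0 for one node's dynamics - solved here with CVXPY rather than the MATLAB toolbox the paper mentions (the node matrix is an invented example, not from the paper):

    import cvxpy as cp
    import numpy as np

    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])      # example stable node dynamics (assumed)
    n = A.shape[0]
    P = cp.Variable((n, n), symmetric=True)
    eps = 1e-6
    constraints = [P >> eps * np.eye(n),                     # P > 0
                   A.T @ P + P @ A << -eps * np.eye(n)]      # Lyapunov LMI
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve()
    print("status:", prob.status)
    print("P =\n", P.value)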
NASA Astrophysics Data System (ADS)
Goodarzi, Avesta; Mohammadi, Masoud
2014-04-01
In this paper, vehicle stability control and fuel economy for a 4-wheel-drive hybrid vehicle are investigated. The integrated controller is designed in three layers. The first layer determines the total yaw moment and total lateral force, using an optimal control method, to follow the desired dynamic behaviour of the vehicle. The second layer determines the optimum tyre force distribution in order to optimise tyre usage and find out how the tyres should share longitudinal and lateral forces to achieve a target vehicle response, under the assumption that all four wheels can be independently steered, driven, and braked. In the third layer, the active steering, wheel slip, and electric motor torque controllers are designed. In the front axle, an internal combustion engine (ICE) is coupled to an electric motor (EM). The control strategy has to determine the power distribution between the ICE and EM to minimise fuel consumption while allowing the vehicle to be charge-sustaining. Finally, simulations performed in the MATLAB/SIMULINK environment show that the proposed structure can enhance vehicle stability and fuel economy in different manoeuvres.
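A common simplification of the second layer's tyre-force distribution problem is a weighted least-squares allocation: choose the eight wheel force components that realize the demanded total forces and yaw moment while minimising a quadratic tyre-usage cost. The sketch below uses the weighted pseudo-inverse solution; the geometry, weights, and demands are invented for illustration, and the paper's exact cost and constraints are not reproduced.

    import numpy as np

    a, b, t = 1.2, 1.4, 0.8        # CoG to front/rear axle, half track [m]

    # columns: [Fx_fl Fx_fr Fx_rl Fx_rr Fy_fl Fy_fr Fy_rl Fy_rr]
    B = np.array([
        [1, 1, 1, 1, 0, 0, 0, 0],              # total longitudinal force Fx
        [0, 0, 0, 0, 1, 1, 1, 1],              # total lateral force Fy
        [-t, t, -t, t, a, a, -b, -b],          # total yaw moment Mz
    ])
    W = np.diag([1, 1, 1, 1, 1, 1, 1, 1.0])    # tyre-usage weights (could be
                                               # scaled with vertical load)
    v = np.array([2000.0, 1500.0, 800.0])      # demanded [Fx N, Fy N, Mz Nm]

    # Weighted pseudo-inverse: minimises f^T W f subject to B f = v.
    Winv = np.linalg.inv(W)
    f = Winv @ B.T @ np.linalg.solve(B @ Winv @ B.T, v)
    print("wheel forces:", np.round(f, 1))
    print("achieved [Fx, Fy, Mz]:", np.round(B @ f, 1))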
The Identity Mapping Project: Demographic differences in patterns of distributed identity.
Gilbert, Richard L; Dionisio, John David N; Forney, Andrew; Dorin, Philip
2015-01-01
The advent of cloud computing and a multi-platform digital environment is giving rise to a new phase of human identity called "The Distributed Self." In this conception, aspects of the self are distributed into a variety of 2D and 3D digital personas with the capacity to reflect any number of combinations of now malleable personality traits. In this way, the source of human identity remains internal and embodied, but the expression or enactment of the self becomes increasingly external, disembodied, and distributed on demand. The Identity Mapping Project (IMP) is an interdisciplinary collaboration between psychology and computer science designed to empirically investigate the development of distributed forms of identity. Methodologically, it collects a large database of "identity maps" - computerized graphical representations of how active someone is online and how their identity is expressed and distributed across 7 core digital domains: email, blogs/personal websites, social networks, online forums, online dating sites, character-based digital games, and virtual worlds. The current paper reports on gender and age differences in online identity based on an initial database of distributed identity profiles.
NASA Astrophysics Data System (ADS)
Romanosky, Robert R.
2017-05-01
The National Energy Technology Laboratory (NETL), under the Department of Energy (DOE) Fossil Energy (FE) Program, is leading the effort not only to develop near-zero-emission power generation systems, but to increase the efficiency and availability of current power systems. The overarching goal of the program is to provide clean, affordable power using domestic resources. Highly efficient, low-emission power systems can involve extreme conditions, with temperatures up to 1600 °C, pressures up to 600 psi, high particulate loadings, and corrosive atmospheres that require monitoring. Sensing in these harsh environments can provide key information that directly impacts process control and system reliability. The lack of suitable measurement technology serves as a driver for innovations in harsh environment sensor development. Advancements in sensing using optical fibers are key efforts within NETL's sensor development program, as these approaches offer the potential to survive in and provide critical information about these processes. An overview of the sensor development supported by NETL will be given, including research in the areas of sensor materials, designs, and measurement types. New approaches to intelligent sensing, sensor placement, and process control using networked sensors will be discussed, as will novel approaches to fiber device design pursued concurrently with materials research on modified and coated silica and sapphire fiber-based sensors. The use of these sensors for both single-point and distributed measurements of temperature, pressure, strain, and a select suite of gases will be addressed. Additional areas of research include novel control architectures and communication frameworks, device integration for distributed sensing, and imaging and other novel approaches to monitoring and controlling advanced processes. The close coupling of the sensor program with process modeling and control will be discussed in the context of the overarching goal of clean power production.
A Software Architecture for Intelligent Synthesis Environments
NASA Technical Reports Server (NTRS)
Filman, Robert E.; Norvig, Peter (Technical Monitor)
2001-01-01
NASA's Intelligent Synthesis Environment (ISE) program is a grand attempt to develop a system to transform the way complex artifacts are engineered. This paper discusses a "middleware" architecture for enabling the development of ISE. Desirable elements of such an Intelligent Synthesis Architecture (ISA) include remote invocation; plug-and-play applications; scripting of applications; management of design artifacts, tools, and artifact and tool attributes; common system services; system management; and systematic enforcement of policies. This paper argues that the ISA should extend conventional distributed object technology (DOT), such as CORBA and Product Data Managers, with flexible repositories of product and tool annotations and "plug-and-play" mechanisms for inserting "ility" or orthogonal concerns into the system. I describe the Object Infrastructure Framework, an Aspect-Oriented Programming (AOP) environment for developing distributed systems that provides utility insertion and enables consistent annotation maintenance. This technology can be used to enforce policies such as maintaining the annotations of artifacts, particularly their provenance and access control rules; performing automatic datatype transformations between representations; supplying alternative servers of the same service; reporting on the status of jobs and the system; conveying privileges throughout an application; supporting long-lived transactions; maintaining version consistency; and providing software redundancy and mobility.
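The "ility insertion" idea can be illustrated in plain Python with a decorator that wraps service calls in orthogonal policy checks; this is only a sketch of the concept, not the Object Infrastructure Framework's actual API, which the abstract does not detail:

```python
# Orthogonal concerns (a toy access-control check and provenance
# annotation) are enforced without touching the service code itself.
import functools, datetime

PROVENANCE = []  # annotation store maintained by the aspect, not the service

def with_policies(required_role):
    def aspect(service):
        @functools.wraps(service)
        def wrapper(caller, *args, **kwargs):
            if caller.get("role") != required_role:       # access control policy
                raise PermissionError(f"{caller['name']} lacks role {required_role}")
            result = service(caller, *args, **kwargs)
            PROVENANCE.append({                           # annotation upkeep policy
                "artifact": service.__name__,
                "by": caller["name"],
                "at": datetime.datetime.utcnow().isoformat(),
            })
            return result
        return wrapper
    return aspect

@with_policies(required_role="designer")
def update_geometry(caller, part, revision):
    return f"{part} -> rev {revision}"

print(update_geometry({"name": "ada", "role": "designer"}, "wing", 3))
print(PROVENANCE)
```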
An Automated Design Framework for Multicellular Recombinase Logic.
Guiziou, Sarah; Ulliana, Federico; Moreau, Violaine; Leclere, Michel; Bonnet, Jerome
2018-05-18
Tools to systematically reprogram cellular behavior are crucial to address pressing challenges in manufacturing, environment, or healthcare. Recombinases can very efficiently encode Boolean and history-dependent logic in many species, yet current designs are performed on a case-by-case basis, limiting their scalability and requiring time-consuming optimization. Here we present an automated workflow for designing recombinase logic devices executing Boolean functions. Our theoretical framework uses a reduced library of computational devices distributed into different cellular subpopulations, which are then composed in various manners to implement all desired logic functions at the multicellular level. Our design platform called CALIN (Composable Asynchronous Logic using Integrase Networks) is broadly accessible via a web server, taking truth tables as inputs and providing corresponding DNA designs and sequences as outputs (available at http://synbio.cbs.cnrs.fr/calin ). We anticipate that this automated design workflow will streamline the implementation of Boolean functions in many organisms and for various applications.
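A toy sketch of the multicellular decomposition principle (each minterm of the truth table handled by its own cellular subpopulation, with the multicellular output OR-ed across subpopulations) follows; CALIN's real device library and reduction rules are more involved than this:

```python
# Decompose a Boolean truth table into one "subpopulation" per minterm.
from itertools import product

def decompose(truth_table, n_inputs):
    """truth_table maps input tuples of 0/1 to an output bit."""
    minterms = [bits for bits in product((0, 1), repeat=n_inputs)
                if truth_table[bits] == 1]
    # each subpopulation reports 1 iff the inputs match its minterm
    return [lambda bits, m=m: int(bits == m) for m in minterms]

xor = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
subpops = decompose(xor, 2)
for bits in product((0, 1), repeat=2):
    out = max(s(bits) for s in subpops)   # OR across subpopulations
    print(bits, "->", out)
```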
High temperature antenna development for space shuttle, volume 1
NASA Technical Reports Server (NTRS)
Kuhlman, E. A.
1973-01-01
Design concepts for high temperature flush-mounted Space Shuttle Orbiter antenna systems are discussed. The design concepts include antenna systems for VHF, L-band, S-band, C-band, and Ku-band frequencies. The S-band antenna system design was completed and test hardware fabricated. It was then subjected to electrical and thermal testing to establish design requirements and determine reuse capabilities. The thermal tests consisted of applying ten high temperature cycles simulating the Orbiter entry heating environment in an arc tunnel plasma facility and observing the temperature distributions. Radiation pattern and impedance measurements before and after high temperature exposure were used to evaluate the antenna system's performance. Alternate window design concepts are considered. Layout drawings, supported by thermal and strength analyses, are given for each of the antenna system designs. The results of the electrical and thermal testing of the S-band antenna system are given.
Huang, Yili; Feng, Hao; Lu, Hang; Zeng, Yanhua
2017-07-01
Sphingomonads are believed to be ubiquitously distributed in the environment. However, detailed information about their community structure and its relationship with environmental parameters remains unclear. In this study, novel sphingomonad-specific primers based on the 16S rRNA gene were designed to investigate the distribution of sphingomonads in 10 different niches. Both in silico and in-practice tests on pure cultures and environmental samples showed that Sph384f/Sph701r was an efficient primer set. Illumina MiSeq sequencing revealed that the community structures of sphingomonads differed significantly among the 10 samples, although 12 sphingomonad genera were present in all of them. Based on RDA analysis and a Monte Carlo permutation test, sphingomonad community structure was significantly correlated with limnetic and marine habitat types. Among these niches, the genus Sphingomicrobium showed a strong positive correlation with marine habitats, whereas the genera Sphingobium, Novosphingobium, Sphingopyxis, and Sphingorhabdus showed strong positive correlations with limnetic habitats. Our study provides direct evidence that sphingomonads are ubiquitously distributed in the environment, and reveals for the first time that their community structure can be correlated with habitat.
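A minimal sketch of such an in-silico primer check (with placeholder primer and reference sequences, not the published Sph384f/Sph701r sequences) could be:

```python
# Count reference sequences that contain the forward primer and the
# reverse complement of the reverse primer (exact matching only;
# degenerate-base handling is omitted for brevity).
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    return seq.translate(COMP)[::-1]

def primer_hits(seqs, fwd, rev):
    rc = revcomp(rev)
    return sum(1 for s in seqs if fwd in s and rc in s)

refs = ["ATGGCCATTGTAATGGGCC", "ATGGCCAGGCCCATTACAA"]  # toy 16S fragments
print(primer_hits(refs, fwd="ATGGCC", rev="GGCCCA"))   # -> 1
```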
Nebot, Patricio; Torres-Sospedra, Joaquín; Martínez, Rafael J
2011-01-01
The control architecture is one of the most important parts of agricultural robotics and other robotic systems, and its importance increases when the system involves a group of heterogeneous robots that must cooperate to achieve a global goal. This paper introduces a new control architecture for groups of robots in charge of maintenance tasks in agricultural environments. Important features such as scalability, code reuse, hardware abstraction, and data distribution have been considered in the design of the new architecture, which also supports coordination and cooperation among the different elements in the system. These concepts are realised by integrating the network-oriented device server Player, the Java Agent Development Framework (JADE), and the High Level Architecture (HLA). HLA can be considered the most important part, because it not only provides data distribution and implicit communication among the parts of the system, but also allows simulated and real entities to operate simultaneously, enabling the use of hybrid systems in the development of applications.
NASA Technical Reports Server (NTRS)
Monell, D.; Mathias, D.; Reuther, J.; Garn, M.
2003-01-01
A new engineering environment constructed for the purposes of analyzing and designing Reusable Launch Vehicles (RLVs) is presented. The new environment has been developed to allow NASA to perform independent analysis and design of emerging RLV architectures and technologies. The new Advanced Engineering Environment (AEE) is both collaborative and distributed. It facilitates integration of the analyses by both vehicle performance disciplines and life-cycle disciplines. Current performance disciplines supported include: weights and sizing, aerodynamics, trajectories, propulsion, structural loads, and CAD-based geometries. Current life-cycle disciplines supported include: DDT&E cost, production costs, operations costs, flight rates, safety and reliability, and system economics. Involving six NASA centers (ARC, LaRC, MSFC, KSC, GRC and JSC), AEE has been tailored to serve as a web-accessed agency-wide source for all of NASA's future launch vehicle systems engineering functions. Thus, it is configured to facilitate (a) data management, (b) automated tool/process integration and execution, and (c) data visualization and presentation. The core components of the integrated framework are a customized PTC Windchill product data management server, a set of RLV analysis and design tools integrated using Phoenix Integration's Model Center, and an XML-based data capture and transfer protocol. The AEE system has seen production use during the Initial Architecture and Technology Review for the NASA 2nd Generation RLV program, and it continues to undergo development and enhancements in support of its current main customer, the NASA Next Generation Launch Technology (NGLT) program.
Network-based production quality control
NASA Astrophysics Data System (ADS)
Kwon, Yongjin; Tseng, Bill; Chiou, Richard
2007-09-01
This study investigates the feasibility of remote quality control using a host of advanced automation equipment with Internet accessibility. The recent emphasis on product quality and waste reduction stems from a dynamic, globalized, and customer-driven market, which brings opportunities and threats to companies depending on their response speed and production strategies. Current trends in industry also include the wide spread of distributed manufacturing systems, in which design, production, and management facilities are geographically dispersed. This situation demands not only access to remotely located production equipment for monitoring and control, but also efficient means of responding to a changing environment to counter process variations and diverse customer demands. To compete in such an environment, companies are striving to achieve 100%, sensor-based, automated inspection for zero-defect manufacturing. In this study, the Internet-based quality control scheme is referred to as "E-Quality for Manufacturing," or "EQM" for short. By definition, EQM refers to a holistic approach to designing and embedding efficient quality control functions in the context of network-integrated manufacturing systems. Such a system lets designers located far from the production facility monitor, control, and adjust the quality inspection processes as the production design evolves.
Kingsolver, Joel
1981-03-01
To explore principles of organismic design in fluctuating environments, morphological design of the leaf of the pitcher-plant, Sarracenia purpurea, was studied for a population in northern Michigan. The design criterion focused upon the leaf shape and minimum size which effectively avoids leaf desiccation (complete loss of fluid from the leaf cavity) in the face of fluctuating rainfall and meteorological conditions. Bowl- and pitcher-shaped leaves were considered. Simulations show that the pitcher geometry experiences less frequent desiccation than bowls of the same size. Desiccation frequency is inversely related to leaf size; the size distribution of pitcher leaves in the field shows that the majority of pitchers desiccate only 1-3 times per season on average, while smaller pitchers may average up to 8 times per season. A linear filter model of an organism in a fluctuating environment is presented, in which the organism selectively filters the temporal patterns of environmental input. General measures of rainfall predictability based upon information theory and spectral analysis are consistent with the model of a pitcher leaf as a low-pass (frequency) filter which avoids desiccation by eliminating high-frequency rainfall variability.
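The low-pass-filter interpretation can be illustrated numerically: treat a leaf as a reservoir that integrates stochastic rainfall and loses fluid at a steady rate, and count desiccation events as a function of capacity. All rates below are invented for illustration:

```python
# Larger leaves (bigger capacity) filter out high-frequency rainfall
# variability and so desiccate less often.
import random

def desiccation_events(capacity, days=150, p_rain=0.3,
                       rain_mean=4.0, loss=1.0, seed=1):
    rng = random.Random(seed)
    volume, events = capacity / 2, 0
    for _ in range(days):
        rain = rng.expovariate(1 / rain_mean) if rng.random() < p_rain else 0.0
        volume = min(capacity, volume + rain) - loss     # fill, then lose fluid
        if volume <= 0:                                  # complete fluid loss
            events += 1
            volume = 0.0
    return events

for cap in (5, 15, 40):   # small, medium, large leaf volumes
    print(f"capacity {cap:>2}: {desiccation_events(cap)} desiccation events")
```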
A cooperative model for IS security risk management in distributed environment.
Feng, Nan; Zheng, Chundong
2014-01-01
Given the increasing cooperation between organizations, the flexible exchange of security information across the allied organizations is critical to effectively manage information systems (IS) security in a distributed environment. In this paper, we develop a cooperative model for IS security risk management in a distributed environment. In the proposed model, the exchange of security information among the interconnected IS under distributed environment is supported by Bayesian networks (BNs). In addition, for an organization's IS, a BN is utilized to represent its security environment and dynamically predict its security risk level, by which the security manager can select an optimal action to safeguard the firm's information resources. The actual case studied illustrates the cooperative model presented in this paper and how it can be exploited to manage the distributed IS security risk effectively.
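A sketch of the BN idea, using the pgmpy package (assumed available) and invented probabilities, could look like this:

```python
# Toy security environment: breach likelihood depends on threat level
# and control strength; all numbers are illustrative only.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("Threat", "Breach"), ("Control", "Breach")])
model.add_cpds(
    TabularCPD("Threat", 2, [[0.7], [0.3]]),     # 0 = low, 1 = high
    TabularCPD("Control", 2, [[0.4], [0.6]]),    # 0 = weak, 1 = strong
    TabularCPD("Breach", 2,
               [[0.80, 0.95, 0.40, 0.70],        # P(no breach | Threat, Control)
                [0.20, 0.05, 0.60, 0.30]],       # P(breach | Threat, Control)
               evidence=["Threat", "Control"], evidence_card=[2, 2]))
assert model.check_model()

# Dynamically update the risk level given new evidence of a high threat.
risk = VariableElimination(model).query(["Breach"], evidence={"Threat": 1})
print(risk)
```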
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tong, L.; Yang, K.; Chen, Z.
1999-07-01
The distribution of solar radiant energy inside a specific air-conditioned automobile chamber is studied on the basis of its wavelength spectrum. Important optical parameters of the internal materials are mostly determined by experiments with a monochromator, an electron-multiplier phototube, and similar instruments; the optical parameters of thin transparent objects are analyzed theoretically. Based on a random model, the Monte Carlo method is adopted to obtain the detailed distribution of solar radiant energy: the absorption, reflection, and transmission of each ray are simulated and traced during the calculation. The software is applied to two cases with different kinds of glass. The results show the importance of solar radiant energy for the thermal environment inside the air-conditioned automobile chamber, and make evident the need for shielding quality in automobile glass. This study is also the basis for subsequent research on flow and temperature fields, and the results are useful for further thermal comfort design.
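A simplified sketch of the Monte Carlo treatment (unit-energy rays whose fate at each interaction is sampled from made-up absorption, reflection, and transmission fractions) might be:

```python
# Each ray is absorbed, reflected, or transmitted at every interaction;
# absorbed and escaped energy fractions are accumulated over many rays.
import random

def trace(n_rays, absorptivity, reflectivity, max_bounces=10, seed=7):
    rng, absorbed, escaped = random.Random(seed), 0.0, 0.0
    for _ in range(n_rays):
        for _ in range(max_bounces):
            u = rng.random()
            if u < absorptivity:                  # deposited in the material
                absorbed += 1.0
                break
            elif u < absorptivity + reflectivity:
                continue                          # reflected: trace another bounce
            else:                                 # transmitted out of the chamber
                escaped += 1.0
                break
        else:
            absorbed += 1.0                       # treat as absorbed after many bounces
    return absorbed / n_rays, escaped / n_rays

print(trace(100_000, absorptivity=0.55, reflectivity=0.25))
```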
BioconductorBuntu: a Linux distribution that implements a web-based DNA microarray analysis server.
Geeleher, Paul; Morris, Dermot; Hinde, John P; Golden, Aaron
2009-06-01
BioconductorBuntu is a custom distribution of Ubuntu Linux that automatically installs a server-side microarray processing environment, providing a user-friendly web-based GUI to many of the tools developed by the Bioconductor Project, accessible locally or across a network. System installation is via booting off a CD image or by using a Debian package provided to upgrade an existing Ubuntu installation. In its current version, several microarray analysis pipelines are supported, including oligonucleotide and dual- or single-dye experiments, with post-processing by Gene Set Enrichment Analysis. BioconductorBuntu is designed to be extensible through server-side integration of further relevant Bioconductor modules as required, facilitated by its straightforward underlying Python-based infrastructure. BioconductorBuntu offers an ideal environment for the development of processing procedures to facilitate the analysis of next-generation sequencing datasets. BioconductorBuntu is available for download under a Creative Commons license, along with additional documentation and a tutorial, from http://bioinf.nuigalway.ie.
NASA Technical Reports Server (NTRS)
Xapsos, M. A.; Barth, J. L.; Stassinopoulos, E. G.; Burke, E. A.; Gee, G. B.
1999-01-01
The effects that solar proton events have on microelectronics and solar arrays are important considerations for spacecraft in geostationary and polar orbits and for interplanetary missions. Designers of spacecraft and mission planners are required to assess the performance of microelectronic systems under a variety of conditions. A number of useful approaches exist for predicting information about solar proton event fluences and, to a lesser extent, peak fluxes. This includes the cumulative fluence over the course of a mission, the fluence of a worst-case event during a mission, the frequency distribution of event fluences, and the frequency distribution of large peak fluxes. Naval Research Laboratory (NRL) and NASA Goddard Space Flight Center, under the sponsorship of NASA's Space Environments and Effects (SEE) Program, have developed a new model for predicting cumulative solar proton fluences and worst-case solar proton events as functions of mission duration and user confidence level. This model is called the Emission of Solar Protons (ESP) model.
A multi-agent intelligent environment for medical knowledge.
Vicari, Rosa M; Flores, Cecilia D; Silvestre, André M; Seixas, Louise J; Ladeira, Marcelo; Coelho, Helder
2003-03-01
AMPLIA is a multi-agent intelligent learning environment designed to support training in diagnostic reasoning and modelling of domains with complex and uncertain knowledge, with a focus on the medical area. The system handles uncertainty within a Bayesian network approach: the learner's modelling task consists of creating a Bayesian network for a problem the system presents. The construction of a network involves qualitative and quantitative aspects. The qualitative part concerns the network topology, that is, the causal relations among the domain variables. Once it is ready, the quantitative part is specified: the conditional probability distributions of the represented variables. A negotiation process, managed by an intelligent MediatorAgent, handles the differences in topology and probability distributions between the model the learner built and the one built into the system. This negotiation takes place between the agent that represents the expert domain knowledge (DomainAgent) and the agent that represents the learner's knowledge (LearnerAgent).
An Empirical Study Analyzing Job Productivity in Toxic Workplace Environments
Anjum, Amna; Ming, Xu; Siddiqi, Ahmed Faisal
2018-01-01
Purpose: This empirical study aims to determine the effects of a toxic workplace environment, which can negatively impact the job productivity of an employee. Methodology: Three hundred questionnaires were randomly distributed among the staff members of seven private universities in Pakistan, with a final response rate of 89%. For analysis purposes, AMOS 22 was used to study the direct and indirect effects of the toxic workplace environment on job productivity. Confirmatory Factor Analysis (CFA) was conducted to ensure the convergent and discriminant validity of the factors, while the Hayes mediation approach was used to verify the mediating role of job burnout between the four dimensions of the toxic workplace environment and job productivity. The toxic workplace construct comprised multiple dimensions: workplace ostracism, workplace incivility, workplace harassment, and workplace bullying. Findings: The analyses show that ostracism, incivility, harassment, and bullying have direct negative significant effects on job productivity, while job burnout is a statistically significant mediator between the dimensions of a toxic workplace environment and job productivity. We conclude that organizations need to eradicate the factors of toxic workplace environments to ensure their prosperity and success. Practical Implications: This study encourages managers, leaders, and top management to adopt appropriate policies for enhancing employees' productivity. Limitations: This study used a cross-sectional research design; future research aims to expand the study with a longitudinal design. PMID:29883424
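A hedged sketch of a regression-based mediation check in the spirit of this analysis (on synthetic data, not the study's, and simpler than the Hayes procedure) could be:

```python
# Does burnout carry part of the effect of a toxicity score on
# productivity?  Indirect effect = a * b from two OLS regressions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
toxicity = rng.normal(size=n)
burnout = 0.6 * toxicity + rng.normal(scale=0.8, size=n)          # X -> M
productivity = -0.5 * burnout - 0.2 * toxicity + rng.normal(size=n)

m_med = sm.OLS(burnout, sm.add_constant(toxicity)).fit()          # a path
X = sm.add_constant(np.column_stack([toxicity, burnout]))
m_out = sm.OLS(productivity, X).fit()                             # b and direct paths
indirect = m_med.params[1] * m_out.params[2]                      # a * b
print(f"indirect (mediated) effect: {indirect:.3f}")
print(f"direct effect of toxicity:  {m_out.params[1]:.3f}")
```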
NASA Astrophysics Data System (ADS)
Bailey, Brent Andrew
Structural designs by humans and by nature are wholly distinct in their approaches. Engineers model components to verify that all mechanical requirements are satisfied before assembling a product. Nature, on the other hand, creates holistically: each part evolves in conjunction with the others. The present work is a synthesis of these two design approaches, namely, spatial models that evolve. Topology optimization determines the amount and distribution of material within a model, which corresponds to the optimal connectedness and shape of a structure. Smooth designs are obtained by using higher-order B-splines in the definition of the material distribution, and higher fidelity is achieved using adaptive meshing techniques at the interface between solid and void. Nature is an exemplary basis for mass minimization, as processing material requires both resources and energy. Topological optimization techniques were originally formulated as the maximization of structural stiffness subject to a volume constraint. This research inverts the optimization problem: the mass is minimized subject to deflection constraints. Active materials allow a structure to interact with its environment in a manner similar to muscles and sensory organs in animals. By specifying the material properties and design requirements, adaptive structures with integrated sensors and actuators can evolve.
Shinderman, Matt
2015-09-01
In 2010, the American pika (Ochotona princeps fenisex) was denied federal protection based on limited evidence of persistence in low-elevation environments. Studies in nonalpine areas have been limited to relatively few environments, and it is unclear whether patterns observed elsewhere (e.g., Bodie, CA) represent other nonalpine habitats. This study was designed to establish pika presence in a new location, determine distribution within the surveyed area, and evaluate influences of elevation, vegetation, lava complexity, and distance to habitat edge on pika site occupancy. In 2011 and 2012, we conducted surveys for American pika on four distinct subalpine lava flows of Newberry National Volcanic Monument, Oregon, USA. Field surveys were conducted at predetermined locations within lava flows via silent observation and active searching for pika sign. Site habitat characteristics were included as predictors of occupancy in multinomial regression models. Above and belowground temperatures were recorded at a subsample of pika detection sites. Pika were detected in 26% (2011) and 19% (2012) of survey plots. Seventy-four pika were detected outside survey plot boundaries. Lava complexity was the strongest predictor of pika occurrence, where pika were up to seven times more likely to occur in the most complicated lava formations. Pika were two times more likely to occur with increasing elevation, although they were found at all elevations in the study area. This study expands the known distribution of the species and provides additional evidence for persistence in nonalpine habitats. Results partially support the predictive occupancy model developed for pika at Craters of the Moon National Monument, another lava environment. Characteristics of the lava environment clearly influence pika site occupancy, but habitat variables reported as important in other studies were inconclusive here. Further work is needed to gain a better understanding of the species' current distribution and ability to persist under future climate conditions.
RADECS Short Course Session I: The Space Radiation Environment
NASA Technical Reports Server (NTRS)
Xapsos, Michael; Bourdarie, Sebastien
2007-01-01
The presented slides and accompanying paper focus on radiation in the space environment. Since space exploration began, it has become evident that the space environment is a highly aggressive medium. Beyond the natural protection provided by the Earth's atmosphere, various types of radiation can be encountered. Their characteristics (energy and nature), origins, and distributions in space are extremely variable. This environment degrades electronic systems and on-board equipment in particular, and creates radiobiological hazards during manned space flights. Based on several years of space exploration, a detailed analysis of satellite anomalies shows that the share attributable to the space environment is not negligible: malfunctions arise from problems linked to the space environment, electronic problems, design problems, quality problems, other issues, and unexplained causes. The space environment is largely responsible for about 20% of the anomalies occurring on satellites, and a better knowledge of that environment could only increase the average lifetime of space vehicles. This naturally leads to a detailed study of the space environment and of the effects it induces on space vehicles and astronauts. Sources of radiation in the space environment are discussed here, including the solar activity cycle, galactic cosmic rays, solar particle events, and the Earth's radiation belts. Future challenges for space radiation environment models are briefly addressed.
Geodiversity and biodiversity assessment of the Słupsk Bank, Baltic Sea
NASA Astrophysics Data System (ADS)
Najwer, Alicja; Zelewska, Izabela; Zwoliński, Zbigniew
2017-04-01
Recognizing the most diversified parts of a territory is crucial for the management and planning of natural protected areas. There is an increasing number of studies assessing the geodiversity and biodiversity of land areas, but a noticeable lack of such publications for submerged zones. The study area is the roughly 100 km² Słupsk Bank, a sandy shoal sporadically covered with boulder layers, located in the southern part of the Baltic Sea. It is characterised by landscapes of significant natural value, protected by Natura 2000 and designated as an open-sea Helsinki Commission Baltic Sea Protected Area (HELCOM BSPA). The main aim of the presentation is an attempt to integrate geodiversity and biodiversity assessments of the submerged area using a GIS platform. The basis of the diversity assessment is the proper selection of features of the marine environment, their reclassification, and their integration by map algebra analysis. The geodiversity map is based on three factor maps: a relief energy map (classification based on a bathymetric model), a landform fragmentation/geomorphological map (expert classification using the Bathymetric Position Index, BPI), and a lithological map (classification of the average grain-size fraction). The biodiversity map is based on the following factor maps: biomass distributions of Ceramium diaphanum, Coccotylus truncatus, Polysiphonia fucoides, and Mytilus edulis trossulus, and distribution maps of macroalgae and macrozoobenthos. Four diversity classes were used (low, medium, high, and very high); designation of a lowest class was abandoned because it would characterize areas under high anthropopressure. Maps of geodiversity and biodiversity may prove helpful in determining directions for the management of the most valuable parts of these areas from the nature conservation point of view, as well as in delimiting geodiversity/biodiversity hotspots for strict nature protection. This study is the first attempt to apply these diversity assessment methods to the marine environment.
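The map-algebra step can be sketched with NumPy: reclassify each factor raster into diversity scores and combine them, with invented class breaks and toy rasters standing in for the real bathymetric and lithological layers:

```python
# Reclassify three factor rasters into 1-4 scores, sum them, and
# collapse the result into the four final diversity classes.
import numpy as np

rng = np.random.default_rng(3)
relief_energy = rng.uniform(0, 12, (4, 4))   # relief energy per cell (toy values)
bpi = rng.normal(0, 1, (4, 4))               # Bathymetric Position Index
grain = rng.uniform(0.1, 2.0, (4, 4))        # mean grain size, mm

def reclass(raster, breaks):
    return np.digitize(raster, breaks) + 1   # scores 1..len(breaks)+1

score = (reclass(relief_energy, [2, 5, 9])
         + reclass(np.abs(bpi), [0.5, 1.0, 1.5])
         + reclass(grain, [0.3, 0.8, 1.4]))
# four classes: 1 = low ... 4 = very high
geodiversity = np.digitize(score, [6, 8, 10]) + 1
print(geodiversity)
```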
NASA Technical Reports Server (NTRS)
Holder, Donald W.; Parker, David
2000-01-01
The Volatile Removal Assembly (VRA) is a high temperature catalytic oxidation process that will be used as the final treatment for recycled water aboard the International Space Station (ISS). The multiphase nature of the process had raised concerns about the performance of the VRA in a microgravity environment. To address these concerns, two experiments were designed. The VRA Flight Experiment (VRA FE) was designed to test a full-size VRA under controlled conditions in microgravity aboard the SPACEHAB module and in a 1-g environment, and to compare the performance results. The second experiment relied on visualization of two-phase flow through small packed-bed columns and was designed to fly aboard NASA's microgravity test bed plane (KC-135); its objective was to understand the two-phase fluid flow distribution in a packed bed in microgravity. On Space Transportation System (STS) flight 96 (May 1999), the VRA FE was successfully operated, and in June 1999 the KC-135 packed bed testing was completed. This paper provides an overview of the experiments and a summary of the results and findings.
New frontiers in design synthesis
NASA Technical Reports Server (NTRS)
Goldin, D. S.; Venneri, S. L.; Noor, A. K.
1999-01-01
The Intelligent Synthesis Environment (ISE), one of the major strategic technologies under development at NASA centers and the University of Virginia, is described. A major objective of ISE is to significantly enhance the rapid creation of innovative, affordable products and missions. ISE uses a synergistic combination of leading-edge technologies, including high performance computing, high capacity communications and networking, human-centered computing, knowledge-based engineering, computational intelligence, virtual product development, and product information management. The environment will link scientists, design teams, manufacturers, suppliers, and consultants who participate in mission synthesis as well as in the creation and operation of the aerospace system. It will radically advance the process by which complex science missions are synthesized and high-tech engineering systems are designed, manufactured, and operated. The five major components critical to ISE are human-centered computing, infrastructure for distributed collaboration, rapid synthesis and simulation tools, life cycle integration and validation, and cultural change in both the engineering and science creative process. The five components and their subelements are described. Related U.S. government programs are outlined, and the future impact of ISE on engineering research and education is discussed.
NASA Astrophysics Data System (ADS)
Work, Paul R.
1991-12-01
This thesis investigates the parallelization of existing serial programs in computational electromagnetics for use in a parallel environment. Existing algorithms for calculating the radar cross section of an object are covered, and a ray-tracing code is chosen for implementation on a parallel machine. Current parallel architectures are introduced, and a suitable parallel machine is selected for the implementation of the chosen ray-tracing algorithm. The standard techniques for the parallelization of serial codes are discussed, including load balancing and decomposition considerations, and appropriate methods for the parallelization effort are selected. A load balancing algorithm is modified to increase the efficiency of the application, and a high level design of the structure of the serial program is presented. A detailed design of the modifications for the parallel implementation is also included, with both the high level and the detailed design specified in a high level design language called UNITY. The correctness of the design is proven using UNITY and standard logic operations. The theoretical and empirical results show that it is possible to derive an efficient parallel application from a serial computational electromagnetics program, with the characteristics of the algorithm and the target architecture critically influencing the development of such an implementation.
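The simplest decomposition scheme discussed in such efforts (static chunking of rays across worker processes) can be sketched with Python's multiprocessing; the thesis's actual load-balancing algorithm is not reproduced here:

```python
# Statically partition rays across workers and reduce partial results.
import math
from multiprocessing import Pool

def trace_chunk(angles):
    # stand-in for per-ray RCS work: a fixed amount of arithmetic per ray
    return sum(math.sin(a) ** 2 for a in angles)

if __name__ == "__main__":
    rays = [i * 1e-3 for i in range(1_000_000)]
    n_workers = 4
    chunk = len(rays) // n_workers
    chunks = [rays[i * chunk:(i + 1) * chunk] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        partials = pool.map(trace_chunk, chunks)   # one chunk per worker
    print("total:", sum(partials))
```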
Job-Oriented Basic Skills (JOBS) Program: Administrator’s Guide.
1981-02-01
Personnel maintain direction systems, target designation systems, and electro-hydraulic fire control systems, usually working in a clean, comfortable shop-like environment. Others service all types of gunnery equipment, from missiles to small arms, in situations involving clean or dirty work and any kind of climate or temperature. Still others string wires and install transformers and distribution panels; their duties may be carried out in tropical or arctic climates and in many different work settings.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Song
CFD (Computational Fluid Dynamics) is a widely used technique in the engineering design field. It uses mathematical methods to simulate and predict flow characteristics in a given physical space. Since the numerical results of CFD computations are very hard to understand directly, VR (virtual reality) and data visualization techniques are introduced into CFD post-processing to improve the understandability and functionality of the computation. In many cases CFD datasets are very large (multiple gigabytes), and more and more interaction between the user and the datasets is required. For traditional VR applications, limited computing power is a major obstacle to visualizing large datasets effectively. This thesis presents a new system designed to speed up the traditional VR application by using parallel and distributed computing, together with the idea of using handheld devices to enhance the interaction between a user and a VR CFD application. Techniques from several research areas, including scientific visualization, parallel computing, distributed computing, and graphical user interface design, are used in the development of the final system. As a result, the new system can be built flexibly on a heterogeneous computing environment and dramatically shortens the computation time.
Integration of the SSPM and STAGE with the MPACT Virtual Facility Distributed Test Bed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cipiti, Benjamin B.; Shoman, Nathan
The Material Protection Accounting and Control Technologies (MPACT) program within DOE NE is working toward a 2020 milestone to demonstrate a Virtual Facility Distributed Test Bed. The goal of the Virtual Test Bed is to link all MPACT modeling tools, technology development, and experimental work to create a Safeguards and Security by Design capability for fuel cycle facilities. The Separation and Safeguards Performance Model (SSPM) forms the core safeguards analysis tool, and the Scenario Toolkit and Generation Environment (STAGE) code forms the core physical security tool. These models are used to design and analyze safeguards and security systems and to generate performance metrics. Work over the past year has focused on how these models will integrate with the other capabilities in the MPACT program, and on specific model changes to enable more streamlined integration in the future. This report describes the model changes and plans for how the models will be used more collaboratively. The Virtual Facility is not designed to integrate all capabilities into one master code, but rather to maintain stand-alone capabilities that communicate results between codes more effectively.
Simulation of Hydrogen Distribution in Ignalina NPP ALS Compartments During BDBA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Babilas, Egidijus; Urbonavicius, Egidijus; Rimkevicius, Sigitas
2006-07-01
The Accident Localisation System (ALS) of the Ignalina NPP is a 'pressure suppression' type confinement, which protects the population, employees, and environment from radiation hazards. According to the Safety Analysis Report for the Ignalina NPP, approximately 110 m³ of hydrogen is released into the ALS compartments during the Maximum Design Basis Accident. In a beyond design basis accident, however, when the oxidation of zirconium starts, the amount of generated hydrogen could be significantly higher. If the volume concentration of hydrogen in a compartment reaches 4%, a combustible mixture may form. To prevent possible hydrogen accumulation in the ALS of the Ignalina NPP during an accident, an H2 control system is installed. The performed analysis identified the locations of possible H2 accumulation in the ALS compartments during the transient processes and assessed the combustibility of the mixture at those locations for a beyond design basis accident scenario. Such an analysis of H2 distribution in the ALS of the Ignalina NPP in case of a BDBA had not been performed before. (authors)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldsmith, Steven Y.; Spires, Shannon V.
There are currently two proposed standards for agent communication languages, namely KQML (Finin, Lobrou, and Mayfield 1994) and the FIPA ACL. Neither standard has yet achieved primacy, and neither has been evaluated extensively in an open environment such as the Internet. It seems prudent, therefore, to design a general-purpose agent communications facility for new agent architectures that is flexible yet provides an architecture that accepts many different specializations. In this paper we exhibit the salient features of an agent communications architecture based on distributed metaobjects. This architecture captures design commitments at a metaobject level, leaving the base-level design and implementation up to the agent developer. The scope of the metamodel is broad enough to accommodate many different communication protocols, interaction protocols, and knowledge sharing regimes through extensions to the metaobject framework. We conclude that, with a powerful distributed object substrate that supports metaobject communications, a general framework can be developed that will effectively enable different approaches to agent communications in the same agent system. We have implemented a KQML-based communications protocol and have several special-purpose interaction protocols under development.
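A rough sketch of the metaobject idea (base-level agents stay protocol-agnostic while a pluggable metaobject encodes and routes a KQML-like message; a FIPA ACL metaobject could be swapped in) might be:

```python
# The metaobject owns the message encoding; agents only know tell/receive.
class KqmlMeta:
    def encode(self, performative, sender, receiver, content):
        return {"performative": performative, "sender": sender,
                "receiver": receiver, "content": content}

class Agent:
    def __init__(self, name, meta, registry):
        self.name, self.meta, self.registry = name, meta, registry
        registry[name] = self

    def tell(self, receiver, content):
        msg = self.meta.encode("tell", self.name, receiver, content)
        self.registry[receiver].receive(msg)

    def receive(self, msg):
        print(f"{self.name} got {msg['performative']}: {msg['content']}")

registry = {}
a = Agent("alice", KqmlMeta(), registry)
b = Agent("bob", KqmlMeta(), registry)
a.tell("bob", "temperature = 21C")
```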
NASA Technical Reports Server (NTRS)
Johnson, D. L. (Editor)
2008-01-01
This document provides guidelines for the terrestrial environment that are specifically applicable in the development of design requirements/specifications for NASA aerospace vehicles, payloads, and associated ground support equipment. The primary geographic areas encompassed are the John F. Kennedy Space Center, FL; Vandenberg AFB, CA; Edwards AFB, CA; Michoud Assembly Facility, New Orleans, LA; John C. Stennis Space Center, MS; Lyndon B. Johnson Space Center, Houston, TX; George C. Marshall Space Flight Center, Huntsville, AL; and the White Sands Missile Range, NM. This document presents the latest available information on the terrestrial environment applicable to the design and operations of aerospace vehicles and supersedes information presented in NASA-HDBK-1001 and TM X-64589, TM X-64757, TM-78118, TM-82473, and TM-4511. Information is included on winds, atmospheric thermodynamic models, radiation, humidity, precipitation, severe weather, sea state, lightning, atmospheric chemistry, seismic criteria, and a model to predict atmospheric dispersion of aerospace engine exhaust cloud rise and growth. In addition, a section has been included to provide information on the general distribution of natural environmental extremes in the conterminous United States and worldwide that may be needed to specify design criteria in the transportation of space vehicle subsystems and components. A section on atmospheric attenuation has been added, since measurements by sensors on certain Earth orbital experiment missions are influenced by the Earth's atmosphere. There is also a section on mission analysis, prelaunch monitoring, and flight evaluation as related to the terrestrial environment inputs. The information in these guidelines is recommended for use in the development of aerospace vehicle and related equipment design and associated operational criteria, unless otherwise stated in contract work specifications. The terrestrial environmental data in these guidelines are primarily limited to information below 90 km altitude.
Orchestrating Distributed Resource Ensembles for Petascale Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baldin, Ilya; Mandal, Anirban; Ruth, Paul
2014-04-24
Distributed, data-intensive computational science applications of interest to DOE scientific communities move large amounts of data for experiment data management, distributed analysis steps, remote visualization, and accessing scientific instruments. These applications need to orchestrate ensembles of resources from multiple resource pools and interconnect them with high-capacity multi-layered networks across multiple domains. It is highly desirable to design mechanisms that provide this type of resource provisioning capability to a broad class of applications, and it is equally important to have coherent monitoring capabilities for such complex distributed environments. In this project, we addressed these problems by designing an abstract API, enabled by novel semantic resource descriptions, for provisioning complex and heterogeneous resources from multiple providers using their native provisioning mechanisms and control planes: computational, storage, and multi-layered high-speed network domains. We used an extensible resource representation based on semantic web technologies to afford maximum flexibility to applications in specifying their needs. We evaluated the effectiveness of provisioning using representative data-intensive applications. We also developed mechanisms for providing feedback about resource performance to the application, to enable closed-loop feedback control and dynamic adjustments to resource allocations (elasticity). This was enabled through the development of a novel persistent query framework that consumes disparate sources of monitoring data, including perfSONAR, and provides scalable distribution of asynchronous notifications.
Representation of chromatic distribution for lighting system
NASA Astrophysics Data System (ADS)
Rossi, Maurizio; Musante, Fulvio
2015-01-01
For the luminaire manufacturer, the measurement of the luminous intensity distribution (LID) emitted by a lighting fixture is based on photometry, so light is measured as an achromatic intensity value and there is no possibility of discriminating between white and colored light. At the Laboratorio Luce of Politecnico di Milano a new instrument for measuring the spectral radiant intensity distribution of lighting systems has been built: the gonio-spectroradiometer. This new measuring tool is based on a traditional mirror gonio-photometer with a CCD spectroradiometer controlled by a PC. Besides the traditional representation of the photometric distribution, we have introduced a new representation in which, in addition to the information about the distribution of luminous intensity in space, details about the chromaticity characteristics of the light sources are included. Some of the results of this research have been applied in developing and testing a new line of lighting systems, "My White Light" (part of the research project "Light, Environment and Humans" funded in the Italian Lombardy region Metadistretti Design Research Program, involving Politecnico di Milano, Artemide, Danese, and other SMEs of the lighting design district), providing scientific and applicative grounds for the assumption that colored light sources can be used in interior luminaires that, beyond having low power consumption and long life, may positively affect people's mood.
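A back-of-envelope sketch of turning a measured spectral distribution into a chromaticity coordinate follows; the Gaussian colour-matching functions below are crude stand-ins for the tabulated CIE 1931 functions, and the source spectrum is invented:

```python
# Integrate a spectral power distribution against CIE-like colour-
# matching functions, then normalise to chromaticity (x, y).
import numpy as np

wl = np.arange(380, 781, 5.0)                        # wavelength grid, nm
g = lambda mu, s: np.exp(-0.5 * ((wl - mu) / s) ** 2)
xbar = 1.06 * g(599, 38) + 0.36 * g(446, 19)         # crude two-lobe approximation
ybar = 1.01 * g(557, 47)
zbar = 1.78 * g(449, 23)

spd = g(560, 60)                                     # example: broad whitish source
X, Y, Z = (np.trapz(cmf * spd, wl) for cmf in (xbar, ybar, zbar))
x, y = X / (X + Y + Z), Y / (X + Y + Z)
print(f"chromaticity (x, y) = ({x:.3f}, {y:.3f})")
```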
Alverson, Dale C; Saiki, Stanley M; Jacobs, Joshua; Saland, Linda; Keep, Marcus F; Norenberg, Jeffrey; Baker, Rex; Nakatsu, Curtis; Kalishman, Summers; Lindberg, Marlene; Wax, Diane; Mowafi, Moad; Summers, Kenneth L; Holten, James R; Greenfield, John A; Aalseth, Edward; Nickles, David; Sherstyuk, Andrei; Haines, Karen; Caudell, Thomas P
2004-01-01
Medical knowledge and skills essential for tomorrow's healthcare professionals continue to change faster than ever before, creating new demands in medical education. Project TOUCH (Telehealth Outreach for Unified Community Health) has been developing methods to enhance learning by coupling innovations in medical education with advanced technology in high performance computing and next generation Internet2, embedded in virtual reality environments (VRE), artificial intelligence, and experiential active learning. Simulations have been used in education and training to allow learners to make mistakes safely in lieu of real-life situations, learn from those mistakes, and ultimately improve performance by subsequently avoiding them. Distributed virtual interactive environments are used over distance to enable learning and participation in dynamic, problem-based, clinical, artificial-intelligence rules-based virtual simulations. The virtual reality patient is programmed to change dynamically over time and respond to the manipulations of the learner. Participants are fully immersed within the VRE platform using a head-mounted display and tracker system. Navigation, locomotion, and handling of objects are accomplished using a joy-wand. Distribution is managed via the Internet2 Access Grid using point-to-point or multicasting connectivity, through which the participants can interact. Medical students in Hawaii and New Mexico (NM) participated collaboratively in problem solving and management of a simulated patient with a closed head injury in the VRE, dividing tasks, handing off objects, and functioning as a team. Students stated that opportunities to make mistakes and repeat actions in the VRE were extremely helpful in learning specific principles. The VRE created higher performance expectations and some anxiety among users. VRE orientation was adequate, but students needed time to adapt and practice in order to improve efficiency. This was also successfully demonstrated between Western Australia and UNM. We successfully demonstrated the ability to fully immerse participants in a distributed virtual environment, independent of distance, for collaborative team interaction in medical simulation designed for education and training. The ability to make mistakes in a safe environment is well received by students and has a positive impact on their understanding, as well as their memory of the principles involved in correcting those mistakes. Bringing people together as virtual teams for interactive experiential learning and collaborative training, independent of distance, provides a platform for distributed "just-in-time" training, performance assessment, and credentialing. Further validation is necessary to determine the potential value of the distributed VRE in knowledge transfer and improved future performance, and should entail training participants to competence in using these tools.
A Computational framework for telemedicine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foster, I.; von Laszewski, G.; Thiruvathukal, G. K.
1998-07-01
Emerging telemedicine applications require the ability to exploit diverse and geographically distributed resources. High-speed networks are used to integrate advanced visualization devices, sophisticated instruments, large databases, archival storage devices, PCs, workstations, and supercomputers. This form of telemedical environment is similar to networked virtual supercomputers, also known as metacomputers, which are already being used in many scientific application areas. In this article, we analyze the requirements of a telemedical computing infrastructure and compare them with the requirements found in a typical metacomputing environment. We show that metacomputing environments can be used to enable a more powerful and unified computational infrastructure for telemedicine. The Globus metacomputing toolkit can provide the necessary low-level mechanisms to enable a large-scale telemedical infrastructure. The Globus toolkit components are designed in a modular fashion and can be extended to support the specific requirements of telemedicine.
An Analysis of the Orbital Distribution of Solid Rocket Motor Slag
NASA Technical Reports Server (NTRS)
Horstman, Matthew F.; Mulrooney, Mark
2007-01-01
The contribution made by orbiting solid rocket motors (SRMs) to the orbital debris environment is both potentially significant and insufficiently studied. A combination of rocket motor design and the mechanisms of the combustion process can lead to the emission of sufficiently large and numerous by-products to warrant assessment of their contribution to the orbital debris environment. These particles are formed during SRM tail-off, or the termination of burn, by the rapid expansion, dissemination, and solidification of the molten Al2O3 slag pool accumulated during the main burn phase of SRMs utilizing immersion-type nozzles. Though the usage of SRMs is low compared to that of liquid-fueled motors, the propensity of SRMs to generate particles in the 100 μm and larger size regime has caused concern regarding their contribution to the debris environment. Particle sizes as large as 1 cm have been witnessed in ground tests conducted under vacuum conditions, and comparable sizes have been estimated via ground-based telescopic and in-situ observations of sub-orbital SRM tail-off events. Using sub-orbital and post-recovery observations, a simplistic number-size-velocity distribution of slag from on-orbit SRM firings was postulated. In this paper we develop more elaborate distributions and emission scenarios and model the resultant orbital population and its time evolution by incorporating a historical database of SRM launches, propellant masses, and likely locations and times of particulate deposition. From this analysis a more comprehensive understanding has been obtained of the role of SRM ejecta in the orbital debris environment, indicating that SRM slag is a significant component of the current and future population.
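The following is a toy emission scenario, not the authors' model: slag diameters are drawn from an assumed truncated power-law number-size distribution between 100 μm and 1 cm, and the particle count per burn is scaled linearly with propellant mass. The slope, scaling constant, and launch history are all hypothetical.

    # A toy SRM slag emission sketch; distribution parameters are assumed.
    import random

    D_MIN, D_MAX = 1e-4, 1e-2   # diameter bounds in meters (100 um to 1 cm)
    ALPHA = 3.5                 # assumed power-law slope, dN/dD ~ D**-ALPHA

    def sample_diameter():
        """Inverse-CDF sampling of the truncated power law."""
        u = random.random()
        a = 1.0 - ALPHA
        return ((D_MIN**a) + u * ((D_MAX**a) - (D_MIN**a))) ** (1.0 / a)

    def slag_population(launches, particles_per_burn=1000):
        """Diameters deposited by (year, propellant_mass_kg) burns; count
        scales linearly with propellant mass (an assumption)."""
        population = []
        for year, mass in launches:
            n = int(particles_per_burn * mass / 10_000.0)
            population.extend((year, sample_diameter()) for _ in range(n))
        return population

    history = [(1975, 8_000), (1990, 12_000), (2005, 10_000)]  # hypothetical
    pop = slag_population(history)
    big = sum(1 for _, d in pop if d > 1e-3)
    print(f"{len(pop)} particles total, {big} larger than 1 mm")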
UBioLab: a web-laboratory for ubiquitous in-silico experiments.
Bartocci, Ezio; Cacciagrano, Diletta; Di Berardini, Maria Rita; Merelli, Emanuela; Vito, Leonardo
2012-07-09
The huge and dynamic amount of bioinformatic resources (e.g., data and tools) available nowadays on the Internet represents a big challenge for biologists, as concerns their management and visualization, and for bioinformaticians, as concerns the possibility of rapidly creating and executing in-silico experiments involving resources and activities spread over the WWW hyperspace. Any framework aiming at integrating such resources as in a physical laboratory must tackle, and possibly handle in a transparent and uniform way, aspects concerning physical distribution, semantic heterogeneity, and the co-existence of different computational paradigms and, as a consequence, of different invocation interfaces (i.e., OGSA for Grid nodes, SOAP for Web Services, Java RMI for Java objects, etc.). The UBioLab framework has been designed and developed as a prototype following the above objective. Several architectural features, such as being fully Web-based and combining domain ontologies, Semantic Web and workflow techniques, give evidence of an effort in such a direction. The integration of a semantic knowledge management system for distributed (bioinformatic) resources, a semantic-driven graphic environment for defining and monitoring ubiquitous workflows and an intelligent agent-based technology for their distributed execution allows UBioLab to be a semantic guide for bioinformaticians and biologists, providing (i) a flexible environment for visualizing, organizing and inferring any (semantic and computational) "type" of domain knowledge (e.g., resources and activities, expressed in a declarative form), (ii) a powerful engine for defining and storing semantic-driven ubiquitous in-silico experiments on the domain hyperspace, as well as (iii) a transparent, automatic and distributed environment for correct experiment executions.
Web-Accessible Scientific Workflow System for Performance Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roelof Versteeg; Roelof Versteeg; Trevor Rowe
2006-03-01
We describe the design and implementation of a web-accessible scientific workflow system for environmental monitoring. This workflow environment integrates distributed, automated data acquisition with server-side data management and information visualization through flexible browser-based data access tools. Component technologies include a rich browser-based client (using dynamic Javascript and HTML/CSS) for data selection, a back-end server which uses PHP for data processing, user management, and result delivery, and third-party applications which are invoked by the back-end using web services. This environment allows for reproducible, transparent result generation by a diverse user base. It has been implemented for several monitoring systems with different degrees of complexity.
NASA Astrophysics Data System (ADS)
Fuqua, Peter D.; Presser, Nathan; Barrie, James D.; Meshishnek, Michael J.; Coleman, Dianne J.
2002-06-01
Certain spaceborne telescope designs require that dielectric-coated lenses be exposed to the energetic electrons and protons associated with the space environment. Test coupons that were exposed to a simulated space environment showed extensive pitting as a result of dielectric breakdown. A typical pit was 50-100 μm across at the surface and extended to the substrate material, in which a 10-μm-diameter melt region was found. Pitting was not observed on similar samples that had also been overcoated with a transparent conductive thin film. Measurement of the bidirectional reflectance distribution function showed that pitting caused a fivefold to tenfold increase in the scattering of visible light.
NASA Technical Reports Server (NTRS)
Sforza, Mario; Buonomo, Sergio
1993-01-01
During the period 1983-1992 the European Space Agency (ESA) carried out several experimental campaigns to investigate the propagation impairments of the Land Mobile Satellite (LMS) communication channel. A substantial amount of data covering quite a large range of elevation angles, environments, and frequencies was obtained. Results from the data analyses are currently used for system planning and design applications within the framework of future ESA LMS projects. This comprehensive experimental database is presently utilized also for channel modeling purposes, and preliminary results are given. Cumulative Distribution Function (CDF) and Duration of Fades (DoF) statistics at different elevation angles and environments are also included.
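The two statistics named above are straightforward to compute from a received-signal time series. The sketch below uses a synthetic fading trace and an assumed fade threshold; it illustrates the definitions, not ESA's processing chain.

    # Empirical CDF and duration-of-fade statistics from a signal trace.
    import random

    random.seed(1)
    signal_db = [random.gauss(0, 3) for _ in range(10_000)]  # synthetic trace

    def empirical_cdf(samples, level):
        """P(signal <= level): the cumulative distribution function."""
        return sum(s <= level for s in samples) / len(samples)

    def fade_durations(samples, threshold_db):
        """Lengths (in samples) of runs spent below the fade threshold."""
        durations, run = [], 0
        for s in samples:
            if s < threshold_db:
                run += 1
            elif run:
                durations.append(run)
                run = 0
        if run:
            durations.append(run)
        return durations

    print("P(level <= -5 dB):", empirical_cdf(signal_db, -5.0))
    fades = fade_durations(signal_db, -5.0)
    print("mean fade duration:", sum(fades) / len(fades), "samples")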
Network-based collaborative research environment LDRD final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davies, B.R.; McDonald, M.J.
1997-09-01
The Virtual Collaborative Environment (VCE) and Distributed Collaborative Workbench (DCW) are new technologies that make it possible for diverse users to synthesize and share mechatronic, sensor, and information resources. Using these technologies, university researchers, manufacturers, design firms, and others can directly access and reconfigure systems located throughout the world. The architecture for implementing VCE and DCW has been developed based on the proposed National Information Infrastructure or Information Highway and a tool kit of Sandia-developed software. Further enhancements to the VCE and DCW technologies will facilitate access to other mechatronic resources. This report describes characteristics of VCE and DCW and also includes background information about the evolution of these technologies.
Oliveira, Marcos L S; Navarro, Orlando G; Crissien, Tito J; Tutikian, Bernardo F; da Boit, Kátia; Teixeira, Elba C; Cabello, Juan J; Agudelo-Castañeda, Dayana M; Silva, Luis F O
2017-10-01
Multiple factors govern coal combustion geochemistry: (1) boiler and pollution-control system design parameters, (2) the temperature of flue gas at the collection point, (3) the geochemistry of the feed coal and of other fuels such as petroleum coke, tires and biomass, and (4) fuel-feed particle-size distribution homogeneity, maintenance of pulverisers, etc. Even though there is a large number of hazardous element pollutants in the coal-processing industry, investigations of micrometer- and nanometer-sized particles, including their aqueous colloid formation reactions and their behaviour on entering the environment, are relatively few in number. X-ray diffraction (XRD), High Resolution-Transmission Electron Microscopy (HR-TEM) with Energy Dispersive Spectroscopy (EDS) and selected-area diffraction patterns (SAED), Field Emission-Scanning Electron Microscopy (FE-SEM)/EDS and granulometric distribution analysis were used as an integrated characterization tool box to determine both the geochemistry and the nanomineralogy of coal fly ashes (CFAs) from Brazil's largest coal power plant. The ultrafine/nano-particle size distribution of coal combustion emissions was estimated during the tests. In addition, the combined iron and silicon content was determined to be 54.6% across the 390 different particles observed by electron beam, indicating that these two elements represent the major mineral constituents of the emitted particles. These data may help future investigations to assess human health effects related to nano-particles. Copyright © 2017 Elsevier Inc. All rights reserved.
A Process for Comparing Dynamics of Distributed Space Systems Simulations
NASA Technical Reports Server (NTRS)
Cures, Edwin Z.; Jackson, Albert A.; Morris, Jeffery C.
2009-01-01
The paper describes a process that was developed for comparing the primary orbital dynamics behavior between space systems distributed simulations. This process is used to characterize and understand the fundamental fidelities and compatibilities of the modeling of orbital dynamics between spacecraft simulations. This is required for high-latency distributed simulations such as NASA's Integrated Mission Simulation and must be understood when reporting results from simulation executions. This paper presents 10 principal comparison tests along with their rationale and examples of the results. The Integrated Mission Simulation (IMSim) (formerly known as the Distributed Space Exploration Simulation (DSES)) is a NASA research and development project focusing on the technologies and processes that are related to the collaborative simulation of complex space systems involved in the exploration of our solar system. Currently, the NASA centers that are actively participating in the IMSim project are the Ames Research Center, the Jet Propulsion Laboratory (JPL), the Johnson Space Center (JSC), the Kennedy Space Center, the Langley Research Center and the Marshall Space Flight Center. In concept, each center participating in IMSim has its own set of simulation models and environment(s). These simulation tools are used to build the various simulation products that are used for scientific investigation, engineering analysis, system design, training, planning, operations and more. Working individually, these production simulations provide important data to various NASA projects.
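A hedged sketch of one kind of comparison test (not one of the paper's 10): propagate the same initial state in two simple two-body propagators, with different step sizes standing in for two independent simulations, and report the position divergence at matching epochs. The initial state and step sizes are illustrative.

    # Comparing orbital dynamics between two propagations of the same state.
    import math

    MU = 3.986004418e14  # Earth GM, m^3/s^2

    def propagate(r, v, dt, steps):
        """Semi-implicit Euler two-body propagation; returns positions."""
        out = []
        for _ in range(steps):
            d = math.sqrt(r[0]**2 + r[1]**2)
            a = [-MU * r[i] / d**3 for i in range(2)]
            v = [v[i] + a[i] * dt for i in range(2)]
            r = [r[i] + v[i] * dt for i in range(2)]
            out.append(tuple(r))
        return out

    r0, v0 = (7.0e6, 0.0), (0.0, 7546.0)           # circular-ish LEO state
    sim_a = propagate(r0, v0, dt=1.0, steps=6000)   # "simulation A"
    sim_b = propagate(r0, v0, dt=10.0, steps=600)   # "simulation B", coarser
    # Compare positions at matching epochs (every 10 s here).
    errs = [math.dist(sim_a[10 * k + 9], sim_b[k]) for k in range(600)]
    print("max position difference over 6000 s: %.1f m" % max(errs))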
Hyperswitch Communication Network Computer
NASA Technical Reports Server (NTRS)
Peterson, John C.; Chow, Edward T.; Priel, Moshe; Upchurch, Edwin T.
1993-01-01
The Hyperswitch Communications Network (HCN) computer is a prototype multiple-processor computer being developed. It incorporates an improved version of the hyperswitch communication network described in "Hyperswitch Network For Hypercube Computer" (NPO-16905) and is designed to support high-level software and its own expansion. The HCN computer is a message-passing, multiple-instruction/multiple-data computer offering significant advantages over older single-processor and bus-based multiple-processor computers with respect to price/performance ratio, reliability, availability, and manufacturing. The design of the HCN operating-system software provides a flexible computing environment accommodating both parallel and distributed processing. It also achieves a balance among the following competing factors: performance in processing and communications, ease of use, and tolerance of (and recovery from) faults.
A mobile robots experimental environment with event-based wireless communication.
Guinaldo, María; Fábregas, Ernesto; Farias, Gonzalo; Dormido-Canto, Sebastián; Chaos, Dictino; Sánchez, José; Dormido, Sebastián
2013-07-22
An experimental platform has been developed to communicate between a set of mobile robots through a wireless network. The mobile robots get their positions through a camera which serves as the sensor. The video images are processed in a PC, and a Waspmote card sends the corresponding position to each robot using the ZigBee standard. A distributed control algorithm based on event-triggered communications has been designed and implemented to bring the robots into the desired formation. Each robot communicates with its neighbors only at event times. Furthermore, a simulation tool has been developed to design and perform experiments with the system. An example of usage is presented.
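The essence of event-triggered communication is that a robot rebroadcasts its state only when it has drifted beyond a threshold from its last broadcast. The following simplified 1-D sketch (not the authors' exact algorithm; threshold, gain, and offsets are assumed) shows the event condition and a consensus-style formation update driven by last-known neighbor positions.

    # Event-triggered formation control, simplified to 1-D positions.
    DELTA = 0.05   # event threshold (assumed units)
    GAIN = 0.1

    class Robot:
        def __init__(self, x):
            self.x = x                 # true position
            self.last_broadcast = x    # what neighbors believe

        def maybe_broadcast(self):
            """Event condition: broadcast only when error exceeds DELTA."""
            if abs(self.x - self.last_broadcast) > DELTA:
                self.last_broadcast = self.x
                return True
            return False

    def step(robots, offsets):
        events = sum(r.maybe_broadcast() for r in robots)
        beliefs = [r.last_broadcast for r in robots]
        for i, r in enumerate(robots):
            # Steer toward neighbors' believed average plus a formation offset.
            others = [b for j, b in enumerate(beliefs) if j != i]
            target = sum(others) / len(others) + offsets[i]
            r.x += GAIN * (target - r.x)
        return events

    robots = [Robot(x) for x in (0.0, 1.0, 3.0)]
    offsets = (-1.0, 0.0, 1.0)   # desired spacing about the local average
    total = sum(step(robots, offsets) for _ in range(200))
    print("events:", total, "positions:", [round(r.x, 2) for r in robots])

The payoff is visible in the event count: far fewer messages are sent than the 200 periodic broadcasts per robot a time-triggered scheme would use.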
MTL distributed magnet measurement system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nogiec, J.M.; Craker, P.A.; Garbarini, J.P.
1993-04-01
The Magnet Test Laboratory (MTL) at the Superconducting Super Collider Laboratory will be required to precisely and reliably measure the properties of magnets in a production environment. The extensive testing of the superconducting magnets comprises several types of measurements whose main purpose is to evaluate basic parameters characterizing the magnetic, mechanical and cryogenic properties of magnets. The measurement process will produce a significant amount of data which will be subjected to complex analysis. Such massive measurements require a careful design of both the hardware and software of the computer systems, with a reliable, maximally automated system in mind. In order to fulfill this requirement a dedicated Distributed Magnet Measurement System (DMMS) is being developed.
Parallel/distributed direct method for solving linear systems
NASA Technical Reports Server (NTRS)
Lin, Avi
1990-01-01
A new family of parallel schemes for directly solving linear systems is presented and analyzed. It is shown that these schemes exhibit near-optimal performance and enjoy several important features: (1) For large enough linear systems, the design of the appropriate parallel algorithm is insensitive to the number of processors, as its performance grows monotonically with them; (2) It is especially good for large matrices, with dimensions large relative to the number of processors in the system; (3) It can be used in both distributed parallel computing environments and tightly coupled parallel computing systems; and (4) This set of algorithms can be mapped onto any parallel architecture without any major programming difficulties or algorithmic changes.
Qualitative Description of Electric Power System Future States
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardy, Trevor D.; Corbin, Charles D.
The simulation and evaluation of transactive systems depend to a large extent on the context in which those efforts are performed. Assumptions regarding the composition of the electric power system, the regulatory and policy environment, the distribution of renewable and other distributed energy resources (DERs), technological advances, and consumer engagement all contribute to, and affect, the evaluation of any given transactive system, regardless of its design. It is our position that the assumptions made about the state of the future power grid will determine, to some extent, the systems ultimately deployed, and that the transactive system itself may play an important role in the evolution of the power system.
NASA Technical Reports Server (NTRS)
Babrauckas, Theresa
2000-01-01
The Affordable High Performance Computing (AHPC) project demonstrated that high-performance computing based on a distributed network of computer workstations is a cost-effective alternative to vector supercomputers for running CPU- and memory-intensive design and analysis tools. The AHPC project created an integrated system called a Network Supercomputer. By connecting computer workstations through a network and utilizing the workstations when they are idle, the resulting distributed-workstation environment has the same performance and reliability levels as the Cray C90 vector supercomputer at less than 25 percent of the C90 cost. In fact, the cost comparison between a Cray C90 supercomputer and Sun workstations showed that the number of distributed networked workstations equivalent to a C90 costs approximately 8 percent of the C90.
A Performance Comparison of Tree and Ring Topologies in Distributed System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Min
A distributed system is a collection of computers that are connected via a communication network. Distributed systems have become commonplace due to the wide availability of low-cost, high-performance computers and network devices. However, the management infrastructure often does not scale well when distributed systems get very large. Some of the considerations in building a distributed system are the choice of the network topology and the method used to construct the distributed system so as to optimize the scalability and reliability of the system, lower the cost of linking nodes together, minimize the message delay in transmission, and simplify system resource management. We have developed a new distributed management system that is able to handle dynamic increases in system size, detect and recover from unexpected failures of system services, and manage system resources. The topologies used in the system are the tree-structured network and the ring-structured network. This thesis presents the research background, system components, design, implementation, experimental results and conclusions of our work. The thesis is organized as follows: the research background is presented in Chapter 1. Chapter 2 describes the system components, including the different node types and different connection types used in the system. In Chapter 3, we describe the message types and message formats in the system. We discuss the system design and implementation in Chapter 4. In Chapter 5, we present the test environment and results. Finally, we conclude with a summary and describe our future work in Chapter 6.
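A small experiment in the spirit of the topology comparison above: average hop count for node-to-root messages in a ring versus a balanced binary tree of the same size, used here as a proxy for message delay. The sizes are illustrative and this is not the thesis's measurement harness.

    # Average hops to a designated root in a ring vs. a balanced binary tree.
    def ring_hops(n):
        """Average hops from each node to node 0 along the shorter arc."""
        return sum(min(i, n - i) for i in range(n)) / n

    def tree_hops(n):
        """Average depth of n nodes in an implicit balanced binary tree."""
        depths = []
        for i in range(1, n + 1):      # 1-indexed heap numbering
            d, k = 0, i
            while k > 1:
                k //= 2
                d += 1
            depths.append(d)
        return sum(depths) / n

    for n in (16, 256, 4096):
        print(f"n={n}: ring {ring_hops(n):6.1f} hops, tree {tree_hops(n):4.1f} hops")

The output shows the expected scaling: ring delay grows linearly (about n/4 hops) while tree delay grows logarithmically, which is one reason tree structures scale better for management traffic.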
Survivability design for a hybrid underwater vehicle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Biao; Wu, Chao; Li, Xiang
A novel hybrid underwater robotic vehicle (HROV) capable of working to the full ocean depth has been developed. The battery-powered vehicle operates in two modes: as an untethered autonomous vehicle in autonomous underwater vehicle (AUV) mode, and under remote control connected to the surface vessel by a lightweight fiber-optic tether in remotely operated vehicle (ROV) mode. Considering the hazardous underwater environment at the limiting depth and the hybrid operating modes, survivability has been placed on an equal level with the other design attributes of the HROV since the beginning of the project. This paper reports the survivability design elements for the HROV, including the basic vehicle design of integrated navigation and integrated communication, the emergency recovery strategy, distributed architecture, redundant bus, dual battery package, emergency jettison system and self-repairing control system.
An Intelligent System for Document Retrieval in Distributed Office Environments.
ERIC Educational Resources Information Center
Mukhopadhyay, Uttam; And Others
1986-01-01
MINDS (Multiple Intelligent Node Document Servers) is a distributed system of knowledge-based query engines for efficiently retrieving multimedia documents in an office environment of distributed workstations. By learning document distribution patterns and user interests and preferences during system usage, it customizes document retrievals for…
NASA Astrophysics Data System (ADS)
McKenna, Ann F.; Hynes, Morgan M.; Johnson, Amy M.; Carberry, Adam R.
2016-07-01
Product archaeology as an educational approach asks engineering students to consider and explore the broader societal and global impacts of a product's manufacturing, distribution, use, and disposal on people, economics, and the environment. This study examined the impact of product archaeology in a project-based engineering design course on student attitudes and perceptions about engineering and abilities to extend and refine knowledge about broader contexts. Two design scenarios were created: one related to dental hygiene and one related to vaccination delivery. Design scenarios were used to (1) assess knowledge of broader contexts, and (2) test variability of student responses across different contextual situations. Results from pre- to post-surveying revealed improved student perceptions of knowledge of broader contexts. Significant differences were observed between the two design scenarios. The findings support the assumption that different design scenarios elicit consideration of different contexts and design scenarios can be constructed to target specific contextual considerations.
Continuous cost movement models
NASA Technical Reports Server (NTRS)
Limp, W. Fredrick
1991-01-01
Use of current space imaging systems and airborne platforms has direct application to survey design and site location when used in concert with a comprehensive GIS environment. Local conditions and site physical and chemical properties are key factors in successful applications. Conjoined environmental constraints and site properties are presented for the later prehistoric occupations in the Arkansas and Mississippi River areas. Direct linkages between comprehensive site databases and satellite images can be used to evaluate site distributions for research and management.
NASA Technical Reports Server (NTRS)
Ortega, J. M.
1985-01-01
Synopses are given for NASA-supported work in computer science at the University of Virginia. Areas of research include: error seeding as a testing method; knowledge representation for engineering design; analysis of faults in a multi-version software experiment; implementation of a parallel programming environment; two computer graphics systems for visualization of pressure distributions and convective density particles; task decomposition for multiple robot arms; vectorized incomplete conjugate gradient; and iterative methods for solving linear equations on the Flex/32.
Seaworthy Quantum Key Distribution Design and Validation (SEAKEY)
2014-07-25
...link in a free-space channel through a marine environment (such as loss, noise and turbulence) and (2) parametrically calculating the secret key rate... width. Parametric calculations of the expected secret key rate: as can be seen in Figure 6, the secret key rate of the BB84 protocol in the presence... Figure 9 shows the effect of various detriments on the secret-key rate for laser-decoy BB84. [Figure 9: Effects of detriments on secret-key rate.]
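For context on the parametric key-rate calculation mentioned above, the sketch below evaluates the standard asymptotic BB84 secret-key fraction, r = 1 - 2*h(Q) for a symmetric quantum bit error rate Q with one-way post-processing, scaled by channel transmittance. The pulse rate, channel loss, and QBER values are illustrative assumptions, not figures from the report.

    # Asymptotic BB84 key-rate estimate under assumed channel parameters.
    import math

    def h(p):
        """Binary entropy in bits."""
        if p <= 0.0 or p >= 1.0:
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def bb84_key_rate(pulse_rate_hz, channel_loss_db, qber):
        """Sifting factor (1/2) times transmittance times key fraction."""
        transmittance = 10 ** (-channel_loss_db / 10)
        fraction = max(0.0, 1.0 - 2.0 * h(qber))
        return 0.5 * pulse_rate_hz * transmittance * fraction

    # 1 GHz source through 20 dB of assumed marine-channel loss at 3% QBER.
    print(bb84_key_rate(1e9, channel_loss_db=20, qber=0.03), "bits/s")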
1986-07-01
[Abstract garbled by two-column extraction; recoverable fragments:] The fish temporarily lost their ability to osmoregulate when exposed to suspended sediments; a 1973 study designed to test the effects of suspended sediments on hatching; fish moving through a pool-and-weir fishway indicated moderate activity; ...efficient osmoregulators in either environment (Stanley and Colby 1971); electrolyte balance and osmoregulation in the alewife; distribution of juvenile river herring in fresh water (Miller and Davis 1969).
Security and Efficiency Concerns With Distributed Collaborative Networking Environments
2003-09-01
have the ability to access Web communications services of the WebEx MediaTone Network from a single login. [24] WebEx provides a range of secure...Web. WebEx services enable secure data, voice and video communications through the browser and are supported by the WebEx MediaTone Network, a global...designed to host large-scale, structured events and conferences, featuring a Q&A Manager that allows multiple moderators to handle questions while
NASA Technical Reports Server (NTRS)
Szuszczewicz, Edward P.
1996-01-01
We have carried out a proof-of-concept development and test effort that not only promises the reduction of parasitic effects of surface contamination (therefore increasing the integrity of 'in situ' measurements in the 60-130 km regime), but promises a uniquely expanded measurement set that includes electron densities, plasma conductivities, charged-particle mobilities, and mass discrimination of positive and negative ion distributions throughout the continuum to free-molecular-flow regimes. Three different sensor configurations were designed, built and tested, along with specialized driving voltage, electrometer and channeltron control electronics. The individual systems were tested in a variety of simulated space environments ranging from pressures near the continuum limit of 100 mTorr to the collisionless regime at 10(exp -6) Torr. Swept modes were initially employed to better understand ion optics and ion 'beam' losses to end walls and to control electrodes. This swept mode also helped better understand and mitigate the influences of secondary electrons on the overall performance of the PIMS design concept. Final results demonstrated the utility of the concept in dominant single-ion plasma environments. Accumulated information, including theoretical concepts and laboratory data, suggest that multi-ion diagnostics are fully within the instrument capabilities and that cold plasma tests with minimized pre-aperture sheath acceleration are the key ingredients to multi-ion success.
NASA Astrophysics Data System (ADS)
Duan, Pengfei; Lei, Wenping
2017-11-01
A number of disciplines (mechanics, structures, thermal, and optics) are needed to design and build a space camera. Separate design models are normally constructed with each discipline's CAD/CAE tools. Design and analysis are conducted largely in parallel, subject to the requirements that have been levied on each discipline, and technical interaction between the different disciplines is limited and infrequent. As a result, a unified view of the space camera design across discipline boundaries is not directly possible in the approach above, and generating one would require a large, manual, and error-prone process. A collaborative environment built on an abstract model and performance templates allows engineering data and CAD/CAE results to be shared across these discipline boundaries within a common interface, supporting rapid multivariate design and direct evaluation of optical performance under environmental loadings. A small interdisciplinary engineering team from the Beijing Institute of Space Mechanics and Electricity has recently conducted a Structural/Thermal/Optical (STOP) analysis of a space camera with this collaborative environment. STOP analysis evaluates the changes in image quality that arise from structural deformations when the thermal environment of the camera changes throughout its orbit. STOP analyses were conducted for four different test conditions applied during final thermal vacuum (TVAC) testing of the payload on the ground. The STOP simulation process begins with importing an integrated CAD model of the camera geometry into the collaborative environment, within which:
1. Independent thermal and structural meshes are generated.
2. The thermal mesh and relevant engineering data for material properties and thermal boundary conditions are used to compute temperature distributions at nodal points in both the thermal and structural meshes through Thermal Desktop, a COTS thermal design and analysis code.
3. Thermally induced structural deformations of the camera are then evaluated in Nastran, an industry-standard code for structural design and analysis.
4. Thermal and structural results are next imported into SigFit, another COTS tool that computes deformation and best-fit rigid-body displacements for the optical surfaces.
5. SigFit creates a modified optical prescription that is imported into CODE V for evaluation of optical performance impacts.
The integrated STOP analysis was validated using TVAC test data. For the four different TVAC tests, the relative errors between simulated and measured temperatures at the measuring points were around 5%, and in some test conditions as low as 1%. As to the image-quality metric MTF, the relative error between simulation and test was 8.3% in the worst condition; the others were all below 5%. This validation showed that the collaborative design and simulation environment can perform the integrated STOP analysis of a space camera efficiently. Furthermore, the collaborative environment allows an interdisciplinary analysis that formerly might take several months to be completed in two or three weeks, which is well suited to scheme demonstration in the early stages of projects.
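The hand-off order of the five STOP steps above can be summarized as a runnable schematic. Every function below is a stub standing in for one of the named COTS tools (Thermal Desktop, Nastran, SigFit, CODE V); none of these are real APIs, and all numbers are placeholders.

    # Schematic STOP pipeline: all functions are illustrative stubs.
    def generate_meshes(cad):              # step 1: independent meshes
        return {"thermal": cad}, {"structural": cad}

    def thermal_solve(mesh, condition):    # step 2: Thermal Desktop's role
        return {"T": 293.0 + condition["delta_T"]}

    def thermal_deformation(mesh, temps):  # step 3: Nastran's role
        return {"dz_um": 0.1 * (temps["T"] - 293.0)}

    def best_fit_optics(deformations):     # step 4: SigFit's role
        return {"defocus_um": deformations["dz_um"]}

    def evaluate_mtf(prescription):        # step 5: CODE V's role
        return max(0.0, 0.5 - 0.01 * abs(prescription["defocus_um"]))

    def run_stop_case(cad, condition):
        t_mesh, s_mesh = generate_meshes(cad)
        temps = thermal_solve(t_mesh, condition)
        deform = thermal_deformation(s_mesh, temps)
        return evaluate_mtf(best_fit_optics(deform))

    for cond in [{"delta_T": d} for d in (5, 10, 20, 40)]:  # four TVAC-like cases
        print(cond, "MTF =", round(run_stop_case({}, cond), 3))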
NASA Technical Reports Server (NTRS)
Koontz, Steven L.; Boeder, Paul A.; Pankop, Courtney; Reddell, Brandon
2005-01-01
The role of structural shielding mass in the design, verification, and in-flight performance of the International Space Station (ISS), in both the natural and induced orbital ionizing radiation (IR) environments, is reported. Detailed consideration of the effects of both the natural and induced ionizing radiation environments during ISS design, development, and flight operations has produced a safe, efficient manned space platform that is largely immune to the deleterious effects of the LEO ionizing radiation environment. The assumption of a small shielding mass for purposes of design and verification has been shown to be a valid worst-case approximation approach to design for reliability, though the predicted dependences of single event effects (SEE) on latitude, longitude, SEP events, and spacecraft structural shielding mass are not observed. The Figure of Merit (FOM) method overpredicts the rate for median shielding masses of about 10 g/cm(exp 2) by only a factor of 3, while the Scott Effective Flux Approach (SEFA) method overestimates by about one order of magnitude, as expected. The Integral Rectangular Parallelepiped (IRPP), SEFA, and FOM methods for estimating on-orbit single event upset (SEU) rates all utilize some version of the CREME-96 treatment of energetic particle interaction with structural shielding, which has been shown to underestimate the production of secondary particles in heavily shielded manned spacecraft. The need for more work directed toward a practical understanding of secondary particle production in massive structural shielding for SEE design and verification is indicated. In contrast, total dose estimates using CAD-based shielding mass distribution functions and the Shieldose code provided a reasonably accurate estimate of the accumulated dose in grays internal to the ISS pressurized elements, albeit as a result of using worst-on-worst-case assumptions (500 km altitude x 2) that compensate for ignoring both GCR and secondary particle production in massive structural shielding.
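The dose-estimation idea above amounts to folding a dose-versus-depth curve with a CAD-derived shielding mass distribution, dose = sum over bins of f(s) * D(s). The sketch below illustrates that fold; both the distribution and the attenuation curve are invented stand-ins, not Shieldose outputs or ISS shielding data.

    # Folding a dose-depth curve with a shielding-mass distribution.
    import math

    # Hypothetical distribution: fraction of solid angle at each areal
    # density s (g/cm^2), normalized to 1.
    shield_bins = [(1, 0.10), (5, 0.25), (10, 0.40), (20, 0.20), (50, 0.05)]

    def dose_vs_depth(s):
        """Assumed exponential attenuation of annual dose with depth (Gy/yr)."""
        return 0.5 * math.exp(-s / 8.0)

    annual_dose = sum(frac * dose_vs_depth(s) for s, frac in shield_bins)
    print(f"estimated annual dose: {annual_dose:.3f} Gy")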
NASA Technical Reports Server (NTRS)
Chow, Edward T.; Woo, Simon S.; James, Mark; Paloulian, George K.
2012-01-01
As communication and networking technologies advance, networks will become highly complex and heterogeneous, interconnecting different network domains. There is a need to provide user authentication and data protection in order to further facilitate critical mission operations, especially in tactical and mission-critical net-centric networking environments. The Autonomous Information Unit (AIU) technology was designed to provide fine-grain data access and user control in a net-centric system-testing environment to meet these objectives. The AIU is a fundamental capability designed to enable fine-grain data access and user control in cross-domain networking environments, where an AIU is composed of the mission data, metadata, and policy. An AIU provides a mechanism to establish trust among deployed AIUs based on recombining shared secrets, and it authenticates and verifies users with a username, X.509 certificate, enclave information, and classification level. The AIU achieves data protection through (1) splitting data into multiple information pieces using Shamir's secret sharing algorithm, (2) encrypting each individual information piece using military-grade AES-256 encryption, and (3) randomizing the position of the encrypted data based on the unbiased and memory-efficient in-place Fisher-Yates shuffle method. Therefore, it becomes virtually impossible for attackers to compromise the data, since attackers would need to obtain all distributed information pieces as well as the encryption key and the random seeds to properly arrange the data. In addition, since policy can be associated with data in the AIU, different user access and data control strategies can be included. The AIU technology can greatly enhance information assurance and security management in bandwidth-limited and ad hoc net-centric environments. In addition, AIU technology is applicable to general complex network domains and applications where distributed user authentication and data protection are necessary. The AIU achieves fine-grain data access and user control, reducing security risk significantly, simplifying the complexity of various security operations, and providing high information assurance across different network domains.
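A toy sketch of the split-and-shuffle steps follows. For brevity it uses a 2-of-2 XOR split instead of Shamir's k-of-n scheme and omits the AES-256 step (the real design uses both); the in-place Fisher-Yates shuffle does match the shuffling step described above, and in practice the seed would be protected rather than shipped alongside the data.

    # Toy split + Fisher-Yates shuffle (AES-256 step omitted for brevity).
    import os
    import random

    def split_xor(data: bytes):
        """2-of-2 secret split: both shares are needed to recover the data."""
        share1 = os.urandom(len(data))
        share2 = bytes(a ^ b for a, b in zip(data, share1))
        return share1, share2

    def fisher_yates(buf: bytearray, seed: int):
        """Unbiased in-place shuffle; the seeded swap list is the unshuffle key."""
        rng = random.Random(seed)
        swaps = []
        for i in range(len(buf) - 1, 0, -1):
            j = rng.randint(0, i)
            buf[i], buf[j] = buf[j], buf[i]
            swaps.append((i, j))
        return swaps

    def unshuffle(buf: bytearray, swaps):
        for i, j in reversed(swaps):
            buf[i], buf[j] = buf[j], buf[i]

    msg = b"mission data"
    s1, s2 = split_xor(msg)
    buf = bytearray(s2)
    swaps = fisher_yates(buf, seed=42)   # positions randomized before distribution
    unshuffle(buf, swaps)
    recovered = bytes(a ^ b for a, b in zip(s1, bytes(buf)))
    assert recovered == msg
    print(recovered)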
NASA Technical Reports Server (NTRS)
Smith, O. E.; Adelfang, S. I.
1998-01-01
The wind profile with all of its variations with respect to altitude has been, is now, and will continue to be important for aerospace vehicle design and operations. Wind profile databases and models are used for vehicle ascent flight design for structural wind loading, flight control systems, performance analysis, and launch operations. This report presents the evolution of wind statistics and wind models, from the empirical scalar wind profile model established for the Saturn Program, through the development of the vector wind profile model used for the Space Shuttle design, to the variations of this wind modeling concept for the X-33 program. Because wind is a vector quantity, the vector wind models use the rigorous mathematical probability properties of the multivariate normal probability distribution. When the vehicle ascent steering commands (ascent guidance) are wind-biased to the wind profile measured on the day of launch, ascent structural wind loads are reduced and launch probability is increased. This wind load alleviation technique is recommended in the initial phase of vehicle development. The vehicle must fly through the largest load allowable versus altitude to achieve its mission. The Gumbel extreme value probability distribution is used to obtain the probability of exceeding (or not exceeding) the load allowable. The time conditional probability function is derived from the Gumbel bivariate extreme value distribution. This time conditional function is used for calculation of wind load persistence increments using 3.5-hour Jimsphere wind pairs. These increments are used to protect the commit-to-launch decision. Other topics presented include the Shuttle load response to smoothed wind profiles, a new gust model, and advancements in wind profile measuring systems. From the lessons learned and knowledge gained from past vehicle programs, the development of future launch vehicles can be accelerated. However, new vehicle programs by their very nature will require specialized support for new databases and analyses for wind, atmospheric parameters (pressure, temperature, and density versus altitude), and weather. It is for this reason that project managers are encouraged to collaborate with natural environment specialists early in the conceptual design phase. Such action will give the lead time necessary to meet the natural environment design and operational requirements, and thus, reduce development costs.
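A hedged sketch of the Gumbel step described above: the probability that the peak wind load does not exceed the load allowable, using the Gumbel CDF F(x) = exp(-exp(-(x - mu)/beta)). The location and scale parameters below are illustrative, not values from the report.

    # Gumbel extreme-value exceedance check for a load allowable.
    import math

    def gumbel_cdf(x, mu, beta):
        """P(peak load <= x) for a Gumbel-distributed extreme."""
        return math.exp(-math.exp(-(x - mu) / beta))

    mu, beta = 100.0, 12.0        # assumed Gumbel fit to peak-load data (kN)
    allowable = 140.0
    p_ok = gumbel_cdf(allowable, mu, beta)
    print(f"P(load <= allowable) = {p_ok:.4f}, exceedance = {1 - p_ok:.4f}")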
LYDIAN: An Extensible Educational Animation Environment for Distributed Algorithms
ERIC Educational Resources Information Center
Koldehofe, Boris; Papatriantafilou, Marina; Tsigas, Philippas
2006-01-01
LYDIAN is an environment to support the teaching and learning of distributed algorithms. It provides a collection of distributed algorithms as well as continuous animations. Users can combine algorithms and animations with arbitrary network structures defining the interconnection and behavior of the distributed algorithm. Further, it facilitates…
Efficient Process Migration for Parallel Processing on Non-Dedicated Networks of Workstations
NASA Technical Reports Server (NTRS)
Chanchio, Kasidit; Sun, Xian-He
1996-01-01
This paper presents the design and preliminary implementation of MpPVM, a software system that supports process migration for PVM application programs in a non-dedicated heterogeneous computing environment. New concepts of the migration point as well as migration point analysis and necessary data analysis are introduced. In MpPVM, process migrations occur only at previously inserted migration points. Migration point analysis determines appropriate locations to insert migration points, whereas necessary data analysis provides a minimum set of variables to be transferred at each migration point. A new methodology to perform reliable point-to-point data communications in a migration environment is also discussed. Finally, a preliminary implementation of MpPVM and its experimental results are presented, showing the correctness and promising performance of our process migration mechanism in a scalable non-dedicated heterogeneous computing environment. While MpPVM is developed on top of PVM, the process migration methodology introduced in this study is general and can be applied to any distributed software environment.
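A conceptual sketch of the migration-point idea (not MpPVM itself): the loop checkpoints only the minimum variable set that necessary data analysis would identify, and a restarted process resumes from the last migration point. The variable names and trigger are hypothetical.

    # Migration points: checkpoint only the minimal live state (i, total).
    import json

    def compute(state=None, migrate_requested=lambda i: False):
        i, total = (state["i"], state["total"]) if state else (0, 0.0)
        while i < 1_000_000:
            total += i * 0.5
            i += 1
            if i % 100_000 == 0 and migrate_requested(i):  # migration point
                return {"i": i, "total": total}            # ship this, then exit
        return {"done": total}

    # "Migrate" once at i == 500000: serialize the minimal state, hand it to
    # a new process (here: the same interpreter), and resume.
    snapshot = compute(migrate_requested=lambda i: i == 500_000)
    wire = json.dumps(snapshot)          # what would cross the network
    result = compute(state=json.loads(wire))
    print(result)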
INTERIM -- Starlink Software Environment
NASA Astrophysics Data System (ADS)
Pearce, Dave; Pavelin, Cliff; Lawden, M. D.
Early versions of this paper were based on a number of other papers produced at a very early stage of the Starlink project. They contained a description of a specific implementation of a subroutine library, speculations on the desirable attributes of a software environment, and future development plans. They reflected the experimental nature of the Starlink software environment at that time. Since then, the situation has changed. The implemented subroutine library, INTERIM_DIR:INTERIM.OLB, is now a well established and widely used piece of software. A completely new Starlink software environment (ADAM) has been developed and distributed. Thus the library released in 1980 as `STARLINK' and now called `INTERIM' has reached the end of its development cycle and is now frozen in its current state, apart from bug corrections. This paper has, therefore, been completely rewritten and restructured to reflect the new situation. Its aim is to describe the facilities of the INTERIM subroutine library as clearly and concisely as possible. It avoids speculation, discussion of design decisions, and announcements of future plans.
Aeroheating Design Issues for Reusable Launch Vehicles: A Perspective
NASA Technical Reports Server (NTRS)
Zoby, E. Vincent; Thompson, Richard A.; Wurster, Kathryn E.
2004-01-01
An overview of basic aeroheating design issues for Reusable Launch Vehicles (RLV), which addresses the application of hypersonic ground-based testing, and computational fluid dynamic (CFD) and engineering codes, is presented. Challenges inherent to the prediction of aeroheating environments required for the successful design of the RLV Thermal Protection System (TPS) are discussed in conjunction with the importance of employing appropriate experimental/computational tools. The impact of the information garnered by using these tools in the resulting analyses, ultimately enhancing the RLV TPS design is illustrated. A wide range of topics is presented in this overview; e.g. the impact of flow physics issues such as boundary-layer transition, including effects of distributed and discrete roughness, shock-shock interactions, and flow separation/reattachment. Also, the benefit of integrating experimental and computational studies to gain an improved understanding of flow phenomena is illustrated. From computational studies, the effect of low-density conditions and of uncertainties in material surface properties on the computed heating rates are highlighted as well as the significant role of CFD in improving the Outer Mold Line (OML) definition to reduce aeroheating while maintaining aerodynamic performance. Appropriate selection of the TPS design trajectories and trajectory shaping to mitigate aeroheating levels and loads are discussed. Lastly, an illustration of an aeroheating design process is presented whereby data from hypersonic wind-tunnel tests are integrated with predictions from CFD codes and engineering methods to provide heating environments along an entry trajectory as required for TPS design.
Framework for Development of Object-Oriented Software
NASA Technical Reports Server (NTRS)
Perez-Poveda, Gus; Ciavarella, Tony; Nieten, Dan
2004-01-01
The Real-Time Control (RTC) Application Framework is a high-level software framework written in C++ that supports the rapid design and implementation of object-oriented application programs. This framework provides built-in functionality that solves common software development problems within distributed client-server, multi-threaded, and embedded programming environments. When using the RTC Framework to develop software for a specific domain, designers and implementers can focus entirely on the details of the domain-specific software rather than on creating custom solutions, utilities, and frameworks for the complexities of the programming environment. The RTC Framework was originally developed as part of a Space Shuttle Launch Processing System (LPS) replacement project called Checkout and Launch Control System (CLCS). As a result of the framework's development, CLCS software development time was reduced by 66 percent. The framework is generic enough for developing applications outside of the launch-processing system domain. Other applicable high-level domains include command and control systems and simulation/training systems.
ETICS: the international software engineering service for the grid
NASA Astrophysics Data System (ADS)
Meglio, A. D.; Bégin, M.-E.; Couvares, P.; Ronchieri, E.; Takacs, E.
2008-07-01
The ETICS system is a distributed software configuration, build and test system designed to fulfil the needs of improving the quality, reliability and interoperability of distributed software in general and grid software in particular. The ETICS project is a consortium of five partners (CERN, INFN, Engineering Ingegneria Informatica, 4D Soft and the University of Wisconsin-Madison). The ETICS service consists of a build and test job execution system based on the Metronome software and an integrated set of web services and software engineering tools to design, maintain and control build and test scenarios. The ETICS system allows taking into account complex dependencies among applications and middleware components and provides a rich environment to perform static and dynamic analysis of the software and execute deployment, system and interoperability tests. This paper gives an overview of the system architecture and functionality set and then describes how the EC-funded EGEE, DILIGENT and OMII-Europe projects are using the software engineering services to build, validate and distribute their software. Finally a number of significant use and test cases will be described to show how ETICS can be used in particular to perform interoperability tests of grid middleware using the grid itself.
New model for distributed multimedia databases and its application to networking of museums
NASA Astrophysics Data System (ADS)
Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki
1998-02-01
This paper proposes a new distributed multimedia database system in which databases storing MPEG-2 videos and/or super-high-definition images are connected together through B-ISDNs, and describes an example of the networking of museums on the basis of the proposed database system. The proposed database system introduces the new concept of the 'retrieval manager', which functions as an intelligent controller so that the user can recognize a set of image databases as one logical database. A user terminal issues a content retrieval request to the retrieval manager located nearest to the user terminal on the network. The retrieved contents are then sent directly through the B-ISDNs to the user terminal from the server which stores the designated contents. In this case, the designated logical database dynamically generates the best combination of retrieval parameters, such as the data transfer path, on the basis of the state of the system. The generated retrieval parameters are then executed to select the most suitable data transfer path on the network, so that the best combination of these parameters fits the distributed multimedia database system.
Collaborative mining and transfer learning for relational data
NASA Astrophysics Data System (ADS)
Levchuk, Georgiy; Eslami, Mohammed
2015-06-01
Many real-world problems, including human knowledge, communication, biological, and cyber network analysis, deal with data entities for which the essential information is contained in the relations among those entities. Such data must be modeled and analyzed as graphs, with attributes on both objects and relations encoding and differentiating their semantics. Traditional data mining algorithms were originally designed for analyzing discrete objects for which a set of features can be defined, and thus cannot be easily adapted to deal with graph data. This gave rise to the relational data mining field of research, of which graph pattern learning is a key sub-domain [11]. In this paper, we describe a model for learning graph patterns in a collaborative, distributed manner. Distributed pattern learning is challenging due to dependencies between the nodes and relations in the graph, and variability across graph instances. We present three algorithms that trade off the benefits of parallelization and data aggregation, compare their performance to centralized graph learning, and discuss the individual benefits and weaknesses of each model. The presented algorithms are designed for linear speedup in distributed computing environments, and learn graph patterns that are both closer to ground truth and provide higher detection rates than the centralized mining algorithm.
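A minimal sketch of one point on the parallelization/aggregation trade-off described above: each worker mines candidate patterns from its own graph partition in parallel, and a coordinator aggregates support counts. The "pattern" notion here is reduced to labeled edge pairs for brevity; this is not one of the paper's three algorithms.

    # Distributed pattern counting: local mining + coordinator aggregation.
    from collections import Counter
    from concurrent.futures import ProcessPoolExecutor

    def mine_partition(edges):
        """Local step: count labeled-edge 'patterns' (src_label, dst_label)."""
        c = Counter()
        for (u, lu), (v, lv) in edges:
            c[(lu, lv)] += 1
        return c

    def mine_distributed(partitions, min_support=2):
        with ProcessPoolExecutor() as pool:
            local_counts = pool.map(mine_partition, partitions)
        total = Counter()
        for c in local_counts:
            total.update(c)          # aggregation step at the coordinator
        return {p: n for p, n in total.items() if n >= min_support}

    if __name__ == "__main__":
        partitions = [
            [(("a", "host"), ("b", "user")), (("c", "host"), ("d", "user"))],
            [(("e", "host"), ("f", "user")), (("g", "user"), ("h", "file"))],
        ]
        print(mine_distributed(partitions))

Aggregating only counts keeps communication low, at the cost of missing patterns that span partition boundaries, which is exactly the kind of dependency the abstract flags as the hard part of distributed graph learning.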
NASA Technical Reports Server (NTRS)
Renaud, John E.; Batill, Stephen M.; Brockman, Jay B.
1999-01-01
This research effort is a joint program between the Departments of Aerospace and Mechanical Engineering and the Computer Science and Engineering Department at the University of Notre Dame. The purpose of the project was to develop a framework and systematic methodology to facilitate the application of Multidisciplinary Design Optimization (MDO) to a diverse class of system design problems. For all practical aerospace systems, the design of a system is a complex sequence of events which integrates the activities of a variety of discipline "experts" and their associated "tools". The development, archiving and exchange of information between these individual experts is central to the design task, and it is this information which provides the basis for these experts to make coordinated design decisions (i.e., compromises and trade-offs), resulting in the final product design. Grant efforts focused on developing and evaluating frameworks for effective design coordination within an MDO environment. Central to these research efforts was the concept that the individual discipline "expert", using the most appropriate "tools" available and the most complete description of the system, should be empowered to have the greatest impact on the design decisions and final design. This means that the overall process must be highly interactive and efficiently conducted if the resulting design is to be developed in a manner consistent with cost and time requirements. The methods developed as part of this research effort include: extensions to a sensitivity-based Concurrent Subspace Optimization (CSSO) MDO algorithm; the development of a neural network response surface based CSSO-MDO algorithm; and the integration of distributed computing and process scheduling into the MDO environment. This report overviews research efforts in each of these focus areas. A complete bibliography of research produced with support of this grant is attached.
A Mobile Sensor Network System for Monitoring of Unfriendly Environments.
Song, Guangming; Zhou, Yaoxin; Ding, Fei; Song, Aiguo
2008-11-14
Observing microclimate changes is one of the most popular applications of wireless sensor networks. However, some target environments are often too dangerous or inaccessible to humans or large robots, and there are many challenges in deploying and maintaining wireless sensor networks in those unfriendly environments. This paper presents a mobile sensor network system for solving this problem. The system architecture, the mobile node design, the basic behaviors and the advanced network capabilities are investigated in turn. A wheel-based robotic node architecture is proposed that can add controlled mobility to wireless sensor networks. A testbed including several prototype nodes has also been created for validating the basic functions of the proposed mobile sensor network system. Motion performance tests have been done to obtain the positioning errors and the power consumption model of the mobile nodes. Results of the autonomous deployment experiment show that the mobile nodes can be distributed evenly into previously unknown environments. The system provides powerful support for network deployment and maintenance and can ensure that the sensor network will work properly in unfriendly environments.
NASA Technical Reports Server (NTRS)
Williams, P.; Sagraniching, E.; Bennett, M.; Singh, R.
1991-01-01
A walking robot was designed, analyzed, and tested as an intelligent, mobile, terrain-adaptive system. The robot's design was an application of existing technologies. The design of the six legs modified and combined well-understood mechanisms and was optimized for performance, flexibility, and simplicity. The body design incorporated two tripods for walking stability and ease of turning. The electrical hardware design used modularity and distributed processing to drive the motors. The software design used feedback to coordinate the system and simple keystrokes to give commands. The walking machine can be easily adapted to hostile environments such as high-radiation zones and alien terrain. The primary goal of the leg design was to create a leg capable of supporting the robot's body and electrical hardware while walking or performing desired tasks, namely those required for planetary exploration. The leg designers' intent was to study the maximum amount of flexibility and maneuverability achievable with the simplest and lightest leg design. The main constraints for the leg design were leg kinematics, ease of assembly, degrees of freedom, number of motors, overall size, and weight.
Collaborative environments for capability-based planning
NASA Astrophysics Data System (ADS)
McQuay, William K.
2005-05-01
Distributed collaboration is an emerging technology for the 21st century that will significantly change how business is conducted in the defense and commercial sectors. Collaboration involves two or more geographically dispersed entities working together to create a "product" by sharing and exchanging data, information, and knowledge. A product is defined broadly to include, for example, writing a report, creating software, designing hardware, or implementing robust systems engineering and capability planning processes in an organization. Collaborative environments provide the framework and integrate models, simulations, domain-specific tools, and virtual test beds to facilitate collaboration between the multiple disciplines needed in the enterprise. The Air Force Research Laboratory (AFRL) is conducting a leading-edge program in developing distributed collaborative technologies targeted at the Air Force's implementation of systems engineering for simulation-aided acquisition and capability-based planning. The research is focusing on the open-systems agent-based framework, product and process modeling, structural architecture, and the integration technologies, the glue to integrate the software components. In the past four years, two live assessment events have been conducted to demonstrate the technology in support of research for the Air Force Agile Acquisition initiatives. The AFRL Collaborative Environment concept will foster a major cultural change in how the acquisition, training, and operational communities conduct business.
Dusty Plasmas on the Lunar Surface
NASA Astrophysics Data System (ADS)
Horanyi, M.; Andersson, L.; Colwell, J.; Ergun, R.; Gruen, E.; McClintock, B.; Peterson, W. K.; Robertson, S.; Sternovsky, Z.; Wang, X.
2006-12-01
The electrostatic levitation and transport of lunar dust remains one of the most interesting and controversial science issues from the Apollo era. This issue is also of great engineering importance in designing human habitats and protecting optical and mechanical devices. As a function of time and location, the lunar surface is exposed to solar wind plasma, UV radiation, and/or the plasma environment of our magnetosphere. Dust grains on the lunar surface collect an electrostatic charge, alter the large-scale surface charge density distribution, and subsequently develop an interface region to the background plasma and radiation. Several in situ and remote sensing observations indicate that dusty plasma processes are likely responsible for the mobilization and transport of lunar soil. These processes are relevant to: a) understanding the lunar surface environment; b) developing dust mitigation strategies; and c) understanding the basic physical processes involved in the birth and collapse of dust-loaded plasma sheaths. This talk will focus on the dusty plasma processes on the lunar surface. We will review the existing body of observations and will also consider future opportunities for the combination of in situ and remote sensing observations. Our goals are to characterize: a) the temporal variation of the spatial and size distributions of the levitated/transported dust; and b) the surface plasma environment.
Tailoring the energy distribution and loss of 2D plasmons
Lin, Xiao; Rivera, Nicholas; Lopez, Josue J.; ...
2016-10-25
Here, the ability to tailor the energy distribution of plasmons at the nanoscale has many applications in nanophotonics, such as designing plasmon lasers, spasers, and quantum emitters. To this end, we analytically study the energy distribution and the proper field quantization of 2D plasmons with specific examples for graphene plasmons. We find that the portion of the plasmon energy contained inside graphene (energy confinement factor) can exceed 50%, despite graphene being infinitely thin. In fact, this very high energy confinement can make it challenging to tailor the energy distribution of graphene plasmons just by modifying the surrounding dielectric environment or the geometry, such as changing the separation distance between two coupled graphene layers. However, by adopting concepts of parity-time symmetry breaking, we show that tuning the loss in one of the two coupled graphene layers can simultaneously tailor the energy confinement factor and propagation characteristics, causing the phenomenon of loss-induced plasmonic transparency.
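For reference, the energy confinement factor the abstract refers to can be stated as the fraction of the mode's total energy residing in the graphene sheet; the expression below is a standard way to write such a factor, with notation assumed here rather than quoted from the paper.

```latex
\eta \;=\; \frac{U_{\mathrm{graphene}}}{U_{\mathrm{graphene}} + U_{\mathrm{dielectric}}},
\qquad
U_{\mathrm{region}} \;=\; \int_{\mathrm{region}} u(\mathbf{r})\,\mathrm{d}^{3}r ,
```

where u(r) is the electromagnetic energy density of the plasmon mode; the paper's finding is that η can exceed 0.5 even though the graphene layer is atomically thin.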
NASA Astrophysics Data System (ADS)
Jones, Scott B.; Or, Dani
1999-04-01
Plants grown in porous media are part of a bioregenerative life support system designed for long-duration space missions. Reduced gravity conditions of orbiting spacecraft (microgravity) alter several aspects of liquid flow and distribution within partially saturated porous media. The objectives of this study were to evaluate the suitability of conventional capillary flow theory in simulating water distribution in porous media measured in a microgravity environment. Data from experiments aboard the Russian space station Mir and a U.S. space shuttle were simulated by elimination of the gravitational term from the Richards equation. Qualitative comparisons with media hydraulic parameters measured on Earth suggest narrower pore size distributions and inactive or nonparticipating large pores in microgravity. Evidence of accentuated hysteresis, altered soil-water characteristic, and reduced unsaturated hydraulic conductivity from microgravity simulations may be attributable to a number of proposed secondary mechanisms. These are likely spawned by enhanced and modified paths of interfacial flows and an altered force ratio of capillary to body forces in microgravity.
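For context, the simulation step the abstract describes amounts to dropping the gravitational (elevation) term from the Richards equation; a common mixed form is shown below, with the term that vanishes in microgravity made explicit (notation assumed, not taken from the paper).

```latex
\frac{\partial \theta}{\partial t}
= \nabla \cdot \left[ K(h)\, \nabla h \right] + \frac{\partial K(h)}{\partial z},
```

where θ is volumetric water content, h is matric head, K(h) is unsaturated hydraulic conductivity, and z is elevation; in microgravity the ∂K/∂z gravitational term is eliminated, leaving purely capillary-driven flow.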
Organizational commitment, work environment conditions, and life satisfaction among Iranian nurses.
Vanaki, Zohreh; Vagharseyyedin, Seyyed Abolfazl
2009-12-01
Employee commitment to the organization is a crucial issue in today's health-care market. In Iran, few studies have sought to evaluate the factors that contribute to forms of commitment. The aim of this study was to investigate the relationship between nurses' organizational commitment, work environment conditions, and life satisfaction. A cross-sectional design was utilized. Questionnaires were distributed to all the staff nurses who had permanent employment (with at least 2 years of experience in nursing) in the five hospitals affiliated to Birjand Medical Sciences University. Two hundred and fifty participants returned completed questionnaires. Most were female and married. The correlation of the total scores of nurses' affective organizational commitment and work environment conditions indicated a significant and positive relationship. Also, a statistically significant relationship was found between affective organizational commitment and life satisfaction. The implementation of a comprehensive program to improve the work conditions and life satisfaction of nurses could enhance their organizational commitment.
Secure dissemination of electronic healthcare records in distributed wireless environments.
Belsis, Petros; Vassis, Dimitris; Skourlas, Christos; Pantziou, Grammati
2008-01-01
A new networking paradigm has emerged with the appearance of wireless computing. Among others, ad-hoc networks and mobile and ubiquitous environments can boost the performance of the systems in which they are applied; medical environments are a convenient example of their applicability. With the utilisation of wireless infrastructures, medical data may be accessible to healthcare practitioners, enabling continuous access to medical data. Due to the critical nature of medical information, the design and implementation of these infrastructures demand special treatment in order to meet specific requirements; in particular, special care should be taken to manage interoperability and security, and to deal with the bandwidth and hardware resource constraints that characterize the wireless topology. In this paper we present an architecture that attempts to deal with these issues; moreover, in order to prove the validity of our approach we have also evaluated the performance of our platform through simulation in different operating scenarios.
Ecogeographic Genetic Epidemiology
Sloan, Chantel D.; Duell, Eric J.; Shi, Xun; Irwin, Rebecca; Andrew, Angeline S.; Williams, Scott M.; Moore, Jason H.
2009-01-01
Complex diseases such as cancer and heart disease result from interactions between an individual's genetics and environment, i.e. their human ecology. Rates of complex diseases have consistently demonstrated geographic patterns of incidence, or spatial “clusters” of increased incidence relative to the general population. Likewise, genetic subpopulations and environmental influences are not evenly distributed across space. Merging appropriate methods from genetic epidemiology, ecology and geography will provide a more complete understanding of the spatial interactions between genetics and environment that result in spatial patterning of disease rates. Geographic Information Systems (GIS), which are tools designed specifically for dealing with geographic data and performing spatial analyses to determine their relationship, are key to this kind of data integration. Here the authors introduce a new interdisciplinary paradigm, ecogeographic genetic epidemiology, which uses GIS and spatial statistical analyses to layer genetic subpopulation and environmental data with disease rates and thereby discern the complex gene-environment interactions which result in spatial patterns of incidence. PMID:19025788
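As a minimal illustration of the layering the authors describe, the sketch below bins case counts and population onto a shared spatial grid and computes a crude incidence rate per cell, the starting point for overlaying genetic-subpopulation or environmental layers on the same grid; the grid size and inputs are assumptions for illustration, not part of the paper.

```python
import numpy as np

def incidence_grid(case_xy, pop_xy, bins=50, extent=None):
    """Crude spatial incidence layer: cases per person per grid cell.

    case_xy, pop_xy: (N, 2) arrays of point coordinates for disease
    cases and for the population at risk. Returns the rate grid plus
    the bin edges so further layers (environment, genetics) can be
    binned onto the same grid. Illustrative sketch only.
    """
    case_xy = np.asarray(case_xy)
    pop_xy = np.asarray(pop_xy)
    if extent is None:
        all_xy = np.vstack([case_xy, pop_xy])
        extent = [[all_xy[:, 0].min(), all_xy[:, 0].max()],
                  [all_xy[:, 1].min(), all_xy[:, 1].max()]]
    cases, xe, ye = np.histogram2d(case_xy[:, 0], case_xy[:, 1],
                                   bins=bins, range=extent)
    pop, _, _ = np.histogram2d(pop_xy[:, 0], pop_xy[:, 1],
                               bins=[xe, ye])
    rate = np.divide(cases, pop, out=np.zeros_like(cases),
                     where=pop > 0)  # avoid division by zero
    return rate, xe, ye
```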
Measuring Small Debris - What You Can't See Can Hurt You
NASA Technical Reports Server (NTRS)
Matney, Mark
2016-01-01
While modeling gives us a tool to better understand the Earth orbit debris environment, it is measurements that give us "ground truth" about what is happening in space. Assets that can detect orbital debris remotely from the surface of the Earth, such as radars and telescopes, give us a statistical view of how debris are distributed in space, how they are being created, and how they are evolving over time. In addition, in situ detectors in space are giving us a better picture of how the small particle environment is actually damaging spacecraft today. Simulation experiments on the ground also help us to understand what we are seeing in orbit. This talk will summarize the history of space debris measurements, how they have changed our view of the Earth orbit environment, and how we are designing the experiments of tomorrow.
VEVI: A Virtual Reality Tool For Robotic Planetary Explorations
NASA Technical Reports Server (NTRS)
Piguet, Laurent; Fong, Terry; Hine, Butler; Hontalas, Phil; Nygren, Erik
1994-01-01
The Virtual Environment Vehicle Interface (VEVI), developed by the NASA Ames Research Center's Intelligent Mechanisms Group, is a modular operator interface for direct teleoperation and supervisory control of robotic vehicles. Virtual environments enable the efficient display and visualization of complex data. This characteristic allows operators to perceive and control complex systems in a natural fashion, utilizing the highly-evolved human sensory system. VEVI utilizes real-time, interactive, 3D graphics and position / orientation sensors to produce a range of interface modalities from the flat panel (windowed or stereoscopic) screen displays to head mounted/head-tracking stereo displays. The interface provides generic video control capability and has been used to control wheeled, legged, air bearing, and underwater vehicles in a variety of different environments. VEVI was designed and implemented to be modular, distributed and easily operated through long-distance communication links, using a communication paradigm called SYNERGY.
The component-based architecture of the HELIOS medical software engineering environment.
Degoulet, P; Jean, F C; Engelmann, U; Meinzer, H P; Baud, R; Sandblad, B; Wigertz, O; Le Meur, R; Jagermann, C
1994-12-01
The constitution of highly integrated health information networks and the growth of multimedia technologies raise new challenges for the development of medical applications. We describe in this paper the general architecture of the HELIOS medical software engineering environment, devoted to the development and maintenance of multimedia distributed medical applications. HELIOS is made of a set of software components federated by a communication channel called the HELIOS Unification Bus. The HELIOS kernel includes three main components: the Analysis-Design Environment, the Object Information System and the Interface Manager. HELIOS services consist of a collection of toolkits providing the necessary facilities to medical application developers. They include Image Related services, a Natural Language Processor, a Decision Support System and Connection services. The project gives special attention to both object-oriented approaches and software re-usability, which are considered crucial steps towards the development of more reliable, coherent and integrated applications.
Cooling of Electric Motors Used for Propulsion on SCEPTOR
NASA Technical Reports Server (NTRS)
Christie, Robert J.; Dubois, Arthur; Derlaga, Joseph M.
2017-01-01
NASA is developing a suite of hybrid-electric propulsion technologies for aircraft. These technologies have the benefit of lower emissions, diminished noise, increased efficiency, and reduced fuel burn. These will provide lower operating costs for aircraft operators. Replacing internal combustion engines with distributed electric propulsion is a keystone of this technology suite, but presents many new problems to aircraft system designers. One of the problems is how to cool these electric motors without adding significant aerodynamic drag, cooling system weight or fan power. This paper discusses the options evaluated for cooling the motors on SCEPTOR (Scalable Convergent Electric Propulsion Technology and Operations Research): a project that will demonstrate Distributed Electric Propulsion technology in flight. Options for external and internal cooling, inlet and exhaust locations, ducting and adjustable cowling, and axial and centrifugal fans were evaluated. The final design was based on a trade between effectiveness, simplicity, robustness, mass and performance over a range of ground and flight operation environments.
Software design and implementation concepts for an interoperable medical communication framework.
Besting, Andreas; Bürger, Sebastian; Kasparick, Martin; Strathen, Benjamin; Portheine, Frank
2018-02-23
The new IEEE 11073 service-oriented device connectivity (SDC) standard proposals for networked point-of-care and surgical devices constitute the basis for improved interoperability due to their independence of vendors. To accelerate the distribution of the standard, a reference implementation is indispensable. However, the implementation of such a framework has to overcome several non-trivial challenges. First, the high level of complexity of the underlying standard must be reflected in the software design. An efficient implementation has to consider the limited resources of the underlying hardware. Moreover, the framework's purpose of realizing a distributed system demands a high degree of reliability of the framework itself and its internal mechanisms. Additionally, a framework must provide an easy-to-use and fail-safe application programming interface (API). In this work, we address these challenges by discussing suitable software engineering principles and practical coding guidelines. A descriptive model is developed that identifies key strategies. General feasibility is shown by outlining environments in which our implementation has been utilized.
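The "easy-to-use and fail-safe API" goal can be illustrated with a generic pattern often used in device-connectivity wrappers: subscriptions are handles that clean up after themselves, and callbacks are isolated so one faulty consumer cannot crash the framework. All names below are hypothetical illustrations and not the actual IEEE 11073 SDC reference API.

```python
import logging
from contextlib import contextmanager

log = logging.getLogger("sdc-sketch")

class MetricSubscription:
    """Hypothetical subscription handle for a device metric stream."""

    def __init__(self, metric_id, callback):
        self.metric_id = metric_id
        self._callback = callback
        self.active = True

    def deliver(self, value):
        if not self.active:
            return
        try:
            self._callback(value)  # isolate consumer errors
        except Exception:
            log.exception("callback failed for %s", self.metric_id)

    def cancel(self):
        self.active = False

@contextmanager
def subscribe(metric_id, callback):
    """Fail-safe entry point: the handle is always released."""
    sub = MetricSubscription(metric_id, callback)
    try:
        yield sub
    finally:
        sub.cancel()

# Usage: the subscription cannot leak, and a bad callback only logs.
with subscribe("heart_rate", lambda v: print("HR:", v)) as sub:
    sub.deliver(72)
```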
Optimized growth and reorientation of anisotropic material based on evolution equations
NASA Astrophysics Data System (ADS)
Jantos, Dustin R.; Junker, Philipp; Hackl, Klaus
2018-07-01
Modern high-performance materials have inherent anisotropic elastic properties. The local material orientation can thus be considered an additional design variable for the topology optimization of structures containing such materials. In our previous work, we introduced a variational growth approach to topology optimization for isotropic, linear-elastic materials. We solved the optimization problem purely by application of Hamilton's principle. In this way, we were able to determine an evolution equation for the spatial distribution of mass density, which can be evaluated in an iterative process within a solitary finite element environment. We now add the local material orientation, described by a set of three Euler angles, as additional design variables in the three-dimensional model. This leads to three additional evolution equations that can be separately evaluated for each (material) point. Thus, no additional field unknown is needed within the finite element approach, and the evolution of the spatial distribution of mass density and the evolution of the Euler angles can be evaluated simultaneously.
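The evolution-equation idea can be sketched as an explicit per-point update in which each design variable relaxes down its sensitivity gradient, with the density and the three Euler angles stepped simultaneously. The step size, bounds, and sensitivity callables below are placeholders, not the paper's derived equations.

```python
import numpy as np

def evolve_design(rho, angles, d_obj_d_rho, d_obj_d_angles,
                  dt=1e-2, rho_min=1e-3):
    """One explicit evolution step for density and orientation.

    rho: (n,) element densities; angles: (n, 3) Euler angles per
    element; d_obj_d_*: callables returning sensitivities of the
    objective. A gradient-flow sketch, not the variational update
    derived in the paper.
    """
    rho = rho - dt * d_obj_d_rho(rho, angles)
    rho = np.clip(rho, rho_min, 1.0)           # keep densities physical
    angles = angles - dt * d_obj_d_angles(rho, angles)
    return rho, angles
```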
Developing Smartphone Apps for Education, Outreach, Science, and Engineering
NASA Astrophysics Data System (ADS)
Weatherwax, A. T.; Fitzsimmons, Z.; Czajkowski, J.; Breimer, E.; Hellman, S. B.; Hunter, S.; Dematteo, J.; Savery, T.; Melsert, K.; Sneeringer, J.
2010-12-01
The increased popularity of mobile phone apps provides scientists with a new avenue for sharing and distributing data and knowledge with colleagues, while also providing meaningful education and outreach products for consumption by the general public. Our initial development of iPhone and Android apps centered on the distribution of exciting auroral images taken at the South Pole for education and outreach purposes. These portable platforms, with limited resources when compared to computers, presented a unique set of design and implementation challenges that we will discuss in this presentation. For example, the design must account for limited memory; screen size; processing power; battery life; and potentially high data transport costs. Some of these unique requirements created an environment that enabled undergraduate and high-school students to participate in the creation of these apps. Additionally, during development it became apparent that these apps could also serve as data analysis and engineering tools. Our presentation will further discuss our plans to use apps not only for Education and Public Outreach, but for teaching, science and engineering.
Hypervelocity Impact Test Facility: A gun for hire
NASA Technical Reports Server (NTRS)
Johnson, Calvin R.; Rose, M. F.; Hill, D. C.; Best, S.; Chaloupka, T.; Crawford, G.; Crumpler, M.; Stephens, B.
1994-01-01
An affordable technique has been developed to duplicate the types of impacts observed on spacecraft, including the Shuttle, by use of a certified Hypervelocity Impact Facility (HIF) which propels particulates using capacitor driven electric gun techniques. The fully operational facility provides a flux of particles in the 10-100 micron diameter range with a velocity distribution covering the space debris and interplanetary dust particle environment. HIF measurements of particle size, composition, impact angle and velocity distribution indicate that such parameters can be controlled in a specified, tailored test designed for or by the user. Unique diagnostics enable researchers to fully describe the impact for evaluating the 'targets' under full power or load. Users regularly evaluate space hardware, including solar cells, coatings, and materials, exposing selected portions of space-qualified items to a wide range of impact events and environmental conditions. Benefits include corroboration of data obtained from impact events, flight simulation of designs, accelerated aging of systems, and development of manufacturing techniques.
Design of a QoS-controlled ATM-based communications system in chorus
NASA Astrophysics Data System (ADS)
Coulson, Geoff; Campbell, Andrew; Robin, Philippe; Blair, Gordon; Papathomas, Michael; Shepherd, Doug
1995-05-01
We describe the design of an application platform able to run distributed real-time and multimedia applications alongside conventional UNIX programs. The platform is embedded in a microkernel/PC environment and supported by an ATM-based, QoS-driven communications stack. In particular, we focus on resource-management aspects of the design and deal with CPU scheduling, network resource-management and memory-management issues. An architecture is presented that guarantees QoS levels of both communications and processing with varying degrees of commitment as specified by user-level QoS parameters. The architecture uses admission tests to determine whether or not new activities can be accepted and includes modules to translate user-level QoS parameters into representations usable by the scheduling, network, and memory-management subsystems.
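The admission tests mentioned can be illustrated with the classic utilization-based schedulability check for rate-monotonic CPU scheduling (the Liu-Layland bound). The platform's actual tests span CPU, network, and memory, so this is a single-resource sketch under assumed task parameters, not the paper's algorithm.

```python
def admit_rate_monotonic(tasks, new_task):
    """Utilization-based admission test for periodic CPU tasks.

    tasks: list of (cost, period) pairs already admitted;
    new_task: candidate (cost, period). Admit only if total
    utilization stays under the Liu-Layland bound n(2^(1/n) - 1).
    """
    candidate = tasks + [new_task]
    n = len(candidate)
    utilization = sum(c / p for c, p in candidate)
    bound = n * (2 ** (1.0 / n) - 1)
    return utilization <= bound

# Example: two admitted tasks, test a third (U = 0.7 <= ~0.78, admit).
print(admit_rate_monotonic([(1, 4), (2, 8)], (1, 5)))
```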
The Particle Physics Data Grid. Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Livny, Miron
2002-08-16
The main objective of the Particle Physics Data Grid (PPDG) project has been to implement and evaluate distributed (Grid-enabled) data access and management technology for current and future particle and nuclear physics experiments. The specific goals of PPDG have been to design, implement, and deploy a Grid-based software infrastructure capable of supporting the data generation, processing and analysis needs common to the physics experiments represented by the participants, and to adapt experiment-specific software to operate in the Grid environment and to exploit this infrastructure. To accomplish these goals, the PPDG focused on the implementation and deployment of several critical services: reliable and efficient file replication service, high-speed data transfer services, multisite file caching and staging service, and reliable and recoverable job management services. The focus of the activity was the job management services and the interplay between these services and distributed data access in a Grid environment. Software was developed to study the interaction between HENP applications and distributed data storage fabric. One key conclusion was the need for a reliable and recoverable tool for managing large collections of interdependent jobs. An attached document provides an overview of the current status of the Directed Acyclic Graph Manager (DAGMan) with its main features and capabilities.
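DAGMan itself consumes plain-text DAG descriptions and submits jobs to HTCondor; the Python sketch below only mimics the core idea the report highlights, running interdependent jobs in dependency order and retrying failures, and is not DAGMan's implementation or API.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

def run_dag(jobs, deps, retries=2):
    """Run interdependent jobs in dependency order.

    jobs: dict name -> zero-arg callable returning True on success;
    deps: dict name -> set of prerequisite names. Retries failures,
    mirroring the 'reliable and recoverable' goal in miniature.
    """
    for name in TopologicalSorter(deps).static_order():
        for attempt in range(retries + 1):
            if jobs[name]():
                break
        else:
            raise RuntimeError(f"job {name} failed after retries")

# Example: B and C depend on A; D depends on B and C.
jobs = {n: (lambda n=n: print(f"run {n}") or True) for n in "ABCD"}
run_dag(jobs, {"A": set(), "B": {"A"}, "C": {"A"}, "D": {"B", "C"}})
```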
Parallel task processing of very large datasets
NASA Astrophysics Data System (ADS)
Romig, Phillip Richardson, III
This research concerns the use of distributed computer technologies for the analysis and management of very large datasets. Improvements in sensor technology, an emphasis on global change research, and greater access to data warehouses all increase the number of non-traditional users of remotely sensed data. We present a framework for distributed solutions to the challenges of datasets which exceed the online storage capacity of individual workstations. This framework, called parallel task processing (PTP), incorporates both the task- and data-level parallelism exemplified by many image processing operations. An implementation based on the principles of PTP, called Tricky, is also presented. Additionally, we describe the challenges and practical issues in modeling the performance of parallel task processing with large datasets. We present a mechanism for estimating the running time of each unit of work within a system and an algorithm that uses these estimates to simulate the execution environment and produce estimated runtimes. Finally, we describe and discuss experimental results which validate the design. Specifically, the system (a) is able to perform computation on datasets which exceed the capacity of any one disk, (b) provides reduction of overall computation time as a result of the task distribution even with the additional cost of data transfer and management, and (c) in the simulation mode accurately predicts the performance of the real execution environment.
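The performance-modeling idea, estimating per-work-unit runtimes and simulating the execution environment to predict overall runtime, can be sketched with a greedy list-scheduling simulation; the cost model, worker count, and transfer overhead are stand-ins, not values from the dissertation.

```python
import heapq

def simulate_makespan(unit_costs, n_workers, transfer_cost=0.0):
    """Predict total runtime of independent work units on n workers.

    unit_costs: estimated seconds per work unit (e.g., derived from
    data volume); transfer_cost: per-unit data-movement overhead
    added to each estimate. Greedy assignment to the earliest-free
    worker yields the simulated makespan.
    """
    workers = [0.0] * n_workers          # heap of worker-free times
    heapq.heapify(workers)
    for cost in sorted(unit_costs, reverse=True):
        free_at = heapq.heappop(workers)
        heapq.heappush(workers, free_at + cost + transfer_cost)
    return max(workers)

# Example: 8 units of varying cost on 3 workers with 0.5 s transfer each.
print(simulate_makespan([4, 3, 3, 2, 2, 1, 1, 1], 3, transfer_cost=0.5))
```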
Vroom: designing an augmented environment for remote collaboration in digital cinema production
NASA Astrophysics Data System (ADS)
Margolis, Todd; Cornish, Tracy
2013-03-01
As media technologies become increasingly affordable, compact and inherently networked, new generations of telecollaborative platforms continue to arise which integrate these new affordances. Virtual reality has been primarily concerned with creating simulations of environments that can transport participants to real or imagined spaces that replace the "real world". Meanwhile Augmented Reality systems have evolved to interleave objects from Virtual Reality environments into the physical landscape. Perhaps now there is a new class of systems that reverses this precept to enhance dynamic media landscapes and immersive physical display environments to enable intuitive data exploration through collaboration. Vroom (Virtual Room) is a next-generation reconfigurable tiled display environment in development at the California Institute for Telecommunications and Information Technology (Calit2) at the University of California, San Diego. Vroom enables freely scalable digital collaboratories, connecting distributed, high-resolution visualization resources for collaborative work in the sciences, engineering and the arts. Vroom transforms a physical space into an immersive media environment with large format interactive display surfaces, video teleconferencing and spatialized audio built on a high-speed optical network backbone. Vroom enables group collaboration for local and remote participants to share knowledge and experiences. Possible applications include: remote learning, command and control, storyboarding, post-production editorial review, high resolution video playback, 3D visualization, screencasting and image, video and multimedia file sharing. To support these various scenarios, Vroom features support for multiple user interfaces (optical tracking, touch UI, gesture interface, etc.), support for directional and spatialized audio, giga-pixel image interactivity, 4K video streaming, 3D visualization and telematic production. This paper explains the design process that has been utilized to make Vroom an accessible and intuitive immersive environment for remote collaboration, specifically for digital cinema production.
Duerr, Adam E.; Miller, Tricia A.; Cornell Duerr, Kerri L; Lanzone, Michael J.; Fesnock, Amy; Katzner, Todd E.
2015-01-01
Anthropogenic development has great potential to affect fragile desert environments. Large-scale development of renewable energy infrastructure is planned for many desert ecosystems. Development plans should account for anthropogenic effects on the distributions and abundance of rare or sensitive wildlife; however, baseline data on the abundance and distribution of such wildlife are often lacking. We surveyed for predatory birds in the Sonoran and Mojave Deserts of southern California, USA, in an area designated for protection under the “Desert Renewable Energy Conservation Plan”, to determine how these birds are distributed across the landscape and how this distribution is affected by existing development. First, we developed species-specific models of resight probability to adjust estimates of abundance and density for each common species. Second, we developed combined-species models of resight probability for common and rare species so that we could make use of sparse data on the latter. We determined that many common species, such as red-tailed hawks, loggerhead shrikes, and especially common ravens, are associated with human development and likely subsidized by human activity. Species-specific and combined-species models of resight probability performed similarly, although the former model type provided higher quality information. Comparing abundance estimates with past surveys in the Mojave Desert suggests that numbers of predatory birds associated with human development have increased while other sensitive species not associated with development have decreased. This approach gave us information beyond what we would have collected by focusing on either common or rare species alone, and it thus provides a low-cost framework for others conducting surveys in similar desert environments outside of California.
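At its core, the resight-probability adjustment amounts to dividing raw counts by an estimated detection probability; a minimal version with a rough delta-method standard error is sketched below, with made-up numbers rather than the study's estimates.

```python
import math

def adjusted_abundance(count, p_detect, n_trials):
    """Detection-adjusted abundance estimate N = C / p.

    count: birds counted; p_detect: estimated resight probability;
    n_trials: resight trials behind the p estimate (for a rough
    binomial standard error). Illustrative, not the paper's model.
    """
    n_hat = count / p_detect
    se_p = math.sqrt(p_detect * (1 - p_detect) / n_trials)
    se_n = count * se_p / p_detect**2   # delta-method approximation
    return n_hat, se_n

print(adjusted_abundance(120, 0.6, 200))  # ~200 birds, plus rough SE
```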
NASA Astrophysics Data System (ADS)
Lynch, Amanda H.; Abramson, David; Görgen, Klaus; Beringer, Jason; Uotila, Petteri
2007-10-01
Fires in the Australian savanna have been hypothesized to affect monsoon evolution, but the hypothesis is controversial and the effects have not been quantified. A distributed computing approach allows the development of a challenging experimental design that permits simultaneous variation of all fire attributes. The climate model simulations are distributed around multiple independent computer clusters in six countries, an approach that has potential for a range of other large simulation applications in the earth sciences. The experiment clarifies that savanna burning can shape the monsoon through two mechanisms. Boundary-layer circulation and large-scale convergence is intensified monotonically through increasing fire intensity and area burned. However, thresholds of fire timing and area are evident in the consequent influence on monsoon rainfall. In the optimal band of late, high intensity fires with a somewhat limited extent, it is possible for the wet season to be significantly enhanced.
NASA Technical Reports Server (NTRS)
Schweikhard, Keith A.; Richards, W. Lance; Theisen, John; Mouyos, William; Garbos, Raymond
2001-01-01
The X-33 reusable launch vehicle demonstrator has identified the need to implement a vehicle health monitoring system that can acquire data that monitors system health and performance. Sanders, a Lockheed Martin Company, has designed and developed a COTS-based open architecture system that implements a number of technologies that have not been previously used in a flight environment. NASA Dryden Flight Research Center and Sanders teamed to demonstrate that the distributed remote health nodes, fiber optic distributed strain sensor, and fiber distributed data interface communications components of the X-33 vehicle health management (VHM) system could be successfully integrated and flown on a NASA F-18 aircraft. This paper briefly describes components of X-33 VHM architecture flown at Dryden and summarizes the integration and flight demonstration of these X-33 VHM components. Finally, it presents early results from the integration and flight efforts.
NASA Technical Reports Server (NTRS)
Schweikhard, Keith A.; Richards, W. Lance; Theisen, John; Mouyos, William; Garbos, Raymond; Schkolnik, Gerald (Technical Monitor)
1998-01-01
The X-33 reusable launch vehicle demonstrator has identified the need to implement a vehicle health monitoring system that can acquire data that monitors system health and performance. Sanders, a Lockheed Martin Company, has designed and developed a commercial off-the-shelf (COTS)-based open architecture system that implements a number of technologies that have not been previously used in a flight environment. NASA Dryden Flight Research Center and Sanders teamed to demonstrate that the distributed remote health nodes, fiber optic distributed strain sensor, and fiber distributed data interface communications components of the X-33 vehicle health management (VHM) system could be successfully integrated and flown on a NASA F-18 aircraft. This paper briefly describes components of X-33 VHM architecture flown at Dryden and summarizes the integration and flight demonstration of these X-33 VHM components. Finally, it presents early results from the integration and flight efforts.
NASA Technical Reports Server (NTRS)
Hornstein, Rhoda S.; Willoughby, John K.; Gardner, Jo A.; Shinkle, Gerald L.
1993-01-01
In 1992, NASA made the decision to evolve a Consolidated Planning System (CPS) by adding the Space Transportation System (STS) requirements to the Space Station Freedom (SSF) planning software. This paper describes this evolutionary process, which began with a series of six-month design-build-test cycles, using a domain-independent architecture and a set of developmental tools known as the Advanced Scheduling Environment. It is shown that, during these tests, the CPS could be used at multiple organizational levels of planning and for integrating schedules from geographically distributed (including international) planning environments. The potential for using the CPS for other planning and scheduling tasks in the SSF program is being currently examined.
Kalvelage, Thomas A.; Willems, Jennifer
2005-01-01
The US Geological Survey's EROS Data Center (EDC) hosts the Land Processes Distributed Active Archive Center (LP DAAC). The LP DAAC supports NASA's Earth Observing System (EOS), which is a series of polar-orbiting and low-inclination satellites for long-term global observations of the land surface, biosphere, solid Earth, atmosphere, and oceans. The EOS Data and Information System (EOSDIS) was designed to acquire, archive, manage and distribute Earth observation data to the broadest possible user community. The LP DAAC is one of four DAACs that utilize the EOSDIS Core System (ECS) to manage and archive their data. Since the ECS was originally designed, significant changes have taken place in technology, user expectations, and user requirements. Therefore the LP DAAC has implemented additional systems to meet the evolving needs of scientific users, tailored to an integrated working environment. These systems provide a wide variety of services to improve data access and to enhance data usability through subsampling, reformatting, and reprojection. These systems also support the wide breadth of products that are handled by the LP DAAC. The LP DAAC is the primary archive for Landsat 7 Enhanced Thematic Mapper Plus (ETM+) data; it is the only facility in the United States that archives, processes, and distributes data from the Advanced Spaceborne Thermal Emission/Reflection Radiometer (ASTER) on NASA's Terra spacecraft; and it is responsible for the archive and distribution of “land products” generated from data acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's Terra and Aqua satellites.
NASA Technical Reports Server (NTRS)
Felder, James L.; Kim, Huyn Dae; Brown, Gerald V.; Chu, Julio
2011-01-01
A Turboelectric Distributed Propulsion (TeDP) system differs from other propulsion systems by the use of electrical power to transmit power from the turbine to the fan. Electrical power can be efficiently transmitted over longer distances and with complex topologies. Also, the use of power inverters allows the generator and motor speeds to be independent of one another. This decoupling allows the aircraft designer to place the core engines and the fans in the locations most advantageous for each. The result can be very different installation environments for the different devices. Thus the installation effects on this system can be quite different from those on conventional turbofans, where the fan and core both see the same installed environment. This paper examines a propulsion system consisting of two superconducting generators, each driven by a turboshaft engine located so that its inlet ingests freestream air; superconducting electrical transmission lines; and an array of superconducting-motor-driven fans positioned across the upper/rear fuselage area of a hybrid wing body aircraft in a continuous nacelle that ingests all of the upper fuselage boundary layer. The effect of ingesting the boundary layer on the design of the system is examined over a range of design pressure ratios, as is the impact of boundary layer ingestion on off-design performance. The results show that when examining different design fan pressure ratios it is important to recalculate the boundary layer mass-averaged Pt and MN over the inlet height for each fan design pressure ratio during convergence of the design point. Correct estimation of off-design performance depends on the height of the column of air, measured from the aircraft surface immediately prior to any external diffusion, that will flow through the fan propulsors. The mass-averaged Pt and MN calculated for this column of air determine the Pt and MN seen by the propulsor inlet. Since the height of this column changes as the airflow through the fans changes when the propulsion system is throttled, and since the mass-averaged Pt and MN vary with height, this capture height must be recalculated as the airflow through the propulsor varies while the off-design performance point is converged.
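The mass-averaging the abstract insists on can be sketched directly: integrate Pt (or MN) weighted by the local mass flux ρu across the captured column height. The boundary-layer profile below is a generic power-law assumption for illustration, not SCEPTOR or hybrid-wing-body data.

```python
import numpy as np

def trapz(f, x):
    """Trapezoidal integral, kept local to avoid NumPy version quirks."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

def mass_averaged(y, rho, u, quantity):
    """Mass-flux-weighted average of Pt or MN over column height y."""
    flux = rho * u
    return trapz(flux * quantity, y) / trapz(flux, y)

# Generic 1/7th-power-law boundary layer as an illustrative profile.
y = np.linspace(1e-4, 1.0, 200)    # height normalized by BL thickness
u = y ** (1.0 / 7.0)               # u / u_edge
rho = np.ones_like(y)              # incompressible stand-in
pt = 1.0 - 0.3 * (1.0 - u**2)      # toy total-pressure deficit profile
print(mass_averaged(y, rho, u, pt))
```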
Combining Agile and Traditional: Customer Communication in Distributed Environment
NASA Astrophysics Data System (ADS)
Korkala, Mikko; Pikkarainen, Minna; Conboy, Kieran
Distributed development is a rapidly growing phenomenon in modern software development environments. At the same time, traditional and agile methodologies, and combinations of the two, are being used in industry. Agile approaches place a large emphasis on customer communication, yet existing knowledge on customer communication in distributed agile development seems to be lacking. In order to shed light on this topic and provide practical guidelines for companies in distributed agile environments, a qualitative case study was conducted in a large, globally distributed software company. The key finding was that it can be difficult for an agile organization to get relevant information from a traditional type of customer organization, even when customer communication is active and conducted via multiple different communication media. Several challenges discussed in this paper refer to an "information blackout", indicating the importance of an environment that fosters meaningful communication. To evaluate whether such an environment can be created, a set of guidelines is proposed.
Israeli nurse practice environment characteristics, retention, and job satisfaction.
Dekeyser Ganz, Freda; Toren, Orly
2014-02-24
There is an international nursing shortage. Improving the practice environment has been shown to be a successful strategy against this phenomenon, as the practice environment is associated with retention and job satisfaction. The Israeli nurse practice environment has not been measured. The purpose of this study was to measure practice environment characteristics, retention and job satisfaction and to evaluate the association between these variables. A demographic questionnaire, the Practice Environment Scale, and a Job Satisfaction Questionnaire were administered to Israeli acute and intensive care nurses working in 7 hospitals across the country. Retention was measured by intent to leave the organization and work experience. A convenience sample of registered nurses was obtained using a bi-phasic, stratified, cluster design. Data were collected according to the preferences of each unit: questionnaires were distributed during various shifts, at staff meetings, or via staff mailboxes. Descriptive statistics were used to describe the sample and results of the questionnaires. Pearson Product Moment Correlations were used to determine significant associations among the variables. A multiple regression model was designed where the criterion variable was the practice environment. Analyses of variance determined differences between groups on nurse practice environment characteristics. A total of 610 nurses reported moderate levels of practice environment characteristics, where the lowest scoring characteristic was 'appropriate staffing and resources'. Approximately 9% of the sample reported their intention to leave and the level of job satisfaction was high. A statistically significant, negative, weak correlation was found between intention to leave and practice environment characteristics, with a moderate correlation between job satisfaction and practice environment characteristics. 'Appropriate staffing and resources' was the only characteristic found to be statistically different based on hospital size and geographic region. This study supports the international nature of the vicious cycle that includes a poor quality practice environment, decreased job satisfaction and low nurse retention. Despite the extreme nursing shortage in Israel, perceptions of the practice environment were similar to those in other countries. Policy makers and hospital managers should address the practice environment, in order to improve job satisfaction and increase retention.
From built environment to health inequalities: An explanatory framework based on evidence
Gelormino, Elena; Melis, Giulia; Marietta, Cristina; Costa, Giuseppe
2015-01-01
Objective: The Health in All Policies strategy aims to engage every policy domain in health promotion. The more socially disadvantaged groups are usually more affected by potential negative impacts of policies if they are not health oriented. The built environment represents an important policy domain and, apart from its housing component, its impact on health inequalities is seldom assessed. Methods: A scoping review of evidence on the built environment and its health equity impact was carried out, searching both urban and medical literature since 2000 analysing socio-economic inequalities in relation to different components of the built environment. Results: The proposed explanatory framework assumes that key features of built environment (identified as density, functional mix and public spaces and services), may influence individual health through their impact on both natural environment and social context, as well as behaviours, and that these effects may be unequally distributed according to the social position of individuals. Conclusion: In general, the expected links proposed by the framework are well documented in the literature; however, evidence of their impact on health inequalities remains uncertain due to confounding factors, heterogeneity in study design, and difficulty to generalize evidence that is still very embedded to local contexts. PMID:26844145
System design in an evolving system-of-systems architecture and concept of operations
NASA Astrophysics Data System (ADS)
Rovekamp, Roger N., Jr.
Proposals for space exploration architectures have increased in complexity and scope. Constituent systems (e.g., rovers, habitats, in-situ resource utilization facilities, transfer vehicles, etc.) must meet the needs of these architectures by performing in multiple operational environments and across multiple phases of the architecture's evolution. This thesis proposes an approach for using system-of-systems engineering principles in conjunction with system design methods (e.g., multi-objective optimization, genetic algorithms, etc.) to create system design options that perform effectively at both the system and system-of-systems levels, across multiple concepts of operations, and over multiple architectural phases. The framework is presented by way of an application problem that investigates the design of power systems within a power-sharing architecture for use in a human Lunar Surface Exploration Campaign. A computer model has been developed that uses candidate power grid distribution solutions for a notional lunar base. The agent-based model utilizes virtual control agents to manage the interactions of various exploration and infrastructure agents. The philosophy behind the model is based both on lunar power supply strategies proposed in the literature and on the author's own approaches for power distribution strategies of future lunar bases. In addition to proposing a framework for system design, further implications of system-of-systems engineering principles are briefly explored, specifically as they relate to producing more robust cross-cultural system-of-systems architecture solutions.
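The pairing of system design methods with system-of-systems evaluation can be illustrated with a bare-bones genetic algorithm whose fitness blends a system-level objective (e.g., a power system's mass) with an SoS-level one (e.g., unserved load across the shared grid). The bitstring encoding, penalty weights, and toy objective below are invented for illustration and are not the thesis's model.

```python
import random

def genetic_search(fitness, n_bits=16, pop=30, gens=50, p_mut=0.05):
    """Minimal GA: a bitstring encodes a candidate power-grid layout."""
    population = [[random.randint(0, 1) for _ in range(n_bits)]
                  for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=fitness)
        parents = scored[:pop // 2]             # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)
            child = a[:cut] + b[cut:]           # one-point crossover
            children.append([bit ^ (random.random() < p_mut)
                             for bit in child])
        population = parents + children
    return min(population, key=fitness)

# Toy blended objective: system mass plus an SoS penalty for unserved load.
def fitness(bits):
    mass = sum(bits)                       # each enabled element adds mass
    unserved = max(0, 6 - sum(bits[:8]))   # first 8 bits serve the grid
    return mass + 10 * unserved            # weights are arbitrary

print(fitness(genetic_search(fitness)))
```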
NASA Astrophysics Data System (ADS)
Park, Soomyung; Joo, Seong-Soon; Yae, Byung-Ho; Lee, Jong-Hyun
2002-07-01
In this paper, we present the Optical Cross-Connect (OXC) management control system architecture, which is scalable, supports robust maintenance, and provides a distributed management environment in the optical transport network. The OXC system we are developing, which is divided into hardware and the internal and external software for the OXC system, is made up of the OXC subsystem, comprising the Optical Transport Network (OTN) sublayer hardware and the optical switch control system; the signaling control protocol subsystem, performing User-to-Network Interface (UNI) and Network-to-Network Interface (NNI) signaling control; the Operation Administration Maintenance & Provisioning (OAM&P) subsystem; and the network management subsystem. The OXC management control system has features that support the flexible expansion of the optical transport network, provide connectivity to heterogeneous external network elements, allow components to be added or deleted without interrupting OAM&P services, permit remote operation, provide a global view and detailed information for network planners and operators, and offer a Common Object Request Broker Architecture (CORBA)-based open system architecture in which intelligent service networking functions can easily be added and deleted in the future. To meet these considerations, we adopted an object-oriented development method throughout the system analysis, design, and implementation steps to build an OXC management control system with scalability, maintainability, and a distributed managing environment. Consequently, the componentification of the OXC operation and management functions of each subsystem makes maintenance robust and increases code reusability.
NASA Technical Reports Server (NTRS)
Johnston, William E.; Gannon, Dennis; Nitzberg, Bill
2000-01-01
We use the term "Grid" to refer to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. This infrastructure includes: (1) Tools for constructing collaborative, application oriented Problem Solving Environments / Frameworks (the primary user interfaces for Grids); (2) Programming environments, tools, and services providing various approaches for building applications that use aggregated computing and storage resources, and federated data sources; (3) Comprehensive and consistent set of location independent tools and services for accessing and managing dynamic collections of widely distributed resources: heterogeneous computing systems, storage systems, real-time data sources and instruments, human collaborators, and communications systems; (4) Operational infrastructure including management tools for distributed systems and distributed resources, user services, accounting and auditing, strong and location independent user authentication and authorization, and overall system security services The vision for NASA's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks. Such Grids will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. Examples of these problems include: (1) Coupled, multidisciplinary simulations too large for single systems (e.g., multi-component NPSS turbomachine simulation); (2) Use of widely distributed, federated data archives (e.g., simultaneous access to metrological, topological, aircraft performance, and flight path scheduling databases supporting a National Air Space Simulation systems}; (3) Coupling large-scale computing and data systems to scientific and engineering instruments (e.g., realtime interaction with experiments through real-time data analysis and interpretation presented to the experimentalist in ways that allow direct interaction with the experiment (instead of just with instrument control); (5) Highly interactive, augmented reality and virtual reality remote collaborations (e.g., Ames / Boeing Remote Help Desk providing field maintenance use of coupled video and NDI to a remote, on-line airframe structures expert who uses this data to index into detailed design databases, and returns 3D internal aircraft geometry to the field); (5) Single computational problems too large for any single system (e.g. the rotocraft reference calculation). Grids also have the potential to provide pools of resources that could be called on in extraordinary / rapid response situations (such as disaster response) because they can provide common interfaces and access mechanisms, standardized management, and uniform user authentication and authorization, for large collections of distributed resources (whether or not they normally function in concert). IPG development and deployment is addressing requirements obtained by analyzing a number of different application areas, in particular from the NASA Aero-Space Technology Enterprise. This analysis has focussed primarily on two types of users: the scientist / design engineer whose primary interest is problem solving (e.g. 
determining wing aerodynamic characteristics in many different operating environments), and whose primary interface to IPG will be through various sorts of problem solving frameworks. The second type of user is the tool designer: the computational scientist who converts physics and mathematics into code that can simulate the physical world. These are the two primary users of IPG, and they have rather different requirements. The analysis of the needs of these two types of users provides a broad set of requirements that gives rise to a general set of required capabilities. The IPG project is intended to address all of these requirements. In some cases the required computing technology exists, and in some cases it must be researched and developed. The project is using available technology to provide a prototype set of capabilities in a persistent distributed computing testbed. Beyond this, there are required capabilities that are not immediately available, and whose development spans the range from near-term engineering development (one to two years) to much longer term R&D (three to six years). Additional information is contained in the original.
1984-06-01
Each stock point is autonomous with respect to how it implements data processing support, as long as it accommodates the Navy Supply Systems Command...has its own data elements, files, programs, transactions, users, reports, and some have additional hardware. To augment them all and not force redesign... programs are written to request session establishments among them using only logical addressing names (mailboxes) which are independent from physical
The Effects of Soldier Gear Encumbrance on Restraints in a Frontal Crash Environment
2015-08-31
... their gear poses a challenge in restraint system design that is not typical in the automotive world.
• The weight of the gear encumbrance may have a...
Distribution Statement A. Approved for public release.
TEST METHODOLOGY
• A modified rigid steel seat similar to the type used for ECE R16 compliance testing... structure were non-deformable.
[Slide graphic labels: Shoulder Restraints, Steel Non-Deformable D-Rings, 5th Point Restraint, 5th Point Exiting Through the Seat]
EHR standards--A comparative study.
Blobel, Bernd; Pharow, Peter
2006-01-01
To ensure the quality and efficiency of patient care, the care paradigm is moving from organization-centered care through process-controlled care toward personal care. This health system paradigm change leads to new paradigms for analyzing, designing, implementing and deploying supporting health information systems, including EHR systems as the core application in a distributed eHealth environment. The paper defines the architectural paradigm for future-proof EHR systems. It compares advanced EHR architectures, referencing them against the Generic Component Model. The paper also introduces the evolving paradigm of autonomous computing for self-organizing health information systems.
Hypercluster Parallel Processor
NASA Technical Reports Server (NTRS)
Blech, Richard A.; Cole, Gary L.; Milner, Edward J.; Quealy, Angela
1992-01-01
Hypercluster computer system includes multiple digital processors, operation of which is coordinated through specialized software. Configurable according to various parallel-computing architectures of shared-memory or distributed-memory class, including scalar computer, vector computer, reduced-instruction-set computer, and complex-instruction-set computer. Designed as flexible, relatively inexpensive system that provides single programming and operating environment within which one can investigate effects of various parallel-computing architectures and combinations on performance in solution of complicated problems like those of three-dimensional flows in turbomachines. Hypercluster software and architectural concepts are in public domain.