Sample records for simulated service environments

  1. Self-adaptive Fault-Tolerance of HLA-Based Simulations in the Grid Environment

    NASA Astrophysics Data System (ADS)

    Huang, Jijie; Chai, Xudong; Zhang, Lin; Li, Bo Hu

    The objects of an HLA-based simulation can access model services to update their attributes. However, the grid server may become overloaded and refuse to let the model service handle object accesses. Because these objects accessed this model service during the last simulation loop and their intermediate states are stored on this server, such a refusal may terminate the simulation, so a fault-tolerance mechanism must be introduced. Traditional fault-tolerance methods cannot meet this need because the transmission latency between a federate and the RTI in a grid environment varies from several hundred milliseconds to several seconds. By adding model service URLs to the OMT and expanding the HLA services and model services with additional interfaces, this paper proposes a self-adaptive fault-tolerance mechanism for simulations based on the characteristics of federates accessing model services. Benchmark experiments indicate that the expanded HLA/RTI allows simulations to run self-adaptively in the grid environment.
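
    As a rough illustration of the failover idea described above (retrying an overloaded model service and falling back to alternate service URLs taken from an expanded OMT-like table), here is a minimal sketch; the class, method, and status names are hypothetical, not the paper's API.

    ```python
    import random
    import time

    OVERLOADED = "overloaded"  # hypothetical status returned by a busy grid server

    class ModelServiceClient:
        """Tries each model service URL in turn, backing off when servers are overloaded."""

        def __init__(self, service_urls, max_retries=3):
            self.service_urls = list(service_urls)  # e.g., read from the expanded OMT
            self.max_retries = max_retries

        def _invoke(self, url, attribute_update):
            # Stand-in for a real grid service call; randomly simulates overload.
            if random.random() < 0.3:
                return OVERLOADED
            return {"url": url, "updated": attribute_update}

        def update_attributes(self, attribute_update):
            for attempt in range(self.max_retries):
                for url in self.service_urls:
                    result = self._invoke(url, attribute_update)
                    if result != OVERLOADED:
                        return result
                time.sleep(2 ** attempt)  # back off before sweeping the URLs again
            raise RuntimeError("all model services refused the access")

    client = ModelServiceClient(["http://grid-a/model", "http://grid-b/model"])
    print(client.update_attributes({"position": (1.0, 2.0)}))
    ```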

  2. Self-organizing network services with evolutionary adaptation.

    PubMed

    Nakano, Tadashi; Suda, Tatsuya

    2005-09-01

    This paper proposes a novel framework for developing adaptive and scalable network services. In the proposed framework, a network service is implemented as a group of autonomous agents that interact in the network environment. Agents in the proposed framework are autonomous and capable of simple behaviors (e.g., replication, migration, and death). An evolutionary adaptation mechanism is designed using genetic algorithms (GAs) for agents to evolve their behaviors and improve their fitness (e.g., response time to a service request) within the environment. The proposed framework is evaluated through simulations, and the results demonstrate the ability of autonomous agents to adapt to the network environment. The proposed framework may be suitable for disseminating network services in dynamic and large-scale networks where large numbers of data items and services need to be replicated, moved, and deleted in a decentralized manner.
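
    To make the evolutionary mechanism concrete, a toy GA sketch follows: genomes encode behavior propensities (replication and migration rates), and fitness is the inverse of a synthetic response time. Everything here, including the fitness landscape, is illustrative rather than the paper's actual algorithm.

    ```python
    import random

    def fitness(genome):
        # Hypothetical landscape: response time is minimized near (0.6, 0.3).
        replication, migration = genome
        response_time = 1.0 + abs(replication - 0.6) + abs(migration - 0.3)
        return 1.0 / response_time

    def evolve(pop_size=20, generations=50, mutation=0.1):
        population = [(random.random(), random.random()) for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            parents = population[: pop_size // 2]       # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                child = tuple(
                    min(1.0, max(0.0, (x + y) / 2 + random.gauss(0, mutation)))
                    for x, y in zip(a, b)
                )                                       # average crossover + Gaussian mutation
                children.append(child)
            population = parents + children
        return max(population, key=fitness)

    print(evolve())  # converges near the hypothetical optimum (0.6, 0.3)
    ```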

  3. Modelling and Simulation as a Service: New Concepts and Service-Oriented Architectures (Modelisation et simulation en tant que service: Nouveaux concepts et architectures orientes service)

    DTIC Science & Technology

    2015-05-01

    ...delivery business model where S&T activities are conducted in a NATO dedicated executive body, having its own personnel, capabilities and infrastructure... Excerpted recommendations include SD-4: Design for Securability; and, on simulation environment infrastructure, IN-1: Harmonize Critical Data and Algorithms; IN-2: Establish Permanent Simulation Infrastructure; IN-3: Establish...

  4. Providing a parallel and distributed capability for JMASS using SPEEDES

    NASA Astrophysics Data System (ADS)

    Valinski, Maria; Driscoll, Jonathan; McGraw, Robert M.; Meyer, Bob

    2002-07-01

    The Joint Modeling And Simulation System (JMASS) is a Tri-Service simulation environment that supports engineering and engagement-level simulations. As JMASS is expanded to support other Tri-Service domains, the current set of modeling services must be expanded for High Performance Computing (HPC) applications by adding support for advanced time-management algorithms, parallel and distributed topologies, and high-speed communications. By providing support for these services, JMASS can better address modeling domains requiring parallel, computationally intense calculations, such as clutter, vulnerability, and lethality calculations, and underwater-based scenarios. A risk reduction effort implementing some HPC services for JMASS using the SPEEDES (Synchronous Parallel Environment for Emulation and Discrete Event Simulation) Simulation Framework has recently concluded. As an artifact of the JMASS-SPEEDES integration, not only can HPC functionality be brought to the JMASS program through SPEEDES, but an additional HLA-based capability can be demonstrated that further addresses interoperability issues. The JMASS-SPEEDES integration provided a means of adding HLA capability to preexisting JMASS scenarios through an implementation of the standard JMASS port communication mechanism that allows players to communicate.

  5. Enhancing Pre-Service Special Educator Preparation through Combined Use of Virtual Simulation and Instructional Coaching

    ERIC Educational Resources Information Center

    Peterson-Ahmad, Maria

    2018-01-01

    To meet the ever-increasing teaching standards, pre-service special educators need extensive and advanced opportunities for pedagogical preparation prior to entering the classroom. Providing opportunities for pre-service special educators to practice such strategies within a virtual simulation environment offers teacher preparation programs a way…

  6. Optimal Living Environments for the Elderly: A Design Simulation Approach.

    ERIC Educational Resources Information Center

    Hoffman, Stephanie B.; And Others

    PLANNED AGE (Planned Alternatives for Gerontological Environments) is a consumer/advocate-oriented design simulation package that provides: (a) a medium for user-planner interaction in the design of living and service environments for the aged; (b) an educational, planning, design, and evaluation tool that can be used by the elderly, their…

  7. Computer modeling with randomized-controlled trial data informs the development of person-centered aged care homes.

    PubMed

    Chenoweth, Lynn; Vickland, Victor; Stein-Parbury, Jane; Jeon, Yun-Hee; Kenny, Patricia; Brodaty, Henry

    2015-10-01

    To answer questions on the essential components (services, operations and resources) of a person-centered aged care home (iHome) using computer simulation. iHome was developed with AnyLogic software using extant study data obtained from 60 Australian aged care homes, 900+ clients and 700+ aged care staff. Bayesian analysis of simulated trial data will determine the influence of different iHome characteristics on care service quality and client outcomes. Interim results: A person-centered aged care home (socio-cultural context) and care/lifestyle services (interactional environment) can produce positive outcomes for aged care clients (subjective experiences) in the simulated environment. Further testing will define essential characteristics of a person-centered care home.

  8. Flexible Simulation E-Learning Environment for Studying Digital Circuits and Possibilities for It Deployment as Semantic Web Service

    ERIC Educational Resources Information Center

    Radoyska, P.; Ivanova, T.; Spasova, N.

    2011-01-01

    In this article we present a partially realized project for building a distributed learning environment for studying digital circuit test and diagnostics at TU-Sofia. We describe the main requirements for this environment, substantiate the choice of developer platform, and present our simulation and circuit parameter calculation tools.…

  9. The Effects of an Energy-Environment Simulator Upon Selected Energy-Related Attitudes of Science Students and In-Service Teachers.

    ERIC Educational Resources Information Center

    Dunlop, David L.

    This document is the outcome of a study designed to investigate the energy-related attitudes of several different groups of science students and science teachers both before and after working with an energy-environment simulator for approximately an hour. During the interaction with the simulator, the participants decided upon the variables they…

  10. Software as a service approach to sensor simulation software deployment

    NASA Astrophysics Data System (ADS)

    Webster, Steven; Miller, Gordon; Mayott, Gregory

    2012-05-01

    Traditionally, military simulation has been problem domain specific. Executing an exercise currently requires multiple simulation software providers to specialize, deploy, and configure their respective implementations, integrate the collection of software to achieve a specific system behavior, and then execute for the purpose at hand. This approach leads to rigid system integrations which require simulation expertise for each deployment due to changes in location, hardware, and software. Our alternative is Software as a Service (SaaS), predicated on the virtualization of Night Vision Electronic Sensors (NVESD) sensor simulations as an exemplary case. Management middleware elements layer self-provisioning, configuration, and integration services onto the virtualized sensors to present a system of services at run time. Given an Infrastructure as a Service (IaaS) environment, such an enabled and managed system of simulations yields durable SaaS delivery without requiring simulation expertise from users. Persistent SaaS simulations would provide on-demand availability to connected users, decrease integration costs and timelines, and let the domain community benefit from immediate deployment of lessons learned.
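
    As a loose illustration of the self-provisioning middleware idea (not the NVESD implementation), the sketch below provisions a virtualized sensor service the first time it is requested and reuses it afterwards; all class and method names are hypothetical.

    ```python
    class SensorService:
        def __init__(self, name, config):
            self.name, self.config = name, config

        def sense(self, scene):
            # Stand-in for a virtualized sensor computing an output for a scene.
            return f"{self.name} imaged {scene} with {self.config}"

    class ServiceRegistry:
        """Self-provisioning: create a sensor service on first request, then reuse it."""

        def __init__(self):
            self._services = {}

        def provision(self, name, **config):
            if name not in self._services:      # provision once, reuse thereafter
                self._services[name] = SensorService(name, config)
            return self._services[name]

    registry = ServiceRegistry()
    ir = registry.provision("nvesd-ir", band="LWIR", fov_deg=10)
    print(ir.sense("test-scene"))
    ```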

  11. Stochastic Simulation Service: Bridging the Gap between the Computational Expert and the Biologist

    PubMed Central

    Banerjee, Debjani; Bellesia, Giovanni; Daigle, Bernie J.; Douglas, Geoffrey; Gu, Mengyuan; Gupta, Anand; Hellander, Stefan; Horuk, Chris; Nath, Dibyendu; Takkar, Aviral; Lötstedt, Per; Petzold, Linda R.

    2016-01-01

    We present StochSS: Stochastic Simulation as a Service, an integrated development environment for modeling and simulation of both deterministic and discrete stochastic biochemical systems in up to three dimensions. An easy to use graphical user interface enables researchers to quickly develop and simulate a biological model on a desktop or laptop, which can then be expanded to incorporate increasing levels of complexity. StochSS features state-of-the-art simulation engines. As the demand for computational power increases, StochSS can seamlessly scale computing resources in the cloud. In addition, StochSS can be deployed as a multi-user software environment where collaborators share computational resources and exchange models via a public model repository. We demonstrate the capabilities and ease of use of StochSS with an example of model development and simulation at increasing levels of complexity. PMID:27930676

  12. Stochastic Simulation Service: Bridging the Gap between the Computational Expert and the Biologist

    DOE PAGES

    Drawert, Brian; Hellander, Andreas; Bales, Ben; ...

    2016-12-08

    We present StochSS: Stochastic Simulation as a Service, an integrated development environment for modeling and simulation of both deterministic and discrete stochastic biochemical systems in up to three dimensions. An easy to use graphical user interface enables researchers to quickly develop and simulate a biological model on a desktop or laptop, which can then be expanded to incorporate increasing levels of complexity. StochSS features state-of-the-art simulation engines. As the demand for computational power increases, StochSS can seamlessly scale computing resources in the cloud. In addition, StochSS can be deployed as a multi-user software environment where collaborators share computational resources and exchange models via a public model repository. We also demonstrate the capabilities and ease of use of StochSS with an example of model development and simulation at increasing levels of complexity.
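
    For flavor, here is a textbook Gillespie direct-method sketch for a single degradation reaction A -> 0, the simplest instance of the kind of discrete stochastic simulation that engines like those in StochSS implement; this toy example is mine, not StochSS code.

    ```python
    import math
    import random

    def gillespie_degradation(a0=100, k=0.1, t_end=50.0):
        """Direct-method SSA for the reaction A -> 0 with rate constant k."""
        t, a = 0.0, a0
        trajectory = [(0.0, a0)]
        while t < t_end and a > 0:
            propensity = k * a                 # each of the a molecules decays at rate k
            u = 1.0 - random.random()          # uniform in (0, 1], avoids log(0)
            t += -math.log(u) / propensity     # exponential waiting time to the next event
            a -= 1                             # fire the only reaction: A -> 0
            trajectory.append((t, a))
        return trajectory

    for time_point, count in gillespie_degradation()[:5]:
        print(f"t={time_point:6.3f}  A={count}")
    ```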

  13. Probabilistic inspection strategies for minimizing service failures

    NASA Technical Reports Server (NTRS)

    Brot, Abraham

    1994-01-01

    The INSIM computer program, which simulates the 'limited fatigue life' environment in which aircraft structures generally operate, is described. The use of INSIM to develop inspection strategies that aim to minimize service failures is demonstrated. Damage-tolerance methodology, inspection thresholds, and customized inspections are simulated using the probability of failure as the driving parameter.
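
    A toy Monte-Carlo sketch of this kind of trade-off follows: it estimates the probability that a fatigue crack grows to failure before any scheduled inspection detects it. The distributions, detection window, and numbers are illustrative only, not INSIM's models.

    ```python
    import random

    def prob_of_failure(inspection_times, detect_prob=0.9, trials=100_000):
        """Fraction of simulated aircraft whose crack is never caught before failure."""
        failures = 0
        for _ in range(trials):
            t_fail = random.lognormvariate(9.2, 0.3)   # flights until fracture (~10,000 median)
            t_detectable = 0.5 * t_fail                # crack is large enough to inspect from here on
            caught = any(
                t_detectable <= t < t_fail and random.random() < detect_prob
                for t in inspection_times
            )
            failures += not caught
        return failures / trials

    # Customizing the inspection schedule changes the probability of failure:
    coarse = prob_of_failure([5_000, 10_000, 15_000])
    fine = prob_of_failure([2_500, 5_000, 7_500, 10_000, 12_500, 15_000])
    print(f"3 inspections: {coarse:.4f}   6 inspections: {fine:.4f}")
    ```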

  14. Web-HLA and Service-Enabled RTI in the Simulation Grid

    NASA Astrophysics Data System (ADS)

    Huang, Jijie; Li, Bo Hu; Chai, Xudong; Zhang, Lin

    HLA-based simulations in a grid environment have become a main research hotspot in the M&S community, but the current HLA has many shortcomings when running in a grid environment. This paper analyzes the analogies between HLA and OGSA from the software architecture point of view and points out that the service-oriented method should be introduced into the three components of HLA to overcome these shortcomings. The paper proposes an expanded running architecture that integrates HLA with OGSA and realizes a service-enabled RTI (SE-RTI). In addition, to handle the bottleneck of efficiently realizing the HLA time management mechanism, the paper proposes a centralized scheme in which the CRC of the SE-RTI takes charge of time management and the dispatching of each federate's TSO events. Benchmark experiments indicate that the running speed of simulations over the Internet or a WAN is appreciably improved.
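
    The centralized scheme can be pictured with a small sketch (my reading of the abstract, not the SE-RTI code): the central component grants each federate the minimum of all pending time-advance requests and releases timestamp-ordered (TSO) events up to that bound.

    ```python
    import heapq

    class CentralTimeManager:
        def __init__(self, federates):
            self.requests = {f: 0.0 for f in federates}   # pending time-advance requests
            self.tso_queue = []                           # heap of (timestamp, event)

        def schedule(self, timestamp, event):
            heapq.heappush(self.tso_queue, (timestamp, event))

        def request_advance(self, federate, t):
            self.requests[federate] = t
            grant = min(self.requests.values())           # conservative lower bound
            released = []
            while self.tso_queue and self.tso_queue[0][0] <= grant:
                released.append(heapq.heappop(self.tso_queue))
            return grant, released

    tm = CentralTimeManager(["f1", "f2"])
    tm.schedule(5.0, "update-x")
    print(tm.request_advance("f1", 10.0))   # grant held at 0.0 by f2, no events yet
    print(tm.request_advance("f2", 8.0))    # both requested: grant 8.0, event released
    ```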

  15. Bringing good teaching cases "to life": a simulator-based medical education service.

    PubMed

    Gordon, James A; Oriol, Nancy E; Cooper, Jeffrey B

    2004-01-01

    Realistic medical simulation has expanded worldwide over the last decade. Such technology is playing an increasing role in medical education not merely because simulator sessions are enjoyable, but because they can provide an enhanced environment for experiential learning and reflective thought. High-fidelity patient simulators allow students of all levels to "practice" medicine without risk, providing a natural framework for the integration of basic and clinical science in a safe environment. Often described as "flight simulation for doctors," the rationale, utility, and range of medical simulations have been described elsewhere, yet the challenges of integrating this technology into the medical school curriculum have received little attention. The authors report how Harvard Medical School established an on-campus simulator program for students in 2001, building on the work of the Center for Medical Simulation in Boston. As an overarching structure for the process, faculty and residents developed a simulator-based "medical education service"-like any other medical teaching service, but designed exclusively to help students learn on the simulator alongside a clinician-mentor, on demand. Initial evaluations among both preclinical and clinical students suggest that simulation is highly accepted and increasingly demanded. For some learners, simulation may allow complex information to be understood and retained more efficiently than can occur with traditional methods. Moreover, the process outlined here suggests that simulation can be integrated into existing curricula of almost any medical school or teaching hospital in an efficient and cost-effective manner.

  16. End-to-end simulation and verification of GNC and robotic systems considering both space segment and ground segment

    NASA Astrophysics Data System (ADS)

    Benninghoff, Heike; Rems, Florian; Risse, Eicke; Brunner, Bernhard; Stelzer, Martin; Krenn, Rainer; Reiner, Matthias; Stangl, Christian; Gnat, Marcin

    2018-01-01

    In the framework of a project called on-orbit servicing end-to-end simulation, the final approach and capture of a tumbling client satellite in an on-orbit servicing mission are simulated. The necessary components are developed, and the entire end-to-end chain is tested and verified. This involves both on-board and on-ground systems. The space segment comprises a passive client satellite and an active service satellite with its rendezvous and berthing payload. The space segment is simulated using a software satellite simulator and two robotic, hardware-in-the-loop test beds, the European Proximity Operations Simulator (EPOS) 2.0 and the OOS-Sim. The ground segment is established as for a real servicing mission, such that realistic operations can be performed from the different consoles in the control room. During the simulation of the telerobotic operation, it is important to provide a realistic communication environment with parameters as they occur in the real world (realistic delay and jitter, for example).

  17. Evaluation of Synthetic Automatic Terminal Information Services (ATIS) Messages

    DOT National Transportation Integrated Search

    1995-04-01

    This report describes an evaluation of the effectiveness of synthetic voice Automatic Terminal Information Service (ATIS) messages in a simulated environment. The evaluation was conducted by ARINC and CTA, Incorporated, for the Federal Aviation ...

  18. Simulation of the dynamic environment for missile component testing: Demonstration

    NASA Technical Reports Server (NTRS)

    Chang, Kurng Y.

    1989-01-01

    The problems in defining a realistic test requirement for missile and space vehicle components can be classified into two categories: (1) definition of the test environment representing the expected service condition, and (2) simulation of the desired environment in the test laboratory. Recently, a new three-dimensional (3-D) test facility was completed at the U.S. Army Harry Diamond Laboratory (HDL) to simulate triaxial vibration input to a test specimen. The vibration test system is designed to support multi-axial vibration tests over the frequency range of 5 to 2000 Hertz. The availability of this 3-D test system motivates the development of new methodologies addressing environmental definition and simulation.

  19. Land Use Management in the Panama Canal Watershed to Maximize Hydrologic Ecosystem Services Benefits: Explicit Simulation of Preferential Flow Paths in an HPC Environment

    NASA Astrophysics Data System (ADS)

    Regina, J. A.; Ogden, F. L.; Steinke, R. C.; Frazier, N.; Cheng, Y.; Zhu, J.

    2017-12-01

    Preferential flow paths (PFP) resulting from biotic and abiotic factors contribute significantly to the generation of runoff in moist lowland tropical watersheds. Flow through PFPs represents the dominant mechanism by which land use choices affect hydrological behavior. The relative influence of PFPs varies depending upon land-use management practices. Assessing the possible effects of land-use and landcover change on flows, and other ecosystem services, in the humid tropics partially depends on adequate simulation of PFPs across different land uses. Currently, 5% of global trade passes through the Panama Canal, which is supplied with fresh water from the Panama Canal Watershed. A third set of locks, recently constructed, is expected to double the capacity of the Canal. We incorporated explicit simulation of PFPs into the ADHydro HPC distributed hydrological model to simulate the effects of land-use and landcover change due to land management incentives on water resources availability in the Panama Canal Watershed. These simulations help to test hypotheses related to the effectiveness of various proposed payments-for-ecosystem-services schemes. This presentation will focus on hydrological model formulation and performance in an HPC environment.

  20. A Java-Enabled Interactive Graphical Gas Turbine Propulsion System Simulator

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Afjeh, Abdollah A.

    1997-01-01

    This paper describes a gas turbine simulation system which utilizes the newly developed Java language environment software system. The system provides an interactive graphical environment which allows the quick and efficient construction and analysis of arbitrary gas turbine propulsion systems. The simulation system couples a graphical user interface, developed using the Java Abstract Window Toolkit, and a transient, space-averaged, aero-thermodynamic gas turbine analysis method, both entirely coded in the Java language. The combined package provides analytical, graphical and data management tools which allow the user to construct and control engine simulations by manipulating graphical objects on the computer display screen. Distributed simulations, including parallel processing and distributed database access across the Internet and World-Wide Web (WWW), are made possible through services provided by the Java environment.

  1. Scheduling multimedia services in cloud computing environment

    NASA Astrophysics Data System (ADS)

    Liu, Yunchang; Li, Chunlin; Luo, Youlong; Shao, Yanling; Zhang, Jing

    2018-02-01

    Currently, security is a critical factor for multimedia services running in the cloud computing environment. As an effective mechanism, trust can improve the security level and mitigate attacks within cloud computing environments. Unfortunately, existing scheduling strategies for multimedia services in the cloud computing environment do not integrate a trust mechanism when making scheduling decisions. In this paper, we propose a scheduling scheme for multimedia services in multiple clouds. First, a novel scheduling architecture is presented. Then, we build a trust model, including both subjective and objective trust, to evaluate the trust degree of multimedia service providers. By employing Bayesian theory, the subjective trust degree between multimedia service providers and users is obtained. According to the attributes of QoS, the objective trust degree of multimedia service providers is calculated. Finally, a scheduling algorithm integrating the trust of entities is proposed by considering the deadline, cost, and trust requirements of multimedia services. The scheduling algorithm heuristically hunts for reasonable resource allocations that satisfy the trust requirements and deadlines of the multimedia services. Detailed simulation experiments demonstrate the effectiveness and feasibility of the proposed trust scheduling scheme.
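
    A hedged sketch of the trust arithmetic follows: subjective trust is modeled with the common Beta-posterior mean over past good/bad interactions (one standard Bayesian reading of the abstract), objective trust as a weighted QoS score, and the two are blended. The weights, attribute names, and blending factor are illustrative, not the authors' exact model.

    ```python
    def subjective_trust(successes, failures):
        # Beta(1, 1) prior; posterior mean after the observed interactions.
        return (successes + 1) / (successes + failures + 2)

    def objective_trust(qos, weights):
        # QoS attribute values normalized to [0, 1]; weights sum to 1.
        return sum(weights[k] * qos[k] for k in weights)

    def combined_trust(successes, failures, qos, weights, alpha=0.5):
        # Blend of subjective and objective trust; alpha is an illustrative knob.
        return (alpha * subjective_trust(successes, failures)
                + (1 - alpha) * objective_trust(qos, weights))

    qos = {"availability": 0.99, "throughput": 0.8, "latency": 0.7}
    w = {"availability": 0.5, "throughput": 0.3, "latency": 0.2}
    print(combined_trust(successes=18, failures=2, qos=qos, weights=w))
    ```

    A scheduler could then rank candidate providers by this score among those meeting the deadline and cost constraints.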

  2. Application of Coalition Battle Management Language (C-BML) and C-BML Services to Live, Virtual, and Constructive (LVC) Simulation Environments

    DTIC Science & Technology

    2011-12-01

    Task Based Approach to Planning.” Paper 08F- SIW -033. In Proceed- ings of the Fall Simulation Interoperability Workshop. Simulation Interoperability...Paper 06F- SIW -003. In Proceed- 2597 Blais ings of the Fall Simulation Interoperability Workshop. Simulation Interoperability Standards Organi...MSDL).” Paper 10S- SIW -003. In Proceedings of the Spring Simulation Interoperability Workshop. Simulation Interoperability Standards Organization

  3. Mobile user environment and satellite diversity for NGSO S-PCN's

    NASA Technical Reports Server (NTRS)

    Werner, Markus; Bischl, Hermann; Lutz, Erich

    1995-01-01

    The performance of satellite diversity under the influence of the mobile user environment is analyzed. To this end, a digital channel model is presented which takes into account the elevation angle as well as the user mobility in a given environment. For different LEO and MEO systems and for varying mobile user environments, some crucial benefits and drawbacks of satellite diversity are discussed. Specifically, the important GW service area concept is introduced. The conclusions are validated by numerical results from computer simulations. Globalstar (LEO) and Inmarsat (MEO) are compared in terms of visibility, service availability and equivalent handover complexity for different environments and user mobility.
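
    Digital channel models of this kind are often two-state (good/bad, i.e., unshadowed/shadowed) Markov chains whose parameters depend on elevation angle and environment; the sketch below uses that generic structure with illustrative transition probabilities, not the paper's fitted parameters.

    ```python
    import random

    def two_state_channel(steps, p_good_to_bad=0.01, p_bad_to_good=0.05):
        """Yield True while the link is unshadowed ('good'), False while blocked."""
        good = True
        for _ in range(steps):
            yield good
            if good and random.random() < p_good_to_bad:
                good = False
            elif not good and random.random() < p_bad_to_good:
                good = True

    states = list(two_state_channel(100_000))
    print("link availability:", sum(states) / len(states))
    # Stationary availability = p_bg / (p_gb + p_bg) = 0.05 / 0.06, about 0.83
    ```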

  4. The Virtual Climate Data Server (vCDS): An iRODS-Based Data Management Software Appliance Supporting Climate Data Services and Virtualization-as-a-Service in the NASA Center for Climate Simulation

    NASA Technical Reports Server (NTRS)

    Schnase, John L.; Tamkin, Glenn S.; Ripley, W. David III; Stong, Savannah; Gill, Roger; Duffy, Daniel Q.

    2012-01-01

    Scientific data services are becoming an important part of the NASA Center for Climate Simulation's mission. Our technological response to this expanding role is built around the concept of a Virtual Climate Data Server (vCDS), repetitive provisioning, image-based deployment and distribution, and virtualization-as-a-service. The vCDS is an iRODS-based data server specialized to the needs of a particular data-centric application. We use RPM scripts to build vCDS images in our local computing environment, our local Virtual Machine Environment, NASA's Nebula Cloud Services, and Amazon's Elastic Compute Cloud. Once provisioned into one or more of these virtualized resource classes, vCDSs can use iRODS federation capabilities to create an integrated ecosystem of managed collections that is scalable and adaptable to changing resource requirements. This approach enables platform- or software-as-a-service deployment of the vCDS and allows the NCCS to offer virtualization-as-a-service: a capacity to respond in an agile way to new customer requests for data services.

  5. Exchange Service Station Gasoline Pumping Operation Simulation.

    DTIC Science & Technology

    1980-06-01

    ...an event-step simulation model of the Naval operations. The model has been developed as a management tool and aid to decision making. The environment in which the system operates is discussed and the significant... of the variables, such as arrival rates, while others are primarily controlled by managerial decision making, for example the number of pumps available

  6. Simulation studies of STOL airplane operations in metropolitan downtown and airport air traffic control environments

    NASA Technical Reports Server (NTRS)

    Sawyer, R. H.; Mclaughlin, M. D.

    1974-01-01

    The operating problems and equipment requirements for STOL airplanes in terminal area operations in simulated air traffic control (ATC) environments were studied. These studies consisted of Instrument Flight Rules (IFR) arrivals and departures in the New York area to and from a downtown STOL port, STOL runways at John F. Kennedy International Airport, or STOL runways at a hypothetical international airport. The studies were accomplished in real time by using a STOL airplane flight simulator. An experimental powered lift STOL airplane and two in-service airplanes having high aerodynamic lift (i.e., STOL) capability were used in the simulations.

  7. Railroads and the Environment : Estimation of Fuel Consumption in Rail Transportation : Volume 3. Comparison of Computer Simulations with Field Measurements

    DOT National Transportation Integrated Search

    1978-09-01

    This report documents comparisons between extensive rail freight service measurements (previously presented in Volume II) and simulations of the same operations using a sophisticated train performance calculator computer program. The comparisons cove...

  8. Internet Tomography in Support of Internet and Network Simulation and Emulation Modelling

    NASA Astrophysics Data System (ADS)

    Moloisane, A.; Ganchev, I.; O'Droma, M.

    This paper addresses Internet performance measurement data extracted through Internet Tomography techniques and metrics, and how such data may be used to enhance the capacity of network simulation and emulation modelling. The advantages of network simulation and emulation as a means to aid the design and development of the component networks, which make up the Internet and are fundamental to its ongoing evolution, are highlighted. The Internet's rapid growth has spurred development of new protocols and algorithms to meet changing operational requirements such as security, multicast delivery, mobile networking, policy management, and quality of service (QoS) support. Both the development and evaluation of these operational tools require the answering of many design and operational questions. Creating the technical support required by network engineers and managers in their efforts to seek answers to these questions is in itself a major challenge. Within the Internet the number and range of services supported continues to grow exponentially, from legacy and client/server applications to VoIP, multimedia streaming services and interactive multimedia services. Services have their own distinctive requirements and idiosyncrasies. They respond differently to bandwidth limitations, latency and jitter problems. They generate different types of “conversations” between end-user terminals, back-end resources and middle-tier servers. To add to the complexity, each new or enhanced service introduced onto the network contends for available bandwidth with every other service. In an effort to ensure that networking products and resources being designed and developed can handle the diverse conditions encountered in real Internet environments, network simulation and emulation modelling is a valuable tool, and is becoming a critical element, in networking product and application design and development. The better these laboratory tools reflect real-world environments and conditions, the more helpful they will be to designers.

  9. Virtual Reality Calibration for Telerobotic Servicing

    NASA Technical Reports Server (NTRS)

    Kim, W.

    1994-01-01

    A virtual reality calibration technique of matching a virtual environment of simulated graphics models in 3-D geometry and perspective with actual camera views of the remote site task environment has been developed to enable high-fidelity preview/predictive displays with calibrated graphics overlay on live video.

  10. Evaluation of power system security and development of transmission pricing method

    NASA Astrophysics Data System (ADS)

    Kim, Hyungchul

    The electric power utility industry is presently undergoing a change towards a deregulated environment. This has resulted in the unbundling of generation, transmission, and distribution services. The introduction of competition into unbundled electricity services may move system operation closer to its security boundaries, resulting in smaller operating safety margins. The competitive environment is expected to lead to lower price rates for customers and higher efficiency for power suppliers in the long run. In this deregulated environment, security assessment and the pricing of transmission services have become important issues in power systems. This dissertation provides new methods for power system security assessment and transmission pricing. In power system security assessment, the following issues are discussed: (1) probabilistic methods for power system security assessment; (2) the computation time of simulation methods; (3) on-line security assessment for operation. A probabilistic method using Monte-Carlo simulation is proposed for power system security assessment. This method takes into account dynamic and static effects corresponding to contingencies. Two different Kohonen networks, Self-Organizing Maps and Learning Vector Quantization, are employed to speed up the probabilistic method. The combination of Kohonen networks and Monte-Carlo simulation can reduce computation time in comparison with straight Monte-Carlo simulation. A technique for security assessment employing a Bayes classifier is also proposed; this method can be useful for system operators making security decisions during on-line power system operation. The dissertation also suggests an approach for allocating transmission transaction costs based on reliability benefits in transmission services. The proposed method shows the transmission transaction cost of reliability benefits when transmission line capacities are considered. The ratio between allocation by transmission line capacity-use and allocation by reliability benefits is computed using the probability of system failure.
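
    The speed-up idea can be caricatured in a few lines: screen Monte-Carlo samples with a cheap trained classifier (standing in for the SOM/LVQ networks) and pay for the full security analysis only on uncertain cases. The load model, thresholds, and classifier below are illustrative, not the dissertation's.

    ```python
    import random

    def expensive_security_check(load):
        return load <= 1.0                       # stand-in for full contingency analysis

    def cheap_classifier(load):
        # Returns "secure", "insecure", or "unsure" from learned thresholds
        # (a stand-in for the trained Kohonen networks).
        if load < 0.9:
            return "secure"
        if load > 1.1:
            return "insecure"
        return "unsure"

    def prob_insecure(trials=100_000):
        insecure = full_runs = 0
        for _ in range(trials):
            load = random.gauss(0.85, 0.15)      # sampled operating point
            label = cheap_classifier(load)
            if label == "unsure":                # only these pay the full cost
                full_runs += 1
                label = "secure" if expensive_security_check(load) else "insecure"
            insecure += label == "insecure"
        return insecure / trials, full_runs / trials

    p, frac = prob_insecure()
    print(f"P(insecure) ~ {p:.3f}; full analysis on {frac:.1%} of samples")
    ```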

  11. Secure Large-Scale Airport Simulations Using Distributed Computational Resources

    NASA Technical Reports Server (NTRS)

    McDermott, William J.; Maluf, David A.; Gawdiak, Yuri; Tran, Peter; Clancy, Dan (Technical Monitor)

    2001-01-01

    To fully conduct research that will support the far-term concepts, technologies, and methods required to improve the safety of air transportation, a simulation environment of the requisite degree of fidelity must first be in place. The Virtual National Airspace Simulation (VNAS) will provide the underlying infrastructure necessary for such a simulation system. Aerospace-specific knowledge management services, such as intelligent data-integration middleware, will support the management of information associated with this complex and critically important operational environment. This simulation environment, in conjunction with a distributed network of supercomputers and high-speed network connections to aircraft and to Federal Aviation Administration (FAA), airline, and other data sources, will provide the capability to continuously monitor and measure operational performance against expected performance. The VNAS will also provide the tools to use this performance baseline to obtain a perspective of what is happening today and of the potential impact of proposed changes before they are introduced into the system.

  12. Environmental effects on long term behavior of composite laminates

    NASA Astrophysics Data System (ADS)

    Singhal, S. N.; Chamis, C. C.

    Model equations are presented for approximate methods simulating the long-term behavior of composite materials and structures in hot/humid service environments. These equations allow laminate properties to be updated with time and can account for the effects of service environments on creep response. The methodologies are illustrated for various individual and coupled temperature/moisture, longitudinal/transverse, and composite material type cases. Creep deformation is noted to rise dramatically for cases of matrix-borne, but not fiber-borne, loading in hot, humid environments; the coupled influence of temperature and moisture is greater than a mere combination of their individual influences.

  13. Environmental effects on long term behavior of composite laminates

    NASA Technical Reports Server (NTRS)

    Singhal, S. N.; Chamis, C. C.

    1992-01-01

    Model equations are presented for approximate methods simulating the long-term behavior of composite materials and structures in hot/humid service environments. These equations allow laminate properties to be updated with time and can account for the effects of service environments on creep response. The methodologies are illustrated for various individual and coupled temperature/moisture, longitudinal/transverse, and composite material type cases. Creep deformation is noted to rise dramatically for cases of matrix-borne, but not fiber-borne, loading in hot, humid environments; the coupled influence of temperature and moisture is greater than a mere combination of their individual influences.

  14. Putting FLEXPART to REST: The Provision of Atmospheric Transport Modeling Services

    NASA Astrophysics Data System (ADS)

    Morton, Don; Arnold, Dèlia

    2015-04-01

    We are developing a RESTful set of modeling services for the FLEXPART modeling system. FLEXPART (FLEXible PARTicle dispersion model) is a Lagrangian transport and dispersion model used by a growing international community. It has been used to simulate and forecast the atmospheric transport of wildfire smoke, volcanic ash and radionuclides, and may be run in backwards mode to provide information for the determination of emission sources such as nuclear emissions and greenhouse gases. This open source software is distributed in source code form and has several compiler and library dependencies that users need to address. Although the software is well-documented, getting it compiled, set up, running, and post-processed is often tedious, making it difficult for the inexperienced or casual user. Well-designed modeling services lower the entry barrier for scientists to perform simulations, allowing them to create and execute their models from a variety of devices and programming environments. This world of Service Oriented Architectures (SOA) has progressed to a REpresentational State Transfer (REST) paradigm, in which the pervasive and mature HTTP environment is used as a foundation for providing access to model services. With such an approach, sound software engineering practices are adhered to in order to deploy service modules exhibiting very loose coupling with the clients. In short, services are accessed and controlled through the formation of properly-constructed Uniform Resource Identifiers (URIs), processed in an HTTP environment. In this way, any client or combination of clients - whether a bash script, Python program, web GUI, or even the Unix command line - that can interact with an HTTP server can run the modeling environment. This loose coupling allows for the deployment of a variety of front ends, all accessing a common modeling backend system. Furthermore, it is generally accepted in the cloud computing community that RESTful approaches constitute a sound approach towards successful deployment of services. Through the design of a RESTful, cloud-based modeling system, we provide ubiquitous access to FLEXPART, allowing scientists to focus on modeling processes instead of tedious computational details. In this work, we describe the modeling services environment and provide examples of access via command-line, Python programs, and web GUI interfaces.
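
    As an illustration of what driving such a service from Python might look like, here is a minimal REST client sketch; the base URL, endpoint paths, and JSON fields are hypothetical placeholders, not the actual service API described by the authors.

    ```python
    import json
    import urllib.request

    BASE = "http://example.org/flexpart/api"      # hypothetical service root

    def submit_run(config):
        """POST a run description; the service would reply with a run identifier."""
        req = urllib.request.Request(
            f"{BASE}/runs",
            data=json.dumps(config).encode(),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["run_id"]      # hypothetical response field

    def run_status(run_id):
        """GET the resource URI for one run to poll its state."""
        with urllib.request.urlopen(f"{BASE}/runs/{run_id}") as resp:
            return json.load(resp)["status"]      # hypothetical response field

    # Usage against a real deployment would look like:
    # run_id = submit_run({"species": "SO2", "start": "2015-04-01T00:00Z"})
    # print(run_status(run_id))
    ```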

  15. Joint Ordnance Test Procedure (JOTP)-010 Safety and Suitability for Service Assessment Testing for Shoulder Launched Munitions

    DTIC Science & Technology

    2016-05-09

    ...electromagnetic environment for which they are designed to be used. These tests are performed on a powered weapon during simulated normal operation and are... JOTP-010B, Safety and Suitability for Service Assessment Testing for Shoulder Launched Munitions, Joint Services Munition Safety Test Working Group... Excerpted contents include 6.8 Test Sample Quantities and 7. Pre- and Post-Test Inspections.

  16. The LatHyS database for planetary plasma environment investigations: Overview and a case study of data/model comparisons

    NASA Astrophysics Data System (ADS)

    Modolo, R.; Hess, S.; Génot, V.; Leclercq, L.; Leblanc, F.; Chaufray, J.-Y.; Weill, P.; Gangloff, M.; Fedorov, A.; Budnik, E.; Bouchemit, M.; Steckiewicz, M.; André, N.; Beigbeder, L.; Popescu, D.; Toniutti, J.-P.; Al-Ubaidi, T.; Khodachenko, M.; Brain, D.; Curry, S.; Jakosky, B.; Holmström, M.

    2018-01-01

    We present the Latmos Hybrid Simulation (LatHyS) database, which is dedicated to investigations of planetary plasma environments. Simulation results for several planetary objects (Mars, Mercury, Ganymede) are available in an online catalogue. The full description of the simulations and their results is compliant with a data model developed in the framework of the FP7 IMPEx project. The catalogue is interfaced with VO-visualization tools such as AMDA, 3DView, TOPCAT, CLweb and the IMPEx portal. Web services provide the means of accessing and extracting simulated quantities/data. We illustrate the interoperability between the simulation database and VO-tools using a detailed science case that focuses on a three-dimensional representation of the solar wind interaction with the Martian upper atmosphere, combining MAVEN and Mars Express observations with simulation results.

  17. Acoustic Analysis and Design of the E-STA MSA Simulator

    NASA Technical Reports Server (NTRS)

    Bittinger, Samantha A.

    2016-01-01

    The Orion European Service Module Structural Test Article (E-STA) Acoustic Test was completed in May 2016 to verify that the European Service Module (ESM) can withstand qualification acoustic environments. The test article required an aft closeout to simulate the Multi-Purpose Crew Vehicle (MPCV) Stage Adapter (MSA) cavity; however, the flight MSA design was cost-prohibitive to build. NASA Glenn Research Center (GRC) had 6 months to design an MSA Simulator that could recreate the qualification-prediction MSA cavity sound pressure level to within a reasonable tolerance. This paper summarizes the design and analysis process used to arrive at a design for the MSA Simulator, and then compares its performance to the final prediction models created prior to test.

  18. Astronaut Training in the Neutral Buoyancy Simulator

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This photograph shows an STS-61 astronaut training for the Hubble Space Telescope (HST) servicing mission in the Marshall Space Flight Center's (MSFC's) Neutral Buoyancy Simulator (NBS). Two months after the telescope's deployment in space, scientists detected a 2-micron spherical aberration in the primary mirror of the HST that affected the telescope's ability to focus faint light sources into a precise point. This imperfection was very slight, one-fiftieth of the width of a human hair. The Space Shuttle servicing mission (STS-61) in 1993 permitted scientists to correct the problem. The MSFC NBS provided an excellent environment for testing hardware to examine how it would operate in space and for evaluating techniques for space construction and spacecraft servicing.

  19. A Cooperative Human-Adaptive Traffic Simulation (CHATS)

    NASA Technical Reports Server (NTRS)

    Phillips, Charles T.; Ballin, Mark G.

    1999-01-01

    NASA is considering the development of a Cooperative Human-Adaptive Traffic Simulation (CHATS) to examine and evaluate performance of the National Airspace System (NAS) as the aviation community moves toward free flight. CHATS will be specifically oriented toward simulating strategic decision-making by airspace users and by the service provider's traffic management personnel, within the context of different airspace and rules assumptions. It will use human teams to represent these interests and make decisions, and will rely on computer modeling and simulation to calculate the impacts of these decisions. The simulation objectives will be to examine: 1. the evolution of airspace users' and the service provider's strategies, through adaptation to new operational environments; 2. air carriers' competitive and cooperative behavior; 3. expected benefits to airspace users and the service provider as compared to the current NAS; 4. operational limitations of free flight concepts due to congestion and safety concerns. This paper describes an operational concept for CHATS and presents a high-level functional design that would utilize a combination of existing and new models and simulation capabilities.

  20. Development of a Web Based Simulating System for Earthquake Modeling on the Grid

    NASA Astrophysics Data System (ADS)

    Seber, D.; Youn, C.; Kaiser, T.

    2007-12-01

    Existing cyberinfrastructure-based information, data, and computational networks now allow development of state-of-the-art, user-friendly simulation environments that democratize access to high-end computational environments and provide new research opportunities for many research and educational communities. Within the Geosciences cyberinfrastructure network, GEON, we have developed the SYNSEIS (SYNthetic SEISmogram) toolkit to enable efficient computation of 2D and 3D seismic waveforms for a variety of research purposes, especially for helping to analyze the EarthScope USArray seismic data in a speedy and efficient environment. The underlying simulation software in SYNSEIS is a finite difference code, E3D, developed by LLNL (S. Larsen). The code is embedded within the SYNSEIS portlet environment and is used by our toolkit to simulate seismic waveforms of earthquakes at regional distances (<1000 km). Architecturally, SYNSEIS uses both Web Service and Grid computing resources in a portal-based work environment and has a built-in access mechanism to connect to national supercomputer centers as well as to a dedicated, small-scale compute cluster for its runs. Even though Grid computing is well established in many computing communities, its use among domain scientists is still not trivial because of the multiple levels of complexity encountered. We grid-enabled E3D using our own XML input dialect; the inputs include geological models that are accessible through standard Web services within the GEON network. The XML inputs for this application contain structural geometries, source parameters, seismic velocity, density, attenuation values, the number of time steps to compute, and the number of stations. By enabling portal-based access to such a computational environment, coupled with a dynamic user interface, we enable a large user community to take advantage of such high-end calculations in their research and educational activities. Our system can be used to promote an efficient and effective modeling environment to help scientists as well as educators in their daily activities and speed up the scientific discovery process.
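
    To illustrate the kind of XML input described above (source parameters, station list, number of time steps), here is a small generator sketch; the element and attribute names are hypothetical, since the actual SYNSEIS dialect is defined by the GEON project.

    ```python
    import xml.etree.ElementTree as ET

    def build_input(source, stations, time_steps):
        """Assemble a hypothetical E3D-style run description as XML text."""
        root = ET.Element("e3d_run")
        ET.SubElement(root, "source", lat=str(source["lat"]), lon=str(source["lon"]),
                      depth_km=str(source["depth_km"]))
        ET.SubElement(root, "time_steps").text = str(time_steps)
        station_list = ET.SubElement(root, "stations")
        for code, (lat, lon) in stations.items():
            ET.SubElement(station_list, "station", code=code, lat=str(lat), lon=str(lon))
        return ET.tostring(root, encoding="unicode")

    print(build_input({"lat": 40.0, "lon": -112.0, "depth_km": 8.0},
                      {"STA1": (40.5, -111.8), "STA2": (39.7, -112.4)},
                      time_steps=2000))
    ```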

  1. Data-Intensive Scientific Management, Analysis and Visualization

    NASA Astrophysics Data System (ADS)

    Goranova, Mariana; Shishedjiev, Bogdan; Georgieva, Juliana

    2012-11-01

    The proposed integrated system provides a suite of services for data-intensive sciences that enables scientists to describe, manage, analyze and visualize data from experiments and numerical simulations in a distributed and heterogeneous environment. This paper describes the advisor and converter services and presents an example from the monitoring of the slant column content of atmospheric minor gases.

  2. Using Virtual Environments as Professional Development Tools for Pre-Service Teachers Seeking ESOL Endorsement

    ERIC Educational Resources Information Center

    Blankenship, Rebecca J.

    2010-01-01

    The purpose of this study was to investigate the potential use of Second Life (Linden Labs, 2004) and Skype (Skype Limited, 2009) as simulated virtual professional development tools for pre-service teachers seeking endorsement in teaching English as a Second Official Language (ESOL). Second Life is an avatar-based Internet program that allows…

  3. Secure environment for real-time tele-collaboration on virtual simulation of radiation treatment planning.

    PubMed

    Ntasis, Efthymios; Maniatis, Theofanis A; Nikita, Konstantina S

    2003-01-01

    A secure framework is described for real-time tele-collaboration on the Virtual Simulation procedure of Radiation Treatment Planning. An integrated approach is followed, clustering the security issues faced by the system into organizational issues, security issues over the LAN, and security issues over the LAN-to-LAN connection. The design and implementation of the security services are performed according to the identified security requirements, along with the need for real-time communication between the collaborating health care professionals. A detailed description of the implementation is given, presenting a solution that can be directly tailored to other tele-collaboration services in the field of health care. The pilot study of the proposed security components proves the feasibility of the secure environment and its consistency with the high performance demands of the application.

  4. Laminar Flow Control Leading Edge Systems in Simulated Airline Service

    NASA Technical Reports Server (NTRS)

    Wagner, R. D.; Maddalon, D. V.; Fisher, D. F.

    1988-01-01

    Achieving laminar flow on the wings of a commercial transport involves difficult problems associated with the wing leading edge. The NASA Leading Edge Flight Test Program has made major progress toward the solution of these problems. The effectiveness and practicality of candidate laminar flow leading edge systems were proven under representative airline service conditions. This was accomplished in a series of simulated airline service flights by modifying a JetStar aircraft with laminar flow leading edge systems and operating it out of three commercial airports in the United States. The aircraft was operated as an airliner would under actual air traffic conditions, in bad weather, and in insect infested environments.

  5. Pre-service Teachers Learn the Nature of Science in Simulated Worlds

    NASA Astrophysics Data System (ADS)

    Marshall, Jill

    2007-10-01

    Although the Texas Essential Knowledge and Skills include an understanding of the nature of science as an essential goal of every high school science course, few students report opportunities to explore essential characteristics of science in their previous classes. A simulated-world environment (Erickson, 2005) allows students to function as working scientists and discover these essential elements for themselves (i.e. that science is evidence-based and involves testable conjectures, that theories have limitations and are constantly being modified based on new discoveries to more closely reflect the natural world.) I will report on pre-service teachers' exploration of two simulated worlds and resulting changes in their descriptions of the nature of science. Erickson (2005). Simulating the Nature of Science. Presentation at the 2005 Summer AAPT Meeting, Salt Lake City, UT.

  6. Analysis of Macro-micro Simulation Models for Service-Oriented Public Platform: Coordination of Networked Services and Measurement of Public Values

    NASA Astrophysics Data System (ADS)

    Kinoshita, Yumiko

    As service sectors are a major driver of growth in the world economy, we are challenged to implement service-oriented infrastructure as an e-Gov platform to achieve further growth and innovation in both developed and developing countries. Recent trends in the service industry indicate that the main factors in the growth of service sectors are investment in knowledge, trade, and the enhanced capacity of micro, small, and medium-sized enterprises (MSMEs). In addition, the design and deployment of a public service platform require an appropriate evaluation methodology. Reflecting these observations, this paper proposes a macro-micro simulation approach to assess public values (PV), focusing on MSMEs. Linkage aggregate variables (LAVs) are defined to show the connection between the macro and micro impacts of public services. As a result, the relationships among demography, business environment, macro economy, and socio-economic impact are clarified, and their values are quantified from the behavioral perspectives of citizens and firms.

  7. Secure Encapsulation and Publication of Biological Services in the Cloud Computing Environment

    PubMed Central

    Zhang, Weizhe; Wang, Xuehui; Lu, Bo; Kim, Tai-hoon

    2013-01-01

    Secure encapsulation and publication of bioinformatics software products based on web services are presented, and the basic functions of biological information processing are realized in the cloud computing environment. In the encapsulation phase, the workflow and function of the bioinformatics software are conducted, the encapsulation interfaces are designed, and the runtime interaction between users and computers is simulated. In the publication phase, the execution and management mechanisms and principles of the GRAM components are analyzed. Functions such as remote user job submission and job status query are implemented by using the GRAM components. The services of the bioinformatics software are published to remote users. Finally, a basic prototype system of the biological cloud is achieved. PMID:24078906

  8. Secure encapsulation and publication of biological services in the cloud computing environment.

    PubMed

    Zhang, Weizhe; Wang, Xuehui; Lu, Bo; Kim, Tai-hoon

    2013-01-01

    Secure encapsulation and publication of bioinformatics software products based on web services are presented, and the basic functions of biological information processing are realized in the cloud computing environment. In the encapsulation phase, the workflow and function of the bioinformatics software are conducted, the encapsulation interfaces are designed, and the runtime interaction between users and computers is simulated. In the publication phase, the execution and management mechanisms and principles of the GRAM components are analyzed. Functions such as remote user job submission and job status query are implemented by using the GRAM components. The services of the bioinformatics software are published to remote users. Finally, a basic prototype system of the biological cloud is achieved.

  9. Thermal and fluid simulation of the environment under the dashboard, compared with measurement data

    NASA Astrophysics Data System (ADS)

    Popescu, C. S.; Sirbu, G. M.; Nita, I. C.

    2017-10-01

    The development of vehicles during the last decade is related to the evolution of electronic systems added to increase safety and the number of services available on board, such as advanced driver-assistance systems (ADAS). Cars already have a complex computer network, with electronic control units (ECUs) connected to each other and receiving information from many sensors. The ECUs transfer significant heat to the environment, while proper operating conditions need to be provided to ensure their reliability under high and low temperature, vibration, and humidity. In a car cabin, electronic devices are usually placed in the compartment under the dashboard, an enclosed space designed for functional purposes. In the early stages of vehicle design it has become necessary to analyse the environment under the dashboard through Computational Fluid Dynamics (CFD) simulations and measurements. This paper presents the cooling of heat sinks by natural convection, a thermal and fluid simulation of the environment under the dashboard compared with test data.

  10. Development of a High-Fidelity Simulation Environment for Shadow-Mode Assessments of Air Traffic Concepts

    NASA Technical Reports Server (NTRS)

    Lee, Alan G.; Robinson, John E.; Lai, Chok Fung

    2017-01-01

    This paper describes the purpose, architecture, and implementation of a gate-to-gate, high-fidelity air traffic simulation environment called the Shadow Mode Assessment using Realistic Technologies for the National Airspace System (SMART-NAS) Test Bed. The overarching purpose of the SMART-NAS Test Bed (SNTB) is to conduct high-fidelity, real-time, human-in-the-loop and automation-in-the-loop simulations of current and proposed future air traffic concepts for the Next Generation Air Transportation System of the United States, called NextGen. SNTB is intended to enable simulations that are currently impractical or impossible for three major areas of NextGen research and development: concepts across multiple operational domains, such as the gate-to-gate trajectory-based operations concept; concepts related to revolutionary operations, such as the seamless and widespread integration of large and small Unmanned Aerial System (UAS) vehicles throughout U.S. airspace; and real-time system-wide safety assurance technologies to allow safe, increasingly autonomous aviation operations. SNTB is primarily accessed through a web browser. A set of secure support services is provided to simplify all aspects of real-time, human-in-the-loop and automation-in-the-loop simulations, from design (i.e., prior to execution) through analysis (i.e., after execution). These services include simulation architecture and asset configuration; scenario generation; command, control, and monitoring; and analysis support.

  11. Teaching Workflow Analysis and Lean Thinking via Simulation: A Formative Evaluation

    PubMed Central

    Campbell, Robert James; Gantt, Laura; Congdon, Tamara

    2009-01-01

    This article presents the rationale for the design and development of a video simulation used to teach lean thinking and workflow analysis to health services and health information management students enrolled in a course on the management of health information. The discussion includes a description of the design process, a brief history of the use of simulation in healthcare, and an explanation of how video simulation can be used to generate experiential learning environments. Based on the results of a survey given to 75 students as part of a formative evaluation, the video simulation was judged effective because it allowed students to visualize a real-world process (concrete experience), contemplate the scenes depicted in the video along with the concepts presented in class in a risk-free environment (reflection), develop hypotheses about why problems occurred in the workflow process (abstract conceptualization), and develop solutions to redesign a selected process (active experimentation). PMID:19412533

  12. Simulation and Shoulder Dystocia.

    PubMed

    Shaddeau, Angela K; Deering, Shad

    2016-12-01

    Shoulder dystocia is an unpredictable obstetric emergency that requires prompt interventions to ensure optimal outcomes. Proper technique is important but difficult to train given the urgent and critical clinical situation. Simulation training for shoulder dystocia allows providers at all levels to practice technical and teamwork skills in a no-risk environment. Programs utilizing simulation training for this emergency have consistently demonstrated improved performance both during practice drills and in actual patients with significantly decreased risks of fetal injury. Given the evidence, simulation training for shoulder dystocia should be conducted at all institutions that provide delivery services.

  13. Translational simulation: not 'where?' but 'why?' A functional view of in situ simulation.

    PubMed

    Brazil, Victoria

    2017-01-01

    Healthcare simulation has been widely adopted for health professional education at all stages of training and practice and across cognitive, procedural, communication and teamwork domains. Recent enthusiasm for in situ simulation-delivered in the real clinical environment-cites improved transfer of knowledge and skills into real-world practice, as well as opportunities to identify latent safety threats and other workplace-specific issues. However, describing simulation type according to place may not be helpful. Instead, I propose the term translational simulation as a functional term for how simulation may be connected directly with health service priorities and patient outcomes, through interventional and diagnostic functions, independent of the location of the simulation activity.

  14. Channel simulation to facilitate mobile-satellite communications research

    NASA Technical Reports Server (NTRS)

    Davarian, Faramaz

    1987-01-01

    The mobile-satellite-service channel simulator, a facility for end-to-end hardware simulation of mobile satellite communications links, is discussed. Propagation effects, Doppler, interference, band limiting, satellite nonlinearity, and thermal noise have been incorporated into the simulator. The propagation environment in which the simulator needs to operate and the architecture of the simulator are described. The simulator is composed of a mobile/fixed transmitter, interference transmitters, a propagation path simulator, a spacecraft, and a fixed/mobile receiver. Data from application experiments conducted with the channel simulator are presented; the noise conversion technique to evaluate interference effects, the error floor phenomenon of digital multipath fading links, and the fade margin associated with a noncoherent receiver are examined. Diagrams of the simulator are provided.
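
    The error-floor behavior examined above is easy to reproduce in a toy model (an illustration, not the JPL facility): differentially detected BPSK over a time-varying Rayleigh channel, where the correlation coefficient rho stands in for Doppler-induced decorrelation. All parameters are invented.

        import numpy as np

        rng = np.random.default_rng(1)
        N, rho = 200_000, 0.999     # symbols; channel correlation (hypothetical)

        bits = rng.integers(0, 2, N)
        s = 1.0 - 2.0 * (np.cumsum(bits) % 2)      # differentially encoded BPSK

        # First-order Gauss-Markov Rayleigh fading with unit average power.
        w = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
        h = np.empty(N, dtype=complex)
        h[0] = w[0]
        for n in range(1, N):
            h[n] = rho * h[n - 1] + np.sqrt(1 - rho**2) * w[n]

        for ebn0_db in (10, 20, 30, 40):
            n0 = 10.0 ** (-ebn0_db / 10)
            noise = np.sqrt(n0 / 2) * (rng.standard_normal(N)
                                       + 1j * rng.standard_normal(N))
            r = h * s + noise
            # Differential detection compares the phases of adjacent samples.
            dec = np.real(r[1:] * np.conj(r[:-1])) < 0
            print(f"Eb/N0={ebn0_db:2d} dB  BER={np.mean(dec != (bits[1:] == 1)):.2e}")

    At high Eb/N0 the BER stops improving: the floor is set by the channel's decorrelation (rho), not by thermal noise, which is the multipath error-floor phenomenon the simulator was used to study.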

  15. Distributed Observer Network

    NASA Technical Reports Server (NTRS)

    2008-01-01

    NASA's advanced visual simulations are essential for analyses associated with life cycle planning, design, training, testing, operations, and evaluation. Kennedy Space Center, in particular, uses simulations for ground services and space exploration planning in an effort to reduce risk and costs while improving safety and performance. However, it has been difficult to circulate and share the results of simulation tools among the field centers, and distance and travel expenses have made timely collaboration even harder. In response, NASA joined with Valador Inc. to develop the Distributed Observer Network (DON), a collaborative environment that leverages game technology to bring 3-D simulations to conventional desktop and laptop computers. DON enables teams of engineers working on design and operations to view and collaborate on 3-D representations of data generated by authoritative tools. DON takes models and telemetry from these sources and, using commercial game engine technology, displays the simulation results in a 3-D visual environment. Multiple widely dispersed users, working individually or in groups, can view and analyze simulation results on desktop and laptop computers in real time.

  16. A Simulation System for Validating the Analytical Prediction of Performance of the Convolutional Encoded and Symbol Interleaved TDRSS S-band Return Link Service in a Pulsed RFI Environment

    NASA Technical Reports Server (NTRS)

    1981-01-01

    A hardware-integrated convolutional coding/symbol interleaving and integrated symbol deinterleaving/Viterbi decoding simulation system is described. Validation on this system of the performance of the TDRSS S-band return link with BPSK modulation, operating in a pulsed RFI environment, is included. The system consists of three components: the Fast Linkabit Error Rate Tester (FLERT), the Transition Probability Generator (TPG), and a modified LV7017B that includes rate 1/3 capability as well as a periodic interleaver/deinterleaver. Operating and maintenance manuals for each of these units are included.
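
    A minimal sketch of the periodic (block) interleaving idea: spreading a pulsed-RFI burst across the decoder input so the Viterbi decoder sees isolated symbol errors rather than a run of them. The depth and span here are illustrative, not the TDRSS parameters.

        def interleave(symbols, rows, cols):
            """Write row-wise, read column-wise; needs len(symbols) == rows*cols."""
            assert len(symbols) == rows * cols
            return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

        def deinterleave(symbols, rows, cols):
            """Inverse permutation: write column-wise, read row-wise."""
            out = [None] * (rows * cols)
            i = 0
            for c in range(cols):
                for r in range(rows):
                    out[r * cols + c] = symbols[i]
                    i += 1
            return out

        data = list(range(12))
        tx = interleave(data, rows=3, cols=4)
        # An RFI pulse that wipes out three consecutive transmitted symbols...
        rx = ['X' if 4 <= i < 7 else s for i, s in enumerate(tx)]
        # ...lands on symbols that are far apart after deinterleaving.
        print(deinterleave(rx, rows=3, cols=4))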

  17. The TAVERNS emulator: An Ada simulation of the space station data communications network and software development environment

    NASA Technical Reports Server (NTRS)

    Howes, Norman R.

    1986-01-01

    The Space Station DMS (Data Management System) is the onboard component of the Space Station Information System (SSIS) that includes the computers, networks and software that support the various core and payload subsystems of the Space Station. TAVERNS (Test And Validation Environment for Remote Networked Systems) is a distributed approach for development and validation of application software for Space Station. The TAVERNS concept assumes that the different subsystems will be developed by different contractors who may be geographically separated. The TAVERNS Emulator is an Ada simulation of a TAVERNS on the ASD VAX. The software services described in the DMS Test Bed User's Manual are being emulated on the VAX together with simulations of some of the core subsystems and a simulation of the DCN. The TAVERNS Emulator will be accessible remotely from any VAX that can communicate with the ASD VAX.

  18. The Effect of Stress and Hot Corrosion on Nickel-Base Superalloys

    DTIC Science & Technology

    1985-03-01

    ... in a degradation of material properties and reduced component life. Allen and Whitlow (6) stated that superalloys in combustion turbine environments... pins are tested in combustion gas streams at elevated temperatures. A hot corrosion environment is usually simulated by burning a sulfur-containing fuel... corrosion attack frequently observed on combustion turbine blades retrieved from service. Figure 1 shows the effect of salt thickness on hot corrosion...

  19. Delay and Disruption Tolerant Networking MACHETE Model

    NASA Technical Reports Server (NTRS)

    Segui, John S.; Jennings, Esther H.; Gao, Jay L.

    2011-01-01

    To verify satisfaction of communication requirements imposed by unique missions, as early as 2000, the Communications Networking Group at the Jet Propulsion Laboratory (JPL) saw the need for an environment to support interplanetary communication protocol design, validation, and characterization. JPL's Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE), described in Simulator of Space Communication Networks (NPO-41373) NASA Tech Briefs, Vol. 29, No. 8 (August 2005), p. 44, combines various commercial, non-commercial, and in-house custom tools for simulation and performance analysis of space networks. The MACHETE environment supports orbital analysis, link budget analysis, communications network simulations, and hardware-in-the-loop testing. As NASA is expanding its Space Communications and Navigation (SCaN) capabilities to support planned and future missions, building infrastructure to maintain services and developing enabling technologies, an important and broader role is seen for MACHETE in design-phase evaluation of future SCaN architectures. To support evaluation of the developing Delay Tolerant Networking (DTN) field and its applicability for space networks, JPL developed MACHETE models for DTN Bundle Protocol (BP) and Licklider/Long-haul Transmission Protocol (LTP). DTN is an Internet Research Task Force (IRTF) architecture providing communication in and/or through highly stressed networking environments such as space exploration and battlefield networks. Stressed networking environments include those with intermittent (predictable and unknown) connectivity, large and/or variable delays, and high bit error rates. To provide its services over existing domain specific protocols, the DTN protocols reside at the application layer of the TCP/IP stack, forming a store-and-forward overlay network. The key capabilities of the Bundle Protocol include custody-based reliability, the ability to cope with intermittent connectivity, the ability to take advantage of scheduled and opportunistic connectivity, and late binding of names to addresses.
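
    A cartoon of the Bundle Protocol's store-and-forward behavior (a sketch, not MACHETE's BP/LTP models): a node takes custody of bundles and forwards them only while a scheduled contact window to the next hop is open. The contact plan and arrival times are invented.

        from collections import deque

        contact_plan = [(10, 15), (40, 44)]   # (start, end) link-open windows

        def in_contact(t):
            return any(a <= t < b for a, b in contact_plan)

        queue = deque()                       # bundles currently in custody
        arrivals = {2: "bundle-A", 5: "bundle-B", 30: "bundle-C"}

        for t in range(50):                   # one bundle forwarded per tick
            if t in arrivals:
                queue.append((arrivals[t], t))     # take custody and store
            if queue and in_contact(t):
                name, created = queue.popleft()    # forward; custody released
                print(f"t={t:2d}: forwarded {name} "
                      f"after {t - created} ticks in storage")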

  20. Medical Service Clinical Laboratory Procedures--Bacteriology.

    ERIC Educational Resources Information Center

    Department of the Army, Washington, DC.

    This manual presents laboratory procedures for the differentiation and identification of disease agents from clinical materials. Included are procedures for the collection of specimens, preparation of culture media, pure culture methods, cultivation of the microorganisms in natural and simulated natural environments, and procedures in…

  1. Assessing the impact of natural service bulls and genotype by environment interactions on genetic gain and inbreeding in organic dairy cattle genomic breeding programs.

    PubMed

    Yin, T; Wensch-Dorendorf, M; Simianer, H; Swalve, H H; König, S

    2014-06-01

    The objective of the present study was to compare genetic gain and inbreeding coefficients of dairy cattle in organic breeding program designs by applying stochastic simulations. The evaluated breeding strategies were: (i) selecting bulls from conventional breeding programs while taking into account genotype by environment (G×E) interactions, (ii) selecting genotyped bulls within the organic environment for artificial insemination (AI) programs, and (iii) selecting genotyped natural service bulls within organic herds. The simulated conventional population comprised 148 800 cows from 2976 herds with an average herd size of 50 cows per herd, and 1200 cows were assigned to 60 organic herds. In a young bull program, the selection criteria for young bulls in both production systems (conventional and organic) were either 'conventional' estimated breeding values (EBV) or genomic estimated breeding values (GEBV) for two traits with low (h² = 0.05) and moderate (h² = 0.30) heritability. GEBV were calculated for different accuracies (r_mg), and G×E interactions were considered by modifying the originally simulated true breeding values in the range from r_g = 0.5 to 1.0. For both traits (h² = 0.05 and 0.30) and r_mg ⩾ 0.8, genomic selection of bulls directly in the organic population and using the selected bulls via AI revealed higher genetic gain than selecting young bulls in the larger conventional population based on EBV, even in the absence of G×E interactions. Only for pronounced G×E interactions (r_g = 0.5) and highly accurate GEBV for natural service bulls (r_mg > 0.9) do the results suggest using genotyped organic natural service bulls instead of implementing an AI program. Inbreeding coefficients of selected bulls and their offspring were generally lower when basing selection decisions for young bulls on GEBV compared with selection strategies based on pedigree indices.

  2. Self-Organizing Distributed Architecture Supporting Dynamic Space Expanding and Reducing in Indoor LBS Environment

    PubMed Central

    Jeong, Seol Young; Jo, Hyeong Gon; Kang, Soon Ju

    2015-01-01

    Indoor location-based services (iLBS) are extremely dynamic and changeable, and include numerous resources and mobile devices. In particular, the network infrastructure requires support for high scalability in the indoor environment, and various resource lookups are requested concurrently and frequently from several locations based on the dynamic network environment. A traditional map-based centralized approach for iLBSs has several disadvantages: it requires global knowledge to maintain a complete geographic indoor map; the central server is a single point of failure; it can also cause low scalability and traffic congestion; and it is hard to adapt to a change of service area in real time. This paper proposes a self-organizing and fully distributed platform for iLBSs. The proposed self-organizing distributed platform provides a dynamic reconfiguration of locality accuracy and service coverage by expanding and contracting dynamically. In order to verify the suggested platform, scalability performance according to the number of inserted or deleted nodes composing the dynamic infrastructure was evaluated through a simulation similar to the real environment. PMID:26016908

  3. Dynamic Hop Service Differentiation Model for End-to-End QoS Provisioning in Multi-Hop Wireless Networks

    NASA Astrophysics Data System (ADS)

    Youn, Joo-Sang; Seok, Seung-Joon; Kang, Chul-Hee

    This paper presents a new QoS model for end-to-end service provisioning in multi-hop wireless networks. In legacy IEEE 802.11e based multi-hop wireless networks, the fixed assignment of service classes according to a flow's priority at every node causes a priority inversion problem when performing end-to-end service differentiation. Thus, this paper proposes a new QoS provisioning model called Dynamic Hop Service Differentiation (DHSD) to alleviate the problem and support effective service differentiation between end-to-end nodes. Many previous works on QoS provisioning through 802.11e based service differentiation focus on packet scheduling over several service queues with different service rates and priorities. Our model, however, concentrates on a dynamic class selection scheme, called Per Hop Class Assignment (PHCA), in the node's MAC layer, which selects a proper service class for each packet, in accordance with queue states and service requirements, in every node along the end-to-end route of the packet. The proposed QoS solution is evaluated using the OPNET simulator. The simulation results show that the proposed model outperforms both best-effort and 802.11e based strict priority service models in mobile ad hoc environments.
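
    A minimal sketch of the per-hop class selection idea; the class names, thresholds, and inputs below are invented for clarity, whereas the paper's PHCA operates on 802.11e access categories and the node's actual queue state.

        def assign_class(queue_len, delay_budget_ms, hops_left):
            """Pick a service class per packet at each hop: escalate priority
            when the remaining per-hop delay budget is tight or queues are long."""
            per_hop_budget = delay_budget_ms / max(hops_left, 1)
            if per_hop_budget < 10 or queue_len > 40:
                return "voice"          # highest 802.11e-style access category
            if per_hop_budget < 25 or queue_len > 20:
                return "video"
            if per_hop_budget < 60:
                return "best_effort"
            return "background"

        # A packet far from its destination with little budget gets promoted,
        # while the same flow earlier on its route may ride a lower class:
        print(assign_class(queue_len=5, delay_budget_ms=40, hops_left=5))    # voice
        print(assign_class(queue_len=5, delay_budget_ms=300, hops_left=3))   # background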

  4. Simulation: A Complementary Method for Teaching Health Services Strategic Management

    PubMed Central

    Reddick, W. T.

    1990-01-01

    Rapid change in the health care environment mandates a more comprehensive approach to the education of future health administrators. The area of consideration in this study is that of health care strategic management. A comprehensive literature review suggests microcomputer-based simulation as an appropriate vehicle for addressing the needs of both educators and students. Seven strategic management software packages are reviewed and rated with an instrument adapted from the Infoworld review format. The author concludes that a primary concern is the paucity of health care specific strategic management simulations.

  5. Evolving Storage and Cyber Infrastructure at the NASA Center for Climate Simulation

    NASA Technical Reports Server (NTRS)

    Salmon, Ellen; Duffy, Daniel; Spear, Carrie; Sinno, Scott; Vaughan, Garrison; Bowen, Michael

    2018-01-01

    This talk will describe recent developments at the NASA Center for Climate Simulation, which is funded by NASA's Science Mission Directorate and supports the specialized data storage and computational needs of weather, ocean, and climate researchers, as well as astrophysicists, heliophysicists, and planetary scientists. To meet requirements for higher-resolution, higher-fidelity simulations, the NCCS augments its High Performance Computing (HPC) and storage and retrieval environment. As the petabytes of model and observational data grow, the NCCS is broadening its data services offerings and deploying and expanding virtualization resources for high performance analytics.

  6. A parametric study of surface roughness and bonding mechanisms of aluminum alloys with epoxies: a molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Timilsina, Rajendra; Termaath, Stephanie

    The marine environment is highly aggressive towards most materials. However, aluminium-magnesium alloys (Al-Mg, specifically the 5xxx series) have an exceptionally long service life in such aggressive marine environments. For instance, an Al-Mg alloy, AA5083, is extensively used in naval structures because of its good mechanical strength, formability, seawater corrosion resistance, and weldability. However, the bonding mechanisms of these alloys with epoxies on rough surfaces are not yet fully understood and require rigorous investigation at the molecular or atomic level. We performed molecular dynamics simulations to study adherend surface preparation and the surface bonding mechanisms of the Al-Mg alloy AA5083 with different epoxies by developing several computer models in which various distributions of surface roughness are introduced. The formation of a beta phase (Al3Mg2), microstructures, bonding energies at the interface, bonding strengths, and durability are investigated. Office of Naval Research.

  7. Running climate model on a commercial cloud computing environment: A case study using Community Earth System Model (CESM) on Amazon AWS

    NASA Astrophysics Data System (ADS)

    Chen, Xiuhong; Huang, Xianglei; Jiao, Chaoyi; Flanner, Mark G.; Raeker, Todd; Palen, Brock

    2017-01-01

    The suites of numerical models used for simulating the climate of our planet are usually run on dedicated high-performance computing (HPC) resources. This study investigates an alternative to the usual approach, i.e., carrying out climate model simulations in a commercially available cloud computing environment. We test the performance and reliability of running the CESM (Community Earth System Model), a flagship climate model in the United States developed by the National Center for Atmospheric Research (NCAR), on Amazon Web Services (AWS) EC2, the cloud computing environment from Amazon.com, Inc. StarCluster is used to create a virtual computing cluster on AWS EC2 for the CESM simulations. The wall-clock time for one year of CESM simulation on the AWS EC2 virtual cluster is comparable to the time spent for the same simulation on a local dedicated high-performance computing cluster with InfiniBand connections. The CESM simulation can be efficiently scaled with the number of CPU cores on the AWS EC2 virtual cluster environment up to 64 cores. For the standard configuration of the CESM at a spatial resolution of 1.9° latitude by 2.5° longitude, increasing the number of cores from 16 to 64 reduces the wall-clock running time by more than 50% and the scaling is nearly linear. Beyond 64 cores, the communication latency starts to outweigh the benefit of distributed computing and the parallel speedup becomes nearly unchanged.
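
    The reported scaling is easy to sanity-check with back-of-envelope arithmetic. The wall-clock numbers below are hypothetical values consistent with the described behavior (more than a 50% reduction from 16 to 64 cores), and the Amdahl-law fit is one way to read the flattening beyond 64 cores.

        def efficiency(t_base, cores_base, t, cores):
            """Parallel efficiency relative to a baseline run."""
            return (t_base / t) / (cores / cores_base)

        t16, t64 = 10.0, 4.0     # hours per simulated year (illustrative only)
        print(f"speedup 16->64 cores: {t16 / t64:.2f}x, "
              f"efficiency: {efficiency(t16, 16, t64, 64):.0%}")

        # Amdahl: T(n) = T1 * (s + (1 - s) / n); a serial/communication
        # fraction s caps the achievable speedup as cores grow.
        s = 0.05
        for n in (16, 64, 256):
            print(f"n={n:3d}  bounded speedup = {1 / (s + (1 - s) / n):.1f}x")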

  8. Creating a Realistic Weather Environment for Motion-Based Piloted Flight Simulation

    NASA Technical Reports Server (NTRS)

    Daniels, Taumi S.; Schaffner, Philip R.; Evans, Emory T.; Neece, Robert T.; Young, Steve D.

    2012-01-01

    A flight simulation environment is being enhanced to facilitate experiments that evaluate research prototypes of advanced onboard weather radar, hazard/integrity monitoring (HIM), and integrated alerting and notification (IAN) concepts in adverse weather conditions. The simulation environment uses weather data based on real weather events to support operational scenarios in a terminal area. A simulated atmospheric environment was realized by using numerical weather data sets produced from the High-Resolution Rapid Refresh (HRRR) model hosted and run by the National Oceanic and Atmospheric Administration (NOAA). To align with the planned flight simulation experiment requirements, several HRRR data sets were acquired courtesy of NOAA. These data sets coincided with severe weather events at the Memphis International Airport (MEM) in Memphis, TN. In addition, representative flight tracks for approaches and departures at MEM were generated and used to develop and test simulations of (1) what onboard sensors such as the weather radar would observe; (2) what datalinks of weather information would provide; and (3) what atmospheric conditions the aircraft would experience (e.g., turbulence, winds, and icing). The simulation includes a weather radar display that provides weather and turbulence modes derived from the modeled weather along the flight track. The radar capabilities and the pilot's controls simulate current-generation commercial weather radar systems. Appropriate data-linked weather advisories (e.g., SIGMET) were derived from the HRRR weather models and provided to the pilot consistent with NextGen concepts of use for Aeronautical Information Service (AIS) and Meteorological (MET) data link products. The net result of this simulation development was the creation of an environment that supports investigations of new flight deck information systems, methods for incorporating better weather information, and pilot interface and operational improvements for better aviation safety. This research is part of a larger effort at NASA to study the impact of the growing complexity of operations, information, and systems on crew decision-making and response effectiveness, and then to recommend methods for improving future designs.

  9. Physiological responses to simulated firefighter exercise protocols in varying environments.

    PubMed

    Horn, Gavin P; Kesler, Richard M; Motl, Robert W; Hsiao-Wecksler, Elizabeth T; Klaren, Rachel E; Ensari, Ipek; Petrucci, Matthew N; Fernhall, Bo; Rosengren, Karl S

    2015-01-01

    For decades, research to quantify the effects of firefighting activities and personal protective equipment on physiology and biomechanics has been conducted in a variety of testing environments. It is unknown if these different environments provide similar information and comparable responses. A novel Firefighting Activities Station, which simulates four common fireground tasks, is presented for use with an environmental chamber in a controlled laboratory setting. Nineteen firefighters completed three different exercise protocols following common research practices. Simulated firefighting activities conducted in an environmental chamber or live-fire structures elicited similar physiological responses (max heart rate: 190.1 vs 188.0 bpm, core temperature response: 0.047°C/min vs 0.043°C/min) and accelerometry counts. However, the response to a treadmill protocol commonly used in laboratory settings resulted in significantly lower heart rate (178.4 vs 188.0 bpm), core temperature response (0.037°C/min vs 0.043°C/min) and physical activity counts compared with firefighting activities in the burn building. Practitioner Summary: We introduce a new approach for simulating realistic firefighting activities in a controlled laboratory environment for ergonomics assessment of fire service equipment and personnel. Physiological responses to this proposed protocol more closely replicate those from live-fire activities than a traditional treadmill protocol and are simple to replicate and standardise.

  10. Modeling and performance analysis using extended fuzzy-timing Petri nets for networked virtual environments.

    PubMed

    Zhou, Y; Murata, T; Defanti, T A

    2000-01-01

    Despite their attractive properties, networked virtual environments (net-VEs) are notoriously difficult to design, implement, and test due to the concurrency, real-time, and networking features of these systems. Net-VEs place high quality-of-service (QoS) demands on the network to maintain natural and real-time interactions among users. The current practice for net-VE design is basically trial and error, empirical, and totally lacking in formal methods. This paper proposes to apply a Petri net formal modeling technique to a net-VE, NICE (Narrative Immersive Constructionist/Collaborative Environment), to predict the net-VE performance based on simulation, and to improve the net-VE performance. NICE is essentially a network of collaborative virtual reality systems called CAVEs (CAVE Automatic Virtual Environment). First, we introduce extended fuzzy-timing Petri net (EFTN) modeling and analysis techniques. Then, we present EFTN models of the CAVE, NICE, and the transport layer protocol used in NICE: the transmission control protocol (TCP). We show the possibility analysis based on the EFTN model of the CAVE. Then, using these models and Design/CPN as the simulation tool, we conducted various simulations to study the real-time behavior, network effects, and performance (latencies and jitters) of NICE. Our simulation results are consistent with experimental data.

  11. The sustainability and performance measurement on supply chain in services industry: A literature review

    NASA Astrophysics Data System (ADS)

    Leksono, Eko Budi; Suparno, Vanany, Iwan

    2017-11-01

    The growth of the services industry has a significant relation to economic growth, and a new paradigm is needed for services sector development. Supply chain management and performance measurement are able to sustain the growth of the services industry. The implementation of the supply chain in the services industry is called the service supply chain (SSC). Globalization and stakeholder pressure mean that the operation of an SSC should pay more attention to the sustainability issue, which comprises the economic, social, and environmental dimensions simultaneously. Furthermore, the services industry can develop through the implementation of the sustainable SSC (SSSC) and its performance measurement. Sustainable SSC implementation can minimize the negative effects of operations on the environment and society, and maximize profit. Sustainable service supply chain performance measurement (SSSCPM) is still little explored. The purpose of this paper is to review the literature in the fields of SSC, SSSC, SSC performance measurement (SSCPM), and SSSCPM to identify SSSCPM frameworks and indicators. The review also reveals opportunities to develop a new SSSCPM framework that spans the operational, tactical, and strategic levels, is multiplayer and closed-loop, and supports effective integration and the development of modeling and simulation for future evaluation.

  12. Medical operations: Crew surgeon's report. [in Skylab simulation test

    NASA Technical Reports Server (NTRS)

    Ross, C. E.

    1973-01-01

    To assure the safety and well-being of the Skylab environment simulation crewmembers, it was necessary to develop a medical safety plan with emergency procedures. All medical and nonmedical test and operations personnel, except those specifically exempted, were required to meet the established medical standards and proficiency levels. The implemented programs included health care for the test crew and their families, occupational medical services for chamber operating personnel, clinical laboratory support, and hypobaric and other emergency support.

  13. Semantics-enabled service discovery framework in the SIMDAT pharma grid.

    PubMed

    Qu, Cangtao; Zimmermann, Falk; Kumpf, Kai; Kamuzinzi, Richard; Ledent, Valérie; Herzog, Robert

    2008-03-01

    We present the design and implementation of a semantics-enabled service discovery framework in the SIMDAT (data grids for process and product development using numerical simulation and knowledge discovery) Pharma Grid, an industry-oriented Grid environment for integrating thousands of Grid-enabled biological data services and analysis services. The framework consists of three major components: the Web Ontology Language (OWL)-description logic (DL)-based biological domain ontology, the OWL Web service ontology (OWL-S)-based service annotation, and a semantic matchmaker based on ontology reasoning. Built upon this framework, workflow technologies are extensively exploited in SIMDAT to assist biologists in (semi)automatically performing in silico experiments. We present a typical usage scenario through the case study of a biological workflow: IXodus.
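
    A toy version of the matchmaking step. The ontology and service advertisements are invented, and the degree-of-match labels follow the common OWL-S matchmaking literature rather than SIMDAT's exact reasoner.

        # Tiny is-a hierarchy standing in for the OWL-DL domain ontology.
        ISA = {"BLASTSearch": "SequenceSearch", "SequenceSearch": "BioAnalysis"}

        def ancestors(cls):
            out = []
            while cls in ISA:
                cls = ISA[cls]
                out.append(cls)
            return out

        def degree_of_match(advertised, requested):
            if advertised == requested:
                return "exact"
            if requested in ancestors(advertised):
                return "plugin"     # advertisement is more specific than request
            if advertised in ancestors(requested):
                return "subsumes"   # advertisement is more general than request
            return "fail"

        for ad in ("BLASTSearch", "BioAnalysis", "ImageRendering"):
            print(ad, "->", degree_of_match(ad, "SequenceSearch"))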

  14. Computer control of a robotic satellite servicer

    NASA Technical Reports Server (NTRS)

    Fernandez, K. R.

    1980-01-01

    The advantages that will accrue from the in-orbit servicing of satellites are listed. It is noted that one concept in satellite servicing which holds promise as a compromise between the high flexibility and adaptability of manned vehicles and the lower cost of an unmanned vehicle involves an unmanned servicer carrying a remotely supervised robotic manipulator arm. Because of deficiencies in sensor technology, robot servicing would require that satellites be designed according to a modular concept. A description is given of the servicer simulation hardware, the computer and interface hardware, and the software. It is noted that several areas require further development; these include automated docking, modularization of satellite design, reliable connector and latching mechanisms, development of manipulators for space environments, and development of automated diagnostic techniques.

  15. Semantically Aware Foundation Environment (SAFE) for Clean-Slate Design of Resilient, Adaptive Secure Hosts (CRASH)

    DTIC Science & Technology

    2016-02-01

    ... system consists of a high-fidelity hardware simulation using field programmable gate arrays (FPGAs), with a set of runtime services (ConcreteWare)... "perimeter protection, patch, and pray" is not aligned with the threat. Programmers will not bail us out of this situation (by writing defect-free code)... hosted on a Field Programmable Gate Array (FPGA), with a set of runtime services (concreteware) running on the hardware. Secure applications can be...

  16. Training Community Modeling and Simulation Business Plan, 2007 Edition. Volume 2: Data Call Responses and Analysis

    DTIC Science & Technology

    2009-02-01

    ... services; and • Other reconstruction assistance. 17. Train Forces on Military Assistance to Civil Authorities (MACA): Develop environments... for training in the planning and execution of MACA in support of disaster relief (natural and man-made), military assistance for civil disturbances...

  17. Sample Strategies Used To Serve Rural Students in the Least Restrictive Environment.

    ERIC Educational Resources Information Center

    Helge, Doris

    This booklet provides sample strategies to ameliorate service delivery problems commonly encountered by rural special educators. Strategies to increase acceptance of disabled students by nondisabled peers include buddy systems and class activities that promote personal interaction, simulation activities, and social and personal skills development.…

  18. Validation of Operational Multiscale Environment Model With Grid Adaptivity (OMEGA).

    DTIC Science & Technology

    1995-12-01

    ... Center for the period of the Chernobyl Nuclear Accident. The physics of the model is tested using National Weather Service Medium Range Forecast data by... Climatology Center for the first three days following the release at the Chernobyl Nuclear Plant. A user-defined source term was developed to simulate...

  19. The development of the Canadian Mobile Servicing System Kinematic Simulation Facility

    NASA Technical Reports Server (NTRS)

    Beyer, G.; Diebold, B.; Brimley, W.; Kleinberg, H.

    1989-01-01

    Canada will develop a Mobile Servicing System (MSS) as its contribution to the U.S./International Space Station Freedom. Components of the MSS will include a remote manipulator (SSRMS), a Special Purpose Dexterous Manipulator (SPDM), and a mobile base (MRS). In order to support requirements analysis and the evaluation of operational concepts related to the use of the MSS, a graphics-based kinematic simulation/human-computer interface facility has been created. The facility consists of the following elements: (1) a two-dimensional graphics editor allowing the rapid development of virtual control stations; (2) kinematic simulations of the space station remote manipulators (SSRMS and SPDM) and mobile base; and (3) a three-dimensional graphics model of the space station, MSS, orbiter, and payloads. These software elements, combined with state-of-the-art computer graphics hardware, provide the capability to prototype MSS workstations, evaluate MSS operational capabilities, and investigate the human-computer interface in an interactive simulation environment. The graphics technology involved in the development and use of this facility is described.
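
    As a flavor of the underlying kinematics (a sketch only: the SSRMS is a 7-joint, three-dimensional manipulator, not the planar two-link arm used here, and the link lengths are invented):

        import math

        def fk_2link(theta1, theta2, l1=7.0, l2=7.0):
            """Elbow and end-effector positions of a planar 2R arm (radians)."""
            x1, y1 = l1 * math.cos(theta1), l1 * math.sin(theta1)
            x2 = x1 + l2 * math.cos(theta1 + theta2)
            y2 = y1 + l2 * math.sin(theta1 + theta2)
            return (x1, y1), (x2, y2)

        elbow, tip = fk_2link(math.radians(30), math.radians(45))
        print(f"elbow at ({elbow[0]:.2f}, {elbow[1]:.2f}) m, "
              f"tip at ({tip[0]:.2f}, {tip[1]:.2f}) m")

    A kinematic (rather than dynamic) simulation such as the facility described above chains exactly this kind of transform across all joints to drive the 3-D graphics model.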

  20. A Data Management System for International Space Station Simulation Tools

    NASA Technical Reports Server (NTRS)

    Betts, Bradley J.; DelMundo, Rommel; Elcott, Sharif; McIntosh, Dawn; Niehaus, Brian; Papasin, Richard; Mah, Robert W.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Groups associated with the design, operational, and training aspects of the International Space Station make extensive use of modeling and simulation tools. Users of these tools often need to access and manipulate large quantities of data associated with the station, ranging from design documents to wiring diagrams. Retrieving and manipulating this data directly within the simulation and modeling environment can provide substantial benefit to users. An approach for providing these kinds of data management services, including a database schema and class structure, is presented. Implementation details are also provided as a data management system is integrated into the Intelligent Virtual Station, a modeling and simulation tool developed by the NASA Ames Smart Systems Research Laboratory. One use of the Intelligent Virtual Station is generating station-related training procedures in a virtual environment. The data management component allows users to quickly and easily retrieve information related to objects on the station, enhancing their ability to generate accurate procedures. Users can associate new information with objects and have that information stored in a database.

  1. Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation

    NASA Technical Reports Server (NTRS)

    Stocker, John C.; Golomb, Andrew M.

    2011-01-01

    Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is at a disadvantage compared to traditional hosting in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two-part model framework characterizes both the demand, using a probability distribution for each type of service request, and the enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
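
    A minimal sketch of the modeling approach: requests drawn from probability distributions contend for a fixed pool of servers, and completion statistics are collected. The rates and pool size are invented; the paper's model distinguishes request types and richer resource constraints.

        import heapq, random

        random.seed(0)
        SERVERS, N = 4, 10_000
        free_at = [0.0] * SERVERS       # time at which each server next idles
        heapq.heapify(free_at)
        t, waits = 0.0, []

        for _ in range(N):
            t += random.expovariate(1.0)            # Poisson arrivals, rate 1/s
            service = random.expovariate(1 / 3.0)   # exponential demand, mean 3 s
            earliest = heapq.heappop(free_at)       # first server to come free
            start = max(t, earliest)                # wait if all servers busy
            heapq.heappush(free_at, start + service)
            waits.append(start - t)

        waits.sort()
        print(f"mean wait = {sum(waits) / N:.2f} s, "
              f"95th percentile = {waits[int(0.95 * N)]:.2f} s")

    Sweeping SERVERS, or replacing the fixed pool with an elastic, scheduler-mediated one, is the kind of what-if question such a framework is built to answer.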

  2. A Prototyping Effort for the Integrated Spacecraft Analysis System

    NASA Technical Reports Server (NTRS)

    Wong, Raymond; Tung, Yu-Wen; Maldague, Pierre

    2011-01-01

    Computer modeling and simulation has recently become an essential technique for predicting and validating spacecraft performance. However, most computer models only examine spacecraft subsystems, and the independent nature of the models creates integration problems, which limits the possibility of simulating a spacecraft as an integrated unit despite a desire for this type of analysis. A new project called Integrated Spacecraft Analysis was proposed to serve as a framework for an integrated simulation environment. The project is still in its infancy, but a software prototype would help future developers assess design issues. The prototype explores a service-oriented design paradigm that theoretically allows programs written in different languages to communicate with one another. It includes creating a uniform interface to the SPICE libraries such that different in-house tools like APGEN or SEQGEN can exchange information with it without much change. Service orientation may result in a slower system compared to a single application, and more research needs to be done on the different available technologies, but a service-oriented approach could increase long-term maintainability and extensibility.

  3. Performance evaluation of power control algorithms in wireless cellular networks

    NASA Astrophysics Data System (ADS)

    Temaneh-Nyah, C.; Iita, V.

    2014-10-01

    Power control in a mobile communication network aims to control the transmission power levels in such a way that the required quality of service (QoS) for the users is guaranteed with the lowest possible transmission powers. Most studies of power control algorithms in the literature are based on simplified assumptions, which compromises the validity of the results when they are applied in a real environment. In this paper, a CDMA network was simulated. The real environment was accounted for by defining the analysis area, specifying the network base stations and mobile stations by their geographical coordinates, and modeling the mobility of the mobile stations. The simulation also allowed a number of network parameters, including the network traffic and the wireless channel models, to be modified. Finally, we present the simulation results of a convergence-speed-based comparative analysis of three uplink power control algorithms.
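
    For concreteness, one classic distributed uplink power control iteration (Foschini-Miljanic) can be stated in a few lines. The abstract does not name the three algorithms compared, so the link gains, noise power, and target SINR below are all invented.

        import numpy as np

        G = np.array([[1.0, 0.1, 0.2],    # G[i, j]: gain from mobile j to base i
                      [0.2, 1.0, 0.1],
                      [0.1, 0.1, 1.0]])
        noise, gamma_t = 0.01, 2.0        # receiver noise power, target SINR
        p = np.full(3, 0.1)               # initial transmit powers

        def sinr(p):
            signal = np.diag(G) * p
            return signal / (G @ p - signal + noise)

        for _ in range(30):
            p = p * gamma_t / sinr(p)     # each mobile updates independently
        print("powers:", np.round(p, 3), " SINRs:", np.round(sinr(p), 2))

    How quickly iterations like this converge under realistic geography, traffic, and mobility is exactly the comparison criterion the paper uses.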

  4. 2005 8th Annual Systems Engineering Conference Volume 3 - Wednesday presentations

    DTIC Science & Technology

    2005-10-24

    ... emphasis on systems engineering; implementation of SE plans; requires PEO chief engineer; conduct of technical reviews; SE Policy Addendum signed by... in a Performance Based Logistics Environment, Denise Duncan, LMI. Track 5 - Best Practices & Standardization: CMMI for Services, Mr. Juan Ceva, Raytheon RIS. Track 5: Logistics, Session 3C5; Track 4: Net Centric Operations, Session 3C4; Track 6: Modeling & Simulation

  5. Specifications of a Simulation Model for a Local Area Network Design in Support of Stock Point Logistics Integrated Communications Environment (SPLICE).

    DTIC Science & Technology

    1982-10-01

    ... class queueing system with a preemptive-resume priority service discipline, as depicted in Figure 4.2. Concerning a SPLICLAN configuration, a node can... processor can be modeled as a single-resource, multi-class queueing system with a preemptive-resume priority structure as the one given in Figure 4.2. An... "Local Area Network Design in Support of Stock Point Logistics Integrated Communications Environment (SPLICE)," by Ioannis Th. Mastrocostopoulos, October...

  6. Panthere V2: Multipurpose Simulation Software for 3D Dose Rate Calculations

    NASA Astrophysics Data System (ADS)

    Penessot, Gaël; Bavoil, Éléonore; Wertz, Laurent; Malouch, Fadhel; Visonneau, Thierry; Dubost, Julien

    2017-09-01

    PANTHERE is a multipurpose radiation protection software package developed by EDF to calculate gamma dose rates in complex 3D environments. PANTHERE plays a key role in the EDF ALARA process, making it possible to predict dose rates and to organize and optimize operations in high-radiation environments. PANTHERE is also used for nuclear waste characterization, the transport of nuclear materials, etc. It is used in most of the EDF engineering units and by their design service providers and industrial partners.
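
    The abstract does not describe PANTHERE's numerical method, but a common approach in gamma dose-rate codes is the point-kernel technique; the toy estimate below (all values invented) shows the kind of kernel a real code sums over many source points in a full 3-D geometry.

        import math

        S = 1e9      # point-source strength, photons/s (hypothetical)
        mu = 0.06    # linear attenuation coefficient of the shield, 1/cm
        d = 20.0     # shield thickness along the ray, cm
        r = 300.0    # source-to-detector distance, cm
        B = 2.5      # dose buildup factor for this mu*d (tabulated; invented here)

        flux = S * B * math.exp(-mu * d) / (4 * math.pi * r**2)  # photons/cm^2/s
        print(f"mu*d = {mu * d:.1f} mean free paths, flux = {flux:.3e} /cm2/s")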

  7. iRODS-Based Climate Data Services and Virtualization-as-a-Service in the NASA Center for Climate Simulation

    NASA Astrophysics Data System (ADS)

    Schnase, J. L.; Duffy, D. Q.; Tamkin, G. S.; Strong, S.; Ripley, D.; Gill, R.; Sinno, S. S.; Shen, Y.; Carriere, L. E.; Brieger, L.; Moore, R.; Rajasekar, A.; Schroeder, W.; Wan, M.

    2011-12-01

    Scientific data services are becoming an important part of the NASA Center for Climate Simulation's mission. Our technological response to this expanding role is built around the concept of specialized virtual climate data servers (vCDSs), repetitive cloud provisioning, image-based deployment and distribution, and virtualization-as-a-service. A vCDS is an OAIS-compliant, iRODS-based data server designed to support a particular type of scientific data collection. iRODS is data grid middleware that provides policy-based control over collection-building, managing, querying, accessing, and preserving large scientific data sets. We have developed prototype vCDSs to manage NetCDF, HDF, and GeoTIFF data products. We use RPM scripts to build vCDS images in our local computing environment, our local Virtual Machine Environment, NASA's Nebula Cloud Services, and Amazon's Elastic Compute Cloud. Once provisioned into these virtualized resources, multiple vCDSs can use iRODS's federation and realized object capabilities to create an integrated ecosystem of data servers that can scale and adapt to changing requirements. This approach enables platform- or software-as-a-service deployment of the vCDSs and allows the NCCS to offer virtualization-as-a-service, a capacity to respond in an agile way to new customer requests for data services, and a path for migrating existing services into the cloud. We have registered MODIS Atmosphere data products in a vCDS that contains 54 million registered files, 630 TB of data, and over 300 million metadata values. We are now assembling IPCC AR5 data into a production vCDS that will provide the platform upon which NCCS's Earth System Grid (ESG) node publishes to the extended science community. In this talk, we describe our approach, experiences, lessons learned, and plans for the future.

  8. Around Marshall

    NASA Image and Video Library

    1993-06-30

    This photograph shows STS-61 crewmembers training for the Hubble Space Telescope (HST) servicing mission in the Marshall Space Flight Center's (MSFC's) Neutral Buoyancy Simulator (NBS). Two months after its deployment in space, scientists detected a 2-micron spherical aberration in the primary mirror of the HST that affected the telescope's ability to focus faint light sources into a precise point. This imperfection was very slight, one-fiftieth of the width of a human hair. A scheduled Space Shuttle servicing mission (STS-61) in 1993 permitted scientists to correct the problem. The MSFC NBS provided an excellent environment for testing hardware to examine how it would operate in space and for evaluating techniques for space construction and spacecraft servicing.

  9. Analyzing the Effect of Consultation Training on the Development of Consultation Competence

    ERIC Educational Resources Information Center

    Newell, Markeda L.; Newell, Terrance

    2018-01-01

    The purpose of this study was to examine the effectiveness of one consultation course on the development of pre-service school psychologists' consultation knowledge, confidence, and skills. Computer simulation was used as a means to replicate the school environment and capture consultants' engagement throughout the consultation process without…

  10. Simulated Apprenticeship for Pre-Service Filipino Teachers

    ERIC Educational Resources Information Center

    Medula, Cesar Turqueza

    2017-01-01

    The delivery of teacher education courses often for the most part deal with the visible parts of knowledge, the "know-what", which is often disconnected from the tacit knowledge, the "know-how", required in authentic teaching environments. It could be argued that would-be teachers do undergo practice teaching as part of their…

  11. An Update on Improvements to NiCE Support for PROTEUS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, Andrew; McCaskey, Alexander J.; Billings, Jay Jay

    2015-09-01

    The Department of Energy Office of Nuclear Energy's Nuclear Energy Advanced Modeling and Simulation (NEAMS) program has supported the development of the NEAMS Integrated Computational Environment (NiCE), a modeling and simulation workflow environment that provides services and plugins to facilitate tasks such as code execution, model input construction, visualization, and data analysis. This report details the development of workflows for the reactor core neutronics application, PROTEUS. This advanced neutronics application (primarily developed at Argonne National Laboratory) aims to improve nuclear reactor design and analysis by providing an extensible and massively parallel, finite-element solver for current and advanced reactor fuel neutronics modeling. The integration of PROTEUS-specific tools into NiCE is intended to make the advanced capabilities that PROTEUS provides more accessible to the nuclear energy research and development community. This report will detail the work done to improve existing PROTEUS workflow support in NiCE. We will demonstrate and discuss these improvements, including the development of flexible IO services, an improved interface for input generation, and the addition of advanced Fortran development tools natively in the platform.

  12. Attitude dynamics and control of a spacecraft like a robotic manipulator when implementing on-orbit servicing

    NASA Astrophysics Data System (ADS)

    Da Fonseca, Ijar M.; Goes, Luiz C. S.; Seito, Narumi; da Silva Duarte, Mayara K.; de Oliveira, Élcio Jeronimo

    2017-08-01

    In space, the manipulator's working space is characterized by the microgravity environment. In this environment the spacecraft floats, and its rotational/translational motion may be excited by internal and external disturbances. The complete system, i.e., the spacecraft and the associated robotic manipulator, floats and is sensitive to any reaction force and torque related to the manipulator's operation. In this sense the effort exerted by the robot may result in torque about the system center of mass and also in forces that change its translational motion. This paper analyzes the impact of the robot manipulator dynamics on the attitude motion and the associated control effort to keep the attitude stable during the manipulator's operation. The dynamics analysis is performed for the close-proximity phase of a rendezvous docking/berthing operation. In such a scenario the linear system equations for the translational and attitude relative motions are appropriate. The equations of motion for the relative translational and rotational dynamics are simulated in MATLAB. The LQR and PID control laws are used for linear and nonlinear control, respectively, aiming to keep the attitude stable while the robot is in and out of service. The gravity-gradient and residual magnetic torques are considered as external disturbances. The control efforts are analyzed for the manipulator in and out of service. The control laws allow system stabilization and good performance when the manipulator is in service.
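
    A hedged sketch of the linear-control side of such a study: an LQR gain for a single-axis, double-integrator attitude model (torque in, angle and rate out). The inertia and weighting matrices are invented; the paper's model couples relative translation and rotation in three axes.

        import numpy as np
        from scipy.linalg import solve_continuous_are

        J = 500.0                                  # spacecraft inertia, kg m^2
        A = np.array([[0.0, 1.0], [0.0, 0.0]])     # state x = [angle, rate]
        B = np.array([[0.0], [1.0 / J]])
        Q = np.diag([10.0, 100.0])                 # penalize angle and rate errors
        R = np.array([[0.01]])                     # penalize control torque

        P = solve_continuous_are(A, B, Q, R)       # Riccati solution
        K = np.linalg.inv(R) @ B.T @ P             # optimal feedback u = -K x
        print("LQR gain K:", np.round(K, 2))
        print("closed-loop poles:", np.round(np.linalg.eigvals(A - B @ K), 3))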

  13. Centre of Excellence For Simulation Education and Innovation (CESEI).

    PubMed

    Qayumi, A Karim

    2010-01-01

    Simulation is becoming an integral part of medical education. The American College of Surgeons (ACS) was the first organization to recognize the value of simulation-based learning and to award accreditation to educational institutions that aim to provide simulation as part of the experiential learning opportunity. The Centre of Excellence for Simulation Education and Innovation (CESEI) is a multidisciplinary and interprofessional educational facility based at the University of British Columbia (UBC) and the Vancouver Coastal Health Authority (VCH). CESEI's goal is to provide excellence in education, research, and healthcare delivery by providing a technologically advanced environment and learning opportunities using simulation for various groups of learners, including undergraduate, postgraduate, nursing, and allied health professionals. This article describes the infrastructure, services, and uniqueness of CESEI.

  14. Potential roles for EVA and telerobotics in a unified worksite

    NASA Astrophysics Data System (ADS)

    Akin, David; Howard, Russel D.

    1993-02-01

    Although telerobotics and extravehicular activity (EVA) are often portrayed as competing approaches to space operations, ongoing research in the Space Systems Laboratory (SSL) has demonstrated the utility of cooperative roles in an integrated EVA/telerobotic work site. Working in the neutral buoyancy simulation environment, tests were performed on interactive roles of EVA subjects and telerobots in structural assembly and satellite servicing tasks. In the most elaborate of these tests to date, EVA subjects were assisted by the SSL's Beam Assembly Teleoperator (BAT) in several servicing tasks planned for the Hubble Space Telescope, using the high-fidelity crew training article in the NASA Marshall Neutral Buoyancy Simulator. These tests revealed several shortcomings in the design of BAT for satellite servicing and demonstrated the utility of a free-flying or RMS-mounted telerobot for providing EVA crew assistance. This paper documents the past tests, including the use of free-flying telerobots to effect the rescue of a simulated incapacitated EVA subject, and details planned future efforts in this area, including the testing of a new telerobotic system optimized for the satellite servicing role, the development of dedicated telerobotic devices designed specifically for assisting EVA crew, and conceptual approaches to advanced EVA/telerobotic operations such as the Astronaut Operations Vehicle.

  15. Performance Evaluation of a SLA Negotiation Control Protocol for Grid Networks

    NASA Astrophysics Data System (ADS)

    Cergol, Igor; Mirchandani, Vinod; Verchere, Dominique

    A framework for an autonomous negotiation control protocol for service delivery is crucial to enable support for the heterogeneous service level agreements (SLAs) that will exist in distributed environments. We first give the gist of our augmented service negotiation protocol for supporting distinct service elements. The augmentations also encompass the related composition of services and negotiation with several service providers simultaneously. The incorporated augmentations will make it possible to consolidate service negotiation operations for telecom networks, which are evolving towards Grid networks. Furthermore, our autonomous negotiation protocol is based on a distributed multi-agent framework to create an open market for Grid services. Second, we concisely present key simulation results of our work in progress. The results exhibit the usefulness of our negotiation protocol in realistic scenarios involving different background traffic loads, message sizes, and traffic-flow asymmetry between background and negotiation traffic.

  16. Experimenting with semantic web services to understand the role of NLP technologies in healthcare.

    PubMed

    Jagannathan, V

    2006-01-01

    NLP technologies can play a significant role in healthcare, where a predominant segment of the clinical documentation is in text form. In a graduate course focused on understanding semantic web services at West Virginia University, a class project was designed with the purpose of exploring the potential use of NLP-based abstraction of clinical documentation. The role of NLP technology was simulated using human abstractors, and various workflows were investigated using public domain workflow and semantic web service technologies. This poster explores the potential use of NLP and the role of workflow and semantic web technologies in developing healthcare IT environments.

  17. Engineering High Assurance Distributed Cyber Physical Systems

    DTIC Science & Technology

    2015-01-15

    ... decisions: number of interacting agents and co-dependent decisions made in real-time without causing interference. To engineer a high assurance DART... environment specification, architecture definition, domain-specific languages, design patterns, code-generation, analysis, test-generation, and simulation... include synchronization between the models and source code, debugging at the model level, expression of the design intent, and quality of service

  18. The Erector Set Computer: Building a Virtual Workstation over a Large Multi-Vendor Network.

    ERIC Educational Resources Information Center

    Farago, John M.

    1989-01-01

    Describes a computer network developed at the City University of New York Law School that uses device sharing and local area networking to create a simulated law office. Topics discussed include working within a multi-vendor environment, and the communication, information, and database access services available through the network. (CLB)

  19. The role of simulation models in monitoring soil organic carbon storage and greenhouse gas mitigation potential in bioenergy cropping systems

    USDA-ARS?s Scientific Manuscript database

    There is an increased demand on agricultural systems worldwide to provide food, fiber, and feedstock for the emerging bioenergy industry, raising legitimate concerns on the associated impacts of such intensification on the environment. Of the many ecosystem services that could be impacted by the la...

  20. Combining patient journey modelling and visual multi-agent computer simulation: a framework to improving knowledge translation in a healthcare environment.

    PubMed

    Curry, Joanne; Fitzgerald, Anneke; Prodan, Ante; Dadich, Ann; Sloan, Terry

    2014-01-01

    This article focuses on a framework that will investigate the integration of two disparate methodologies: patient journey modelling and visual multi-agent simulation, and its impact on the speed and quality of knowledge translation to healthcare stakeholders. Literature describes patient journey modelling and visual simulation as discrete activities. This paper suggests that their combination and their impact on translating knowledge to practitioners are greater than the sum of the two technologies. The test-bed is ambulatory care and the goal is to determine if this approach can improve health services delivery, workflow, and patient outcomes and satisfaction. The multidisciplinary research team is comprised of expertise in patient journey modelling, simulation, and knowledge translation.

  1. An Analysis of Failure Handling in Chameleon, A Framework for Supporting Cost-Effective Fault Tolerant Services

    NASA Technical Reports Server (NTRS)

    Haakensen, Erik Edward

    1998-01-01

    The desire for low-cost reliable computing is increasing. Most current fault tolerant computing solutions are not very flexible, i.e., they cannot adapt to the reliability requirements of newly emerging applications in business, commerce, and manufacturing. It is important that users have a flexible, reliable platform to support both critical and noncritical applications. Chameleon, under development at the Center for Reliable and High-Performance Computing at the University of Illinois, is a software framework for supporting cost-effective, adaptable, networked fault-tolerant service. This thesis details a simulation of fault injection, detection, and recovery in Chameleon. The simulation was written in C++ using the DEPEND simulation library. The results obtained from the simulation included the amount of overhead incurred by the fault detection and recovery mechanisms supported by Chameleon. In addition, information was gained about fault scenarios from which Chameleon cannot recover. The results of the simulation showed that both critical and noncritical applications can be executed in the Chameleon environment with a fairly small amount of overhead. No single point of failure from which Chameleon could not recover was found. Chameleon was also found to be capable of recovering from several multiple-failure scenarios.

  2. Toward Risk Reduction for Mobile Service Composition.

    PubMed

    Deng, Shuiguang; Huang, Longtao; Li, Ying; Zhou, Honggeng; Wu, Zhaohui; Cao, Xiongfei; Kataev, Mikhail Yu; Li, Ling

    2016-08-01

    The advances in mobile technologies enable us to consume or even provide services through powerful mobile devices anytime and anywhere. Services running on mobile devices within limited range can be composed to coordinate through wireless communication technologies and perform complex tasks. However, the mobility of users and devices in a mobile environment imposes high risk on the execution of such tasks. This paper targets reducing this risk by constructing a dependable service composition that accounts for the mobility of both service requesters and providers. It first proposes a risk model and clarifies the risk of mobile service composition, and then proposes a service composition approach based on a modified simulated annealing algorithm. Our objective is to form a service composition by selecting mobile services under the mobility model such that the composition has the best quality of service and the lowest risk. The experimental results demonstrate that our approach can yield near-optimal solutions and has nearly linear complexity with respect to problem size.
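
    As a rough illustration of the simulated-annealing-based selection described above, the following Python sketch chooses one candidate provider per abstract task to maximize QoS minus a risk penalty. The task/candidate structure, the (qos, risk) figures, and the weighting are illustrative assumptions, not the paper's actual model.

    ```python
    # Minimal sketch: service selection via simulated annealing.
    # Candidate data and the risk weight are illustrative assumptions.
    import math
    import random

    # candidates[t] lists (qos, risk) pairs for task t; higher qos, lower risk is better.
    candidates = [
        [(0.9, 0.3), (0.7, 0.1), (0.8, 0.2)],
        [(0.6, 0.1), (0.9, 0.4)],
        [(0.8, 0.2), (0.5, 0.05), (0.7, 0.15)],
    ]

    def score(selection):
        # Aggregate QoS reward minus a mobility-risk penalty (weight assumed).
        qos = sum(candidates[t][i][0] for t, i in enumerate(selection))
        risk = sum(candidates[t][i][1] for t, i in enumerate(selection))
        return qos - 1.5 * risk

    def anneal(temp=1.0, cooling=0.95, steps=2000):
        current = [random.randrange(len(c)) for c in candidates]
        best = current[:]
        for _ in range(steps):
            neighbor = current[:]
            t = random.randrange(len(candidates))        # perturb one task's provider
            neighbor[t] = random.randrange(len(candidates[t]))
            delta = score(neighbor) - score(current)
            if delta > 0 or random.random() < math.exp(delta / temp):
                current = neighbor                       # accept better, or worse with some probability
                if score(current) > score(best):
                    best = current[:]
            temp *= cooling                              # exponential cooling schedule
        return best, score(best)

    print(anneal())
    ```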

  3. Multi-Agent Social Simulation

    NASA Astrophysics Data System (ADS)

    Noda, Itsuki; Stone, Peter; Yamashita, Tomohisa; Kurumatani, Koichi

    While ambient intelligence and smart environment (AISE) technologies are expected to have a large impact on human lives and social activities, it is generally difficult to show the utility and effects of these technologies on societies. AISE technologies are not only methods to improve the performance and functionality of existing services in the society, but also frameworks for introducing new systems and services to the society. For example, no one anticipated what the Internet or the mobile phone would bring to our social activities and services, although they changed our social systems and patterns of behavior drastically and gave rise to new services (and, unfortunately, new risks). The main reason for this difficulty is that the actual effects of IT systems appear only when a sufficient number of people in the society use the technologies.

  4. History of Hubble Space Telescope (HST)

    NASA Image and Video Library

    1993-07-09

    This photograph shows an STS-61 astronaut training for the Hubble Space Telescope (HST) servicing mission (STS-61) in the Marshall Space Flight Center's (MSFC's) Neutral Buoyancy Simulator (NBS). Two months after its deployment in space, scientists detected a 2-micron spherical aberration in the primary mirror of the HST that affected the telescope's ability to focus faint light sources into a precise point. This imperfection was very slight, one-fiftieth of the width of a human hair. A scheduled Space Shuttle servicing mission (STS-61) in 1993 permitted scientists to correct the problem. The MSFC NBS provided an excellent environment for testing hardware to examine how it would operate in space and for evaluating techniques for space construction and spacecraft servicing.

  5. Resource Provisioning in SLA-Based Cluster Computing

    NASA Astrophysics Data System (ADS)

    Xiong, Kaiqi; Suh, Sang

    Cluster computing is excellent for parallel computation and has become increasingly popular. In cluster computing, a service level agreement (SLA) is a set of quality-of-service (QoS) requirements and a fee agreed between a customer and an application service provider; it plays an important role in e-business applications. An application service provider uses a set of cluster computing resources to support e-business applications subject to an SLA. In this paper, the QoS includes percentile response time and cluster utilization. We present an approach for resource provisioning in such an environment that minimizes the total cost of the cluster computing resources used by an application service provider for an e-business application, which often requires parallel computation for high service performance, availability, and reliability, while satisfying the QoS and fee negotiated between the customer and the provider. Simulation experiments demonstrate the applicability of the approach.

  6. Efficiency of Health Care Production in Low-Resource Settings: A Monte-Carlo Simulation to Compare the Performance of Data Envelopment Analysis, Stochastic Distance Functions, and an Ensemble Model

    PubMed Central

    Giorgio, Laura Di; Flaxman, Abraham D.; Moses, Mark W.; Fullman, Nancy; Hanlon, Michael; Conner, Ruben O.; Wollum, Alexandra; Murray, Christopher J. L.

    2016-01-01

    Low-resource countries can greatly benefit from even small increases in efficiency of health service provision, supporting a strong case to measure and pursue efficiency improvement in low- and middle-income countries (LMICs). However, the knowledge base concerning efficiency measurement remains scarce for these contexts. This study shows that current estimation approaches may not be well suited to measure technical efficiency in LMICs and offers an alternative approach for efficiency measurement in these settings. We developed a simulation environment which reproduces the characteristics of health service production in LMICs, and evaluated the performance of Data Envelopment Analysis (DEA) and Stochastic Distance Function (SDF) for assessing efficiency. We found that an ensemble approach (ENS) combining efficiency estimates from a restricted version of DEA (rDEA) and restricted SDF (rSDF) is the preferable method across a range of scenarios. This is the first study to analyze efficiency measurement in a simulation setting for LMICs. Our findings aim to heighten the validity and reliability of efficiency analyses in LMICs, and thus inform policy dialogues about improving the efficiency of health service production in these settings. PMID:26812685

  7. Serious Gaming for Test & Evaluation of Clean-Slate (Ab Initio) National Airspace System (NAS) Designs

    NASA Technical Reports Server (NTRS)

    Allen, B. Danette; Alexandrov, Natalia

    2016-01-01

    Incremental approaches to air transportation system development inherit current architectural constraints, which, in turn, place hard bounds on system capacity, efficiency of performance, and complexity. To enable airspace operations of the future, a clean-slate (ab initio) airspace design(s) must be considered. This ab initio National Airspace System (NAS) must be capable of accommodating increased traffic density, a broader diversity of aircraft, and on-demand mobility. System and subsystem designs should scale to accommodate the inevitable demand for airspace services that include large numbers of autonomous Unmanned Aerial Vehicles and a paradigm shift in general aviation (e.g., personal air vehicles) in addition to more traditional aerial vehicles such as commercial jetliners and weather balloons. The complex and adaptive nature of ab initio designs for the future NAS requires new approaches to validation, adding a significant physical experimentation component to analytical and simulation tools. In addition to software modeling and simulation, the ability to exercise system solutions in a flight environment will be an essential aspect of validation. The NASA Langley Research Center (LaRC) Autonomy Incubator seeks to develop a flight simulation infrastructure for ab initio modeling and simulation that assumes no specific NAS architecture and models vehicle-to-vehicle behavior to examine interactions and emergent behaviors among hundreds of intelligent aerial agents exhibiting collaborative, cooperative, coordinative, selfish, and malicious behaviors. The air transportation system of the future will be a complex adaptive system (CAS) characterized by complex and sometimes unpredictable (or unpredicted) behaviors that result from temporal and spatial interactions among large numbers of participants. A CAS not only evolves with a changing environment and adapts to it, it is closely coupled to all systems that constitute the environment. Thus, the ecosystem that contains the system and other systems evolves with the CAS as well. The effects of the emerging adaptation and co-evolution are difficult to capture with only combined mathematical and computational experimentation. Therefore, an ab initio flight simulation environment must accommodate individual vehicles, groups of self-organizing vehicles, and large-scale infrastructure behavior. Inspired by Massively Multiplayer Online Role Playing Games (MMORPG) and Serious Gaming, the proposed ab initio simulation environment is similar to online gaming environments in which player participants interact with each other, affect their environment, and expect the simulation to persist and change regardless of any individual player's active participation.

  8. GATE Monte Carlo simulation in a cloud computing environment

    NASA Astrophysics Data System (ADS)

    Rowedder, Blake Austin

    The GEANT4-based GATE is a unique and powerful Monte Carlo (MC) platform, which provides a single code library allowing the simulation of specific medical physics applications, e.g. PET, SPECT, CT, radiotherapy, and hadron therapy. However, this rigorous yet flexible platform is used only sparingly in the clinic due to its lengthy calculation time. By accessing the powerful computational resources of a cloud computing environment, GATE's runtime can be reduced to clinically feasible levels without the sizable investment of a local high-performance cluster. This study investigated reliable and efficient execution of GATE MC simulations using a commercial cloud computing service. Amazon's Elastic Compute Cloud was used to launch several nodes equipped with GATE. Job data were initially broken up on the local computer, then uploaded to the worker nodes on the cloud. The results were automatically downloaded and aggregated on the local computer for display and analysis. Five simulations were repeated for every cluster size between 1 and 20 nodes. Ultimately, increasing cluster size resulted in a decrease in calculation time that could be expressed with an inverse power model. Comparing the benchmark results to the published values and error margins indicated that the simulation results were not affected by cluster size, and thus that the integrity of a calculation is preserved in a cloud computing environment. The runtime of a 53-minute simulation was decreased to 3.11 minutes when run on a 20-node cluster. This speedup suggests that fast MC simulations are viable for imaging and radiotherapy applications. With high-performance computing continuing to fall in price and rise in accessibility, implementing Monte Carlo techniques with cloud computing for clinical applications will continue to become more attractive.
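
    As a rough illustration of the inverse power model mentioned above, the following Python sketch fits t(n) = a * n^(-b) to runtime measurements via log-log least squares. The timing points here are illustrative assumptions, not the study's data.

    ```python
    # Minimal sketch: fit the inverse power model t(n) = a * n**(-b)
    # by ordinary least squares on log-transformed data.
    import numpy as np

    nodes = np.array([1, 2, 4, 8, 16, 20])
    runtime_min = np.array([53.0, 27.5, 14.2, 7.6, 4.0, 3.3])  # assumed sample points

    # log t = log a - b log n  ->  linear fit on the log-log data
    slope, log_a = np.polyfit(np.log(nodes), np.log(runtime_min), 1)
    a, b = np.exp(log_a), -slope
    print(f"t(n) = {a:.1f} * n^(-{b:.2f}) minutes")
    print("predicted runtime at 20 nodes:", a * 20 ** (-b))
    ```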

  9. Dynamic VMs placement for energy efficiency by PSO in cloud computing

    NASA Astrophysics Data System (ADS)

    Dashti, Seyed Ebrahim; Rahmani, Amir Masoud

    2016-03-01

    Recently, cloud computing has been growing fast and helping to realise other high technologies. In this paper, we propose a hierarchical architecture to satisfy both providers' and consumers' requirements in these technologies. We design a new service in the PaaS layer for scheduling consumer tasks. From the providers' perspective, incompatibility between the specifications of physical machines and user requests in the cloud leads to problems such as the energy-performance trade-off and large power consumption, so that profits are decreased. To guarantee the quality of service of users' tasks and reduce energy consumption, we propose a modified Particle Swarm Optimisation to reallocate migrated virtual machines from overloaded hosts. We also dynamically consolidate under-loaded hosts, which provides power saving. Simulation results in CloudSim demonstrated that, when the simulation conditions approximate a real environment, our method is able to save as much as 14% more energy while the number of migrations and the simulation time are significantly reduced compared with previous works.
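
    The following Python sketch illustrates the general idea of placing migrated virtual machines with particle swarm optimisation; the host capacities, VM loads, power model, and PSO parameters are illustrative assumptions, not the modified algorithm from the paper.

    ```python
    # Minimal sketch: VM-to-host placement via particle swarm optimisation.
    # Capacities, loads, and the power model are illustrative assumptions.
    import random

    HOST_CAP = [1.0, 1.0, 1.0]          # normalised CPU capacity per host
    VM_LOAD = [0.4, 0.3, 0.2, 0.25]     # load of each VM awaiting reallocation

    def cost(assign):
        # Idle + proportional power per active host, plus a heavy overload penalty.
        util = [0.0] * len(HOST_CAP)
        for vm, h in enumerate(assign):
            util[h] += VM_LOAD[vm]
        power = sum(0.3 + 0.7 * u for u in util if u > 0)
        penalty = sum(max(0.0, u - c) for u, c in zip(util, HOST_CAP))
        return power + 10.0 * penalty

    def pso(particles=20, iters=200, w=0.7, c1=1.4, c2=1.4):
        dim, hosts = len(VM_LOAD), len(HOST_CAP)
        pos = [[random.uniform(0, hosts - 1) for _ in range(dim)] for _ in range(particles)]
        vel = [[0.0] * dim for _ in range(particles)]
        pbest = [p[:] for p in pos]
        gbest = min(pbest, key=lambda p: cost([round(x) for x in p]))[:]
        for _ in range(iters):
            for i in range(particles):
                for d in range(dim):
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * random.random() * (pbest[i][d] - pos[i][d])
                                 + c2 * random.random() * (gbest[d] - pos[i][d]))
                    pos[i][d] = min(max(pos[i][d] + vel[i][d], 0), hosts - 1)
                # Positions are continuous; round to host indices to evaluate.
                if cost([round(x) for x in pos[i]]) < cost([round(x) for x in pbest[i]]):
                    pbest[i] = pos[i][:]
                    if cost([round(x) for x in pbest[i]]) < cost([round(x) for x in gbest]):
                        gbest = pbest[i][:]
        return [round(x) for x in gbest], cost([round(x) for x in gbest])

    print(pso())
    ```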

  10. Performance Evaluation of Resource Management in Cloud Computing Environments.

    PubMed

    Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci

    2015-01-01

    Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.

  11. Performance Evaluation of Resource Management in Cloud Computing Environments

    PubMed Central

    Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci

    2015-01-01

    Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price. PMID:26555730

  12. PS-CARA: Context-Aware Resource Allocation Scheme for Mobile Public Safety Networks.

    PubMed

    Kaleem, Zeeshan; Khaliq, Muhammad Zubair; Khan, Ajmal; Ahmad, Ishtiaq; Duong, Trung Q

    2018-05-08

    The fifth-generation (5G) communications systems are expected to support users with diverse quality-of-service (QoS) requirements. Besides these requirements, a task of utmost importance is to support emergency communication services during natural or man-made disasters. Most conventional base stations are not properly functional during a disaster situation, so deployment of emergency base stations such as the mobile personal cell (mPC) is crucial. An mPC with moving capability can travel through the disaster area to provide emergency communication services. However, mPC deployment causes severe co-channel interference to the users in its vicinity. The problem with existing resource allocation schemes is that they assume a static environment, which does not fit the mPC well. So, a resource allocation scheme for mPC users is desired that can dynamically allocate resources based on users' location and connection establishment priority. In this paper, we propose a public safety users priority-based context-aware resource allocation (PS-CARA) scheme for users' sum-rate maximization in a disaster environment. Simulation results demonstrate that the proposed PS-CARA scheme can increase the average and edge user rates by around 10.3% and 32.8%, respectively, because of context information availability and by prioritizing the public safety users. The simulation results confirm that call blocking probability is also reduced considerably under the PS-CARA scheme.
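
    A minimal sketch of the priority-first idea described above: public safety users are served before other users, and remaining resource blocks go to the users with the best channel quality. All names and figures are illustrative assumptions, not the PS-CARA algorithm itself.

    ```python
    # Minimal sketch: priority-aware resource-block allocation.
    # User list and channel-quality figures are illustrative assumptions.
    RESOURCE_BLOCKS = 4

    users = [  # (name, is_public_safety, channel_quality)
        ("psu-1", True, 0.6),
        ("psu-2", True, 0.4),
        ("civ-1", False, 0.9),
        ("civ-2", False, 0.7),
        ("civ-3", False, 0.5),
    ]

    # Sort by priority class first, then by channel quality within each class.
    ranked = sorted(users, key=lambda u: (not u[1], -u[2]))
    served, blocked = ranked[:RESOURCE_BLOCKS], ranked[RESOURCE_BLOCKS:]
    print("served:", [u[0] for u in served])
    print("blocked:", [u[0] for u in blocked])  # blocking falls for prioritised users
    ```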

  13. PS-CARA: Context-Aware Resource Allocation Scheme for Mobile Public Safety Networks

    PubMed Central

    Khaliq, Muhammad Zubair; Khan, Ajmal; Ahmad, Ishtiaq

    2018-01-01

    The fifth-generation (5G) communications systems are expected to support users with diverse quality-of-service (QoS) requirements. Besides these requirements, a task of utmost importance is to support emergency communication services during natural or man-made disasters. Most conventional base stations are not properly functional during a disaster situation, so deployment of emergency base stations such as the mobile personal cell (mPC) is crucial. An mPC with moving capability can travel through the disaster area to provide emergency communication services. However, mPC deployment causes severe co-channel interference to the users in its vicinity. The problem with existing resource allocation schemes is that they assume a static environment, which does not fit the mPC well. So, a resource allocation scheme for mPC users is desired that can dynamically allocate resources based on users' location and connection establishment priority. In this paper, we propose a public safety users priority-based context-aware resource allocation (PS-CARA) scheme for users' sum-rate maximization in a disaster environment. Simulation results demonstrate that the proposed PS-CARA scheme can increase the average and edge user rates by around 10.3% and 32.8%, respectively, because of context information availability and by prioritizing the public safety users. The simulation results confirm that call blocking probability is also reduced considerably under the PS-CARA scheme. PMID:29738499

  14. Marshall Space Flight Center Telescience Resource Kit

    NASA Technical Reports Server (NTRS)

    Wade, Gina

    2016-01-01

    Telescience Resource Kit (TReK) is a suite of software applications that can be used to monitor and control assets in space or on the ground. The Telescience Resource Kit was originally developed for the International Space Station program. Since then it has been used to support a variety of NASA programs and projects including the WB-57 Ascent Vehicle Experiment (WAVE) project, the Fast Affordable Science and Technology Satellite (FASTSAT) project, and the Constellation Program. The Payload Operations Center (POC), also known as the Payload Operations Integration Center (POIC), provides the capability for payload users to operate their payloads at their home sites. In this environment, TReK provides local ground support system services and an interface to utilize remote services provided by the POC. TReK provides ground system services for local and remote payload user sites, including International Partner sites, Telescience Support Centers, and U.S. Investigator sites in over 40 locations worldwide. General capabilities:

    - Interfaces: support for various data interfaces such as User Datagram Protocol, Transmission Control Protocol, and serial interfaces.
    - Data services: retrieve, process, record, playback, forward, and display data (ground-based data or telemetry data).
    - Command: create, modify, send, and track commands.
    - Command management: configure one TReK system to serve as a command server/filter for other TReK systems.
    - Database: databases are used to store telemetry and command definition information.
    - Application Programming Interface (API): an ANSI C interface compatible with commercial products such as Visual C++, Visual Basic, LabVIEW, Borland C++, etc. The TReK API provides a bridge for users to develop software to access and extend TReK services.
    - Environments: development, test, simulations, training, and flight; includes standalone training simulators.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rudkevich, Aleksandr; Goldis, Evgeniy

    This research conducted by the Newton Energy Group, LLC (NEG) is dedicated to the development of pCloud: a cloud-based power market simulation environment. pCloud offers power industry stakeholders the capability to model electricity markets and is organized around the Software as a Service (SaaS) concept -- a software application delivery model in which software is centrally hosted and provided to many users via the internet. During Phase I of this project, NEG developed a prototype design for pCloud as a SaaS-based commercial service offering and a system architecture supporting that design, ensured the feasibility of the architecture's key elements, formed technological partnerships and negotiated commercial agreements with partners, conducted market research and other related activities, and secured funding for continued development of pCloud between the end of Phase I and the beginning of Phase II, if awarded. Based on the results of Phase I activities, NEG has established that the development of a cloud-based power market simulation environment within the Windows Azure platform is technologically feasible and can be accomplished within the budget and timeframe available through the Phase II SBIR award with additional external funding. NEG believes that pCloud has the potential to become a game-changing technology for the modeling and analysis of electricity markets. This potential is due to the following critical advantages of pCloud over its competition:

    - Standardized access to advanced and proven power market simulators offered by third parties.
    - Automated parallelization of simulations and dynamic provisioning of computing resources on the cloud. This combination of automation and scalability dramatically reduces turn-around time while offering the capability to increase the number of analyzed scenarios by a factor of 10, 100, or even 1000.
    - Access to ready-to-use data and to cloud-based resources, leading to a reduction in software, hardware, and IT costs.
    - A competitive pricing structure, which will make high-volume usage of simulation services affordable.
    - Availability and affordability of high-quality power simulators, which presently only large corporate clients can afford, will level the playing field in developing regional energy policies, determining prudent cost recovery mechanisms, and assuring just and reasonable rates to consumers.
    - Users that presently do not have the resources to maintain modeling capabilities internally will now be able to run simulations. This will invite more players into the industry, ultimately leading to more transparent and liquid power markets.

  16. Improving interprofessional competence in undergraduate students using a novel blended learning approach.

    PubMed

    Riesen, Eleanor; Morley, Michelle; Clendinneng, Debra; Ogilvie, Susan; Ann Murray, Mary

    2012-07-01

    Interprofessional simulation interventions, especially when face-to-face, involve considerable resources and require that all participants convene in a single location at a specific time. Scheduling multiple people across different programs is an important barrier to implementing interprofessional education interventions. This study explored a novel way to overcome the challenges associated with scheduling interprofessional learning experiences through the use of simulations in a virtual environment (Web.Alive™) where learners interact as avatars. In this study, 60 recent graduates from nursing, paramedic, police, and child and youth service programs participated in a 2-day workshop designed to improve interprofessional competencies through a blend of learning environments that included virtual face-to-face experiences, traditional face-to-face experiences and online experiences. Changes in learners' interprofessional competence were assessed through three outcomes: change in interprofessional attitudes pre- to post-workshop, self-perceived changes in interprofessional competence and observer ratings of performance across three clinical simulations. Results from the study indicate that from baseline to post-intervention, there was significant improvement in learners' interprofessional competence across all outcomes, and that the blended learning environment provided an acceptable way to develop these competencies.

  17. Regional Evaluation of the Severity-Based Stroke Triage Algorithm for Emergency Medical Services Using Discrete Event Simulation.

    PubMed

    Bogle, Brittany M; Asimos, Andrew W; Rosamond, Wayne D

    2017-10-01

    The Severity-Based Stroke Triage Algorithm for Emergency Medical Services endorses routing patients with suspected large vessel occlusion acute ischemic strokes directly to endovascular stroke centers (ESCs). We sought to evaluate different specifications of this algorithm within a region. We developed a discrete event simulation environment to model patients with suspected stroke transported according to algorithm specifications, which varied by stroke severity screen and permissible additional transport time for routing patients to ESCs. We simulated King County, Washington, and Mecklenburg County, North Carolina, distributing patients geographically into census tracts. Transport time to the nearest hospital and ESC was estimated using traffic-based travel times. We assessed undertriage, overtriage, transport time, and the number-needed-to-route, defined as the number of patients enduring additional transport to route one large vessel occlusion patient to an ESC. Undertriage was higher and overtriage was lower in King County compared with Mecklenburg County for each specification. Overtriage variation was primarily driven by screen (eg, 13%-55% in Mecklenburg County and 10%-40% in King County). Transportation time specifications beyond 20 minutes increased overtriage and decreased undertriage in King County but not Mecklenburg County. A low- versus high-specificity screen routed 3.7× more patients to ESCs. Emergency medical services spent nearly twice the time routing patients to ESCs in King County compared with Mecklenburg County. Our results demonstrate how discrete event simulation can facilitate informed decision making to optimize emergency medical services stroke severity-based triage algorithms. This is the first step toward developing a mature simulation to predict patient outcomes. © 2017 American Heart Association, Inc.
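
    As a rough illustration of this kind of study, the following Python sketch runs a minimal discrete event simulation of severity-based routing and reports overtriage and undertriage rates. The arrival rate, screen sensitivity/specificity, LVO prevalence, and travel times are illustrative assumptions, not the study's inputs.

    ```python
    # Minimal sketch: discrete event simulation of severity-based stroke routing
    # using a standard-library event queue. All rates are illustrative assumptions.
    import heapq
    import random

    MAX_EXTRA_MIN = 20            # permissible additional transport time to an ESC

    def simulate(n_patients=10000, seed=1):
        rng = random.Random(seed)
        events, t = [], 0.0
        over = under = 0
        for _ in range(n_patients):
            t += rng.expovariate(1 / 30)                     # next suspected-stroke call
            heapq.heappush(events, (t, rng.random() < 0.2))  # (time, has LVO?)
        while events:
            _, has_lvo = heapq.heappop(events)
            screen_pos = rng.random() < (0.8 if has_lvo else 0.25)  # imperfect screen
            extra = rng.uniform(0, 45)        # extra minutes to reach the nearest ESC
            to_esc = screen_pos and extra <= MAX_EXTRA_MIN
            if to_esc and not has_lvo:
                over += 1                     # non-LVO patient endured the detour
            if not to_esc and has_lvo:
                under += 1                    # LVO patient went to the nearest hospital
        return over / n_patients, under / n_patients

    print("overtriage, undertriage:", simulate())
    ```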

  18. A Mobility-Aware QoS Signaling Protocol for Ambient Networks

    NASA Astrophysics Data System (ADS)

    Jeong, Seong-Ho; Lee, Sung-Hyuck; Bang, Jongho

    Mobility-aware quality of service (QoS) signaling is crucial to provide seamless multimedia services in the ambient environment where mobile nodes may move frequently between different wireless access networks. The mobility of an IP-based node in ambient networks affects routing paths, and as a result, can have a significant impact on the operation and state management of QoS signaling protocols. In this paper, we first analyze the impact of mobility on QoS signaling protocols and how the protocols operate in mobility scenarios. We then propose an efficient mobility-aware QoS signaling protocol which can operate adaptively in ambient networks. The key features of the protocol include the fast discovery of a crossover node where the old and new paths converge or diverge due to handover and the localized state management for seamless services. Our analytical and simulation/experimental results show that the proposed/implemented protocol works better than existing protocols in the IP-based mobile environment.

  19. CCSDS Advanced Orbiting Systems Virtual Channel Access Service for QoS MACHETE Model

    NASA Technical Reports Server (NTRS)

    Jennings, Esther H.; Segui, John S.

    2011-01-01

    To support various communications requirements imposed by different missions, interplanetary communication protocols need to be designed, validated, and evaluated carefully. Multimission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE), described in "Simulator of Space Communication Networks" (NPO-41373), NASA Tech Briefs, Vol. 29, No. 8 (August 2005), p. 44, combines various tools for simulation and performance analysis of space networks. The MACHETE environment supports orbital analysis, link budget analysis, communications network simulations, and hardware-in-the-loop testing. By building abstract behavioral models of network protocols, one can validate performance after identifying the appropriate metrics of interest. The innovators have extended the MACHETE model library to include a generic link-layer Virtual Channel (VC) model supporting quality-of-service (QoS) controls based on IP streams. The main purpose of this generic Virtual Channel model addition was to interface fine-grain, flow-based QoS between the network and MAC layers of the QualNet simulator, a commercial component of MACHETE. This software model adds the capability of mapping IP streams, based on header fields, to virtual channel numbers, allowing extended QoS handling at the link layer. This feature further refines the QoS existing at the network layer. QoS at the network layer (e.g., diffserv) supports few QoS classes, so data from one class are aggregated together; differentiating between flows internal to a class/priority is not supported. By adding QoS classification capability between the network and MAC layers through VCs, one maps multiple VCs onto the same physical link. Users then specify different VC weights and different queuing and scheduling policies at the link layer. This VC model supports system performance analysis of various virtual-channel link-layer QoS queuing schemes independent of the network-layer QoS systems.
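
    A minimal sketch of the mechanism described above, assuming hypothetical header-field rules and weights: IP flows are classified onto virtual channel numbers, and the channels are drained with weighted round-robin scheduling.

    ```python
    # Minimal sketch: map IP flows to link-layer virtual channels, then serve
    # the channels by weighted round-robin. Rules and weights are assumptions.
    from collections import deque

    VC_RULES = {("10.0.0.1", 5000): 0, ("10.0.0.2", 5001): 1}  # (src, dport) -> VC
    VC_WEIGHT = {0: 3, 1: 1}                                   # VC0 gets 3x the service

    queues = {vc: deque() for vc in VC_WEIGHT}

    def enqueue(packet):
        # Classify on header fields; unmatched flows fall back to a default VC.
        vc = VC_RULES.get((packet["src"], packet["dport"]), 1)
        queues[vc].append(packet)

    def drain_one_round():
        # Weighted round-robin: each VC may send up to its weight per round.
        sent = []
        for vc, weight in VC_WEIGHT.items():
            for _ in range(weight):
                if queues[vc]:
                    sent.append((vc, queues[vc].popleft()))
        return sent

    for i in range(4):
        enqueue({"src": "10.0.0.1", "dport": 5000, "seq": i})
        enqueue({"src": "10.0.0.2", "dport": 5001, "seq": i})
    print(drain_one_round())   # VC0 sends 3 packets for every 1 on VC1
    ```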

  20. Concepts and theoretical specifications of a Coastal Vulnerability Dynamic Simulator (COVUDS): A multi-agent system for simulating coastal vulnerability towards management of coastal ecosystem services

    NASA Astrophysics Data System (ADS)

    Orencio, P. M.; Endo, A.; Taniguchi, M.

    2014-12-01

    Disaster-causing natural hazards such as floods, erosion, earthquakes, and slope failures are observed to be concentrated in certain geographical regions. In the Asia-Pacific region, coastal ecosystems suffer from perennial threats driven by chronic fluctuations in climate variability (e.g., typhoons, ENSO) or by dynamically occurring events (e.g., earthquakes, tsunamis). Among the people most prone to such risky conditions are those inhabiting areas near the coast. Characteristically, aside from being located at the forefront of these events, coastal communities also impact the resource through the behavioral patterns they exhibit, such as overdependence and overexploitation in pursuit of their wellbeing. In this paper, we introduce the development of an approach to assessing the coupled human-environment system using a multi-agent simulation (MAS) model known as the Coastal Vulnerability Dynamic Simulator (COVUDS). COVUDS comprises a human-environmental platform consisting of multiple agents with corresponding spatially based dynamic and static variables. These variables are used to present multiple hypothetical future situations that support a more rational management of the coastal ecosystem and its environmental equities. Initially, we present the theoretical and conceptual components that lead to the development of COVUDS. These consist of the human population engaged in behavioral patterns affecting the conditions of coastal ecosystem services; the system of the biophysical environment and changes in patches brought about by global environmental and local behavioral variations; the policy factors that are important for choosing area-specific interventions; and the decision-making mechanism that integrates the first three components. To guide a future scenario-based application that will be undertaken in a coastal area in the Philippines, the components of the model will be presented within a platform following a parameterized architecture.

  1. Ambient Assisted Living spaces validation by services and devices simulation.

    PubMed

    Fernández-Llatas, Carlos; Mocholí, Juan Bautista; Sala, Pilar; Naranjo, Juan Carlos; Pileggi, Salvatore F; Guillén, Sergio; Traver, Vicente

    2011-01-01

    The design of Ambient Assisted Living (AAL) products is a very demanding challenge. AAL product creation is a complex iterative process which must satisfy exhaustive prerequisites for accessibility and usability. In this process the early detection of errors is crucial to creating cost-effective systems. Computer-assisted tools can be a vital help to usability designers in avoiding design errors. Specifically, computer simulation of products in AAL environments can be used in all design phases to support validation. In this paper, a computer simulation tool for supporting usability designers in the creation of innovative AAL products is presented. This application will benefit their work by saving time and improving the final system's functionality.

  2. Distributed Decision Making in a Dynamic Network Environment

    DTIC Science & Technology

    1990-01-01

    protocols, particularly when traffic arrival statistics are varying or unknown, and loads are high. Both nonpreemptive and preemptive repeat disciplines are... The simulation model allows general value functions, continuous time operation, and preemptive or nonpreemptive service. For reasons of tractability... nonpreemptive LIFO, (4) nonpreemptive LIFO with discarding, (5) nonpreemptive HOL, (6) nonpreemptive HOL with discarding, (7) preemptive repeat HOL, (8...

  3. Measuring Knowledge, Acceptance, and Perceptions of Telehealth in an Interprofessional Curriculum for Student Nurse Practitioners, Occupational Therapists, and Physical Therapists

    ERIC Educational Resources Information Center

    Randall, Ken; Steinheider, Brigitte; Isaacson, Mary; Shortridge, Ann; Bird, Stephanie; Crio, Carrie; Ross, Heather; Loving, Gary

    2016-01-01

    Introduction: The use of telehealth in service delivery is both challenging and beneficial. This paper describes the results of a three semester-long interprofessional education program in team-based care using telehealth technology. The study assessed telehealth knowledge acquisition, practice in a structured environment with a simulated patient,…

  4. Evaluation of COSTAR mass handling characteristics in an environment. A simulation of the Hubble Space Telescope service mission

    NASA Technical Reports Server (NTRS)

    Rajulu, Sudhakar L.; Klute, Glenn K.; Fletcher, Lauren

    1994-01-01

    The STS-61 Shuttle mission, which took place in December 1993, was solely aimed at servicing the Hubble Space Telescope (HST). Successful completion of this mission was critical to NASA since it was necessary to rectify a flaw in the HST mirror. In addition, NASA had never scheduled a mission with such a high quantity of complex extravehicular activity. To meet the challenge of this mission, the STS-61 crew trained extensively in the Weightless Environment Test Facility at the Johnson Space Center and in the Neutral Buoyancy Simulator at the Marshall Space Flight Center. However, it was suspected that neutral buoyancy training might induce negative training by virtue of the viscous damping effect present in water. The mockups built for this training also did not have the mass properties of the actual orbital replacement units (ORUs). It was felt that the crew should be further trained on mockups with similar mass characteristics. A comprehensive study was designed to address these issues. The study was quantitative, and instrumentation was set up to measure and quantify the forces and moments experienced during ORU mass handling and remote manipulator system run conditions.

  5. Intrusion-Tolerant Location Information Services in Intelligent Vehicular Networks

    NASA Astrophysics Data System (ADS)

    Yan, Gongjun; Yang, Weiming; Shaner, Earl F.; Rawat, Danda B.

    Intelligent Vehicular Networks, known as Vehicle-to-Vehicle and Vehicle-to-Roadside wireless communications (also called Vehicular Ad hoc Networks), are revolutionizing our daily driving with better safety and more infotainment. Most, if not all, applications will depend on accurate location information. Thus, it is important to provide intrusion-tolerant location information services. In this paper, we describe an adaptive algorithm that detects and filters false location information injected by intruders. Given a noisy environment of mobile vehicles, the algorithm estimates the high-resolution location of a vehicle by refining low-resolution location input. We also investigate simulation results and evaluate the quality of the intrusion-tolerant location service.

  6. A study of an adaptive replication framework for orchestrated composite web services.

    PubMed

    Mohamed, Marwa F; Elyamany, Hany F; Nassar, Hamed M

    2013-01-01

    Replication is considered one of the most important techniques for improving the Quality of Service (QoS) of published Web services. It has achieved impressive success in managing resource sharing and usage in order to moderate the energy consumed in IT environments. For a robust and successful replication process, attention should be paid to the timing of replication as well as to the constraints and capabilities under which the process runs. The replication process is time-consuming, since outsourcing new replicas onto other hosts is lengthy. Furthermore, most business processes implemented over the Web today are composed of multiple Web services working together in two main styles: orchestration and choreography. Accomplishing replication over such business processes is a further challenge due to the complexity and flexibility involved. In this paper, we present an adaptive replication framework for regular and orchestrated composite Web services. The suggested framework includes a number of components for detecting unexpected events that might occur when consuming the original published Web services, including failure or overloading. It also includes a dedicated replication controller to manage the replication process and select the best host to encapsulate a new replica. In addition, it includes a component for predicting the incoming load in order to decrease the time needed for outsourcing new replicas, enhancing performance greatly. A simulation environment has been created to measure the performance of the suggested framework. The results indicate that adaptive replication with the prediction scenario is the best option for enhancing the performance of the replication process in an online business environment.
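
    A minimal sketch of the predict-then-replicate idea described above: a moving-average forecast of incoming load triggers creation of a new replica before the existing ones saturate. The window size, per-replica capacity, and choice of predictor are illustrative assumptions, not the framework's components.

    ```python
    # Minimal sketch: load prediction driving proactive replica creation.
    # Thresholds and the moving-average predictor are illustrative assumptions.
    from collections import deque

    WINDOW, CAPACITY_PER_REPLICA = 5, 100.0

    class ReplicationController:
        def __init__(self):
            self.history = deque(maxlen=WINDOW)
            self.replicas = 1

        def observe(self, requests_per_s):
            self.history.append(requests_per_s)
            predicted = sum(self.history) / len(self.history)  # moving-average forecast
            needed = int(predicted // CAPACITY_PER_REPLICA) + 1
            if needed > self.replicas:
                print(f"predicted {predicted:.0f} req/s -> outsourcing replica #{needed}")
                self.replicas = needed   # in practice: select the best host, then deploy

    ctrl = ReplicationController()
    for load in [80, 120, 150, 210, 260]:
        ctrl.observe(load)
    ```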

  7. The Computing and Data Grid Approach: Infrastructure for Distributed Science Applications

    NASA Technical Reports Server (NTRS)

    Johnston, William E.

    2002-01-01

    With the advent of Grids - infrastructure for using and managing widely distributed computing and data resources in the science environment - there is now an opportunity to provide a standard, large-scale, computing, data, instrument, and collaboration environment for science that spans many different projects and provides the required infrastructure and services in a relatively uniform and supportable way. Grid technology has evolved over the past several years to provide the services and infrastructure needed for building 'virtual' systems and organizations. We argue that Grid technology provides an excellent basis for the creation of the integrated environments that can combine the resources needed to support the large-scale science projects located at multiple laboratories and universities. We present some science case studies that indicate that a paradigm shift in the process of science will come about as a result of Grids providing transparent and secure access to advanced and integrated information and technologies infrastructure: powerful computing systems, large-scale data archives, scientific instruments, and collaboration tools. These changes will be in the form of services that can be integrated with the user's work environment, and that enable uniform and highly capable access to these computers, data, and instruments, regardless of the location or exact nature of these resources. These services will integrate transient-use resources like computing systems, scientific instruments, and data caches (e.g., as they are needed to perform a simulation or analyze data from a single experiment); persistent-use resources, such as databases, data catalogues, and archives; and collaborators, whose involvement will continue for the lifetime of a project or longer. While we largely address large-scale science in this paper, Grids, particularly when combined with Web Services, will address a broad spectrum of science scenarios, both large and small scale.

  8. Community Currency Trading Method through Partial Transaction Intermediary Process

    NASA Astrophysics Data System (ADS)

    Kido, Kunihiko; Hasegawa, Seiichi; Komoda, Norihisa

    A community currency is local money issued by local governments or non-profit organizations (NPOs) to support social services. The purpose of introducing community currencies is to regenerate communities by fostering mutual aid among community members. In this paper, we propose a community currency trading method with a partial intermediary process, for operational environments where coordinators are not available at all times. In this method, coordinators perform coordination between service users and service providers during the first several months of transactions. After the coordination period, participants spontaneously make transactions based on their trust area and a trust evaluation method derived from the number of provided services and complaint information. This method is especially effective for communities with close social networks and low trustworthiness. The proposed method is evaluated through multi-agent simulation.

  9. Distributed hydrological models to quantify ecosystem services and inform land use decisions in Europe

    NASA Astrophysics Data System (ADS)

    Wilebore, Beccy; Willis, Kathy

    2016-04-01

    Landcover conversion is one of the largest anthropogenic threats to ecological services globally; in the EU around 1500 ha of biodiverse land are lost every day to changes in infrastructure and urbanisation. This land conversion directly affects key ecosystem services that support natural infrastructure, including water flow regulation and the mitigation of flood risks. We assess the sensitivity of runoff production to landcover in the UK at a high spatial resolution, using a distributed hydrologic model in the regional land-surface model JULES (Joint UK Land Environment Simulator). This work, as part of the wider initiative 'NaturEtrade', will create a novel suite of easy-to-use tools and mechanisms to allow EU landowners to quickly map and assess the value of their land in providing key ecosystem services.

  10. MaGate Simulator: A Simulation Environment for a Decentralized Grid Scheduler

    NASA Astrophysics Data System (ADS)

    Huang, Ye; Brocco, Amos; Courant, Michele; Hirsbrunner, Beat; Kuonen, Pierre

    This paper presents a simulator for a decentralized modular grid scheduler named MaGate. MaGate's design emphasizes scheduler interoperability by providing intelligent scheduling that serves the grid community as a whole. Each MaGate scheduler instance is able to deal with dynamic scheduling conditions, with continuously arriving grid jobs. Received jobs are either allocated on local resources or delegated to other MaGates for remote execution. The proposed MaGate simulator is based on the GridSim toolkit and the Alea simulator, and abstracts the features and behaviors of complex fundamental grid elements, such as grid jobs, grid resources, and grid users. Simulation of scheduling tasks is supported by a grid network overlay simulator executing distributed ant-based swarm intelligence algorithms to provide services such as group communication and resource discovery. For evaluation, a comparison of the behaviors of different collaborative policies among a community of MaGates is provided. Results support the use of the proposed approach as a functional, ready-to-use grid scheduler simulator.
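
    A minimal sketch of the local-or-delegate decision such a scheduler makes for each arriving job, assuming a hypothetical queue-capacity policy rather than MaGate's actual one:

    ```python
    # Minimal sketch: run a job locally if capacity allows, otherwise delegate
    # to the least-loaded peer. Capacity and peer selection are assumptions.
    import random

    class Scheduler:
        def __init__(self, name, capacity):
            self.name, self.capacity, self.queue = name, capacity, []

        def submit(self, job, peers):
            if len(self.queue) < self.capacity:
                self.queue.append(job)                 # allocate on local resources
                return f"{job} ran locally on {self.name}"
            # Local resources saturated: delegate to the least-loaded known peer.
            peer = min(peers, key=lambda p: len(p.queue))
            peer.queue.append(job)
            return f"{job} delegated {self.name} -> {peer.name}"

    nodes = [Scheduler(f"node-{i}", capacity=2) for i in range(3)]
    for j in range(8):
        node = random.choice(nodes)
        others = [n for n in nodes if n is not node]
        print(node.submit(f"job-{j}", others))
    ```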

  11. QoS measurement of workflow-based web service compositions using Colored Petri net.

    PubMed

    Nematzadeh, Hossein; Motameni, Homayun; Mohamad, Radziah; Nematzadeh, Zahra

    2014-01-01

    Workflow-based web service composition (WB-WSC) is one of the main composition categories in service-oriented architecture (SOA). Eflow, the polymorphic process model (PPM), and the business process execution language (BPEL) are the main techniques in the category of WB-WSCs. With the maturity of web services, measuring the quality of composite web services developed with different techniques has become one of the most important challenges in today's web environments. Businesses should try to provide good quality, with respect to customers' requirements, in a composed web service. Thus, quality of service (QoS), which refers to nonfunctional parameters, is important to measure, since it indicates the quality degree achieved by a given web service composition. This paper sought a deterministic analytical method for dependability and performance measurement using Colored Petri nets (CPNs) with explicit routing constructs and the application of probability theory. A computer tool called WSET was also developed for modeling and supporting QoS measurement through simulation.

  12. Analysis to Inform Defense Planning Despite Austerity

    DTIC Science & Technology

    2014-01-01

    ...objectives for each regional and functional area, as well as such cross-cutting challenges as simultaneous conflicts. It might also have different...

  13. Developing Simulated Cyber Attack Scenarios Against Virtualized Adversary Networks

    DTIC Science & Technology

    2017-03-01

    MAST is a custom software framework originally designed to facilitate the training of network administrators on live networks using SimWare. The MAST... scenario development and testing in a virtual test environment. Commercial and custom software tools that provide the ability to conduct network...

  14. Making Good Instructors Great: USMC Cognitive Readiness and Instructor Professionalization Initiatives

    DTIC Science & Technology

    2012-01-01

    enhance their classes; these approaches are recommended in addition to (not in lieu of) other well-known military scenario-based training methods... and ambiguous environments. Each of the US Armed Services is addressing cognitive readiness training differently. The Marine Corps, for instance...

  15. Training in surgical oncology - the role of VR simulation.

    PubMed

    Lewis, T M; Aggarwal, R; Rajaretnam, N; Grantcharov, T P; Darzi, A

    2011-09-01

    There have been dramatic changes in surgical training over the past two decades which have resulted in a number of concerns for the development of future surgeons. Changes in the structure of cancer services, working hour restrictions, and a commitment to patient safety have led to a reduction in the training opportunities available to the surgeon in training. Simulation, and in particular virtual reality (VR) simulation, has been heralded as an effective adjunct to surgical training. Advances in VR simulation have allowed trainees to practice realistic full-length procedures in a safe and controlled environment, where mistakes are permitted and can be used as learning points. There is considerable evidence to demonstrate that VR simulation can be used to enhance technical skills and improve operating room performance. Future work should focus on the cost effectiveness and predictive validity of VR simulation, which in turn would increase the uptake of simulation and enhance surgical training. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. Design and simulation of EVA tools for first servicing mission of HST

    NASA Technical Reports Server (NTRS)

    Naik, Dipak; Dehoff, P. H.

    1993-01-01

    The Hubble Space Telescope (HST) was launched into near-earth orbit by the space shuttle Discovery on April 24, 1990. The payload of two cameras, two spectrographs, and a high-speed photometer is supplemented by three fine-guidance sensors that can be used for astronomy as well as for star tracking. A widely reported spherical aberration in the primary mirror causes HST to produce images of much lower quality than intended. A space shuttle repair mission in late 1993 will install small corrective mirrors that will restore the full intended optical capability of the HST. The first servicing mission (FSM) will involve considerable extravehicular activity (EVA). It is proposed to design special EVA tools for the FSM. This report includes details of the data acquisition system being developed to test the performance of the various EVA tools in ambient as well as simulated space environment.

  17. Student Workshops for Severe Weather Warning Decision Making using AWIPS-2 at the University of Oklahoma

    NASA Astrophysics Data System (ADS)

    Zwink, A. B.; Morris, D.; Ware, P. J.; Ernst, S.; Holcomb, B.; Riley, S.; Hardy, J.; Mullens, S.; Bowlan, M.; Payne, C.; Bates, A.; Williams, B.

    2016-12-01

    For several years, employees at the Cooperative Institute for Mesoscale Meteorological Studies (CIMMS) at the University of Oklahoma (OU) who are affiliated with the Warning Decision Training Division (WDTD) of the National Weather Service (NWS) have provided training simulations to students from OU's School of Meteorology (SoM). These simulations focused on warning decision making using dual-pol radar data products in an AWIPS-1 environment. Building on these previous experiences, CIMMS/WDTD recently continued the collaboration with the SoM Oklahoma Weather Lab (OWL) by holding a warning decision workshop simulating a NWS Weather Forecast Office (WFO) experience. The workshop took place in the WDTD AWIPS-2 computer laboratory, with 25 AWIPS-2 workstations and the WES-2 Bridge (Weather Event Simulator) software which replayed AWIPS-2 data. Using the WES-2 Bridge and the WESSL-2 (WES Scripting Language) event display, this computer lab has the state-of-the-art ability to simulate severe weather events and recreate WFO warning operations. OWL student forecasters attending the workshop worked in teams in a multi-player simulation of the Hastings, Nebraska WFO on May 6th, 2015, where thunderstorms across the service area produced large hail, damaging winds, and multiple tornadoes. This paper will discuss the design and goals of the WDTD/OWL workshop, as well as plans for holding similar workshops in the future.

  18. SLA-based optimisation of virtualised resource for multi-tier web applications in cloud data centres

    NASA Astrophysics Data System (ADS)

    Bi, Jing; Yuan, Haitao; Tie, Ming; Tan, Wei

    2015-10-01

    Dynamic virtualised resource allocation is the key to quality-of-service assurance for multi-tier web application services in cloud data centres. In this paper, we develop a self-management architecture of cloud data centres with a virtualisation mechanism for multi-tier web application services. Based on this architecture, we establish a flexible hybrid queueing model to determine the number of virtual machines for each tier of the virtualised application service environment. In addition, we propose a non-linear constrained optimisation problem with restrictions defined in the service level agreement. Furthermore, we develop a heuristic mixed optimisation algorithm to maximise the profit of cloud infrastructure providers, and to meet the performance requirements of different clients as well. Finally, we compare the effectiveness of our dynamic allocation strategy with two other allocation strategies. The simulation results show that the proposed resource allocation method is efficient in improving the overall performance and reducing the resource energy cost.
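
    As a rough illustration of queueing-based tier sizing in the spirit of the hybrid model above, the following Python sketch grows the number of virtual machines in a tier until the M/M/c mean response time meets an assumed SLA bound; the arrival rate, service rate, and SLA figure are illustrative, not the paper's model.

    ```python
    # Minimal sketch: size a tier's VM count with the M/M/c (Erlang C) formula.
    # Rates and the SLA bound are illustrative assumptions.
    import math

    def erlang_c(c, a):
        # Probability an arriving request must wait, for offered load a = lam/mu.
        if a >= c:
            return 1.0
        s = sum(a ** k / math.factorial(k) for k in range(c))
        tail = a ** c / (math.factorial(c) * (1 - a / c))
        return tail / (s + tail)

    def vms_needed(lam, mu, sla_s):
        c = max(1, math.ceil(lam / mu))          # start at the minimum stable size
        while True:
            if c * mu > lam:
                wait = erlang_c(c, lam / mu) / (c * mu - lam)  # mean queueing delay
            else:
                wait = float("inf")
            if wait + 1 / mu <= sla_s:           # mean response = wait + service time
                return c
            c += 1

    # e.g. web tier: 120 req/s arrivals, each VM serves 10 req/s, SLA 0.3 s
    print("VMs for tier:", vms_needed(lam=120.0, mu=10.0, sla_s=0.3))
    ```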

  19. Cyberinfrastructure for Atmospheric Discovery

    NASA Astrophysics Data System (ADS)

    Wilhelmson, R.; Moore, C. W.

    2004-12-01

    Each year across the United States, floods, tornadoes, hail, strong winds, lightning, hurricanes, and winter storms cause hundreds of deaths, routinely disrupt transportation and commerce, and result in billions of dollars in annual economic losses. MEAD and LEAD are two recent efforts aimed at developing the cyberinfrastructure for studying and forecasting these events through collection, integration, and analysis of observational data coupled with numerical simulation, data mining, and visualization. MEAD (Modeling Environment for Atmospheric Discovery) has been funded for two years as an NCSA (National Center for Supercomputing Applications) Alliance Expedition. The goal of this expedition has been the development and adaptation of cyberinfrastructure that will enable research simulations, data mining, machine learning, and visualization of hurricanes and storms utilizing high-performance computing environments including the TeraGrid. Portal, grid, and web infrastructure are being tested that will enable the launching of hundreds of individual WRF (Weather Research and Forecasting) simulations. In a similar way, multiple Regional Ocean Modeling System (ROMS) or WRF/ROMS simulations can be carried out. Metadata and the resulting large volumes of data will then be made available for further study and for educational purposes using analysis, mining, and visualization services. Initial coupling of the ROMS and WRF codes has been completed, and parallel I/O is being implemented for these models. Management of these activities (services) is being enabled through Grid workflow technologies (e.g., OGCE). LEAD (Linked Environments for Atmospheric Discovery) is a recently funded 5-year, large NSF ITR grant that involves 9 institutions who are developing a comprehensive national cyberinfrastructure for mesoscale meteorology, particularly one that can interoperate with others being developed. LEAD is addressing the fundamental information technology (IT) research challenges needed to create an integrated, scalable framework for identifying, accessing, preparing, assimilating, predicting, managing, analyzing, mining, and visualizing a broad array of meteorological data and model output, independent of format and physical location. A transforming element of LEAD is Workflow Orchestration for On-Demand, Real-Time, Dynamically-Adaptive Systems (WOORDS), which allows the use of analysis tools, forecast models, and data repositories as dynamically adaptive, on-demand, Grid-enabled systems that can a) change configuration rapidly and automatically in response to weather; b) continually be steered by new data; c) respond to decision-driven inputs from users; d) initiate other processes automatically; and e) steer remote observing technologies to optimize data collection for the problem at hand. Although LEAD efforts are primarily directed at mesoscale meteorology, the IT services being developed have general applicability to other geosciences and environmental sciences. Integration of traditional and new data sources is a crucial component in LEAD for data analysis and assimilation, for integration (ensemble mining) of data from sets of simulations, and for comparing results to observational data. As part of the integration effort, LEAD is creating a myLEAD metadata catalog service: a personal metacatalog that extends the Globus MCS system and is built on top of the OGSA-DAI system developed at the National e-Science Center in Edinburgh, Scotland.

  20. Incorporating discrete event simulation into quality improvement efforts in health care systems.

    PubMed

    Rutberg, Matthew Harris; Wenczel, Sharon; Devaney, John; Goldlust, Eric Jonathan; Day, Theodore Eugene

    2015-01-01

    Quality improvement (QI) efforts are an indispensable aspect of health care delivery, particularly in an environment of increasing financial and regulatory pressures. The ability to test predictions of proposed changes to flow, policy, staffing, and other process-level changes using discrete event simulation (DES) has shown significant promise and is well reported in the literature. This article describes how to incorporate DES into QI departments and programs in order to support QI efforts, develop high-fidelity simulation models, conduct experiments, make recommendations, and support adoption of results. The authors describe how DES-enabled QI teams can partner with clinical services and administration to plan, conduct, and sustain QI investigations. © 2013 by the American College of Medical Quality.
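
    As a concrete illustration of how a DES model of this kind is assembled, the sketch below uses the open-source SimPy library to model patients queueing for a shared triage resource; the arrival rates, service times, and staffing level are illustrative assumptions, not the authors' hospital data.

        # A minimal discrete event simulation sketch of patient flow, assuming
        # the SimPy library; rates and capacities are illustrative only.
        import random
        import simpy

        def patient(env, name, triage):
            arrive = env.now
            with triage.request() as req:
                yield req                                       # wait for a free triage nurse
                yield env.timeout(random.expovariate(1 / 5.0))  # ~5 min service time
            print(f"{name} waited {env.now - arrive:.1f} min")

        def arrivals(env, triage):
            i = 0
            while True:
                yield env.timeout(random.expovariate(1 / 3.0))  # ~3 min between arrivals
                i += 1
                env.process(patient(env, f"patient-{i}", triage))

        env = simpy.Environment()
        triage = simpy.Resource(env, capacity=2)   # staffing level under test
        env.process(arrivals(env, triage))
        env.run(until=60)                          # simulate one hour

    Changing the capacity or the rates and re-running is exactly the kind of what-if experiment a DES-enabled QI team performs before recommending a process change.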

  1. Cloud-Based Orchestration of a Model-Based Power and Data Analysis Toolchain

    NASA Technical Reports Server (NTRS)

    Post, Ethan; Cole, Bjorn; Dinkel, Kevin; Kim, Hongman; Lee, Erich; Nairouz, Bassem

    2016-01-01

    The proposed Europa Mission concept contains many engineering and scientific instruments that consume varying amounts of power and produce varying amounts of data throughout the mission. System-level power and data usage must be well understood and analyzed to verify design requirements. Numerous cross-disciplinary tools and analysis models are used to simulate the system-level spacecraft power and data behavior. This paper addresses the problem of orchestrating a consistent set of models, tools, and data in a unified analysis toolchain when ownership is distributed among numerous domain experts. An analysis and simulation environment was developed as a way to manage the complexity of the power and data analysis toolchain and to reduce the simulation turnaround time. A system model data repository is used as the trusted store of high-level inputs and results, while other remote servers are used for archival of larger data sets and for analysis tool execution. Simulation data passes through numerous domain-specific analysis tools, and end-to-end simulation execution is enabled through a web-based tool. The use of a cloud-based service facilitates coordination among distributed developers, enables scalable computation and storage, and ensures a consistent execution environment. Configuration management is emphasized to maintain traceability between current and historical simulation runs and their corresponding versions of models, tools, and data.
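
    One way to picture the configuration-management emphasis described above is a pipeline that stamps each tool invocation with a hash of its inputs and tool version. The following sketch is a generic, hypothetical illustration; the tool names and result fields are invented and do not reflect the mission toolchain's actual API.

        # Illustrative sketch (not the mission toolchain): chaining analysis steps
        # while recording a provenance stamp for each run, in the spirit of the
        # configuration management described above. Tool names are hypothetical.
        import hashlib, json, time

        def provenance(tool: str, version: str, inputs: dict) -> dict:
            blob = json.dumps({"tool": tool, "version": version, "inputs": inputs},
                              sort_keys=True).encode()
            return {"tool": tool, "version": version,
                    "hash": hashlib.sha256(blob).hexdigest()[:12],
                    "time": time.strftime("%Y-%m-%dT%H:%M:%S")}

        def power_analysis(profile: dict) -> dict:
            return {"peak_power_w": max(profile["loads_w"])}

        def data_analysis(profile: dict) -> dict:
            # 1 Mbps sustained for 1 hour is 450 MB of data volume
            return {"total_data_mb": sum(profile["rates_mbps"]) * profile["hours"] * 450}

        run_log = []
        profile = {"loads_w": [120, 310, 95], "rates_mbps": [2.0, 0.5], "hours": 4}
        for tool, fn in [("power_tool", power_analysis), ("data_tool", data_analysis)]:
            run_log.append({**provenance(tool, "1.0", profile), "result": fn(profile)})

        print(json.dumps(run_log, indent=2))  # traceable record of models, tools, data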

  2. Evaluation and Validation of Organic Materials for Advanced Stirling Convertors (ASCs): Overview

    NASA Technical Reports Server (NTRS)

    Shin, Euy-Sik Eugene

    2015-01-01

    Various organic materials are used as essential parts in Stirling Convertors for their unique properties and functionalities, such as bonding, potting, sealing, thread locking, insulation, and lubrication. More efficient Advanced Stirling Convertors (ASC) are being developed for future space applications, especially those with long mission cycles of up to 17 years, such as deep space exploration, lunar surface power, and Mars rovers. Thus, the performance, durability, and reliability of these organics should be critically evaluated across every possible material-process-fabrication-service environment relation, based on their mission specifications. In general, the thermal stability, radiation hardness, outgassing, and material compatibility of the selected organics have been systematically evaluated while their process and fabrication conditions and procedures were being optimized. Service environment-simulated long-term aging tests of up to 4 years were performed as a function of temperature for durability assessment of the most critical organic material systems.

  3. Performance of Sustainable Fly Ash and Slag Cement Mortars Exposed to Simulated and Real In Situ Mediterranean Conditions along 90 Warm Season Days.

    PubMed

    Ortega, José Marcos; Esteban, María Dolores; Sánchez, Isidro; Climent, Miguel Ángel

    2017-10-31

    Nowadays, cement manufacture is one of the most polluting worldwide industrial sectors. In order to reduce its CO₂ emissions, the clinker replacement by ground granulated blast-furnace slag and fly ash is becoming increasingly common. Both additions are well-studied when the hardening conditions of cementitious materials are optimum. Therefore, the main objective of this research was to study the short-term effects of exposure, to both laboratory simulated and real in situ Mediterranean climate environments, on the microstructure and durability-related properties of mortars made using commercial slag and fly ash cements, as well as ordinary Portland cement. The real in situ condition consisted of placing the samples at approximately 100 m away from the Mediterranean Sea. The microstructure was analysed using mercury intrusion porosimetry. The effective porosity, the capillary suction coefficient and the non-steady state chloride migration coefficient were also studied. In view of the results obtained, the non-optimum laboratory simulated Mediterranean environment was a good approximation of the real in situ one. Finally, mortars prepared using sustainable cements with slag and fly ash exposed to both Mediterranean climate environments showed adequate service properties in the short-term (90 days), similar to or even better than those in mortars made with ordinary Portland cement.

  4. About Distributed Simulation-based Optimization of Forming Processes using a Grid Architecture

    NASA Astrophysics Data System (ADS)

    Grauer, Manfred; Barth, Thomas

    2004-06-01

    Permanently increasing complexity of products and their manufacturing processes, combined with a shorter "time-to-market", leads to more and more use of simulation and optimization software systems in product design. Finding a "good" design of a product implies the solution of computationally expensive optimization problems based on the results of simulation. Due to the computational load caused by the solution of these problems, the requirements on the Information & Telecommunication (IT) infrastructure of an enterprise or research facility are shifting from stand-alone resources towards the integration of software and hardware resources in a distributed environment for high-performance computing. Resources can comprise software systems, hardware systems, or communication networks. An appropriate IT infrastructure must provide the means to integrate all these resources and enable their use even across a network, to cope with requirements from geographically distributed scenarios, e.g. in computational engineering and/or collaborative engineering. Integrating experts' knowledge into the optimization process is inevitable in order to reduce the complexity caused by the number of design variables and the high dimensionality of the design space. Hence, utilization of knowledge-based systems must be supported by providing data management facilities as a basis for knowledge extraction from product data. In this paper, the focus is put on a distributed problem solving environment (PSE) capable of providing access to a variety of necessary resources and services. A distributed approach integrating simulation and optimization on a network of workstations and cluster systems is presented. For geometry generation, the CAD system CATIA is used; it is coupled with the FEM simulation system INDEED for the simulation of sheet-metal forming processes and with the problem solving environment OpTiX for distributed optimization.
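
    The core loop such a PSE distributes is simple: propose candidate designs, farm the expensive simulation evaluations out to remote workers, and keep the best result. A minimal, generic sketch of that master-worker pattern follows; a toy objective stands in for an FEM forming run, and this is not the OpTiX/INDEED coupling itself.

        # A minimal sketch of distributing expensive simulation evaluations across
        # workers inside an optimization loop; the quadratic objective is a toy
        # stand-in for a costly sheet-metal forming simulation.
        from multiprocessing import Pool
        import random

        def simulate(design):
            """Stand-in for a costly forming simulation returning a quality measure."""
            x, y = design
            return (x - 1.2) ** 2 + (y + 0.4) ** 2

        def optimize(n_iters=20, pop=8):
            best, best_val = None, float("inf")
            with Pool(processes=4) as pool:          # workstations/cluster nodes
                for _ in range(n_iters):
                    candidates = [(random.uniform(-2, 2), random.uniform(-2, 2))
                                  for _ in range(pop)]
                    for design, val in zip(candidates, pool.map(simulate, candidates)):
                        if val < best_val:
                            best, best_val = design, val
            return best, best_val

        if __name__ == "__main__":
            print(optimize())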

  5. An embedded checklist in the Anesthesia Information Management System improves pre-anaesthetic induction setup: a randomised controlled trial in a simulation setting.

    PubMed

    Wetmore, Douglas; Goldberg, Andrew; Gandhi, Nishant; Spivack, John; McCormick, Patrick; DeMaria, Samuel

    2016-10-01

    Anaesthesiologists work in a high-stress, high-consequence environment in which missed steps in preparation may lead to medical errors and potential patient harm. The pre-anaesthetic induction period has been identified as a time in which medical errors can occur. The Anesthesia Patient Safety Foundation has developed a Pre-Anesthetic Induction Patient Safety (PIPS) checklist. We conducted this study to test the effectiveness of this checklist, when embedded in our institutional Anesthesia Information Management System (AIMS), on resident performance in a simulated environment. Using a randomised, controlled, observer-blinded design, we compared the performance of anaesthesiology residents using the checklist in a simulated operating room under production pressure, in completing a thorough pre-anaesthetic induction evaluation and setup, with that of residents with no checklist. The checklist was embedded in the simulated operating room's electronic medical record. Data for 38 anaesthesiology residents show a statistically significant difference in performance in pre-anaesthetic setup and evaluation as scored by blinded raters (maximum score 22 points), with the checklist group performing better by 7.8 points (p<0.01). The effects of gender and year of residency on total score were not significant. Simulation duration (time to anaesthetic agent administration) was increased significantly by the use of the checklist. Required use of a pre-induction checklist improves anaesthesiology resident performance in a simulated environment. The PIPS checklist as an integrated part of a departmental AIMS warrants further investigation as a quality measure. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  6. Simulation Analysis of Fluid-Structure Interaction of High Velocity Environment Influence on Aircraft Wing Materials under Different Mach Numbers.

    PubMed

    Zhang, Lijun; Sun, Changyan

    2018-04-18

    During service, aircraft are subjected to a composite load of pressure and temperature over long periods, which inevitably affects the inherent characteristics of some of their components. In this paper, the flow field around aircraft wing materials under different Mach numbers is simulated in Fluent in order to extract the pressure and temperature on the wing. To determine the effect of the coupled stress on the wing's material and structural properties, the fluid-structure interaction (FSI) method in ANSYS Workbench is used to calculate the stress caused by pressure and temperature. Simulation results show that, as the Mach number increases, the pressure and temperature on the wing's surface both increase exponentially, and the thermal stress caused by temperature becomes the main component of the coupled stress. Compared with three other materials (titanium alloy, aluminum alloy, and Haynes alloy), carbon fiber composite performs better in service at high speed, and its natural frequency under coupled pre-stress decreases.

  7. Simulation Analysis of Fluid-Structure Interaction of High Velocity Environment Influence on Aircraft Wing Materials under Different Mach Numbers

    PubMed Central

    Sun, Changyan

    2018-01-01

    During service, aircraft are subjected to a composite load of pressure and temperature over long periods, which inevitably affects the inherent characteristics of some of their components. In this paper, the flow field around aircraft wing materials under different Mach numbers is simulated in Fluent in order to extract the pressure and temperature on the wing. To determine the effect of the coupled stress on the wing's material and structural properties, the fluid-structure interaction (FSI) method in ANSYS Workbench is used to calculate the stress caused by pressure and temperature. Simulation results show that, as the Mach number increases, the pressure and temperature on the wing's surface both increase exponentially, and the thermal stress caused by temperature becomes the main component of the coupled stress. Compared with three other materials (titanium alloy, aluminum alloy, and Haynes alloy), carbon fiber composite performs better in service at high speed, and its natural frequency under coupled pre-stress decreases. PMID:29670023

  8. Propagation Characteristics in an Underground Shopping Area for 5GHz-band Wireless Access Systems

    NASA Astrophysics Data System (ADS)

    Itokawa, Kiyohiko; Kita, Naoki; Sato, Akio; Matsue, Hideaki; Mori, Daisuke; Watanabe, Hironobu

    5-GHz band wireless access systems, such as the RLAN (Radio Local Area Network) systems of IEEE802.11a, HiperLAN/2, HiSWANa, and AWA, have been developed to provide transmission rates over 20 Mbps for indoor use. These 5-GHz access systems are expected to extend their service areas from the office to so-called "hot-spots" in public areas. Underground shopping malls are one of the anticipated service areas for such nomadic wireless access services. Broadband propagation characteristics are required for radio zone design in an underground mall environment, since previous results were obtained only from narrowband measurements. This paper presents the results of an experimental study on the propagation characteristics for broadband wireless access systems in an underground mall environment. First, broadband propagation path loss is measured and formulated taking human body shadowing into account. A ray-trace simulation is used to clarify the basic propagation mechanism in such a closed environment. Next, the distance dependency of the delay spread during a crowded time period (rush hour) is found to be at most 65 ns, which is below the maximum value permitted by the present 5-GHz systems. Finally, the above propagation characteristics support the results of transmission tests carried out using AWA equipment.
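
    A path-loss formulation of the kind described, log-distance dependence plus a human-body shadowing term, has the general shape sketched below; the exponent and shadowing values are placeholders, not the measured 5-GHz coefficients from the paper.

        # Sketch of a log-distance path-loss model with an added human-body
        # shadowing term, of the general form used for indoor radio zone design;
        # the coefficients below are illustrative, not the measured values.
        import math

        def path_loss_db(d_m, f_ghz=5.2, n=3.0, body_loss_db=0.0):
            """PL(d) = PL(d0) + 10*n*log10(d/d0) + shadowing, with d0 = 1 m."""
            # free-space loss at the 1 m reference distance: 20*log10(4*pi*f/c)
            pl_d0 = 20 * math.log10(f_ghz * 1e9) + 20 * math.log10(4 * math.pi / 3e8)
            return pl_d0 + 10 * n * math.log10(d_m) + body_loss_db

        for d in (1, 10, 50, 100):
            print(d, "m:", round(path_loss_db(d, body_loss_db=3.0), 1), "dB")  # crowded case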

  9. Advanced Maintenance Simulation by Means of Hand-Based Haptic Interfaces

    NASA Astrophysics Data System (ADS)

    Nappi, Michele; Paolino, Luca; Ricciardi, Stefano; Sebillo, Monica; Vitiello, Giuliana

    The aerospace industry has been involved in virtual simulation for design and testing since the birth of virtual reality. Today this industry is showing a growing interest in the development of haptic-based maintenance training applications, which represent the most advanced way to simulate maintenance and repair tasks within a virtual environment by means of a visual-haptic approach. The goal is to allow the trainee to experience the service procedures not only as a workflow reproduced at the visual level but also in terms of the kinaesthetic feedback involved in the manipulation of tools and components. This study, conducted in collaboration with aerospace industry specialists, is aimed at the development of an immersive virtual system capable of placing trainees in a virtual environment where mechanics and technicians can perform maintenance simulation or training tasks by directly manipulating 3D virtual models of aircraft parts while perceiving force feedback through the haptic interface. The proposed system is based on ViRstperson, a virtual reality engine under development at the Italian Center for Aerospace Research (CIRA) to support engineering and technical activities such as design-time maintenance procedure validation and maintenance training. This engine has been extended to support haptic-based interaction, enabling a more complete level of interaction, also in terms of impedance control, and thus fostering the development of haptic knowledge in the user. The user's "sense of touch" within the immersive virtual environment is simulated through an Immersion CyberForce® hand-based force-feedback device. Preliminary testing of the proposed system is encouraging.
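
    The kinaesthetic feedback such a system renders is typically computed in a high-rate servo loop from the tool's penetration into a virtual surface. The sketch below shows a generic spring-damper contact-force law; the gains and loop are illustrative assumptions, and the actual CyberForce device API is not shown.

        # A minimal sketch of haptic force rendering for tool/part contact: a
        # spring-damper response computed each servo tick. Gains are hypothetical;
        # sending the force to the device would use the vendor's API, not shown.
        K_STIFF = 600.0   # N/m, virtual surface stiffness
        B_DAMP = 2.5      # N*s/m, damping to stabilize contact

        def contact_force(penetration_m: float, velocity_ms: float) -> float:
            if penetration_m <= 0.0:
                return 0.0                     # no contact, no feedback
            f = K_STIFF * penetration_m - B_DAMP * velocity_ms
            return max(0.0, f)                 # a surface only pushes outward

        # a few simulated servo-loop ticks (a real loop runs at ~1 kHz)
        for pen, vel in [(-0.001, 0.0), (0.002, 0.05), (0.004, -0.02)]:
            print(f"penetration={pen:+.3f} m -> force={contact_force(pen, vel):.2f} N")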

  10. Operational flight evaluation of the two-segment approach for use in airline service

    NASA Technical Reports Server (NTRS)

    Schwind, G. K.; Morrison, J. A.; Nylen, W. E.; Anderson, E. B.

    1975-01-01

    United Airlines has developed and evaluated a two-segment noise abatement approach procedure for use on Boeing 727 aircraft in air carrier service. In a flight simulator, the two-segment approach was studied in detail and a profile and procedures were developed. Equipment adaptable to contemporary avionics and navigation systems was designed and manufactured by Collins Radio Company and was installed and evaluated in B-727-200 aircraft. The equipment, profile, and procedures were evaluated out of revenue service by pilots representing government agencies, airlines, airframe manufacturers, and professional pilot associations. A system was then placed into scheduled airline service for six months during which 555 two-segment approaches were flown at three airports by 55 airline pilots. The system was determined to be safe, easy to fly, and compatible with the airline operational environment.

  11. A Tool to Simulate the Transmission, Reception, and Execution of Interactive TV Applications

    PubMed Central

    Kulesza, Raoni; Rodrigues, Thiago; Machado, Felipe A. L.; Santos, Celso A. S.

    2017-01-01

    The emergence of Interactive Digital Television (iDTV) opened a set of technological possibilities that go beyond those offered by conventional TV. Among these opportunities we can highlight interactive content that runs together with the linear TV program (a television service where the viewer watches a scheduled TV program at the particular time it is offered and on the particular channel it is presented on). However, developing interactive content for this new platform is not as straightforward as, for example, developing Internet applications. One of the options to make this development process easier and safer is to use an iDTV simulator. However, after having investigated some of the existing iDTV simulation environments, we found a limitation: these simulators mainly present solutions focused on the TV receiver, whose interactive content must be loaded in advance by the programmer into a local repository (e.g., hard drive, USB). Therefore, in this paper, we propose a tool, named BiS (Broadcast iDTV content Simulator), which makes possible a broader solution for the simulation of interactive content. It allows simulating the transmission of interactive content along with the linear TV program (simulating over-the-air broadcast transmission of the content to the receivers). To enable this, we defined a generic and easy-to-customize communication protocol that was implemented in the tool. The proposed environment differs from others because it allows simulating reception of both linear and interactive content while running Java applications to allow such content presentation. PMID:28280770

  12. Flight evaluation of two-segment approaches using area navigation guidance equipment

    NASA Technical Reports Server (NTRS)

    Schwind, G. K.; Morrison, J. A.; Nylen, W. E.; Anderson, E. B.

    1976-01-01

    A two-segment noise abatement approach procedure for use on DC-8-61 aircraft in air carrier service was developed and evaluated. The approach profile and procedures were developed in a flight simulator. Full guidance is provided throughout the approach by a Collins Radio Company three-dimensional area navigation (RNAV) system which was modified to provide the two-segment approach capabilities. Modifications to the basic RNAV software included safety protection logic considered necessary for an operationally acceptable two-segment system. With an aircraft out of revenue service, the system was refined and extensively flight tested, and the profile and procedures were evaluated by representatives of the airlines, airframe manufacturers, the Air Line Pilots Association, and the Federal Aviation Administration. The system was determined to be safe and operationally acceptable. It was then placed into scheduled airline service for an evaluation during which 180 approaches were flown by 48 airline pilots. The approach was determined to be compatible with the airline operational environment, although operation of the RNAV system in the existing terminal area air traffic control environment was difficult.

  13. Nomadic migration: a service environment for autonomic computing on the Grid

    NASA Astrophysics Data System (ADS)

    Lanfermann, Gerd

    2003-06-01

    In recent years, there has been a dramatic increase in available compute capacities. However, these "Grid resources" are rarely accessible in a continuous stream, but rather appear scattered across various machine types, platforms, and operating systems, which are coupled by networks of fluctuating bandwidth. It becomes increasingly difficult for scientists to exploit available resources for their applications. We believe that intelligent, self-governing applications should be able to select resources in a dynamic and heterogeneous environment: migrating applications determine a new resource when old capacities are used up; spawning simulations launch algorithms on external machines to speed up the main execution; applications are restarted as soon as a failure is detected. All these actions can be taken without human interaction. A distributed compute environment possesses an intrinsic unreliability. Any application that interacts with such an environment must be able to cope with its failing components: deteriorating networks, crashing machines, failing software. We construct a reliable service infrastructure by endowing a service environment with a peer-to-peer topology. This "Grid Peer Services" infrastructure accommodates high-level services like migration and spawning, as well as fundamental services for application launching, file transfer, and resource selection. It utilizes existing Grid technology wherever possible to accomplish its tasks. An Application Information Server acts as a generic information registry for all participants in a service environment. The service environment that we developed allows applications, for example, to send a relocation request to a migration server. The server selects a new computer based on the transmitted resource requirements, transfers the application's checkpoint and binary to the new host, and resumes the simulation. Although the Grid's underlying resource substrate is not continuous, we achieve persistent computations on Grids by relocating the application. We show with real-world examples that a traditional genome analysis program can be easily modified to perform self-determined migrations in this service environment.
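    The relocation cycle described above (checkpoint, select a new host, transfer, resume) can be pictured with the following generic sketch; the migration-server logic and the host name are hypothetical stand-ins for the Grid Peer Services infrastructure, and file transfer and resource brokering are stubbed out.

        # Illustrative sketch of the migration idea: an application checkpoints its
        # state, a (hypothetical) migration service picks a new host, and the run
        # resumes from the checkpoint.
        import json, pathlib

        def checkpoint(state: dict, path="checkpoint.json"):
            pathlib.Path(path).write_text(json.dumps(state))

        def restore(path="checkpoint.json") -> dict:
            return json.loads(pathlib.Path(path).read_text())

        def select_host(requirements: dict) -> str:
            # a migration server would match requirements against a resource registry
            return "node-b.example.org"   # hypothetical target host

        state = {"step": 1200, "grid": [0.0] * 8}      # running simulation state
        checkpoint(state)                              # capacity exhausted: save state
        target = select_host({"cpus": 16, "mem_gb": 32})
        print(f"transfer checkpoint + binary to {target}, then resume")
        resumed = restore()
        assert resumed["step"] == 1200                 # computation continues seamlessly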

  14. Structural strength deterioration of coastal bridge piers considering non-uniform corrosion in marine environments

    NASA Astrophysics Data System (ADS)

    Guo, Anxin; Yuan, Wenting; Li, Haitao; Li, Hui

    2018-04-01

    In the aggressive marine environment over a long-term service period, coastal bridges inevitably sustain corrosion-induced damage due to high sea salt and humidity. This paper investigates the strength reduction of coastal bridges, focusing especially on the effects of non-uniform corrosion along the height of bridge piers. First, the corrosion initiation time and the degradation of reinforcement and concrete are analyzed for bridge piers in marine environments. To investigate the various damage modes of the concrete cover, a discretization method with fiber cells is used for calculating time-dependent interaction diagrams of cross-sections of the bridge piers in the atmospheric zone and the splash and tidal zone under a combination of axial force and bending moment. Second, the shear strength of these aging structures is analyzed. Numerical simulation indicates that the strength of a concrete pier experiences a dramatic reduction from corrosion initiation to the spalling of the concrete cover. For the same service time, strength loss in the splash and tidal zone is more significant than in the atmospheric zone.
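
    The fiber-cell discretization mentioned above amounts to slicing the cross-section into strips, assigning each strip a (possibly corrosion-degraded) material law, and integrating stresses for an assumed strain profile. The sketch below computes one (N, M) point this way under simplified, illustrative material laws, dimensions, and bar areas; it is not the authors' model.

        # Simplified fiber-cell sketch: one (N, M) point of an interaction diagram
        # for a rectangular RC section at an assumed ultimate strain profile.
        B, H = 0.6, 0.6                     # section width and height, m
        FC, FY, ES = 30e6, 400e6, 200e9     # concrete strength, steel yield, steel modulus (Pa)
        EPS_CU, EPS_0 = 0.0035, 0.002       # ultimate and peak concrete strains
        BARS = [(0.05, 19.6e-4), (0.55, 19.6e-4)]  # (depth, residual/corroded steel area m^2)

        def concrete_stress(eps):
            """Parabola-plateau compression law; tension carried as zero."""
            if eps <= 0.0:
                return 0.0
            r = eps / EPS_0
            return FC * (2 * r - r * r) if r < 1.0 else FC

        def section_forces(c, n_fib=100):
            """Axial force N and moment M about mid-height for neutral-axis depth c."""
            N = M = 0.0
            for i in range(n_fib):                      # concrete fiber strips
                y = (i + 0.5) * H / n_fib               # depth from compressed face
                eps = EPS_CU * (c - y) / c
                f = concrete_stress(eps) * B * (H / n_fib)
                N += f
                M += f * (H / 2 - y)
            for y, area in BARS:                        # steel layers as fibers
                eps = EPS_CU * (c - y) / c
                sigma = max(-FY, min(FY, ES * eps))     # elastic-perfectly-plastic
                N += sigma * area
                M += sigma * area * (H / 2 - y)
            return N, M

        for c in (0.10, 0.20, 0.35, 0.50):              # sweep neutral-axis depths
            N, M = section_forces(c)
            print(f"c={c:.2f} m  N={N/1e3:7.0f} kN  M={M/1e3:6.0f} kN*m")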

  15. A Belief-based Trust Model for Dynamic Service Selection

    NASA Astrophysics Data System (ADS)

    Ali, Ali Shaikh; Rana, Omer F.

    Provision of services across institutional boundaries has become an active research area. Many such services encode access to computational and data resources (comprising anything from single machines to computational clusters). Such services can also be informational, and integrate different resources within an institution. Consequently, we envision a service-rich environment in the future, where service consumers can intelligently decide which services to select. If interaction between service providers and users is automated, these service clients must be able to automatically choose between a set of equivalent (or similar) services. In such a scenario, trust serves as a benchmark to differentiate between service providers. One might therefore prioritize potential cooperative partners based on the established trust. Although many approaches to trust within online communities exist in the literature, the exact nature of trust for multi-institutional service sharing remains undefined. Consequently, the concept of trust suffers from an imperfect understanding, a plethora of definitions, and informal use in the literature. We present a formalism for describing trust within multi-institutional service sharing, and provide an implementation of it, enabling an agent to make trust-based decisions. We evaluate our formalism through simulation.
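
    A common minimal instantiation of such a trust benchmark is an exponentially weighted average of past interaction outcomes, with selection of the highest-trust provider. The sketch below illustrates that general idea only; it is not the formalism defined in the paper.

        # Minimal sketch of trust-based service selection, assuming trust is an
        # exponentially weighted average of interaction outcomes in [0, 1].
        trust = {}          # provider -> current trust value
        ALPHA = 0.3         # weight given to the newest experience

        def update(provider: str, outcome: float) -> None:
            prior = trust.get(provider, 0.5)           # neutral initial belief
            trust[provider] = (1 - ALPHA) * prior + ALPHA * outcome

        def select(candidates):
            # pick the candidate with the highest established trust
            return max(candidates, key=lambda p: trust.get(p, 0.5))

        for outcome in (1.0, 1.0, 0.0, 1.0):           # mixed history for provider-a
            update("provider-a", outcome)
        update("provider-b", 1.0)                      # one good interaction
        print(trust, "->", select(["provider-a", "provider-b"]))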

  16. Thermal dynamic simulation of wall for building energy efficiency under varied climate environment

    NASA Astrophysics Data System (ADS)

    Wang, Xuejin; Zhang, Yujin; Hong, Jing

    2017-08-01

    For different kinds of walls in five cities located in different thermal design zones, the authors developed software based on the thermal instantaneous response factor method to calculate the air-conditioning cooling load temperature, thermal response factors, and periodic response factors. On the basis of these data, the authors analyse the influence of the dynamic thermal behaviour of walls on the air-conditioning load and the indoor thermal environment in each thermal design zone, and put forward strategies for designing thermal insulation and heat preservation walls based on the dynamic thermal characteristics of walls in the different zones. This provides a theoretical basis and technical reference for further study of heat preservation and insulation in energy-saving wall design. Year-round dynamic thermal load simulation and energy consumption analysis for new energy-saving buildings are very important in building environment research. The software provides a referable scientific foundation for year-round dynamic thermal load simulation, energy consumption analysis, building environment system control, and further research on the thermal and overall performance evaluation of buildings with new energy-saving walls; on this basis, building energy systems can be conveniently designed, and building energy consumption can be analysed and managed scientifically.

  17. A Hybrid Cloud Computing Service for Earth Sciences

    NASA Astrophysics Data System (ADS)

    Yang, C. P.

    2016-12-01

    Cloud computing is becoming a norm for providing computing capabilities for advancing Earth sciences, including big Earth data management, processing, analytics, model simulations, and many other aspects. A hybrid spatiotemporal cloud computing service has been built at the George Mason NSF spatiotemporal innovation center to meet these demands. This paper will report on the service, covering several aspects: 1) the hardware includes 500 computing servers and close to 2 PB of storage, as well as connections to XSEDE Jetstream and the Caltech experimental cloud computing environment for sharing the resource; 2) the cloud service is geographically distributed across the east coast, west coast, and central region; 3) the cloud includes private clouds managed using OpenStack and Eucalyptus, with DC2 used to bridge these with the public AWS cloud for interoperability and for sharing computing resources when high demand surges; 4) the cloud service is used to support the NSF EarthCube program through the ECITE project, and ESIP through the ESIP cloud computing cluster, the semantics testbed cluster, and other clusters; 5) the cloud service is also available to the Earth science communities to conduct geoscience research. A brief introduction on how to use the cloud service will be included.

  18. Path planning and Ground Control Station simulator for UAV

    NASA Astrophysics Data System (ADS)

    Ajami, A.; Balmat, J.; Gauthier, J.-P.; Maillot, T.

    In this paper we present a Universal and Interoperable Ground Control Station (UIGCS) simulator for fixed and rotary wing Unmanned Aerial Vehicles (UAVs), and all types of payloads. One of the major constraints is to operate and manage multiple legacy and future UAVs, taking into account compliance with the NATO Combined/Joint Services Operational Environment (STANAG 4586). Another purpose of the station is to assign the UAV a certain degree of autonomy, via autonomous planning/replanning strategies. The paper is organized as follows. In Section 2, we describe the nonlinear models of the fixed and rotary wing UAVs that we use in the simulator. In Section 3, we describe the simulator architecture, which is based upon interacting modules programmed independently. This simulator is linked with an open-source flight simulator to simulate the video flow and the moving target in 3D. To conclude this part, we briefly tackle the problem of connecting the Matlab/Simulink software (used to model the UAV dynamics) with the simulation of the virtual environment. Section 5 deals with the control module for the flight path of the UAV. The control system is divided into four distinct hierarchical layers: flight path, navigation controller, autopilot, and flight control surfaces controller. In Section 6, we focus on the trajectory planning/replanning question for fixed wing UAVs. Indeed, one of the goals of this work is to increase the autonomy of the UAV. We propose two types of algorithms, based upon 1) the tangent method and 2) an original Lyapunov-type method. These algorithms allow the UAV either to join a fixed pattern or to track a moving target. Finally, Section 7 presents simulation results obtained on our simulator, concerning a rather complicated mission scenario.
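
    A Lyapunov-type guidance law of the general kind mentioned drives the heading error toward a waypoint to zero. The toy sketch below applies a proportional heading-rate command to a planar kinematic model; the gain, speed, and model are illustrative assumptions, not the UIGCS algorithms.

        # Toy waypoint guidance for a fixed-wing kinematic model using a
        # proportional (Lyapunov-style) heading-error law.
        import math

        def step(x, y, psi, wp, v=20.0, k=1.5, dt=0.1):
            psi_d = math.atan2(wp[1] - y, wp[0] - x)       # bearing to waypoint
            # wrap the heading error into (-pi, pi]
            err = math.atan2(math.sin(psi_d - psi), math.cos(psi_d - psi))
            psi += k * err * dt                            # heading-rate command
            return x + v * math.cos(psi) * dt, y + v * math.sin(psi) * dt, psi

        x, y, psi, wp = 0.0, 0.0, 0.0, (300.0, 200.0)
        for i in range(400):
            x, y, psi = step(x, y, psi, wp)
            if math.hypot(wp[0] - x, wp[1] - y) < 10.0:
                print(f"waypoint reached after {i * 0.1:.1f} s")
                break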

  19. Computer Simulation Shows the Effect of Communication on Day of Surgery Patient Flow.

    PubMed

    Taaffe, Kevin; Fredendall, Lawrence; Huynh, Nathan; Franklin, Jennifer

    2015-07-01

    To improve patient flow in a surgical environment, practitioners and academicians often use process mapping and simulation as tools to evaluate and recommend changes. We used simulations to help staff visualize the effect of communication and coordination delays that occur on the day of surgery. Perioperative services staff participated in tabletop exercises in which they chose the delays that were most important to eliminate. Using a day-of-surgery computer simulation model, the elimination of delays was tested and the results were shared with the group. This exercise, repeated for multiple groups of staff, provided an understanding not only of the dynamic events taking place, but also of how small communication delays can contribute to a significant loss in efficiency and in the ability to provide timely care. Survey results confirmed these understandings. Copyright © 2015 AORN, Inc. Published by Elsevier Inc. All rights reserved.

  20. Achieving QoS for TCP Traffic in Satellite Networks with Differentiated Services

    NASA Technical Reports Server (NTRS)

    Durresi, Arjan; Kota, Sastri; Goyal, Mukul; Jain, Raj; Bharani, Venkata

    2001-01-01

    Satellite networks play an indispensable role in providing global Internet access and electronic connectivity. To achieve such global communications, provisioning of quality of service (QoS) within the advanced satellite systems is the main requirement. One of the key mechanisms for implementing quality of service is traffic management. Traffic management becomes a crucial factor in the case of satellite networks because of the limited availability of their resources. Currently, the Internet Protocol (IP) has only minimal traffic management capabilities and provides best-effort services. In this paper, we present a broadband satellite network QoS model and simulated performance results. In particular, we discuss the performance of TCP flow aggregates in the presence of competing UDP flow aggregates in the same assured forwarding class. We identify several factors that affect performance in these mixed environments and quantify their effects using a full factorial design-of-experiments methodology.
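
    At a differentiated-services edge, traffic management for an assured-forwarding class typically starts with a token-bucket marker that colors packets in- or out-of-profile, which then determines their drop precedence under congestion. The sketch below illustrates that marking step with invented rates and packet sizes; it is not the paper's simulation configuration.

        # Sketch of a token-bucket traffic marker of the kind used at a DiffServ
        # edge to color an assured-forwarding aggregate.
        class TokenBucket:
            def __init__(self, rate_bps: float, burst_bits: float):
                self.rate, self.burst = rate_bps, burst_bits
                self.tokens, self.t = burst_bits, 0.0

            def mark(self, now: float, pkt_bits: int) -> str:
                # refill tokens for the elapsed interval, capped at the burst size
                self.tokens = min(self.burst, self.tokens + self.rate * (now - self.t))
                self.t = now
                if self.tokens >= pkt_bits:
                    self.tokens -= pkt_bits
                    return "IN"                 # in-profile: low drop precedence
                return "OUT"                    # out-of-profile: high drop precedence

        tb = TokenBucket(rate_bps=1e6, burst_bits=16000)
        for i in range(8):
            print(i, tb.mark(now=i * 0.005, pkt_bits=12000))   # 1500-byte packets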

  1. Experiments with the Mesoscale Atmospheric Simulation System (MASS) using the synthetic relative humidity

    NASA Technical Reports Server (NTRS)

    Chang, Chia-Bo

    1994-01-01

    This study is intended to examine the impact of synthetic relative humidity on the model simulation of a mesoscale convective storm environment. The synthetic relative humidity is derived from National Weather Service surface observations and from non-conventional sources including aircraft, radar, and satellite observations. The latter sources provide mesoscale data of very high spatial and temporal resolution. The synthetic humidity data are used to complement the National Weather Service rawinsonde observations. It is believed that a realistic representation of the initial moisture field in a mesoscale model is critical for the model simulation of thunderstorm development, and of the formation of non-convective clouds as well as their effects on the surface energy budget. The impact will be investigated based on a real-data case study using the Mesoscale Atmospheric Simulation System (MASS) developed by Mesoscale Environmental Simulations Operations, Inc. MASS consists of objective analysis and initialization codes, and coarse-mesh and fine-mesh dynamic prediction models. Both models are three-dimensional, primitive equation models containing the essential moist physics for simulating and forecasting mesoscale convective processes in the atmosphere. The modeling system is currently implemented at the Applied Meteorology Unit, Kennedy Space Center. Two procedures involving the synthetic relative humidity to define the model initial moisture fields are considered. It is proposed to perform several short-range (approximately 6-hour) comparative coarse-mesh simulation experiments with and without the synthetic data. They are aimed at revealing the model sensitivities, which should allow us both to refine the specification of the observational requirements and to develop more accurate and efficient objective analysis schemes. The goal is to advance MASS modeling expertise so that the model output can provide reliable guidance for thunderstorm forecasting.

  2. Evaluation of Environmentally Assisted Cracking of Armour Wires in Flexible Pipes, Power Cables and Umbilicals

    NASA Astrophysics Data System (ADS)

    Zhang, Zhiying

    Environmentally assisted cracking (EAC) of armour wires in flexible pipes, power cables, and umbilicals is a major concern in the development of oil and gas fields and wind farms in harsh environments. Hydrogen-induced cracking (HIC) or hydrogen embrittlement (HE) of steel armour wires used in deep-water and ultra-deep-water applications has been evaluated. Simulated-service tests were carried out in simulated sea water for 150 hours under the conditions of highest susceptibility, i.e., at room temperature, at the maximum negative cathodic potential, and at the maximum stress level expected in service. Examinations of the tested specimens revealed no cracking or blistering, and measurement of hydrogen content confirmed hydrogen charging. In addition, sulphide stress cracking (SSC) and chloride stress cracking (CSC) of nickel-based alloy armour wires used in harsh down-hole environments have been evaluated. Simulated-service tests were carried out for 720 hours in a simulated solution containing a high concentration of chloride, with a high hydrogen sulphide partial pressure, at a high stress level, and at 120 °C. Examinations of the tested specimens revealed no cracking or blistering. Subsequent tensile tests of the tested specimens at ambient pressure and temperature revealed properties similar to those of the as-received specimens.

  3. An u-Service Model Based on a Smart Phone for Urban Computing Environments

    NASA Astrophysics Data System (ADS)

    Cho, Yongyun; Yoe, Hyun

    In urban computing environments, all services should be based on the interaction between humans and the environments around them, which occurs frequently and ordinarily in homes and offices. This paper proposes a u-service model based on a smart phone for urban computing environments. The suggested service model includes a context-aware and personalized service scenario development environment that can instantly describe a user's u-service demand or situation information with smart devices. To this end, the architecture of the suggested service model consists of a graphical service editing environment for smart devices, a u-service platform, and an infrastructure with sensors and WSN/USN. The graphical editor expresses contexts as execution conditions of a new service through an ontology-based context model. The service platform executes the service scenario according to the contexts. With the suggested service model, a user in urban computing environments can quickly and easily compose u-services or new services using smart devices.

  4. Performance of Sustainable Fly Ash and Slag Cement Mortars Exposed to Simulated and Real In Situ Mediterranean Conditions along 90 Warm Season Days

    PubMed Central

    Esteban, María Dolores

    2017-01-01

    Nowadays, cement manufacture is one of the most polluting worldwide industrial sectors. In order to reduce its CO2 emissions, the clinker replacement by ground granulated blast–furnace slag and fly ash is becoming increasingly common. Both additions are well-studied when the hardening conditions of cementitious materials are optimum. Therefore, the main objective of this research was to study the short-term effects of exposure, to both laboratory simulated and real in situ Mediterranean climate environments, on the microstructure and durability-related properties of mortars made using commercial slag and fly ash cements, as well as ordinary Portland cement. The real in situ condition consisted of placing the samples at approximately 100 m away from the Mediterranean Sea. The microstructure was analysed using mercury intrusion porosimetry. The effective porosity, the capillary suction coefficient and the non-steady state chloride migration coefficient were also studied. In view of the results obtained, the non-optimum laboratory simulated Mediterranean environment was a good approximation of the real in situ one. Finally, mortars prepared using sustainable cements with slag and fly ash exposed to both Mediterranean climate environments showed adequate service properties in the short-term (90 days), similar to or even better than those in mortars made with ordinary Portland cement. PMID:29088107

  5. Distributed Data Service for Data Management in Internet of Things Middleware.

    PubMed

    Cruz Huacarpuma, Ruben; de Sousa Junior, Rafael Timoteo; de Holanda, Maristela Terto; de Oliveira Albuquerque, Robson; García Villalba, Luis Javier; Kim, Tai-Hoon

    2017-04-27

    The development of the Internet of Things (IoT) is closely related to a considerable increase in the number and variety of devices connected to the Internet. Sensors have become a regular component of our environment, as well as smart phones and other devices that continuously collect data about our lives even without our intervention. With such connected devices, a broad range of applications has been developed and deployed, including those dealing with massive volumes of data. In this paper, we introduce a Distributed Data Service (DDS) to collect and process data for IoT environments. One central goal of this DDS is to enable multiple and distinct IoT middleware systems to share common data services from a loosely-coupled provider. In this context, we propose a new specification of functionalities for a DDS and the conception of the corresponding techniques for collecting, filtering and storing data conveniently and efficiently in this environment. Another contribution is a data aggregation component that is proposed to support efficient real-time data querying. To validate its data collecting and querying functionalities and performance, the proposed DDS is evaluated in two case studies regarding a simulated smart home system, the first case devoted to evaluating data collection and aggregation when the DDS is interacting with the UIoT middleware, and the second aimed at comparing the DDS data collection with this same functionality implemented within the Kaa middleware.
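
    The aggregation component described above can be pictured as folding raw readings into per-sensor, per-window summaries so that real-time queries touch compact aggregates rather than full streams. The sketch below is a generic illustration; the field names and window size are assumptions, not the DDS schema.

        # Minimal sketch of time-window aggregation for sensor streams: raw
        # readings are folded into per-sensor, per-minute summaries for querying.
        from collections import defaultdict

        WINDOW_S = 60  # aggregation window size, seconds

        def window_key(reading):
            return (reading["sensor"], reading["ts"] // WINDOW_S)

        def aggregate(readings):
            acc = defaultdict(lambda: {"n": 0, "sum": 0.0,
                                       "min": float("inf"), "max": float("-inf")})
            for r in readings:
                a = acc[window_key(r)]
                a["n"] += 1
                a["sum"] += r["value"]
                a["min"] = min(a["min"], r["value"])
                a["max"] = max(a["max"], r["value"])
            return {k: {**v, "avg": v["sum"] / v["n"]} for k, v in acc.items()}

        stream = [{"sensor": "temp-kitchen", "ts": t, "value": 20 + 0.1 * t}
                  for t in range(0, 180, 15)]          # simulated smart-home readings
        for key, summary in aggregate(stream).items():
            print(key, summary)   # one row per sensor per minute, ready for querying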

  6. Autonomous Information Fading and Provision to Achieve High Response Time in Distributed Information Systems

    NASA Astrophysics Data System (ADS)

    Lu, Xiaodong; Arfaoui, Helene; Mori, Kinji

    In a highly dynamic electronic commerce environment, the need for adaptability and rapid response times in information service systems has become increasingly important. In order to cope with continuously changing conditions of service provision and utilization, the Faded Information Field (FIF) has been proposed. FIF is a distributed information service system architecture, sustained by push/pull mobile agents, that brings high assurance of services through a recursive, demand-oriented provision of the most popular information closer to the users, trading off the cost of information service allocation against the cost of access. In this paper, based on an analysis of the relationship among user distribution, information provision, and access time, we propose a technology for FIF design that resolves the competing requirements of users and providers and improves users' access time. In addition, to achieve dynamic load balancing under changing user preferences, an autonomous information reallocation technology is proposed. We prove the effectiveness of the proposed technology through simulation and comparison with a conventional system.

  7. Role of Sectoral Transformation in the Evolution of Water Management Norms in Agricultural Catchments: A Sociohydrologic Modeling Analysis

    NASA Astrophysics Data System (ADS)

    Roobavannan, M.; Kandasamy, J.; Pande, S.; Vigneswaran, S.; Sivapalan, M.

    2017-10-01

    This study is focused on the water-agriculture-environment nexus as it played out in the Murrumbidgee River Basin, eastern Australia, and how the coevolution of society and water management actually transpired. Over 100 years of agricultural development, the Murrumbidgee Basin experienced a "pendulum swing" in water allocation, from initially exclusive use for agricultural production to reallocation back to the environment. In this paper, we hypothesize that in the competition for water between economic livelihood and environmental wellbeing, economic diversification was the key to swinging community sentiment in favor of environmental protection, and to triggering policy action that resulted in more water allocation to the environment. To test this hypothesis, we developed a sociohydrology model to link the dynamics of the whole economy (both agriculture and industry, composed of manufacturing and services) to the community's sensitivity toward the environment. Changing community sensitivity influenced how water was allocated and governed and how the agricultural sector grew relative to the industrial sector. In this way, we show that economic diversification played a key role in influencing the community's values and preferences with respect to the environment and economic growth. Without diversification, model simulations show that the community would not have been sufficiently sensitive and willing enough to act to restore the environment, highlighting the key role of sectoral transformation in achieving the goal of sustainable agricultural development.

  8. Compilation of Abstracts for SC12 Conference Proceedings

    NASA Technical Reports Server (NTRS)

    Morello, Gina Francine (Compiler)

    2012-01-01

    1 A Breakthrough in Rotorcraft Prediction Accuracy Using Detached Eddy Simulation; 2 Adjoint-Based Design for Complex Aerospace Configurations; 3 Simulating Hypersonic Turbulent Combustion for Future Aircraft; 4 From a Roar to a Whisper: Making Modern Aircraft Quieter; 5 Modeling of Extended Formation Flight on High-Performance Computers; 6 Supersonic Retropropulsion for Mars Entry; 7 Validating Water Spray Simulation Models for the SLS Launch Environment; 8 Simulating Moving Valves for Space Launch System Liquid Engines; 9 Innovative Simulations for Modeling the SLS Solid Rocket Booster Ignition; 10 Solid Rocket Booster Ignition Overpressure Simulations for the Space Launch System; 11 CFD Simulations to Support the Next Generation of Launch Pads; 12 Modeling and Simulation Support for NASA's Next-Generation Space Launch System; 13 Simulating Planetary Entry Environments for Space Exploration Vehicles; 14 NASA Center for Climate Simulation Highlights; 15 Ultrascale Climate Data Visualization and Analysis; 16 NASA Climate Simulations and Observations for the IPCC and Beyond; 17 Next-Generation Climate Data Services: MERRA Analytics; 18 Recent Advances in High-Resolution Global Atmospheric Modeling; 19 Causes and Consequences of Turbulence in the Earth's Protective Shield; 20 NASA Earth Exchange (NEX): A Collaborative Supercomputing Platform; 21 Powering Deep Space Missions: Thermoelectric Properties of Complex Materials; 22 Meeting NASA's High-End Computing Goals Through Innovation; 23 Continuous Enhancements to the Pleiades Supercomputer for Maximum Uptime; 24 Live Demonstrations of 100-Gbps File Transfers Across LANs and WANs; 25 Untangling the Computing Landscape for Climate Simulations; 26 Simulating Galaxies and the Universe; 27 The Mysterious Origin of Stellar Masses; 28 Hot-Plasma Geysers on the Sun; 29 Turbulent Life of Kepler Stars; 30 Modeling Weather on the Sun; 31 Weather on Mars: The Meteorology of Gale Crater; 32 Enhancing Performance of NASA's High-End Computing Applications; 33 Designing Curiosity's Perfect Landing on Mars; 34 The Search Continues: Kepler's Quest for Habitable Earth-Sized Planets.

  9. Cost-effective practices in the blood service sector.

    PubMed

    Katsaliaki, Korina

    2008-05-01

    The objective of this study is to recommend alternative policies, tested on a computer simulation model, for more cost-effective management of the blood supply chain in the UK. With the use of primary and secondary data from the National Blood Service (NBS) and the supplied hospitals, statistical analysis is conducted and a detailed discrete event simulation model of a vertical part of the UK supply chain of blood products is developed to test and identify good ordering, inventory, and distribution practices. Fewer outdates, group substitutions, shortages, and deliveries could be achieved by blood banks by holding stock of rare blood groups of red blood cells (RBC), having a second routine delivery per weekday, exercising a more insensitive ordering point for RBC, reducing the total crossmatch release period to less than 1.5 days, increasing the transfusion-to-crossmatch ratio to 70%, adhering to age-based issuing of orders, and holding RBC stock of a weighted average of approximately 4 days. The blood supply simulation model can offer useful advice to the stakeholders of the examined system, leading to cost reductions and increased safety. Moreover, it provides a great range of experimental capabilities in a risk-free environment.

  10. Towards data warehousing and mining of protein unfolding simulation data.

    PubMed

    Berrar, Daniel; Stahl, Frederic; Silva, Candida; Rodrigues, J Rui; Brito, Rui M M; Dubitzky, Werner

    2005-10-01

    The prediction of protein structure and the precise understanding of protein folding and unfolding processes remain among the greatest challenges in structural biology and bioinformatics. Computer simulations based on molecular dynamics (MD) are at the forefront of the effort to gain a deeper understanding of these complex processes. Currently, these MD simulations are usually on the order of tens of nanoseconds, generate a large amount of conformational data, and are computationally expensive. More and more groups run such simulations and generate a myriad of data, which raises new challenges in managing and analyzing these data. Because of the vast range of proteins researchers want to study and simulate, the computational effort needed to generate data, the large data volumes involved, and the different types of analyses scientists need to perform, it is desirable to provide a public repository allowing researchers to pool and share protein unfolding data. To adequately organize, manage, and analyze the data generated by unfolding simulation studies, we designed a data warehouse system that is embedded in a grid environment to facilitate the seamless sharing of available computer resources and thus enable many groups to share complex molecular dynamics simulations on a more regular basis. To gain insight into the conformational fluctuations and stability of the monomeric forms of the amyloidogenic protein transthyretin (TTR), molecular dynamics unfolding simulations of the monomer of human TTR have been conducted. Trajectory data and metadata of the wild-type (WT) protein and the highly amyloidogenic variant L55P-TTR represent the test case for the data warehouse. Web and grid services, especially pre-defined data mining services that can run on or 'near' the data repository of the data warehouse, are likely to play a pivotal role in the analysis of molecular dynamics unfolding data.

  11. Vehicle operation characteristic under different ramp entrance conditions in underground road: Analysis, simulation and modelling

    NASA Astrophysics Data System (ADS)

    Yao, Qiming; Liu, Shuo; Liu, Yang

    2018-05-01

    An experimental design was used to study the vehicle operation characteristics at different ramp entrance conditions in an underground road. Using a driving simulator, the experimental scenarios included left and right ramps at the first, second, and third service levels, respectively, to collect vehicle speed, acceleration, lateral displacement, and location information on the ramp entrance section. Using paired t-tests and ANOVA, the influence factors of vehicle operating characteristics were studied. The results show that the effects of ramp layout and mainline traffic environment on vehicle operation characteristics are significant. A regression model of vehicle traveling distance on the acceleration lane is established. Suggestions are made for the ramp entrance design of underground roads.

  12. C3H7NO2S effect on concrete steel-rebar corrosion in 0.5 M H2SO4 simulating industrial/microbial environment

    NASA Astrophysics Data System (ADS)

    Okeniyi, Joshua Olusegun; Nwadialo, Christopher Chukwuweike; Olu-Steven, Folusho Emmanuel; Ebinne, Samaru Smart; Coker, Taiwo Ebenezer; Okeniyi, Elizabeth Toyin; Ogbiye, Adebanji Samuel; Durotoye, Taiwo Omowunmi; Badmus, Emmanuel Omotunde Oluwasogo

    2017-02-01

    This paper investigates the effect of C3H7NO2S (cysteine) on the inhibition of reinforcing steel corrosion in concrete immersed in 0.5 M H2SO4, simulating an industrial/microbial environment. Different C3H7NO2S concentrations were admixed, in duplicate, in steel-reinforced concrete samples that were partially immersed in the acidic sulphate environment. The electrochemical monitoring techniques of open circuit potential, as per ASTM C876-91 R99, and corrosion rate, by linear polarization resistance, were then employed for studying the anticorrosion effect of the organic hydrocarbon admixture in the steel-reinforced concrete samples. Analyses of the electrochemical test data followed ASTM G16-95 R04 prescriptions, including probability distribution modeling with significance testing by Kolmogorov-Smirnov and Student's t-test statistics. Results established that all datasets of corrosion potential followed the Normal, Gumbel, and Weibull distributions, but that only the Weibull model described all the corrosion rate datasets in the study, as per the Kolmogorov-Smirnov test statistics. Results of the Student's t-test showed that differences in corrosion test data between duplicated samples with the same C3H7NO2S concentration were not statistically significant. These results indicated that 0.06878 M C3H7NO2S exhibited an optimal inhibition efficiency of η = 90.52±1.29% on reinforcing steel corrosion in the concrete samples immersed in 0.5 M H2SO4, simulating an industrial/microbial service environment.
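
    The distribution-modeling step referenced here (fit a Weibull model, then test goodness of fit with the Kolmogorov-Smirnov statistic) can be reproduced generically with SciPy, as sketched below on synthetic data; the sample values are invented, not the paper's measurements.

        # Sketch of the Weibull-fit-plus-KS-test step, assuming SciPy; the
        # corrosion-rate sample below is synthetic, not the study's data.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        corrosion_rate = rng.weibull(1.8, size=40) * 0.12   # synthetic mm/yr values

        params = stats.weibull_min.fit(corrosion_rate, floc=0)  # shape, loc, scale
        stat, p_value = stats.kstest(corrosion_rate, "weibull_min", args=params)
        print(f"shape={params[0]:.2f}, scale={params[2]:.3f}, KS p={p_value:.3f}")
        # a p-value above the chosen significance level -> Weibull model not rejected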

  13. Modeling the Blast Load Simulator Airblast Environment using First Principles Codes. Report 1, Blast Load Simulator Environment

    DTIC Science & Technology

    2016-11-01

    ERDC/GSL TR-16-31: Modeling the Blast Load Simulator Airblast Environment Using First Principles Codes. Report 1, Blast Load Simulator Environment. Gregory C. Bessette, James L. O'Daniel. The objective was to evaluate several first principles codes (FPCs) for modeling airblast environments typical of those encountered in the BLS. The FPCs considered were

  14. SimBox: a simulation-based scalable architecture for distributed command and control of spaceport and service constellations

    NASA Astrophysics Data System (ADS)

    Prasad, Guru; Jayaram, Sanjay; Ward, Jami; Gupta, Pankaj

    2004-09-01

    In this paper, Aximetric proposes a decentralized Command and Control (C2) architecture for distributed control of a cluster of on-board health monitoring and software-enabled control systems, called SimBOX, that will reuse some of the real-time infrastructure (RTI) functionality from current military real-time simulation architectures. The unique aspects of the approach are a "plug and play" environment for system components that run at different data rates (Hz) and the ability to replicate or transfer C2 operations to various subsystems in a scalable manner. This is made possible by a communication bus called the "Distributed Shared Data Bus" and a distributed computing environment that scales to the control needs by providing self-contained computing, data logging and control function modules that can be rapidly reconfigured to perform different functions. This kind of software-enabled control is very much needed to meet the needs of future aerospace command and control functions.

  15. Programmatic access to logical models in the Cell Collective modeling environment via a REST API.

    PubMed

    Kowal, Bryan M; Schreier, Travis R; Dauer, Joseph T; Helikar, Tomáš

    2016-01-01

    Cell Collective (www.cellcollective.org) is a web-based interactive environment for constructing, simulating and analyzing logical models of biological systems. Herein, we present a Web service to access models, annotations, and simulation data in the Cell Collective platform through a Representational State Transfer (REST) Application Programming Interface (API). The REST API provides a convenient method for obtaining Cell Collective data from almost any programming language. To ensure easy processing of the retrieved data, the request output from the API is available in a standard JSON format. The Cell Collective REST API is freely available at http://thecellcollective.org/tccapi. All public models in Cell Collective are available through the REST API. Users interested in creating and accessing their own models through the REST API first need to create an account in Cell Collective (http://thecellcollective.org). Contact: thelikar2@unl.edu. Technical user documentation: https://goo.gl/U52GWo. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
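
    A REST API returning JSON can be queried from virtually any language; the sketch below uses Python's requests library against the base URL given above. The "/model" endpoint path is a hypothetical placeholder, not a documented route; consult the technical documentation for the actual routes:

        # Minimal sketch: retrieving public model data as JSON from the
        # Cell Collective REST API. The "/model" path is illustrative only.
        import requests

        BASE_URL = "http://thecellcollective.org/tccapi"

        def get_public_models():
            """Fetch the list of public models (endpoint path is hypothetical)."""
            response = requests.get(f"{BASE_URL}/model", timeout=30)
            response.raise_for_status()
            return response.json()  # the API returns standard JSON

        if __name__ == "__main__":
            for model in get_public_models():
                print(model)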

  16. Analysis of Intelligent Transportation Systems Using Model-Driven Simulations.

    PubMed

    Fernández-Isabel, Alberto; Fuentes-Fernández, Rubén

    2015-06-15

    Intelligent Transportation Systems (ITSs) integrate information, sensor, control, and communication technologies to provide transport related services. Their users range from everyday commuters to policy makers and urban planners. Given the complexity of these systems and their environment, their study in real settings is frequently unfeasible. Simulations help to address this problem, but present their own issues: there can be unintended mistakes in the transition from models to code; their platforms frequently bias modeling; and it is difficult to compare works that use different models and tools. In order to overcome these problems, this paper proposes a framework for a model-driven development of these simulations. It is based on a specific modeling language that supports the integrated specification of the multiple facets of an ITS: people, their vehicles, and the external environment; and a network of sensors and actuators conveniently arranged and distributed that operates over them. The framework works with a model editor to generate specifications compliant with that language, and a code generator to produce code from them using platform specifications. There are also guidelines to help researchers in the application of this infrastructure. A case study on advanced management of traffic lights with cameras illustrates its use.

  17. Analysis of Intelligent Transportation Systems Using Model-Driven Simulations

    PubMed Central

    Fernández-Isabel, Alberto; Fuentes-Fernández, Rubén

    2015-01-01

    Intelligent Transportation Systems (ITSs) integrate information, sensor, control, and communication technologies to provide transport related services. Their users range from everyday commuters to policy makers and urban planners. Given the complexity of these systems and their environment, their study in real settings is frequently unfeasible. Simulations help to address this problem, but present their own issues: there can be unintended mistakes in the transition from models to code; their platforms frequently bias modeling; and it is difficult to compare works that use different models and tools. In order to overcome these problems, this paper proposes a framework for a model-driven development of these simulations. It is based on a specific modeling language that supports the integrated specification of the multiple facets of an ITS: people, their vehicles, and the external environment; and a network of sensors and actuators conveniently arranged and distributed that operates over them. The framework works with a model editor to generate specifications compliant with that language, and a code generator to produce code from them using platform specifications. There are also guidelines to help researchers in the application of this infrastructure. A case study on advanced management of traffic lights with cameras illustrates its use. PMID:26083232

  18. Stochastic Coloured Petrinet Based Healthcare Infrastructure Interdependency Model

    NASA Astrophysics Data System (ADS)

    Nukavarapu, Nivedita; Durbha, Surya

    2016-06-01

    The Healthcare Critical Infrastructure (HCI) protects all sectors of society from hazards such as terrorism, infectious disease outbreaks, and natural disasters. HCI plays a significant role in response and recovery across all other sectors in the event of a natural or man-made disaster. However, for its continuity of operations and service delivery, HCI is dependent on other interdependent Critical Infrastructures (CI) such as Communications, Electric Supply, Emergency Services, Transportation Systems, and Water Supply Systems. During a mass-casualty event due to disasters such as floods, a major challenge for HCI is to respond to the crisis in a timely manner in an uncertain and variable environment. To address this issue, the HCI should be disaster-prepared, by fully understanding the complexities and interdependencies that exist in a hospital, emergency department or emergency response event. Modelling and simulation of a disaster scenario with these complexities would help in training and would provide an opportunity for all the stakeholders to work together in a coordinated response to a disaster. This paper presents interdependencies related to HCI based on a Stochastic Coloured Petri Net (SCPN) modelling and simulation approach, given a flood scenario as the disaster disrupting the infrastructure nodes. The entire model is integrated with a geographic-information-based decision support system to visualize the dynamic behaviour of the interdependency of the healthcare and related CI network in a geographically based environment.

  19. Northwest Trajectory Analysis Capability: A Platform for Enhancing Computational Biophysics Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterson, Elena S.; Stephan, Eric G.; Corrigan, Abigail L.

    2008-07-30

    As computational resources continue to increase, the ability of computational simulations to effectively complement, and in some cases replace, experimentation in scientific exploration also increases. Today, large-scale simulations are recognized as an effective tool for scientific exploration in many disciplines including chemistry and biology. A natural side effect of this trend has been the need for an increasingly complex analytical environment. In this paper, we describe the Northwest Trajectory Analysis Capability (NTRAC), an analytical software suite developed to enhance the efficiency of computational biophysics analyses. Our strategy is to layer higher-level services and introduce improved tools within the user's familiar environment without preventing researchers from using traditional tools and methods. Our desire is to share these experiences to serve as an example for effectively analyzing data-intensive, large-scale simulation data.

  20. E-research platform of EPOS Thematic Core Service "ANTHROPOGENIC HAZARDS"

    NASA Astrophysics Data System (ADS)

    Orlecka-Sikora, Beata; Lasocki, Stanisław; Grasso, Jean Robert; Schmittbuhl, Jean; Kwiatek, Grzegorz; Garcia, Alexander; Cassidy, Nigel; Sterzel, Mariusz; Szepieniec, Tomasz; Dineva, Savka; Biggare, Pascal; Saccorotti, Gilberto; Sileny, Jan; Fischer, Tomas

    2016-04-01

    EPOS Thematic Core Service ANTHROPOGENIC HAZARDS (TCS AH) aims to create new research opportunities in the field of anthropogenic hazards induced by the exploitation of georesources. TCS AH, based on the prototype built in the framework of the IS-EPOS project (https://tcs.ah-epos.eu/), financed from Polish structural funds (POIG.02.03.00-14-090/13-00), is being further developed within the EPOS IP project (H2020-INFRADEV-1-2015-1, INFRADEV-3-2015). TCS AH is designed as a functional e-research environment that gives researchers the maximum possible freedom for in silico experimentation by providing a virtual laboratory in which they can create their own workspaces with their own processing streams. The unique integrated RI comprises: (i) data gathered in so-called "episodes", comprehensively describing a geophysical process, induced or triggered by human technological activity, which under certain circumstances can become hazardous for people, infrastructure and the environment; and (ii) problem-oriented, specific high-level services, with particular attention devoted to methods analyzing correlations between technology, geophysical response and resulting hazard. Services to be implemented are grouped within six blocks: (1) basic services for data integration and handling; (2) services for physical models of stress/strain changes over time and space as driven by geo-resource production; (3) services for analysing geophysical signals; (4) services to extract the relation between technological operations and observed induced seismicity/deformation; (5) services for quantitative probabilistic assessment of anthropogenic seismic hazard - statistical properties of anthropogenic seismic series and their dependence on time-varying anthropogenesis; ground motion prediction equations; and stationary and time-dependent probabilistic seismic hazard estimates, related to time-changeable technological factors inducing the seismic process; and (6) the Simulator for Multi-hazard/multi-risk assessment in ExploRation/exploitation of GEoResources (MERGER) - numerical estimation of the occurrence probability of chains of events or processes impacting the environment. TCS AH will also serve the public sector with expert knowledge and background information; to fulfil this aim, services for outreach, dissemination and communication will be implemented. From the technical point of view, the implementation of services will proceed according to the methods worked out within the aforementioned IS-EPOS project. The detailed workflows of the implementation process for these services, and of the interaction between the user and TCS AH, have already been prepared.

  1. SAPT units turn-on in an interference-dominant environment. [Stand Alone Pressure Transducer

    NASA Technical Reports Server (NTRS)

    Peng, W.-C.; Yang, C.-C.; Lichtenberg, C.

    1990-01-01

    A stand alone pressure transducer (SAPT) is a credit-card-sized smart pressure sensor inserted between the tile and the aluminum skin of a space shuttle. Reliably initiating the SAPT units via RF signals in a prelaunch environment is a challenging problem. Multiple-source interference may exist if more than one GSE (ground support equipment) antenna is turned on at the same time to meet the simultaneity requirement of 10 ms. A polygon model for orbiter, external tank, solid rocket booster, and tail service masts is used to simulate the prelaunch environment. Geometric optics is then applied to identify the coverage areas and the areas which are vulnerable to multipath and/or multiple-source interference. Simulation results show that the underside areas of an orbiter have incidence angles exceeding 80 deg. For multipath interference, both sides of the cargo bay areas are found to be vulnerable to a worst-case multipath loss exceeding 20 dB. Multiple-source interference areas are also identified. Mitigation methods for the coverage and interference problem are described. It is shown that multiple-source interference can be eliminated (or controlled) using the time-division-multiplexing method or the time-stamp approach.

  2. Technology developments integrating a space network communications testbed

    NASA Technical Reports Server (NTRS)

    Kwong, Winston; Jennings, Esther; Clare, Loren; Leang, Dee

    2006-01-01

    As future manned and robotic space explorations missions involve more complex systems, it is essential to verify, validate, and optimize such systems through simulation and emulation in a low cost testbed environment. The goal of such a testbed is to perform detailed testing of advanced space and ground communications networks, technologies, and client applications that are essential for future space exploration missions. We describe the development of new technologies enhancing our Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE) that enables its integration in a distributed space communications testbed. MACHETE combines orbital modeling, link analysis, and protocol and service modeling to quantify system performance based on comprehensive considerations of different aspects of space missions.

  3. Crack initiation modeling of a directionally-solidified nickel-base superalloy

    NASA Astrophysics Data System (ADS)

    Gordon, Ali Page

    Combustion gas turbine components designed for application in electric power generation equipment are subject to periodic replacement as a result of cracking, damage, and mechanical property degeneration that render them unsafe for continued operation. In view of the significant costs associated with inspecting, servicing, and replacing damaged components, there has been much interest in developing models that not only predict service life, but also estimate the evolved microstructural state of the material. This thesis explains manifestations of microstructural damage mechanisms that facilitate fatigue crack nucleation in newly-developed directionally-solidified (DS) Ni-base superalloy components exposed to elevated temperatures and high stresses. In this study, models were developed and validated for damage and life prediction using DS GTD-111 as the subject material. This material, proprietary to General Electric Energy, has a chemical composition and grain structure designed to withstand creep damage occurring in the first and second stage blades of gas-powered turbines. The service conditions in these components, which generally exceed 600°C, facilitate the onset of one or more damage mechanisms related to fatigue, creep, or environment. The study was divided into an empirical phase, which consisted of experimentally simulating service conditions in fatigue specimens, and a modeling phase, which entailed numerically simulating the stress-strain response of the material. Experiments have been carried out to simulate a variety of thermal, mechanical, and environmental operating conditions endured by longitudinally (L) and transversely (T) oriented DS GTD-111. Both in-phase and out-of-phase thermo-mechanical fatigue tests were conducted. In some cases, tests in extreme environments/temperatures were needed to isolate one or at most two of the mechanisms causing damage. Microstructural examinations were carried out via SEM and optical microscopy. A continuum crystal plasticity model was used to simulate the material behavior in the L and T orientations. The constitutive model was implemented in ABAQUS and a parameter estimation scheme was developed to obtain the material constants. A physically-based model was developed for correlating crack initiation life based on the experimental life data, and predictions are made using the crack initiation model. Assuming a unique relationship between the damage fraction and cycle fraction with respect to cycles to crack initiation for each damage mode, the total crack initiation life has been represented in terms of the individual damage components (fatigue, creep-fatigue, creep, and oxidation-fatigue) observed at the end state of crack initiation.

  4. Being pragmatic about healthcare complexity: our experiences applying complexity theory and pragmatism to health services research.

    PubMed

    Long, Katrina M; McDermott, Fiona; Meadows, Graham N

    2018-06-20

    The healthcare system has proved a challenging environment for innovation, especially in the area of health services management and research. This is often attributed to the complexity of the healthcare sector, characterized by intersecting biological, social and political systems spread across geographically disparate areas. To help make sense of this complexity, researchers are turning towards new methods and frameworks, including simulation modeling and complexity theory. Herein, we describe our experiences implementing and evaluating a health services innovation in the form of simulation modeling. We explore the strengths and limitations of complexity theory in evaluating health service interventions, using our experiences as examples. We then argue for the potential of pragmatism as an epistemic foundation for the methodological pluralism currently found in complexity research. We discuss the similarities between complexity theory and pragmatism, and close by revisiting our experiences putting pragmatic complexity theory into practice. We found the commonalities between pragmatism and complexity theory to be striking. These included a sensitivity to research context, a focus on applied research, and the valuing of different forms of knowledge. We found that, in practice, a pragmatic complexity theory approach provided more flexibility to respond to the rapidly changing context of health services implementation and evaluation. However, this approach requires a redefinition of implementation success, away from pre-determined outcomes and process fidelity, to one that embraces the continual learning, evolution, and emergence that characterized our project.

  5. Demonstration of NICT Space Weather Cloud --Integration of Supercomputer into Analysis and Visualization Environment--

    NASA Astrophysics Data System (ADS)

    Watari, S.; Morikawa, Y.; Yamamoto, K.; Inoue, S.; Tsubouchi, K.; Fukazawa, K.; Kimura, E.; Tatebe, O.; Kato, H.; Shimojo, S.; Murata, K. T.

    2010-12-01

    In the Solar-Terrestrial Physics (STP) field, the spatio-temporal resolution of computer simulations is getting ever higher because of the tremendous advancement of supercomputers. A further advance is Grid computing, which integrates distributed computational resources to provide scalable computing power. Simulation research is most effective when researchers can design their own physical models, perform calculations on a supercomputer, and analyze and visualize the results by familiar methods. A supercomputer, however, is usually far from the analysis and visualization environment. In general, researchers analyze and visualize on locally managed workstations (WSs), because installing and operating software on a WS is easy; data must therefore be copied manually from the supercomputer to the WS. The time needed for data transfer over a long-delay network is a real obstacle to high-accuracy simulations. In terms of usefulness, it is important to integrate a supercomputer and an analysis and visualization environment seamlessly, using a researcher's familiar methods. NICT has been developing a cloud computing environment (the NICT Space Weather Cloud). In the NICT Space Weather Cloud, disk servers are located near its supercomputer and the WSs for data analysis and visualization. They are connected to JGN2plus, a high-speed network for research and development. Distributed virtual high-capacity storage is also constructed with Grid Datafarm (Gfarm v2). Huge data outputs from the supercomputer are transferred to the virtual storage through JGN2plus. A researcher can thus concentrate on the research, using familiar methods, without regard to the distance between the supercomputer and the analysis and visualization environment. Currently, a total of 16 disk servers are set up at NICT headquarters (Koganei, Tokyo), the JGN2plus NOC (Otemachi, Tokyo), the Okinawa Subtropical Environment Remote-Sensing Center, and the Cybermedia Center, Osaka University. They are connected over JGN2plus, and they constitute 1 PB (physical size) of virtual storage under Gfarm v2. These disk servers are connected with the supercomputers of NICT and Osaka University, and a system has been built in which data output from the supercomputers is automatically transferred to the virtual storage. The measured transfer rate is about 50 GB/hour, a performance estimated to be reasonable for certain simulations and analyses, such as reconstruction of the coronal magnetic field. This research serves as an experiment with the system, while verification of its practicality is advanced at the same time. Herein we introduce an overview of the space weather cloud system we have developed so far. We also demonstrate several scientific results obtained using the space weather cloud system, and introduce several web applications of the cloud, offered as a service named "e-SpaceWeather" (e-SW). e-SW provides a variety of online space weather services covering many aspects.

  6. Neutral Buoyancy Simulator: MSFC-Langley joint test of large space structures component assembly:

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Once the United States' space program had progressed from Earth's orbit into outer space, the prospect of building and maintaining a permanent presence in space was realized. To accomplish this feat, NASA launched a temporary workstation, Skylab, to discover the effects of low gravity and weightlessness on the human body, and also to develop tools and equipment that would be needed in the future to build and maintain a more permanent space station. The structures, techniques, and work schedules had to be carefully designed to fit this unique construction site. The components had to be lightweight for transport into orbit, yet durable. The station also had to be made with removable parts for easy servicing and repairs by astronauts. All of the tools necessary for service and repairs had to be designed for easy manipulation by a suited astronaut. And construction methods had to be efficient due to the limited time the astronauts could remain outside their controlled environment. In light of all the specific needs for this project, an environment on Earth had to be developed that could simulate a low gravity atmosphere. A Neutral Buoyancy Simulator (NBS) was constructed by NASA Marshall Space Flight Center (MSFC) in 1968. Since then, NASA scientists have used this facility to understand how humans work best in low gravity and also to provide information about the different kinds of structures that can be built. Another facet of the space station would be electrical connectors, which would be used for powering tools the astronauts would need for construction, maintenance and repairs. Shown is an astronaut training during an underwater electrical connector test in the NBS.

  7. Neutral Buoyancy Simulator-NB32-Assembly of Large Space Structure

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Once the United States' space program had progressed from Earth's orbit into outer space, the prospect of building and maintaining a permanent presence in space was realized. To accomplish this feat, NASA launched a temporary workstation, Skylab, to discover the effects of low gravity and weightlessness on the human body, and also to develop tools and equipment that would be needed in the future to build and maintain a more permanent space station. The structures, techniques, and work schedules had to be carefully designed to fit this unique construction site. The components had to be lightweight for transport into orbit, yet durable. The station also had to be made with removable parts for easy servicing and repairs by astronauts. All of the tools necessary for service and repairs had to be designed for easy manipulation by a suited astronaut. Construction methods had to be efficient due to the limited time the astronauts could remain outside their controlled environment. In light of all the specific needs for this project, an environment on Earth had to be developed that could simulate a low gravity atmosphere. A Neutral Buoyancy Simulator (NBS) was constructed by NASA's Marshall Space Flight Center (MSFC) in 1968. Since then, NASA scientists have used this facility to understand how humans work best in low gravity and also to provide information about the different kinds of structures that can be built. Pictured is a Massachusetts Institute of Technology (MIT) student working in a spacesuit on the Experimental Assembly of Structures in Extravehicular Activity (EASE) project, which was developed as a joint effort between MSFC and MIT. The EASE experiment required that crew members assemble small components to form larger components, working from the payload bay of the space shuttle. The MIT student in this photo is assembling two six-beam tetrahedrons.

  8. A framework for service enterprise workflow simulation with multi-agents cooperation

    NASA Astrophysics Data System (ADS)

    Tan, Wenan; Xu, Wei; Yang, Fujun; Xu, Lida; Jiang, Chuanqun

    2013-11-01

    Dynamic process modelling for service businesses is the key technique for service-oriented information systems and service business management, and the workflow model of business processes is the core part of service systems. Service business workflow simulation is the prevalent approach for analyzing service business processes dynamically. The generic method for service business workflow simulation is based on discrete-event queuing theory, which lacks flexibility and scalability. In this paper, we propose a service workflow-oriented framework for the process simulation of service businesses using multi-agent cooperation to address the above issues. Social rationality of agents is introduced into the proposed framework. Adopting rationality as one social factor in decision-making strategies, flexible scheduling of activity instances has been implemented. A system prototype has been developed to validate the proposed simulation framework through a business case study.

  9. Hybrid Symbiotic Organisms Search Optimization Algorithm for Scheduling of Tasks on Cloud Computing Environment.

    PubMed

    Abdullahi, Mohammed; Ngadi, Md Asri

    2016-01-01

    Cloud computing has attracted significant attention from the research community because of the rapid migration of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of easier deployment of application services. Tasks are submitted to cloud datacenters to be processed on a pay-as-you-go basis. Task scheduling is one of the significant research challenges in the cloud computing environment. The current formulation of the task scheduling problem has been shown to be NP-complete, hence finding an exact solution, especially for large problem sizes, is intractable. The heterogeneous and dynamic nature of cloud resources makes optimal task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimal resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment based on a proposed Simulated Annealing (SA) based SOS (SASOS), in order to improve the convergence rate and solution quality of SOS. The SOS algorithm has a strong global exploration capability and uses few parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, hence adding exploitation ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), which reduced makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Simulation results showed that the hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan.
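
    As a generic illustration of the simulated annealing component, the classic SA acceptance rule lets the search escape local optima by occasionally accepting a worse schedule; this is a sketch of the standard rule, not the paper's actual SASOS code:

        # Classic simulated-annealing acceptance step for a makespan-minimizing
        # task schedule (illustrative sketch, not the SASOS implementation).
        import math
        import random

        def accept(current_makespan, candidate_makespan, temperature):
            """Always accept improvements; accept worse schedules with
            probability exp(-delta/T), which shrinks as T cools."""
            delta = candidate_makespan - current_makespan
            if delta <= 0:
                return True
            return random.random() < math.exp(-delta / temperature)

        # A typical cooling schedule is geometric: T <- 0.95 * T per iteration,
        # so late in the search almost only improving moves are accepted.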

  10. Around Marshall

    NASA Image and Video Library

    1977-04-12

    Once the United States' space program had progressed from Earth's orbit into outer space, the prospect of building and maintaining a permanent presence in space was realized. To accomplish this feat, NASA launched a temporary workstation, Skylab, to discover the effects of low gravity and weightlessness on the human body, and also to develop tools and equipment that would be needed in the future to build and maintain a more permanent space station. The structures, techniques, and work schedules had to be carefully designed to fit this unique construction site. The components had to be lightweight for transport into orbit, yet durable. The station also had to be made with removable parts for easy servicing and repairs by astronauts. All of the tools necessary for service and repairs had to be designed for easy manipulation by a suited astronaut. And construction methods had to be efficient due to the limited time the astronauts could remain outside their controlled environment. In light of all the specific needs for this project, an environment on Earth had to be developed that could simulate a low gravity atmosphere. A Neutral Buoyancy Simulator (NBS) was constructed by NASA Marshall Space Flight Center (MSFC) in 1968. Since then, NASA scientists have used this facility to understand how humans work best in low gravity and also to provide information about the different kinds of structures that can be built. Pictured is an experiment where the astronaut is required to move a large object which weighed 19,000 pounds. It was moved with relative ease once the astronaut became familiar with his environment and his near-weightless condition. Experiments of this nature provided scientists with the information needed regarding weight and mass allowances astronauts could manage in preparation for building a permanent space station in the future.

  11. Neutral Buoyancy Test NB-14 Large Space Structure Assembly

    NASA Technical Reports Server (NTRS)

    1977-01-01

    Once the United States' space program had progressed from Earth's orbit into outer space, the prospect of building and maintaining a permanent presence in space was realized. To accomplish this feat, NASA launched a temporary workstation, Skylab, to discover the effects of low gravity and weightlessness on the human body, and also to develop tools and equipment that would be needed in the future to build and maintain a more permanent space station. The structures, techniques, and work schedules had to be carefully designed to fit this unique construction site. The components had to be lightweight for transport into orbit, yet durable. The station also had to be made with removable parts for easy servicing and repairs by astronauts. All of the tools necessary for service and repairs had to be designed for easy manipulation by a suited astronaut. And construction methods had to be efficient due to the limited time the astronauts could remain outside their controlled environment. In light of all the specific needs for this project, an environment on Earth had to be developed that could simulate a low gravity atmosphere. A Neutral Buoyancy Simulator (NBS) was constructed by NASA Marshall Space Flight Center (MSFC) in 1968. Since then, NASA scientists have used this facility to understand how humans work best in low gravity and also to provide information about the different kinds of structures that can be built. Pictured is an experiment where the astronaut is required to move a large object which weighed 19,000 pounds. It was moved with relative ease once the astronaut became familiar with his environment and his near-weightless condition. Experiments of this nature provided scientists with the information needed regarding weight and mass allowances astronauts could manage in preparation for building a permanent space station in the future.

  12. Hybrid Symbiotic Organisms Search Optimization Algorithm for Scheduling of Tasks on Cloud Computing Environment

    PubMed Central

    Abdullahi, Mohammed; Ngadi, Md Asri

    2016-01-01

    Cloud computing has attracted significant attention from the research community because of the rapid migration of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of easier deployment of application services. Tasks are submitted to cloud datacenters to be processed on a pay-as-you-go basis. Task scheduling is one of the significant research challenges in the cloud computing environment. The current formulation of the task scheduling problem has been shown to be NP-complete, hence finding an exact solution, especially for large problem sizes, is intractable. The heterogeneous and dynamic nature of cloud resources makes optimal task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimal resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment based on a proposed Simulated Annealing (SA) based SOS (SASOS), in order to improve the convergence rate and solution quality of SOS. The SOS algorithm has a strong global exploration capability and uses few parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, hence adding exploitation ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), which reduced makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Simulation results showed that the hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan. PMID:27348127

  13. Modeling and Simulation Verification, Validation and Accreditation (VV&A): A New Undertaking for the Exploration Systems Mission Directorate

    NASA Technical Reports Server (NTRS)

    Prill, Mark E.

    2005-01-01

    The overall presentation was focused to provide, for the Verification, Validation and Accreditation (VV&A) session audience, a snapshot review of the Exploration Systems Mission Directorate's (ESMD) investigation into implementation of a modeling and simulation (M&S) VV&A program. The presentation provides some legacy ESMD reference material, including information on the then-current organizational structure and the M&S (Simulation Based Acquisition (SBA)) focus contained therein, to provide a context for the proposed M&S VV&A approach. This reference material briefly highlights the SBA goals and objectives, and outlines FY05 M&S development and implementation consistent with the Subjective Assessment, Constructive Assessment, Operator-in-the-Loop Assessment, Hardware-in-the-Loop Assessment, and In-Service Operations Assessment M&S construct, the NASA Exploration Information Ontology Model (NExIOM) data model, and integration with the Windchill-based Integrated Collaborative Environment (ICE). The presentation then addresses the ESMD team's initial conclusions regarding an M&S VV&A program, summarizes the general VV&A implementation approach anticipated, and outlines some of the recognized VV&A program challenges, all within the broader context of the overarching Integrated Modeling and Simulation (IM&S) environment at both the ESMD and Agency (NASA) levels. The presentation concludes with a status report on the current M&S organization's progress to date relative to the recommended IM&S implementation activity.

  14. MO-DE-BRA-02: SIMAC: A Simulation Tool for Teaching Linear Accelerator Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlone, M; Harnett, N; Department of Radiation Oncology, University of Toronto, Toronto, Ontario

    Purpose: The first goal of this work is to develop software that can simulate the physics of linear accelerators (linacs). The second goal is to show that this simulation tool is effective in teaching linac physics to medical physicists and linac service engineers. Methods: Linacs were modeled using analytical expressions that can correctly describe the physical response of a linac to parameter changes in real time. These expressions were programmed with a graphical user interface in order to produce an environment similar to that of linac service mode. The software, "SIMAC", has been used as a learning aid in a professional development course three times (2014-2016) as well as in a physics graduate program. Exercises were developed to supplement the didactic components of the courses, consisting of activities designed to reinforce the concepts of beam loading, the effect of steering coil currents on beam symmetry, and the relationship between beam energy and flatness. Results: SIMAC was used to teach 35 professionals (medical physicists, regulators, and service engineers; 1-week course) as well as 20 graduate students (1-month project). In the student evaluations, 85% of the students rated the effectiveness of SIMAC as very good or outstanding, and 70% rated the software as the most effective part of the courses. Exercise results were collected showing that 100% of the students were able to use the software correctly. In exercises involving gross changes to linac operating points (i.e., energy changes), the majority of students were able to correctly perform these beam adjustments. Conclusion: Simulation software (SIMAC) can be used to effectively teach linac physics. In short courses, students were able to correctly make gross parameter adjustments that typically require much longer training times using conventional training methods.

  15. Computer network environment planning and analysis

    NASA Technical Reports Server (NTRS)

    Dalphin, John F.

    1989-01-01

    The GSFC Computer Network Environment provides a broadband RF cable between campus buildings and ethernet spines in buildings for the interlinking of Local Area Networks (LANs). This system provides terminal and computer linkage among host and user systems thereby providing E-mail services, file exchange capability, and certain distributed computing opportunities. The Environment is designed to be transparent and supports multiple protocols. Networking at Goddard has a short history and has been under coordinated control of a Network Steering Committee for slightly more than two years; network growth has been rapid with more than 1500 nodes currently addressed and greater expansion expected. A new RF cable system with a different topology is being installed during summer 1989; consideration of a fiber optics system for the future will begin soon. Summer study was directed toward Network Steering Committee operation and planning plus consideration of Center Network Environment analysis and modeling. Biweekly Steering Committee meetings were attended to learn the background of the network and the concerns of those managing it. Suggestions for historical data gathering have been made to support future planning and modeling. Data Systems Dynamic Simulator, a simulation package developed at NASA and maintained at GSFC, was studied as a possible modeling tool for the network environment. A modeling concept based on a hierarchical model was hypothesized for further development. Such a model would allow input of newly updated parameters and would provide an estimation of the behavior of the network.

  16. Proceedings of the Military Operations Research Society (MORS) Workshop on Future Wargaming Developments Held at Newport, Rhode Island on 5-7 December 1989

    DTIC Science & Technology

    1989-12-01

    game) and how this compares with other methods of analysis that might be used to accomplish the same things. Technology wargaming (TWG) has a basic goal... things that technology war games can provide that might be more successful than other methods. Games and simulations can be conducted for the purpose of... technology games for LIC scenarios. 9. Environment. All Services face difficult environments such as poor visibility, unsuitable road conditions, high sea

  17. Propagation considerations in the American Mobile Satellite system design

    NASA Technical Reports Server (NTRS)

    Kittiver, Charles; Sigler, Charles E., Jr.

    1993-01-01

    An overview of the American Mobile Satellite Corporation (AMSC) mobile satellite services (MSS) system is presented, with special emphasis on the propagation issues that were considered in the design. The aspects of the voice codec design that affect system performance in a shadowed environment are discussed. The strategies for overcoming Ku-band rain fades in the uplink and downlink paths of the gateway station are presented. A land mobile propagation study that has both measurement and simulation activities is described.

  18. Aviation Safety Modeling and Simulation (ASMM) Propulsion Fleet Modeling: A Tool for Semi-Automatic Construction of CORBA-based Applications from Legacy Fortran Programs

    NASA Technical Reports Server (NTRS)

    Sang, Janche

    2003-01-01

    Within NASA's Aviation Safety Program, NASA GRC participates in the Modeling and Simulation Project called ASMM. NASA GRC's focus is to characterize propulsion system performance from a fleet management and maintenance perspective by modeling, and through simulation to predict the characteristics of two classes of commercial engines (CFM56 and GE90). In prior years, the High Performance Computing and Communication (HPCC) program funded NASA Glenn to develop large-scale, detailed simulations for the analysis and design of aircraft engines, called the Numerical Propulsion System Simulation (NPSS). Three major aspects of this modeling (the integration of different engine components, the coupling of multiple disciplines, and engine component zooming at the appropriate level of fidelity) require relatively tight coupling of different analysis codes. Most of these codes in aerodynamics and solid mechanics are written in Fortran. Refitting these legacy Fortran codes with distributed objects can increase their reusability. Aviation Safety's modeling and simulation use in characterizing fleet management has similar needs. The modeling and simulation of these propulsion systems use existing Fortran and C codes that are instrumental in determining the performance of the fleet. The research centers on building a CORBA-based development environment in which programmers can easily wrap and couple legacy Fortran codes. This environment consists of a C++ wrapper library to hide the details of CORBA and an efficient remote variable scheme to facilitate data exchange between the client and the server model. Additionally, a Web Service model should also be constructed for evaluation of this technology's use over the next two to three years.
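
    To illustrate the general idea of wrapping a legacy Fortran routine so a modern service can call it, the sketch below uses Python's ctypes in place of the paper's C++/CORBA wrapper library; the library name "libengine.so" and symbol "fleet_perf_" are hypothetical:

        # Illustrative only: exposing a compiled legacy Fortran routine to a
        # higher-level environment. Names are hypothetical placeholders.
        import ctypes

        lib = ctypes.CDLL("./libengine.so")        # compiled legacy Fortran code
        # Fortran passes arguments by reference, hence the pointer argtype.
        lib.fleet_perf_.argtypes = [ctypes.POINTER(ctypes.c_double)]
        lib.fleet_perf_.restype = ctypes.c_double

        def fleet_performance(thrust):
            """Thin wrapper hiding the by-reference calling convention."""
            x = ctypes.c_double(thrust)
            return lib.fleet_perf_(ctypes.byref(x))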

  19. Models and applications for space weather forecasting and analysis at the Community Coordinated Modeling Center.

    NASA Astrophysics Data System (ADS)

    Kuznetsova, Maria

    The Community Coordinated Modeling Center (CCMC, http://ccmc.gsfc.nasa.gov) was established at the dawn of the new millennium as a long-term, flexible solution to the problem of transitioning progress in space environment modeling into operational space weather forecasting. CCMC hosts an expanding collection of state-of-the-art space weather models developed by the international space science community. Over the years the CCMC has acquired unique experience in preparing complex models and model chains for operational environments and in developing and maintaining custom displays and powerful web-based systems and tools ready to be used by researchers, space weather service providers and decision makers. In support of the space weather needs of NASA users, CCMC is developing highly tailored applications and services that target specific orbits or locations in space, and is partnering with NASA mission specialists on linking CCMC space environment modeling with impacts on biological and technological systems in space. Confidence assessment of model predictions is an essential element of space environment modeling. CCMC facilitates interaction between model owners and users in defining physical parameters and metrics formats relevant to specific applications, and leads community efforts to quantify models' ability to simulate and predict space environment events. Interactive on-line model validation systems developed at CCMC make validation a seamless part of the model development cycle. The talk will showcase innovative solutions for space weather research, validation, anomaly analysis and forecasting, and review on-going community-wide model validation initiatives enabled by CCMC applications.

  20. CDPP Tools in the IMPEx infrastructure

    NASA Astrophysics Data System (ADS)

    Gangloff, Michel; Génot, Vincent; Bourrel, Nataliya; Hess, Sébastien; Khodachenko, Maxim; Modolo, Ronan; Kallio, Esa; Alexeev, Igor; Al-Ubaidi, Tarek; Cecconi, Baptiste; André, Nicolas; Budnik, Elena; Bouchemit, Myriam; Dufourg, Nicolas; Beigbeder, Laurent

    2014-05-01

    The CDPP (Centre de Données de la Physique des Plasmas, http://cdpp.eu/), the French data center for plasma physics, has been engaged for more than a decade in the archiving and dissemination of plasma data products from space missions and ground observatories. Besides these activities, the CDPP has developed services like AMDA (http://amda.cdpp.eu/), which enables in-depth analysis of large amounts of data through dedicated functionalities such as visualization, conditional search, and cataloguing, and 3DView (http://3dview.cdpp.eu/), which provides immersive visualizations of planetary environments and is being further developed to include simulation and observational data. Both tools implement the IMPEx protocol (http://impexfp7.oeaw.ac.at/) to give access to outputs of simulation runs and models in planetary sciences from several providers like LATMOS, FMI, and SINP; prototypes have also been built to access some UCLA and CCMC simulations. These tools and their interaction will be presented together with the IMPEx simulation data model (http://impex.latmos.ipsl.fr/tools/DataModel.htm) used for the interface to model databases.

  1. Simulation of thermomechanical fatigue in solder joints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, H.E.; Porter, V.L.; Fye, R.M.

    1997-12-31

    Thermomechanical fatigue (TMF) is a very complex phenomenon in electronic component systems and has been identified as one prominent degradation mechanism for surface mount solder joints in the stockpile. In order to precisely predict the TMF-related effects on the reliability of electronic components in weapons, a multi-level simulation methodology is being developed at Sandia National Laboratories. This methodology links simulation codes of continuum mechanics (JAS3D), microstructural mechanics (GLAD), and microstructural evolution (PARGRAIN) to treat the disparate length scales that exist between the macroscopic response of the component and the microstructural changes occurring in its constituent materials. JAS3D is used to predict strain/temperature distributions in the component due to environmental variable fluctuations. GLAD identifies damage initiation and accumulation in detail based on the spatial information provided by JAS3D. PARGRAIN simulates the changes of material microstructure, such as the heterogeneous coarsening in Sn-Pb solder, when the component's service environment varies.

  2. An Interactive, Web-based High Performance Modeling Environment for Computational Epidemiology.

    PubMed

    Deodhar, Suruchi; Bisset, Keith R; Chen, Jiangzhuo; Ma, Yifei; Marathe, Madhav V

    2014-07-01

    We present an integrated interactive modeling environment to support public health epidemiology. The environment combines a high resolution individual-based model with a user-friendly web-based interface that allows analysts to access the models and the analytics back-end remotely from a desktop or a mobile device. The environment is based on a loosely-coupled service-oriented-architecture that allows analysts to explore various counter factual scenarios. As the modeling tools for public health epidemiology are getting more sophisticated, it is becoming increasingly hard for non-computational scientists to effectively use the systems that incorporate such models. Thus an important design consideration for an integrated modeling environment is to improve ease of use such that experimental simulations can be driven by the users. This is achieved by designing intuitive and user-friendly interfaces that allow users to design and analyze a computational experiment and steer the experiment based on the state of the system. A key feature of a system that supports this design goal is the ability to start, stop, pause and roll-back the disease propagation and intervention application process interactively. An analyst can access the state of the system at any point in time and formulate dynamic interventions based on additional information obtained through state assessment. In addition, the environment provides automated services for experiment set-up and management, thus reducing the overall time for conducting end-to-end experimental studies. We illustrate the applicability of the system by describing computational experiments based on realistic pandemic planning scenarios. The experiments are designed to demonstrate the system's capability and enhanced user productivity.
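
    The start/stop/pause/roll-back control loop described above can be pictured as a checkpointing controller; the sketch below is a conceptual illustration, with a hypothetical class and method names rather than the system's actual API:

        # Conceptual sketch of interactive simulation steering with checkpoints
        # (hypothetical structure; not the environment's real interface).
        import copy

        class SimulationController:
            def __init__(self, initial_state):
                self.state = initial_state
                self.checkpoints = [copy.deepcopy(initial_state)]
                self.paused = False

            def step(self, advance):
                """Advance the model one tick (advance is the propagation
                function) and checkpoint the resulting state."""
                if not self.paused:
                    self.state = advance(self.state)
                    self.checkpoints.append(copy.deepcopy(self.state))

            def pause(self):
                self.paused = True   # analyst inspects state, plans interventions

            def resume(self):
                self.paused = False

            def rollback(self, ticks):
                """Discard the last `ticks` checkpoints and restore the state."""
                ticks = max(0, min(ticks, len(self.checkpoints) - 1))
                if ticks:
                    del self.checkpoints[-ticks:]
                self.state = copy.deepcopy(self.checkpoints[-1])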

  3. An Interactive, Web-based High Performance Modeling Environment for Computational Epidemiology

    PubMed Central

    Deodhar, Suruchi; Bisset, Keith R.; Chen, Jiangzhuo; Ma, Yifei; Marathe, Madhav V.

    2014-01-01

    We present an integrated interactive modeling environment to support public health epidemiology. The environment combines a high resolution individual-based model with a user-friendly web-based interface that allows analysts to access the models and the analytics back-end remotely from a desktop or a mobile device. The environment is based on a loosely-coupled service-oriented-architecture that allows analysts to explore various counter factual scenarios. As the modeling tools for public health epidemiology are getting more sophisticated, it is becoming increasingly hard for non-computational scientists to effectively use the systems that incorporate such models. Thus an important design consideration for an integrated modeling environment is to improve ease of use such that experimental simulations can be driven by the users. This is achieved by designing intuitive and user-friendly interfaces that allow users to design and analyze a computational experiment and steer the experiment based on the state of the system. A key feature of a system that supports this design goal is the ability to start, stop, pause and roll-back the disease propagation and intervention application process interactively. An analyst can access the state of the system at any point in time and formulate dynamic interventions based on additional information obtained through state assessment. In addition, the environment provides automated services for experiment set-up and management, thus reducing the overall time for conducting end-to-end experimental studies. We illustrate the applicability of the system by describing computational experiments based on realistic pandemic planning scenarios. The experiments are designed to demonstrate the system's capability and enhanced user productivity. PMID:25530914

  4. Distributed Data Service for Data Management in Internet of Things Middleware

    PubMed Central

    Cruz Huacarpuma, Ruben; de Sousa Junior, Rafael Timoteo; de Holanda, Maristela Terto; de Oliveira Albuquerque, Robson; García Villalba, Luis Javier; Kim, Tai-Hoon

    2017-01-01

    The development of the Internet of Things (IoT) is closely related to a considerable increase in the number and variety of devices connected to the Internet. Sensors have become a regular component of our environment, as well as smart phones and other devices that continuously collect data about our lives even without our intervention. With such connected devices, a broad range of applications has been developed and deployed, including those dealing with massive volumes of data. In this paper, we introduce a Distributed Data Service (DDS) to collect and process data for IoT environments. One central goal of this DDS is to enable multiple and distinct IoT middleware systems to share common data services from a loosely-coupled provider. In this context, we propose a new specification of functionalities for a DDS and the conception of the corresponding techniques for collecting, filtering and storing data conveniently and efficiently in this environment. Another contribution is a data aggregation component that is proposed to support efficient real-time data querying. To validate its data collecting and querying functionalities and performance, the proposed DDS is evaluated in two case studies regarding a simulated smart home system, the first case devoted to evaluating data collection and aggregation when the DDS is interacting with the UIoT middleware, and the second aimed at comparing the DDS data collection with this same functionality implemented within the Kaa middleware. PMID:28448469
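
    The aggregation component's purpose, maintaining per-sensor summaries so real-time queries avoid scanning raw data, can be illustrated with a running-aggregate structure; this is a hypothetical sketch of the idea, not the DDS implementation:

        # Illustrative running aggregates per sensor for fast real-time queries
        # (hypothetical structure, not the paper's DDS code).
        from collections import defaultdict

        class SensorAggregator:
            def __init__(self):
                # sensor_id -> [count, sum, min, max]
                self.stats = defaultdict(
                    lambda: [0, 0.0, float("inf"), float("-inf")])

            def ingest(self, sensor_id, value):
                s = self.stats[sensor_id]
                s[0] += 1
                s[1] += value
                s[2] = min(s[2], value)
                s[3] = max(s[3], value)

            def query(self, sensor_id):
                count, total, lo, hi = self.stats[sensor_id]
                mean = total / count if count else None
                return {"count": count, "mean": mean, "min": lo, "max": hi}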

  5. A new WiMAX-based simulation model to investigate QoS with OPNET Modeler in a scheduling environment

    NASA Astrophysics Data System (ADS)

    Saini, Sanju; Saini, K. K.

    2012-11-01

    WiMAX stands for Worldwide Interoperability for Microwave Access. It is considered a major part of broadband wireless networking, based on the IEEE 802.16 standard. WiMAX provides innovative fixed as well as mobile platforms for broadband internet access anywhere, anytime, with different transmission modes. This paper presents a WiMAX simulation model designed with OPNET Modeler 14, using its support for WiMAX networks, to measure the delay, load and throughput performance factors. Various scheduling algorithms, such as FIFO, PQ and WFQ, are introduced to compare four types of scheduling service, each with its own QoS needs. The results show approximately equal load and throughput, while delay values vary among the different base stations. The simulation results indicate the correctness and the effectiveness of this approach.
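
    To make the contrast between the FIFO and PQ disciplines concrete, here is a toy sketch of the two queue behaviors (illustrative only; OPNET models these disciplines internally):

        # FIFO serves packets strictly in arrival order; PQ always serves the
        # highest-priority packet first (lower number = higher priority).
        import heapq
        from collections import deque

        fifo = deque()   # FIFO queue
        pq = []          # priority queue (heap)

        def fifo_enqueue(packet):
            fifo.append(packet)

        def fifo_dequeue():
            return fifo.popleft()

        def pq_enqueue(priority, seq, packet):
            # seq breaks ties so equal-priority packets keep arrival order
            heapq.heappush(pq, (priority, seq, packet))

        def pq_dequeue():
            return heapq.heappop(pq)[2]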

  6. A simulation-based training program improves emergency department staff communication.

    PubMed

    Sweeney, Lynn A; Warren, Otis; Gardner, Liz; Rojek, Adam; Lindquist, David G

    2014-01-01

    The objectives of this study were to evaluate the effectiveness of Project CLEAR!, a novel simulation-based training program designed to instill Crew Resource Management (CRM) as the communication standard and to create a service-focused environment in the emergency department (ED) by standardizing the patient encounter. A survey-based study compared physicians' and nurses' perceptions of the quality of communication before and after the training program. Surveys were developed to measure ED staff perceptions of the quality of communication between staff members and with patients. Pretraining and posttraining survey results were compared. After the training program, survey scores improved significantly on questions that asked participants to rate the overall communication between staff members and between staff and patients. A simulation-based training program focusing on CRM and standardizing the patient encounter improves communication in the ED, both between staff members and between staff members and patients.

  7. The effects of background noise on cognitive performance during a 70 hour simulation of conditions aboard the International Space Station.

    PubMed

    Smith, D G; Baranski, J V; Thompson, M M; Abel, S M

    2003-01-01

    A total of twenty-five subjects were cloistered for a period of 70 hours, five at a time, in a hyperbaric chamber modified to simulate the conditions aboard the International Space Station (ISS). A recording of 72 dBA background noise from the ISS service module was used to simulate noise conditions on the ISS. Two groups experienced the background noise throughout the experiment, two other groups experienced the noise only during the day, and one control group was cloistered in a quiet environment. All subjects completed a battery of cognitive tests nine times throughout the experiment. The data showed little or no effect of noise on reasoning, perceptual decision-making, memory, vigilance, mood, or subjective indices of fatigue. Our results suggest that the level of noise on the space station should not affect cognitive performance, at least over a period of several days.

  8. Simulation Modeling and Performance Evaluation of Space Networks

    NASA Technical Reports Server (NTRS)

    Jennings, Esther H.; Segui, John

    2006-01-01

    In space exploration missions, the coordinated use of spacecraft as communication relays increases the efficiency of the endeavors. To conduct trade-off studies of the performance and resource usage of different communication protocols and network designs, JPL designed a comprehensive extendable tool, the Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE). The design and development of MACHETE began in 2000 and is constantly evolving. Currently, MACHETE contains Consultative Committee for Space Data Systems (CCSDS) protocol standards such as Proximity-1, Advanced Orbiting Systems (AOS), Packet Telemetry/Telecommand, Space Communications Protocol Specification (SCPS), and the CCSDS File Delivery Protocol (CFDP). MACHETE uses the Aerospace Corporation's Satellite Orbital Analysis Program (SOAP) to generate the orbital geometry information and contact opportunities. Matlab scripts provide the link characteristics. At the core of MACHETE is a discrete event simulator, QualNet. Delay Tolerant Networking (DTN) is an end-to-end architecture providing communication in and/or through highly stressed networking environments. Stressed networking environments include those with intermittent connectivity, large and/or variable delays, and high bit error rates. To provide its services, the DTN protocols reside at the application layer of the constituent internets, forming a store-and-forward overlay network. The key capabilities of the bundling protocols include custody-based reliability, the ability to cope with intermittent connectivity, the ability to take advantage of scheduled and opportunistic connectivity, and late binding of names to addresses. In this presentation, we report on the addition of MACHETE models needed to support DTN, namely the Bundle Protocol (BP) model. To illustrate the use of MACHETE with the additional DTN model, we provide an example simulation to benchmark its performance. We demonstrate the use of the DTN protocol and discuss statistics gathered concerning the total time needed to simulate numerous bundle transmissions.
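
    The Bundle Protocol model itself lives inside MACHETE/QualNet and is not reproduced in the record; as a conceptual sketch of the store-and-forward overlay with custody-based reliability under intermittent connectivity (all names hypothetical), consider:

```python
class DtnNode:
    """Toy DTN node: holds bundles in custody until a contact to the
    next hop opens, then forwards them. Conceptual sketch only."""

    def __init__(self, name):
        self.name = name
        self.custody = []  # bundles this node is currently responsible for

    def accept(self, bundle):
        self.custody.append(bundle)  # taking custody: survives link outages

    def try_forward(self, next_hop, link_up):
        if not link_up:
            return  # no contact: keep storing (store-and-forward)
        while self.custody:
            bundle = self.custody.pop(0)
            next_hop.accept(bundle)  # custody transfers on successful receipt

lander, orbiter = DtnNode("lander"), DtnNode("orbiter")
lander.accept({"id": 1, "payload": "telemetry"})
lander.try_forward(orbiter, link_up=False)  # contact closed: bundle is stored
lander.try_forward(orbiter, link_up=True)   # scheduled contact: bundle moves on
print([b["id"] for b in orbiter.custody])
```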

  9. Analysis and Description of HOLTIN Service Provision for AECG monitoring in Complex Indoor Environments

    PubMed Central

    Led, Santiago; Azpilicueta, Leire; Aguirre, Erik; de Espronceda, Miguel Martínez; Serrano, Luis; Falcone, Francisco

    2013-01-01

    In this work, a novel ambulatory ECG monitoring device developed in-house called HOLTIN is analyzed when operating in complex indoor scenarios. The HOLTIN system is described, from the technological platform level to its functional model. In addition, the wireless channel behavior, which enables ubiquitous operation, is analyzed using an in-house 3D ray launching simulation code. The effect of human body presence is taken into account by a novel simplified model embedded within the 3D Ray Launching code. Simulation as well as measurement results are presented, showing good agreement. These results may aid in the adequate deployment of this novel device to automate conventional medical processes, increasing the coverage radius and optimizing energy consumption. PMID:23584122

  10. On securing wireless sensor network--novel authentication scheme against DOS attacks.

    PubMed

    Raja, K Nirmal; Beno, M Marsaline

    2014-10-01

    Wireless sensor networks are generally deployed for collecting data from various environments. Several application-specific sensor network cryptography algorithms have been proposed in research. However, WSNs have many constraints, including low computation capability, limited memory, limited energy resources, and vulnerability to physical capture, which impose unique security challenges and call for substantial improvements. This paper presents a novel security mechanism and algorithm for wireless sensor network security, along with an application of this algorithm. The proposed scheme provides strong authentication against denial-of-service (DoS) attacks. The scheme is simulated using Network Simulator 2 (NS2) and then analyzed based on the network packet delivery ratio; the results show that throughput has improved.

  11. Simulation Platform: a cloud-based online simulation environment.

    PubMed

    Yamazaki, Tadashi; Ikeno, Hidetoshi; Okumura, Yoshihiro; Satoh, Shunji; Kamiyama, Yoshimi; Hirata, Yutaka; Inagaki, Keiichiro; Ishihara, Akito; Kannon, Takayuki; Usui, Shiro

    2011-09-01

    For multi-scale and multi-modal neural modeling, multiple neural models described at different levels need to be handled seamlessly. Database technology will become more important for these studies, specifically for downloading and handling the neural models seamlessly and effortlessly. To date, conventional neuroinformatics databases have solely been designed to archive model files, but the databases should give users a chance to validate the models before downloading them. In this paper, we report our on-going project to develop a cloud-based web service for online simulation called "Simulation Platform". Simulation Platform is a cloud of virtual machines running GNU/Linux. On a virtual machine, various software packages are pre-installed, including developer tools such as compilers and libraries, popular neural simulators such as GENESIS, NEURON and NEST, and scientific software such as Gnuplot, R and Octave. When a user posts a request, a virtual machine is assigned to the user, and the simulation starts on that machine. The user remotely accesses the machine through a web browser and carries out the simulation, without the need to install any software but a web browser on the user's own computer. Therefore, Simulation Platform is expected to eliminate impediments to handling multiple neural models that require multiple software packages. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. Reprint of: Simulation Platform: a cloud-based online simulation environment.

    PubMed

    Yamazaki, Tadashi; Ikeno, Hidetoshi; Okumura, Yoshihiro; Satoh, Shunji; Kamiyama, Yoshimi; Hirata, Yutaka; Inagaki, Keiichiro; Ishihara, Akito; Kannon, Takayuki; Usui, Shiro

    2011-11-01

    For multi-scale and multi-modal neural modeling, multiple neural models described at different levels need to be handled seamlessly. Database technology will become more important for these studies, specifically for downloading and handling the neural models seamlessly and effortlessly. To date, conventional neuroinformatics databases have solely been designed to archive model files, but the databases should give users a chance to validate the models before downloading them. In this paper, we report our on-going project to develop a cloud-based web service for online simulation called "Simulation Platform". Simulation Platform is a cloud of virtual machines running GNU/Linux. On a virtual machine, various software packages are pre-installed, including developer tools such as compilers and libraries, popular neural simulators such as GENESIS, NEURON and NEST, and scientific software such as Gnuplot, R and Octave. When a user posts a request, a virtual machine is assigned to the user, and the simulation starts on that machine. The user remotely accesses the machine through a web browser and carries out the simulation, without the need to install any software but a web browser on the user's own computer. Therefore, Simulation Platform is expected to eliminate impediments to handling multiple neural models that require multiple software packages. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. VERCE: a productive e-Infrastructure and e-Science environment for data-intensive seismology research

    NASA Astrophysics Data System (ADS)

    Vilotte, J. P.; Atkinson, M.; Spinuso, A.; Rietbrock, A.; Michelini, A.; Igel, H.; Frank, A.; Carpené, M.; Schwichtenberg, H.; Casarotti, E.; Filgueira, R.; Garth, T.; Germünd, A.; Klampanos, I.; Krause, A.; Krischer, L.; Leong, S. H.; Magnoni, F.; Matser, J.; Moguilny, G.

    2015-12-01

    Seismology addresses fundamental problems in understanding the Earth's internal wave sources and structures, as well as societal applications such as earthquake and tsunami hazard assessment and risk mitigation, and puts a premium on open data accessible through the Federated Digital Seismological Networks. The VERCE project, "Virtual Earthquake and seismology Research Community e-science environment in Europe", has initiated a virtual research environment to support complex orchestrated workflows combining state-of-the-art wave simulation codes and data analysis tools on distributed computing and data infrastructures (DCIs), along with multiple sources of observational data and new capabilities to combine simulation results with observational data. The VERCE Science Gateway provides a view of all the available resources, supporting collaboration with shared data and methods, with data access controls. The mapping to DCIs handles identity management, authority controls, transformations between representations and controls, and access to resources. The framework for computational science that provides simulation codes, such as SPECFEM3D, democratizes their use by getting data from multiple sources, managing Earth models and meshes, distilling them into input data, and capturing results with metadata. The dispel4py data-intensive framework allows for developing data-analysis applications using Python and the ObsPy library, which can be executed on different DCIs. A set of tools allows coupling with seismology and external data services. Provenance-driven tools validate results and show relationships between data to facilitate method improvement. Lessons learned from VERCE training lead us to conclude that solid-Earth scientists could make significant progress by using the VERCE e-science environment. VERCE has already contributed to the European Plate Observation System (EPOS) and is part of the EPOS implementation phase, for which its cross-disciplinary capabilities are being extended.
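
    The record notes that dispel4py analysis applications are written in Python on top of the ObsPy library. The fragment below, which uses only standard ObsPy calls on the library's bundled example data, illustrates the kind of preprocessing stage such a workflow might chain together; it is our illustration, not VERCE code, and assumes ObsPy is installed.

```python
from obspy import read

# Load ObsPy's bundled example seismogram (a three-trace stream).
stream = read()

# Typical preprocessing stages a data-analysis workflow might chain:
stream.detrend("demean")                             # remove the mean offset
stream.filter("bandpass", freqmin=0.1, freqmax=1.0)  # keep the 0.1-1.0 Hz band

for trace in stream:
    print(trace.id, trace.stats.sampling_rate, float(trace.data.max()))
```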

  14. CREATE-IP and CREATE-V: Data and Services Update

    NASA Astrophysics Data System (ADS)

    Carriere, L.; Potter, G. L.; Hertz, J.; Peters, J.; Maxwell, T. P.; Strong, S.; Shute, J.; Shen, Y.; Duffy, D.

    2017-12-01

    The NASA Center for Climate Simulation (NCCS) at the Goddard Space Flight Center and the Earth System Grid Federation (ESGF) are working together to build a uniform environment for the comparative study and use of a group of reanalysis datasets of particular importance to the research community. This effort is called the Collaborative REAnalysis Technical Environment (CREATE), and it contains two components: the CREATE-Intercomparison Project (CREATE-IP) and CREATE-V. This year's efforts included generating and publishing an atmospheric reanalysis ensemble mean and spread and improving the analytics available through CREATE-V. Related activities included adding access to subsets of the reanalysis data through ArcGIS and expanding the visualization tool to GMAO forecast data. This poster will present the access mechanisms to this data and use cases, including example Jupyter Notebook code. The reanalysis ensemble was generated using two methods: the first used standard Python tools for regridding, extracting levels, and creating the ensemble mean and spread on a virtual server in the NCCS environment; the second used a new analytics software suite, the Earth Data Analytics Services (EDAS), coupled with a high-performance Data Analytics and Storage System (DASS) developed at the NCCS. Results were compared to validate the EDAS methodologies, and the results, including time to process, will be presented. The ensemble includes selected 6-hourly and monthly variables, regridded to 1.25 degrees, with 24 common levels used for the 3D variables. Use cases for the new data and services will be presented, including the use of EDAS for the backend analytics on CREATE-V, the use of the GMAO forecast aerosol and cloud data in CREATE-V, and the ability to connect CREATE-V data to NCCS ArcGIS services.
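
    The exact CREATE-IP scripts are not given in the record, but the core of the "standard Python tools" method reduces to a reduction across a member axis. A minimal numpy sketch, assuming the reanalyses have already been regridded to the common 1.25-degree grid and common levels (array shapes and names are ours):

```python
import numpy as np

# Hypothetical stand-ins for several reanalyses already regridded to a
# common 1.25-degree grid: each array is (time, level, lat, lon).
shape = (4, 24, 145, 288)
rng = np.random.default_rng(0)
reanalyses = [rng.standard_normal(shape) for _ in range(5)]

# Stack along a new "member" axis, then reduce across it.
members = np.stack(reanalyses, axis=0)
ensemble_mean = members.mean(axis=0)
ensemble_spread = members.std(axis=0, ddof=1)  # sample spread across members

print(ensemble_mean.shape, ensemble_spread.shape)
```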

  15. A Hybrid Demand Response Simulator Version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-05-02

    A hybrid demand response simulator (HDRS) is developed to test different control algorithms for centralized and distributed demand response (DR) programs in a small distribution power grid. The HDRS is designed to model a wide variety of DR services such as peak shaving, load shifting, arbitrage, spinning reserves, load following, regulation, emergency load shedding, etc. The HDRS does not model the dynamic behaviors of the loads; rather, it simulates the load scheduling and dispatch process. The load models include TCAs (water heaters, air conditioners, refrigerators, freezers, etc.) and non-TCAs (lighting, washers, dishwashers, etc.). Ambient temperature changes, thermal resistance, capacitance, and the unit control logics can be modeled for TCA loads. The use patterns of non-TCAs can be modeled by probability of use and probabilistic durations. Some communication network characteristics, such as delays and errors, can also be modeled. Most importantly, because the simulator is modular and greatly simplifies the thermal models for TCA loads, it is easy and fast to use for testing and validating different control algorithms in a simulated environment.
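
    The report describes TCA loads via ambient temperature, thermal resistance, capacitance, and unit control logic. One common way to realize such a simplified model — our sketch, not the HDRS source — is the equivalent-thermal-parameter recursion with a thermostat deadband; all parameter values below are illustrative.

```python
import math

def simulate_tca(hours=4, dt=60.0, T0=48.0, T_amb=20.0,
                 R=0.6, C=6.0e5, P=4500.0, setpoint=50.0, deadband=2.0):
    """Equivalent-thermal-parameter sketch of a thermostatically
    controlled appliance (e.g., a water heater). R [degC/W] is thermal
    resistance, C [J/degC] heat capacity, P [W] element power.
    Returns the (temperature, heater-on) trajectory."""
    T, on, trace = T0, False, []
    a = math.exp(-dt / (R * C))  # per-step decay toward equilibrium
    for _ in range(int(hours * 3600 / dt)):
        # Thermostat with deadband: on below the band, off above it.
        if T <= setpoint - deadband:
            on = True
        elif T >= setpoint + deadband:
            on = False
        gain = P * R if on else 0.0
        # Exact one-step solution of C dT/dt = (T_amb - T)/R + P_on.
        T = T_amb + gain + (T - T_amb - gain) * a
        trace.append((T, on))
    return trace

trace = simulate_tca()
print(f"final temperature: {trace[-1][0]:.1f} degC, heating: {trace[-1][1]}")
```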

  16. OpenSimulator Interoperability with DRDC Simulation Tools: Compatibility Study

    DTIC Science & Technology

    2014-09-01

    into two components: (1) backend data services consisting of user accounts, login service, assets, and inventory; and (2) the simulator server which...components are combined into a single OpenSimulator process. In grid mode, the two components are separated, placing the backend services into a ROBUST... mobile devices. Potential points of compatibility between Unity and OpenSimulator include: a Unity-based desktop computer OpenSimulator viewer; a

  17. The effects of simulated patients and simulated gynecologic models on student anxiety in providing IUD services.

    PubMed

    Khadivzadeh, Talat; Erfanian, Fatemeh

    2012-10-01

    Midwifery students experience high levels of stress during their initial clinical practice. Addressing the learner's sources of anxiety and discomfort can ease the learning experience and lead to better outcomes. The aim of this study was to find out the effect of a simulation-based course, using simulated patients and simulated gynecologic models, on student anxiety and comfort while practicing to provide intrauterine device (IUD) services. Fifty-six eligible midwifery students were randomly allocated into simulation-based and traditional training groups. They participated in a 12-hour workshop on providing IUD services. The simulation group was trained through an educational program including simulated gynecologic models and simulated patients. The students in both groups then practiced IUD consultation and insertion with real patients in the clinic. The students' anxiety in IUD insertion was assessed using the "Spielberger anxiety test" and the "comfort in providing IUD services" questionnaire. There were significant differences between the simulation and traditional groups in two aspects of anxiety, state (P < 0.001) and trait (P = 0.024), as well as in the level of comfort (P < 0.001) in providing IUD services. "Fear of uterine perforation during insertion" was the most important cause of students' anxiety in providing IUD services, reported by 74.34% of students. Simulated patients and simulated gynecologic models are effective in optimizing students' anxiety levels when practicing to deliver IUD services. Therefore, it is recommended that simulated patients and simulated gynecologic models be used before engaging students in real clinical practice.

  18. The simulated clinical environment: Cognitive and emotional impact among undergraduates.

    PubMed

    Tremblay, Marie-Laurence; Lafleur, Alexandre; Leppink, Jimmie; Dolmans, Diana H J M

    2017-02-01

    Simulated clinical immersion (SCI) is used in undergraduate healthcare programs to expose the learner to real-life situations in authentic simulated clinical environments. For novices, the environment in which the simulation occurs can be distracting and stressful, hence potentially compromising learning. This study aims to determine whether SCI (with environment) imposes greater extraneous cognitive load and stress on undergraduate pharmacy students than simulated patients (SP) (without environment). It also aims to explore how features of the simulated environment influence students' perception of learning. In this mixed-methods study, 143 undergraduate pharmacy students experienced both SCI and SP in a crossover design. After the simulations, participants rated their cognitive load and emotions. Thirty-five students met in focus groups to explore their perception of learning in simulation. Intrinsic and extraneous cognitive load and stress scores in SCI were significantly but modestly higher compared to SP. Qualitative findings reveal that the physical environment in SCI generated more stress and affected students' focus. In SP, students concentrated on clinical reasoning. SCI stimulated a focus on data collection but impeded in-depth problem solving processes. The physical environment in simulation influences what and how students learn. SCI was reported as more cognitively demanding than SP. Our findings emphasize the need for the development of adapted instructional design guidelines in simulation for novices.

  19. Corrosion of Advanced Steels: Challenges in the Oil and Gas Industry

    NASA Astrophysics Data System (ADS)

    Mishra, Brajendra; Apelian, Diran

    Drill pipe steels are in contact with CO2 and H2S environments, depending on the oil and gas field. These steels have to be resistant to various in-service conditions, including aggressive environments containing CO2, H2S, O2, and chlorides, in addition to static and dynamic mechanical stresses. In this respect, the stress corrosion cracking (SCC) susceptibility of two grades of drill pipe steel in a CO2 environment has been studied, simulating bottom-hole oil and gas well conditions. Slow strain rate test (SSRT) results show that SCC susceptibility, expressed as loss of ductility, changes with temperature, with increasing temperature increasing the loss of ductility. No FeCO3 is observed below 100 °C, and the density of FeCO3 is higher in the grip section than in the gauge length, owing to strain disturbing the growth of iron carbonate crystals. Material selection for downhole use in CO2-containing environments has been reviewed, and the probability of SCC occurrence at higher temperatures has been included.

  20. Integrated Clinical Training for Space Flight Using a High-Fidelity Patient Simulator in a Simulated Microgravity Environment

    NASA Technical Reports Server (NTRS)

    Hurst, Victor; Doerr, Harold K.; Polk, J. D.; Schmid, Josef; Parazynksi, Scott; Kelly, Scott

    2007-01-01

    This viewgraph presentation reviews the use of telemedicine in a simulated microgravity environment using a patient simulator. For decades, telemedicine techniques have been used in terrestrial environments by many cohorts with varied clinical experience. The success of these techniques has recently been expanded to include microgravity environments aboard the International Space Station (ISS). In order to investigate how an astronaut crew medical officer will execute medical tasks in a microgravity environment while being remotely guided by a flight surgeon, the Medical Operation Support Team (MOST) used the simulated microgravity environment provided aboard DC-9 aircraft, where teams of crew medical officers and remote flight surgeons performed several tasks on a patient simulator.

  1. Community Coordinated Modeling Center (CCMC): Using innovative tools and services to support worldwide space weather scientific communities and networks

    NASA Astrophysics Data System (ADS)

    Mendoza, A. M.; Bakshi, S.; Berrios, D.; Chulaki, A.; Evans, R. M.; Kuznetsova, M. M.; Lee, H.; MacNeice, P. J.; Maddox, M. M.; Mays, M. L.; Mullinix, R. E.; Ngwira, C. M.; Patel, K.; Pulkkinen, A.; Rastaetter, L.; Shim, J.; Taktakishvili, A.; Zheng, Y.

    2012-12-01

    The Community Coordinated Modeling Center (CCMC) was established to enhance basic solar-terrestrial research and to aid in the development of models for specifying and forecasting conditions in the space environment. In achieving this goal, CCMC has developed and provides a set of innovative tools: the Integrated Space Weather Analysis (iSWA) web-based dissemination system for space weather information; the Runs-On-Request system, providing access to a unique collection of state-of-the-art solar and space physics models (unmatched anywhere in the world); advanced online visualization and analysis tools for more accurate interpretation of model results; standard data formats for simulation data downloads; and, recently, mobile apps (iPhone/Android) that let the scientific community view space weather data anywhere. The number of runs requested and the number of resulting scientific publications and presentations from the research community are not only an indication of the broad scientific usage of the CCMC and effective participation by space scientists and researchers, but also guarantee active collaboration and coordination within the space weather research community. In the course of CCMC activities, CCMC also supports community-wide model validation challenges and research focus group projects for a broad range of programs such as the multi-agency National Space Weather Program and NSF's CEDAR (Coupling, Energetics and Dynamics of Atmospheric Regions), GEM (Geospace Environment Modeling), and SHINE (Solar Heliospheric and INterplanetary Environment) programs. In addition to performing research and model development, CCMC supports space science education by hosting summer students through local universities; by providing simulations in support of classroom programs such as the Heliophysics Summer School (with a student research contest) and CCMC workshops; by training the next generation of junior scientists in space weather forecasting; and by educating the general public about the importance and impacts of space weather effects. Although CCMC is organizationally comprised of United States federal agencies, CCMC services are open to members of the international science community, and CCMC encourages interagency and international collaboration. In this poster, we provide an overview of using CCMC tools and services to support worldwide space weather scientific communities and networks.

  2. NPSS on NASA's Information Power Grid: Using CORBA and Globus to Coordinate Multidisciplinary Aeroscience Applications

    NASA Technical Reports Server (NTRS)

    Lopez, Isaac; Follen, Gregory J.; Gutierrez, Richard; Foster, Ian; Ginsburg, Brian; Larsson, Olle; Martin, Stuart; Tuecke, Steven; Woodford, David

    2000-01-01

    This paper describes a project to evaluate the feasibility of combining Grid and Numerical Propulsion System Simulation (NPSS) technologies, with a view to leveraging the numerous advantages of commodity technologies in a high-performance Grid environment. A team from the NASA Glenn Research Center and Argonne National Laboratory has been studying three problems: a desktop-controlled parameter study using Excel (Microsoft Corporation); a multicomponent application using ADPAC, NPSS, and a controller program; and an aviation safety application running about 100 jobs in near real time. The team has successfully demonstrated (1) a Common Object Request Broker Architecture (CORBA)-to-Globus resource manager gateway that allows CORBA remote procedure calls to be used to control the submission and execution of programs on workstations and massively parallel computers, (2) a gateway from the CORBA Trader service to the Grid information service, and (3) a preliminary integration of CORBA and Grid security mechanisms. We have applied these technologies to two applications related to NPSS, namely a parameter study and a multicomponent simulation.

  3. AceCloud: Molecular Dynamics Simulations in the Cloud.

    PubMed

    Harvey, M J; De Fabritiis, G

    2015-05-26

    We present AceCloud, an on-demand service for molecular dynamics simulations. AceCloud is designed to facilitate the secure execution of large ensembles of simulations on an external cloud computing service (currently Amazon Web Services). The AceCloud client, integrated into the ACEMD molecular dynamics package, provides an easy-to-use interface that abstracts all aspects of interaction with the cloud services. This gives the user the experience that all simulations are running on their local machine, minimizing the learning curve typically associated with the transition to using high performance computing services.

  4. Modeling the Ecosystem Services Provided by Trees in Urban Ecosystems: Using Biome-BGC to Improve i-Tree Eco

    NASA Technical Reports Server (NTRS)

    Brown, Molly E.; McGroddy, Megan; Spence, Caitlin; Flake, Leah; Sarfraz, Amna; Nowak, David J.; Milesi, Cristina

    2012-01-01

    As the world becomes increasingly urban, the need to quantify the effect of trees in urban environments on energy usage, air pollution, local climate and nutrient run-off has increased. By identifying, quantifying and valuing the ecological activity that provides services in urban areas, stronger policies and improved quality of life for urban residents can be obtained. Here we focus on two radically different models that can be used to characterize urban forests. The i-Tree Eco model (formerly UFORE model) quantifies ecosystem services (e.g., air pollution removal, carbon storage) and values derived from urban trees based on field measurements of trees and local ancillary data sets. Biome-BGC (Biome BioGeoChemistry) is used to simulate the fluxes and storage of carbon, water, and nitrogen in natural environments. This paper compares i-Tree Eco's methods to those of Biome-BGC, which estimates the fluxes and storage of energy, carbon, water and nitrogen for vegetation and soil components of the ecosystem. We describe the two models and their differences in the way they calculate similar properties, with a focus on carbon and nitrogen. Finally, we discuss the implications of further integration of these two communities for land managers such as those in Maryland.

  5. Multi-service highly sensitive rectifier for enhanced RF energy scavenging.

    PubMed

    Shariati, Negin; Rowe, Wayne S T; Scott, James R; Ghorbani, Kamran

    2015-05-07

    Due to the growing implications of energy costs and carbon footprints, the need to adopt inexpensive, green energy harvesting strategies is of paramount importance for the long-term conservation of the environment and the global economy. To address this, the feasibility of harvesting low-power-density ambient RF energy simultaneously from multiple sources is examined. A high-efficiency multi-resonant rectifier is proposed, which operates at two frequency bands (478-496 and 852-869 MHz) and exhibits favorable impedance matching over a broad input power range (-40 to -10 dBm). Simulation and experimental results of input reflection coefficient and rectified output power are in excellent agreement, demonstrating the usefulness of this innovative low-power rectification technique. Measurement results indicate an effective efficiency of 54.3%, and an output DC voltage of 772.8 mV is achieved for a multi-tone input power of -10 dBm. Furthermore, the measured output DC power from harvesting RF energy from multiple services concurrently exhibits 3.14-fold and 7.24-fold increases over single-frequency rectification at 490 and 860 MHz, respectively. Therefore, the proposed multi-service highly sensitive rectifier is a promising technique for providing a sustainable energy source for low-power applications in urban environments.

  6. Multi-Service Highly Sensitive Rectifier for Enhanced RF Energy Scavenging

    PubMed Central

    Shariati, Negin; Rowe, Wayne S. T.; Scott, James R.; Ghorbani, Kamran

    2015-01-01

    Due to the growing implications of energy costs and carbon footprints, the need to adopt inexpensive, green energy harvesting strategies is of paramount importance for the long-term conservation of the environment and the global economy. To address this, the feasibility of harvesting low-power-density ambient RF energy simultaneously from multiple sources is examined. A high-efficiency multi-resonant rectifier is proposed, which operates at two frequency bands (478–496 and 852–869 MHz) and exhibits favorable impedance matching over a broad input power range (−40 to −10 dBm). Simulation and experimental results of input reflection coefficient and rectified output power are in excellent agreement, demonstrating the usefulness of this innovative low-power rectification technique. Measurement results indicate an effective efficiency of 54.3%, and an output DC voltage of 772.8 mV is achieved for a multi-tone input power of −10 dBm. Furthermore, the measured output DC power from harvesting RF energy from multiple services concurrently exhibits 3.14-fold and 7.24-fold increases over single-frequency rectification at 490 and 860 MHz, respectively. Therefore, the proposed multi-service highly sensitive rectifier is a promising technique for providing a sustainable energy source for low-power applications in urban environments. PMID:25951137

  7. IoT-Based User-Driven Service Modeling Environment for a Smart Space Management System

    PubMed Central

    Choi, Hoan-Suk; Rhee, Woo-Seop

    2014-01-01

    The existing Internet environment has been extended to the Internet of Things (IoT) as an emerging new paradigm. The IoT connects various physical entities. These entities have communication capability and deploy the observed information to various service areas such as building management, energy-saving systems, surveillance services, and smart homes. These services are designed and developed by professional service providers. Moreover, users' needs have become more complicated and personalized with the spread of user-participation services such as social media and blogging. Therefore, some active users want to create their own services to satisfy their needs, but the existing IoT service-creation environment is difficult for non-technical users because it requires programming capability to create a service. To solve this problem, we propose the IoT-based user-driven service modeling environment to provide an easy way to create IoT services. The proposed environment also deploys the defined service to other users. Through the personalization and customization of the defined service, the value and dissemination of the service are increased. This environment also provides ontology-based context-information processing that produces and describes the context information for the IoT-based user-driven service. PMID:25420153

  8. IoT-based user-driven service modeling environment for a smart space management system.

    PubMed

    Choi, Hoan-Suk; Rhee, Woo-Seop

    2014-11-20

    The existing Internet environment has been extended to the Internet of Things (IoT) as an emerging new paradigm. The IoT connects various physical entities. These entities have communication capability and deploy the observed information to various service areas such as building management, energy-saving systems, surveillance services, and smart homes. These services are designed and developed by professional service providers. Moreover, users' needs have become more complicated and personalized with the spread of user-participation services such as social media and blogging. Therefore, some active users want to create their own services to satisfy their needs, but the existing IoT service-creation environment is difficult for non-technical users because it requires programming capability to create a service. To solve this problem, we propose the IoT-based user-driven service modeling environment to provide an easy way to create IoT services. The proposed environment also deploys the defined service to other users. Through the personalization and customization of the defined service, the value and dissemination of the service are increased. This environment also provides ontology-based context-information processing that produces and describes the context information for the IoT-based user-driven service.

  9. An intelligent processing environment for real-time simulation

    NASA Technical Reports Server (NTRS)

    Carroll, Chester C.; Wells, Buren Earl, Jr.

    1988-01-01

    The development of a highly efficient and thus truly intelligent processing environment for real-time general-purpose simulation of continuous systems is described. Such an environment can be created by mapping the simulation process directly onto the University of Alabama's OPERA architecture. To facilitate this effort, the field of continuous simulation is explored, highlighting areas in which efficiency can be improved. Areas in which parallel processing can be applied are also identified, and several general OPERA-type hardware configurations that support improved simulation are investigated. Three direct-execution parallel processing environments are introduced, each of which greatly improves efficiency by exploiting distinct areas of the simulation process. These suggested environments are candidate architectures around which a highly intelligent real-time simulation configuration can be developed.

  10. Orion Crew Module / Service Module Structural Weight and Center of Gravity Simulator and Vehicle Motion Simulator Hoist Structure for Orion Service Module Umbilical Testing

    NASA Technical Reports Server (NTRS)

    Ascoli, Peter A.; Haddock, Michael H.

    2014-01-01

    An Orion Crew Module / Service Module Structural Weight and Center of Gravity Simulator and a Vehicle Motion Simulator Hoist Structure for Orion Service Module Umbilical Testing were designed during a summer 2014 internship in Kennedy Space Center's Structures and Mechanisms Design Branch. The simulator is a structure that supports ballast, which will be integrated into an existing Orion mock-up to simulate the mass properties of the Exploration Mission-1 flight vehicle in both fueled and unfueled states. The simulator mimics these configurations through the use of approximately 40,000 lbf of steel and water ballast and a steel support structure. Draining the four water tanks that house the water ballast transitions the simulator from the fueled to the unfueled mass properties. The Ground Systems Development and Operations organization will utilize the simulator to verify and validate equipment used to maneuver and transport the Orion spacecraft in its fueled and unfueled configurations. The second design comprises a cantilevered tripod hoist structure that provides the capability to position a large Orion Service Module Umbilical in proximity to the Vehicle Motion Simulator. The Ground Systems Development and Operations organization will utilize the Vehicle Motion Simulator, with the hoist structure attached, to test the Orion Service Module Umbilical for proper operation prior to installation on the Mobile Launcher. Overall, these two designs provide NASA engineers with viable concepts worth fabricating and placing into service to prepare for the launch of Orion in 2017.

  11. Time-Aware Service Ranking Prediction in the Internet of Things Environment

    PubMed Central

    Huang, Yuze; Huang, Jiwei; Cheng, Bo; He, Shuqing; Chen, Junliang

    2017-01-01

    With the rapid development of the Internet of things (IoT), building IoT systems with high quality of service (QoS) has become an urgent requirement in both academia and industry. During the procedures of building IoT systems, QoS-aware service selection is an important concern, which requires the ranking of a set of functionally similar services according to their QoS values. In reality, however, it is quite expensive and even impractical to evaluate all geographically-dispersed IoT services at a single client to obtain such a ranking. Nevertheless, distributed measurement and ranking aggregation have to deal with the high dynamics of QoS values and the inconsistency of partial rankings. To address these challenges, we propose a time-aware service ranking prediction approach named TSRPred for obtaining the global ranking from the collection of partial rankings. Specifically, a pairwise comparison model is constructed to describe the relationships between different services, where the partial rankings are obtained by time series forecasting on QoS values. The comparisons of IoT services are formulated by random walks, and thus, the global ranking can be obtained by sorting the steady-state probabilities of the underlying Markov chain. Finally, the efficacy of TSRPred is validated by simulation experiments based on large-scale real-world datasets. PMID:28448451

  12. Time-Aware Service Ranking Prediction in the Internet of Things Environment.

    PubMed

    Huang, Yuze; Huang, Jiwei; Cheng, Bo; He, Shuqing; Chen, Junliang

    2017-04-27

    With the rapid development of the Internet of things (IoT), building IoT systems with high quality of service (QoS) has become an urgent requirement in both academia and industry. During the procedures of building IoT systems, QoS-aware service selection is an important concern, which requires the ranking of a set of functionally similar services according to their QoS values. In reality, however, it is quite expensive and even impractical to evaluate all geographically-dispersed IoT services at a single client to obtain such a ranking. Nevertheless, distributed measurement and ranking aggregation have to deal with the high dynamics of QoS values and the inconsistency of partial rankings. To address these challenges, we propose a time-aware service ranking prediction approach named TSRPred for obtaining the global ranking from the collection of partial rankings. Specifically, a pairwise comparison model is constructed to describe the relationships between different services, where the partial rankings are obtained by time series forecasting on QoS values. The comparisons of IoT services are formulated by random walks, and thus, the global ranking can be obtained by sorting the steady-state probabilities of the underlying Markov chain. Finally, the efficacy of TSRPred is validated by simulation experiments based on large-scale real-world datasets.
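
    The random-walk formulation can be made concrete with a small sketch: build a row-stochastic transition matrix in which the walk moves from service i toward services that beat i in the partial rankings, then rank services by the stationary distribution of the resulting Markov chain. This is our simplified reconstruction of the idea, not the authors' TSRPred code; the toy win counts are invented.

```python
import numpy as np

def rank_from_pairwise(wins):
    """wins[i][j] = number of partial rankings preferring service j
    over service i. The walk drifts toward winners, so sorting the
    stationary probabilities yields a global ranking."""
    n = wins.shape[0]
    P = wins / wins.sum(axis=1, keepdims=True)  # row-stochastic transitions
    pi = np.full(n, 1.0 / n)
    for _ in range(1000):                       # power iteration to steady state
        pi = pi @ P
    return np.argsort(-pi), pi

# Toy QoS comparisons among three services (0, 1, 2).
wins = np.array([[0.0, 8.0, 2.0],
                 [2.0, 0.0, 1.0],
                 [6.0, 7.0, 0.0]])
order, pi = rank_from_pairwise(wins)
print("global ranking (best first):", order, "steady state:", pi.round(3))
```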

  13. Operational Numerical Weather Prediction at the Met Office and potential ways forward for operational space weather prediction systems

    NASA Astrophysics Data System (ADS)

    Jackson, David


  14. Activities of NICT space weather project

    NASA Astrophysics Data System (ADS)

    Murata, Ken T.; Nagatsuma, Tsutomu; Watari, Shinichi; Shinagawa, Hiroyuki; Ishii, Mamoru

    NICT (National Institute of Information and Communications Technology) has been in charge of the space weather forecast service in Japan for more than 20 years. The main target region of space weather is the geo-space in the vicinity of the Earth, where human activities are dominant. In geo-space, serious damage to satellites, the International Space Station, and astronauts is caused by energetic particles or electromagnetic disturbances originating from dynamically changing solar activity. Positioning systems via GPS satellites have also become important recently. Since the most significant source of positioning error is disturbances of the ionosphere, it is crucial to estimate time-dependent modulation of the electron density profiles in the ionosphere. NICT is one of the 13 members of the ISES (International Space Environment Service), an international assembly of space weather forecast centers under UNESCO. With the help of geo-space environment data exchanged among the member nations, NICT operates a daily space weather forecast service, providing information on forecasts of solar flares, geomagnetic disturbances, solar proton events, and radio-wave propagation conditions in the ionosphere. The space weather forecast at NICT is conducted based on three methodologies: observations, simulations, and informatics (the OSI model). For real-time or quasi-real-time reporting of space weather, we conduct our own observations: the Hiraiso solar observatory to monitor solar activity (solar flares, coronal mass ejections, and so on), a domestic ionosonde network, magnetometer and HF radar observations in far-east Siberia, and the south-east Asia low-latitude ionosonde network (SEALION). Real-time observation data to monitor solar and solar-wind activities are obtained through antennae at NICT from the ACE and STEREO satellites. We have a middle-class supercomputer (NEC SX-8R) to run real-time computer simulations of the sun and solar wind, the magnetosphere, and the ionosphere. The three simulations are directly or indirectly connected to each other based on real-time observation data to reproduce a virtual geo-space region on the supercomputer. Informatics is a new methodology for making precise forecasts of space weather. Based on new information and communication technologies (ICT), it provides more information in both quality and quantity. At NICT, we have been developing a cloud-computing system named the "space weather cloud" based on a high-speed network system (JGN2+). Huge-scale distributed storage (1 PB), cluster computers, visualization systems, and other resources are expected to yield new findings and services in space weather forecasting. The final goal of the NICT space weather service is to predict near-future space weather conditions and disturbances that can cause satellite malfunctions, telecommunication problems, and errors in GPS navigation. In the present talk, we introduce our recent activities on the space weather services and discuss how we are going to develop the services from the viewpoints of space science and practical uses.

  15. Measuring sense of presence and user characteristics to predict effective training in an online simulated virtual environment.

    PubMed

    De Leo, Gianluca; Diggs, Leigh A; Radici, Elena; Mastaglio, Thomas W

    2014-02-01

    Virtual-reality solutions have successfully been used to train distributed teams. This study aimed to investigate the correlation between user characteristics and sense of presence in an online virtual-reality environment where distributed teams are trained. A greater sense of presence has the potential to make training in the virtual environment more effective, leading to the formation of teams that perform better in a real environment. Being able to identify, before starting online training, those user characteristics that are predictors of a greater sense of presence can lead to the selection of trainees who would benefit most from the online simulated training. This is an observational study with a retrospective postsurvey of participants' user characteristics and degree of sense of presence. Twenty-nine members from 3 Air Force National Guard Medical Service expeditionary medical support teams participated in an online virtual environment training exercise and completed the Independent Television Commission-Sense of Presence Inventory survey, which measures sense of presence and user characteristics. Nonparametric statistics were applied to determine the statistical significance of user characteristics to sense of presence. Comparing user characteristics to the 4 scales of the Independent Television Commission-Sense of Presence Inventory using the Kendall τ test gave the following results: the user characteristics "how often you play video games" (τ(26)=-0.458, P<0.01) and "television/film production knowledge" (τ(27)=-0.516, P<0.01) were significantly related to negative effects. Negative effects refer to adverse physiologic reactions owing to the virtual environment experience such as dizziness, nausea, headache, and eyestrain. The user characteristic "knowledge of virtual reality" was significantly related to engagement (τ(26)=0.463, P<0.01) and negative effects (τ(26)=-0.404, P<0.05). Individuals who have knowledge about virtual environments and experience with gaming environments report a higher sense of presence, indicating that they will likely benefit more from online virtual training. Future research studies could include a larger population of expeditionary medical support teams, and the results obtained could be used to create a model that predicts the level of presence based on user characteristics. To maximize results and minimize costs, only those individuals who, based on their characteristics, are expected to have a higher sense of presence and fewer negative effects could be selected for online simulated virtual-environment training.

  16. Recent Developments in Hardware-in-the-Loop Formation Navigation and Control

    NASA Technical Reports Server (NTRS)

    Mitchell, Jason W.; Luquette, Richard J.

    2005-01-01

    The Formation Flying Test-Bed (FFTB) at NASA Goddard Space Flight Center (GSFC) provides a hardware-in-the-loop test environment for formation navigation and control. The facility is evolving as a modular, hybrid, dynamic simulation facility for end-to-end guidance, navigation, and control (GN&C) design and analysis of formation flying spacecraft. The core capabilities of the FFTB, as a platform for testing critical hardware and software algorithms in the loop, are reviewed with a focus on many recent improvements. Two significant upgrades to the FFTB are a message-oriented middleware (MOM) architecture and a software crosslink for inter-spacecraft ranging. The MOM architecture provides a common messaging bus for software agents, easing integration and supporting the GSFC Mission Services Evolution Center (GMSEC) architecture via a software bridge. Additionally, the FFTB's hardware capabilities are expanding. Recently, two Low-Power Transceivers (LPTs) with ranging capability have been introduced into the FFTB. The LPT crosslinks will be connected to a modified Crosslink Channel Simulator (CCS), which applies realistic space-environment effects to the Radio Frequency (RF) signals produced by the LPTs.
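
    GMSEC specifics aside, the message-oriented-middleware pattern described above — agents publish to and subscribe on a common bus instead of calling each other directly — can be sketched in a few lines; the subject names and payloads are hypothetical, not the FFTB implementation.

```python
from collections import defaultdict

class MessageBus:
    """Minimal publish/subscribe bus in the spirit of a MOM layer:
    components register callbacks on subjects and never reference
    each other directly, which is what eases agent integration."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, subject, callback):
        self.subscribers[subject].append(callback)

    def publish(self, subject, message):
        for callback in self.subscribers[subject]:
            callback(message)

bus = MessageBus()
# A navigation agent consumes range measurements a crosslink model publishes.
bus.subscribe("crosslink.range", lambda m: print("nav filter got:", m))
bus.publish("crosslink.range", {"pair": ("sat-1", "sat-2"), "range_m": 1523.4})
```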

  17. NPSS on NASA's IPG: Using CORBA and Globus to Coordinate Multidisciplinary Aeroscience Applications

    NASA Technical Reports Server (NTRS)

    Lopez, Isaac; Follen, Gregory J.; Gutierrez, Richard; Naiman, Cynthia G.; Foster, Ian; Ginsburg, Brian; Larsson, Olle; Martin, Stuart; Tuecke, Steven; Woodford, David

    2000-01-01

    Within NASA's High Performance Computing and Communication (HPCC) program, the NASA Glenn Research Center is developing an environment for the analysis/design of aircraft engines called the Numerical Propulsion System Simulation (NPSS). The vision for NPSS is to create a "numerical test cell" enabling full engine simulations overnight on cost-effective computing platforms. To this end, NPSS integrates multiple disciplines such as aerodynamics, structures, and heat transfer and supports "numerical zooming" between 0-dimensional and 1-, 2-, and 3-dimensional component engine codes. In order to facilitate the timely and cost-effective capture of complex physical processes, NPSS uses object-oriented technologies such as C++ objects to encapsulate individual engine components and CORBA ORBs for object communication and deployment across heterogeneous computing platforms. Recently, the HPCC program has initiated a concept called the Information Power Grid (IPG), a virtual computing environment that integrates computers and other resources at different sites. IPG implements a range of Grid services such as resource discovery, scheduling, security, instrumentation, and data access, many of which are provided by the Globus toolkit. IPG facilities have the potential to benefit NPSS considerably. For example, NPSS should in principle be able to use Grid services to discover dynamically and then co-schedule the resources required for a particular engine simulation, rather than relying on manual placement of ORBs as at present. Grid services can also be used to initiate simulation components on parallel computers (MPPs) and to address inter-site security issues that currently hinder the coupling of components across multiple sites. These considerations led NASA Glenn and Globus project personnel to formulate a collaborative project designed to evaluate whether and how benefits such as those just listed can be achieved in practice. This project involves, first, development of the basic techniques required to achieve co-existence of commodity object technologies and Grid technologies and, second, the evaluation of these techniques in the context of NPSS-oriented challenge problems. The work on basic techniques seeks to understand how "commodity" technologies (CORBA, DCOM, Excel, etc.) can be used in concert with specialized "Grid" technologies (for security, MPP scheduling, etc.). In principle, this coordinated use should be straightforward because of the Globus and IPG philosophy of providing low-level Grid mechanisms that can be used to implement a wide variety of application-level programming models. (Globus technologies have previously been used to implement Grid-enabled message-passing libraries, collaborative environments, and parameter study tools, among others.) Results obtained to date are encouraging: we have successfully demonstrated a CORBA-to-Globus resource manager gateway that allows the use of CORBA RPCs to control submission and execution of programs on workstations and MPPs; a gateway from the CORBA Trader service to the Grid information service; and a preliminary integration of CORBA and Grid security mechanisms. The two challenge problems that we consider are the following: 1) Desktop-controlled parameter study. Here, an Excel spreadsheet is used to define and control a CFD parameter study, via a CORBA interface to a high-throughput broker that runs individual cases on different IPG resources. 2) Aviation safety. Here, about 100 near-real-time jobs running NPSS need to be submitted and run, with data returned, in near real time. Evaluation will address such issues as time to port, execution time, potential scalability of simulation, and reliability of resources. The full paper will present the following information: 1. A detailed analysis of the requirements that NPSS applications place on IPG. 2. A description of the techniques used to meet these requirements via the coordinated use of CORBA and Globus. 3. A description of results obtained to date in the first two challenge problems.

  18. In-situ medical simulation for pre-implementation testing of clinical service in a regional hospital in Hong Kong.

    PubMed

    Chen, P P; Tsui, N Tk; Fung, A Sw; Chiu, A Hf; Wong, W Cw; Leong, H T; Lee, P Sf; Lau, J Yw

    2017-08-01

    The implementation of a new clinical service is associated with anxiety and challenges that may prevent smooth and safe execution of the service. Unexpected issues may not be apparent until the actual clinical service commences. We present a novel approach to testing a new clinical setting before actual implementation of our endovascular aortic repair service. In-situ simulation at the new clinical location enables identification of potential process and system issues prior to implementation of the service. After preliminary planning, a simulation test utilising a case scenario, with actual simulation of the entire care process, was carried out to identify any logistics, equipment, setting, or clinical workflow issues, and to trial a contingency plan for a surgical complication. All patient care, including anaesthetic, surgical, and nursing procedures and processes, was simulated and tested. Overall, 17 vital process and system issues were identified during the simulation as potential clinical concerns. They included difficult patient positioning, draping pattern, unsatisfactory equipment setup, inadequate critical surgical instruments, blood product logistics, and inadequate nursing support during a crisis. In-situ simulation provides an innovative method to identify critical deficiencies and unexpected issues before implementation of a new clinical service. Life-threatening and serious practical issues can be identified and corrected before formal service commences. This article describes our experience with the use of simulation in pre-implementation testing of a clinical process or service. We found the method useful and would recommend it to others.

  19. Dynamical nexus of water supply, hydropower and environment based on the modeling of multiple socio-natural processes: from socio-hydrological perspective

    NASA Astrophysics Data System (ADS)

    Liu, D.; Wei, X.; Li, H. Y.; Lin, M.; Tian, F.; Huang, Q.

    2017-12-01

    In a socio-hydrological system, the ecological functions and environmental services that society chooses to maintain are determined by societal preferences, which reflect trade-offs among the values of riparian vegetation, fish, river landscape, water supply, hydropower, navigation, and so on. As society develops, these preferences change, and so do the ecological functions and environmental services that are maintained. The aim of this study is to reveal the feedback relationships among water supply, hydropower, and the environment and their dynamical feedback mechanisms at the macro scale, and to establish a socio-hydrological evolution model of the watershed based on the modeling of multiple socio-natural processes. The study focuses on the Han River in China, analyzes the impacts of water supply and hydropower on ecology, hydrology, and other environmental elements, and examines the effects on water supply and hydropower of guaranteeing different levels of ecological and environmental flows. Water supply and ecology are usually competitive; in some reservoirs hydropower and ecology are synergistic, while in others they are competitive. The study will analyze the multiple mechanisms that implement the dynamical feedbacks of the environment on hydropower, set up quantitative descriptions of these feedback mechanisms, recognize the dominant processes in the feedback relationships between hydropower and environment, and then analyze the positive and negative feedbacks in the feedback networks. The socio-hydrological evolution model at the watershed scale will be built and applied to simulate the long-term evolution of the watershed under current conditions. The dynamical nexus of water supply, hydropower, and environment will thus be investigated.
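
    A minimal sketch of the kind of macro-scale feedback loop described above, assuming a toy one-reservoir system in which a growing economy strengthens the societal preference for environmental flow; every coefficient and variable name is an illustrative assumption, not Han River data.

        # Toy discrete-time sketch of the water-supply/hydropower/environment
        # trade-off with a society-economy feedback on preferences.
        inflow = 100.0      # annual inflow (arbitrary units)
        pref_env = 0.2      # initial societal preference for environmental flow
        economy = 1.0

        for year in range(1, 11):
            env_flow = pref_env * inflow
            supply = 0.6 * (inflow - env_flow)
            hydropower = 0.4 * (inflow - env_flow)   # competitive with supply
            economy *= 1.0 + 0.02 * (supply + hydropower) / inflow
            # feedback: a wealthier society values the environment more
            pref_env = min(0.5, pref_env + 0.01 * (economy - 1.0))
            print(f"year {year}: env={env_flow:.1f} supply={supply:.1f} "
                  f"power={hydropower:.1f} pref={pref_env:.2f}")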

  20. Integrating macro and micro scale approaches in the agent-based modeling of residential dynamics

    NASA Astrophysics Data System (ADS)

    Saeedi, Sara

    2018-06-01

    With the advancement of computational modeling and simulation (M&S) methods as well as data collection technologies, urban dynamics modeling has improved substantially over the last several decades. Complex urban dynamics processes are most effectively modeled not at the macro scale, but following a bottom-up approach, by simulating the decisions of individual entities, or residents. Agent-based modeling (ABM) provides the key to a dynamic M&S framework that is able to integrate socioeconomic with environmental models, and to operate at both micro and macro geographical scales. In this study, a multi-agent system is proposed to simulate residential dynamics by considering spatiotemporal land use changes. In the proposed ABM, macro-scale land use change prediction is modeled by an Artificial Neural Network (ANN) and deployed as the agent environment, while micro-scale residential behaviors are implemented autonomously by household agents. These two levels of simulation interact and jointly drive the urbanization process in an urban area of Tehran, Iran. The model simulates the behavior of individual households in finding ideal locations to dwell. The household agents are divided into three main groups based on their income rank, and they are further classified into different categories based on a number of attributes. These attributes determine the households' preferences for finding new dwellings and change over time. The ABM environment is represented by a land-use map in which the properties of the land parcels change dynamically over the simulation time. The outputs of this model are a set of maps showing the patterns of different groups of households in the city. These patterns can be used by city planners to find optimum locations for building new residential units or adding new services to the city. The simulation results show that combining macro- and micro-level simulation can exploit the full potential of ABM to understand the driving mechanisms of urbanization and provide decision-making support for urban management.
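
    The sketch below illustrates the two-level scheme in miniature, assuming random parcel scores as a stand-in for the ANN land-use layer and a simple income-weighted preference rule for the household agents; all attributes and weights are invented for illustration.

        import random
        random.seed(0)

        # Macro layer: parcels with price/service scores standing in for the
        # ANN land-use prediction. Micro layer: household agents choosing
        # where to dwell. All attributes are illustrative assumptions.
        parcels = [{"id": i, "price": random.uniform(1, 10),
                    "services": random.uniform(0, 1), "taken": False}
                   for i in range(50)]
        households = [{"income": random.choice((3, 6, 9))} for _ in range(20)]

        for hh in households:
            affordable = [p for p in parcels
                          if not p["taken"] and p["price"] <= hh["income"]]
            if affordable:
                # preference: richer households weight services more heavily
                w = hh["income"] / 9.0
                best = max(affordable,
                           key=lambda p: w * p["services"] - (1 - w) * p["price"])
                best["taken"] = True

        print(sum(p["taken"] for p in parcels), "parcels occupied")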

  1. Cybersickness and Anxiety During Simulated Motion: Implications for VRET.

    PubMed

    Bruck, Susan; Watters, Paul

    2009-01-01

    Some clinicians have suggested using virtual reality environments to deliver psychological interventions to treat anxiety disorders. However, given a significant body of work on cybersickness symptoms that may arise in virtual environments, especially those involving simulated motion, we tested (a) whether being exposed to a virtual reality environment alone causes anxiety to increase, and (b) whether exposure to simulated motion in a virtual reality environment increases anxiety. In a repeated measures design, we used Kim's Anxiety Scale questionnaire to compare baseline anxiety, anxiety after virtual environment exposure, and anxiety after simulated motion. While there was no significant effect on anxiety for being in a virtual environment with no simulated motion, the introduction of simulated motion caused anxiety to significantly increase, but not to a severe or extreme level. The implications of this work for virtual reality exposure therapy (VRET) are discussed.

  2. A Comparison of Students' Conceptual Understanding of Electric Circuits in Simulation Only and Simulation-Laboratory Contexts

    ERIC Educational Resources Information Center

    Jaakkola, Tomi; Nurmi, Sami; Veermans, Koen

    2011-01-01

    The aim of this experimental study was to compare learning outcomes of students using a simulation alone (simulation environment) with outcomes of those using a simulation in parallel with real circuits (combination environment) in the domain of electricity, and to explore how learning outcomes in these environments are mediated by implicit (only…

  3. Oxidation-Reduction Resistance of Advanced Copper Alloys

    NASA Technical Reports Server (NTRS)

    Greenbauer-Seng, L. (Technical Monitor); Thomas-Ogbuji, L.; Humphrey, D. L.; Setlock, J. A.

    2003-01-01

    Resistance to oxidation and blanching is a key issue for advanced copper alloys under development for NASA's next generation of reusable launch vehicles. Candidate alloys, including dispersion-strengthened Cu-Cr-Nb, solution-strengthened Cu-Ag-Zr, and ODS Cu-Al2O3, are being evaluated for oxidation resistance by static TGA exposures in low-p(O2) and cyclic oxidation in air, and by cyclic oxidation-reduction exposures (using air for oxidation and CO/CO2 or H2/Ar for reduction) to simulate expected service environments. The test protocol and results are presented.

  4. Flexible workflow sharing and execution services for e-scientists

    NASA Astrophysics Data System (ADS)

    Kacsuk, Péter; Terstyanszky, Gábor; Kiss, Tamas; Sipos, Gergely

    2013-04-01

    The sequence of computational and data manipulation steps required to perform a specific scientific analysis is called a workflow. Workflows that orchestrate data- and/or compute-intensive applications on Distributed Computing Infrastructures (DCIs) have recently become standard tools in e-science. At the same time, the broad and fragmented landscape of workflows and DCIs slows down the uptake of workflow-based work. The development, sharing, integration and execution of workflows is still a challenge for many scientists. The FP7 "Sharing Interoperable Workflow for Large-Scale Scientific Simulation on Available DCIs" (SHIWA) project significantly improved the situation with a simulation platform that connects different workflow systems, different workflow languages, different DCIs and workflows into a single, interoperable unit. The SHIWA Simulation Platform is a service package, already used by various scientific communities, and used as a tool by the recently started ER-flow FP7 project to expand the use of workflows among European scientists. The presentation will introduce the SHIWA Simulation Platform and the services that ER-flow provides, based on the platform, to space and earth science researchers. The SHIWA Simulation Platform includes: 1. SHIWA Repository: a database where workflows and metadata about workflows can be stored. The database is a central repository for discovering and sharing workflows within and among communities. 2. SHIWA Portal: a web portal that is integrated with the SHIWA Repository and includes a workflow executor engine that can orchestrate various types of workflows on various grid and cloud platforms. 3. SHIWA Desktop: a desktop environment that provides access capabilities similar to the SHIWA Portal, but runs on the user's desktop or laptop instead of a portal server. 4. Workflow engines: the ASKALON, Galaxy, GWES, Kepler, LONI Pipeline, MOTEUR, Pegasus, P-GRADE, ProActive, Triana, Taverna and WS-PGRADE workflow engines are already integrated with the execution engine of the SHIWA Portal; other engines can be added when required. Through the SHIWA Portal one can define and run simulations on the SHIWA Virtual Organisation, an e-infrastructure that gathers computing and data resources from various DCIs, including the European Grid Infrastructure. The Portal, via third-party workflow engines, provides support for the most widely used academic workflow engines and can be extended with other engines on demand. Such extensions translate between workflow languages and facilitate the nesting of workflows into larger workflows, even when those are written in different languages and require different interpreters for execution. Through the workflow repository and the portal, lone scientists and scientific collaborations can share and offer workflows for reuse and execution. Given the integrated nature of the SHIWA Simulation Platform, the shared workflows can be executed online, without installing any special client environment or downloading workflows. The FP7 "Building a European Research Community through Interoperable Workflows and Data" (ER-flow) project disseminates the achievements of the SHIWA project and uses them to build workflow user communities across Europe. ER-flow provides application support to research communities within and beyond the project consortium to develop, share and run workflows with the SHIWA Simulation Platform.

  5. Full immersion simulation: validation of a distributed simulation environment for technical and non-technical skills training in Urology.

    PubMed

    Brewin, James; Tang, Jessica; Dasgupta, Prokar; Khan, Muhammad S; Ahmed, Kamran; Bello, Fernando; Kneebone, Roger; Jaye, Peter

    2015-07-01

    To evaluate the face, content and construct validity of the distributed simulation (DS) environment for technical and non-technical skills training in endourology, and to evaluate the educational impact of DS for urology training. DS offers a portable, low-cost simulated operating room environment that can be set up in any open space. A prospective mixed methods design using established validation methodology was conducted in this simulated environment with 10 experienced and 10 trainee urologists. All participants performed a simulated prostate resection in the DS environment. Outcome measures included surveys to evaluate the DS, as well as comparative analyses of experienced and trainee urologists' performance using real-time and 'blinded' video analysis and validated performance metrics. Non-parametric statistical methods were used to compare differences between groups. The DS environment demonstrated face, content and construct validity for both non-technical and technical skills. Kirkpatrick level 1 evidence for the educational impact of the DS environment was shown. Further studies are needed to evaluate the effect of simulated operating room training on real operating room performance. This study has shown the validity of the DS environment for non-technical, as well as technical, skills training. DS-based simulation appears to be a valuable addition to traditional classroom-based simulation training.

  6. Patch Transporter: Incentivized, Decentralized Software Patch System for WSN and IoT Environments

    PubMed Central

    Lee, JongHyup

    2018-01-01

    In the complicated settings of WSN (Wireless Sensor Networks) and IoT (Internet of Things) environments, keeping a number of heterogeneous devices updated is a challenging job, especially with respect to effectively discovering target devices and rapidly delivering the software updates. In this paper, we convert the traditional software update process into a distributed service. We set up an incentive system for faithfully transporting the patches to the recipient devices. The incentive system motivates independent, self-interested transporters to help the devices be updated. To ensure the system operates correctly, we employ a blockchain system that enforces the commitments in a decentralized manner. We also present a detailed specification of the proposed protocol and validate its correctness by model checking and simulations. PMID:29438337

  7. Patch Transporter: Incentivized, Decentralized Software Patch System for WSN and IoT Environments.

    PubMed

    Lee, JongHyup

    2018-02-13

    In the complicated settings of WSN (Wireless Sensor Networks) and IoT (Internet of Things) environments, keeping a number of heterogeneous devices updated is a challenging job, especially with respect to effectively discovering target devices and rapidly delivering the software updates. In this paper, we convert the traditional software update process into a distributed service. We set up an incentive system for faithfully transporting the patches to the recipient devices. The incentive system motivates independent, self-interested transporters to help the devices be updated. To ensure the system operates correctly, we employ a blockchain system that enforces the commitments in a decentralized manner. We also present a detailed specification of the proposed protocol and validate its correctness by model checking and simulations.
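
    The sketch below illustrates the incentive idea on a toy hash-chained ledger: a patch offer and a delivery receipt are appended as blocks, and the transporter's reward is released only when a matching receipt is on the chain. The record fields and reward rule are our own assumptions, not the paper's protocol.

        import hashlib, json, time

        # Toy hash-chained ledger recording patch-delivery commitments.
        ledger = []

        def append_block(record: dict) -> None:
            prev = ledger[-1]["hash"] if ledger else "0" * 64
            body = json.dumps(record, sort_keys=True)
            h = hashlib.sha256((prev + body).encode()).hexdigest()
            ledger.append({"record": record, "prev": prev, "hash": h})

        patch_digest = hashlib.sha256(b"patch-v2 binary").hexdigest()
        append_block({"type": "offer", "patch": patch_digest, "reward": 5})
        append_block({"type": "receipt", "patch": patch_digest,
                      "device": "sensor-17", "transporter": "t-03",
                      "t": time.time()})

        # A transporter is paid only if a matching receipt is on the chain.
        paid = any(b["record"].get("type") == "receipt" for b in ledger)
        print("reward released:", paid)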

  8. Enhanced Electric Power Transmission by Hybrid Compensation Technique

    NASA Astrophysics Data System (ADS)

    Palanichamy, C.; Kiu, G. Q.

    2015-04-01

    In today's competitive environment, new power system engineers are expected to contribute immediately, without years of seasoning via on-the-job training, mentoring, and rotation assignments. At the same time, it is becoming obligatory to train power system engineering graduates for an increasingly quality-minded corporate environment. To achieve this, there is a need to make available better-quality tools for educating and training power system engineering students and in-service engineers alike. As a result of the swift advances in computer hardware and software, many Windows-based software packages have been developed for education and training. In line with those packages, a simulation package called Hybrid Series-Shunt Compensators (HSSC) has been developed and is presented in this paper for educational purposes.
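
    As a back-of-the-envelope illustration of why series compensation raises the transfer limit, the sketch below evaluates the classic two-bus relation P = Vs*Vr*sin(delta)/X with the effective reactance reduced to X*(1-k) by a series capacitor; the per-unit values are illustrative assumptions, not outputs of the HSSC package.

        import math

        Vs = Vr = 1.0            # per-unit sending/receiving voltages
        X = 0.5                  # per-unit line reactance
        delta = math.radians(30) # power angle

        for k in (0.0, 0.3, 0.6):  # degree of series compensation
            p = Vs * Vr * math.sin(delta) / (X * (1 - k))
            print(f"k={k:.1f}: P={p:.2f} pu")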

  9. Crowd Sensing-Enabling Security Service Recommendation for Social Fog Computing Systems

    PubMed Central

    Wu, Jun; Su, Zhou; Li, Jianhua

    2017-01-01

    Fog computing, shifting intelligence and resources from the remote cloud to edge networks, has the potential of providing low-latency communication from sensing data sources to users. For the objects from the Internet of Things (IoT) to the cloud, it is a new trend that the objects establish social-like relationships with each other, which efficiently brings the benefits of developed sociality to a complex environment. As fog services become more sophisticated, it will become more convenient for fog users to share their own services, resources, and data via social networks. Meanwhile, efficient social organization can enable more flexible, secure, and collaborative networking. The aforementioned advantages make the social network a potential architecture for fog computing systems. In this paper, we design an architecture for social fog computing, in which fog services are provisioned based on “friend” relationships. To the best of our knowledge, this is the first attempt at a fog computing system organized on a social model. Meanwhile, social networking increases the complexity and security risks of fog computing services, making security service recommendation difficult in social fog computing. To address this, we propose a novel crowd sensing-enabled security service provisioning method to recommend security services accurately in social fog computing systems. Simulation results show the feasibility and efficiency of the crowd sensing-enabled security service recommendation method for social fog computing systems. PMID:28758943

  10. Crowd Sensing-Enabling Security Service Recommendation for Social Fog Computing Systems.

    PubMed

    Wu, Jun; Su, Zhou; Wang, Shen; Li, Jianhua

    2017-07-30

    Fog computing, shifting intelligence and resources from the remote cloud to edge networks, has the potential of providing low-latency communication from sensing data sources to users. For the objects from the Internet of Things (IoT) to the cloud, it is a new trend that the objects establish social-like relationships with each other, which efficiently brings the benefits of developed sociality to a complex environment. As fog services become more sophisticated, it will become more convenient for fog users to share their own services, resources, and data via social networks. Meanwhile, efficient social organization can enable more flexible, secure, and collaborative networking. The aforementioned advantages make the social network a potential architecture for fog computing systems. In this paper, we design an architecture for social fog computing, in which fog services are provisioned based on "friend" relationships. To the best of our knowledge, this is the first attempt at a fog computing system organized on a social model. Meanwhile, social networking increases the complexity and security risks of fog computing services, making security service recommendation difficult in social fog computing. To address this, we propose a novel crowd sensing-enabled security service provisioning method to recommend security services accurately in social fog computing systems. Simulation results show the feasibility and efficiency of the crowd sensing-enabled security service recommendation method for social fog computing systems.
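
    A minimal sketch of one plausible reading of crowd sensing-enabled recommendation over "friend" relationships: ratings gathered from friends are weighted by social trust before ranking candidate security services. The weighting rule, names, and numbers are our own assumptions, not the paper's algorithm.

        # Social ("friend") trust values for the crowd of raters.
        trust = {"alice": 0.9, "bob": 0.6, "carol": 0.3}

        # Crowd-sensed ratings of two candidate security services, per friend.
        ratings = {
            "svc-A": {"alice": 0.8, "bob": 0.7, "carol": 0.9},
            "svc-B": {"alice": 0.4, "bob": 0.9, "carol": 0.2},
        }

        def score(svc: str) -> float:
            # trust-weighted average of the crowd-sensed ratings
            num = sum(trust[u] * r for u, r in ratings[svc].items())
            den = sum(trust[u] for u in ratings[svc])
            return num / den

        best = max(ratings, key=score)
        print({s: round(score(s), 3) for s in ratings}, "->", best)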

  11. The TRIDEC Virtual Tsunami Atlas - customized value-added simulation data products for Tsunami Early Warning generated on compute clusters

    NASA Astrophysics Data System (ADS)

    Löwe, P.; Hammitzsch, M.; Babeyko, A.; Wächter, J.

    2012-04-01

    The development of new Tsunami Early Warning Systems (TEWS) requires the modelling of the spatio-temporal spreading of tsunami waves, both for recorded past events and for hypothetical future cases. The model results are maintained in digital repositories for use in TEWS command and control units for situation assessment once a real tsunami occurs. The simulation results must therefore be absolutely trustworthy, in the sense that the quality of these datasets is assured. This is a prerequisite, as solid decision making during a crisis event and the dissemination of dependable warning messages to communities at risk will be based on them. This requires data format validity, but even more the integrity and information value of the content, which is a value-added product derived from raw tsunami model output. Quality checking of simulation result products can be done in multiple ways, yet the visual verification of both temporal and spatial spreading characteristics for each simulation remains important: the eye of the human observer is still an unmatched tool for the detection of irregularities. This requires the availability of convenient, human-accessible mappings of each simulation. The improvement of tsunami models necessitates changes in many variables, including simulation end-parameters. Whenever new, improved iterations of the general models or the underlying spatial data are evaluated, hundreds to thousands of tsunami model results must be generated for each model iteration, each with distinct initial parameter settings. The use of a Compute Cluster Environment (CCE) of sufficient size allows the automated generation of all tsunami results within a model iteration in little time, a significant improvement over linear processing on dedicated desktop machines or servers. This accelerates the visual quality-checking iterations, which in turn feed back positively into the overall model improvement. An approach to set up and utilize the CCE has been implemented by the project Collaborative, Complex, and Critical Decision Processes in Evolving Crises (TRIDEC), funded under the European Union's FP7. TRIDEC focuses on real-time intelligent information management in Earth management. The challenges addressed include the design and implementation of a robust and scalable service infrastructure supporting the integration and utilisation of existing resources with accelerated generation of large volumes of data. These include sensor systems, geo-information repositories, simulations and data fusion tools. Additionally, TRIDEC adopts enhancements of Service Oriented Architecture (SOA) principles in terms of Event Driven Architecture (EDA) design. As a next step, the implemented CCE's services for generating derived and customized simulation products are foreseen to be provided via an EDA service, allowing on-demand processing for specific threat parameters and accommodating model improvements.
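
    At laptop scale, the batch-generation pattern the CCE provides looks like the sketch below: one quick-look product per scenario, rendered in parallel rather than linearly; render is a placeholder for the real model-output-to-map step, and the scenario count is an arbitrary example.

        from multiprocessing import Pool

        # Placeholder for the real step that reads model output, rasterizes
        # wave heights, and writes a quick-look image for visual checking.
        def render(scenario_id: int) -> str:
            return f"scenario_{scenario_id:04d}.png"

        if __name__ == "__main__":
            with Pool(processes=8) as pool:
                products = pool.map(render, range(1000))
            print(len(products), "quick-look products generated")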

  12. A comparison of the accuracy of intraoral scanners using an intraoral environment simulator.

    PubMed

    Park, Hye-Nan; Lim, Young-Jun; Yi, Won-Jin; Han, Jung-Suk; Lee, Seung-Pyo

    2018-02-01

    The aim of this study was to design an intraoral environment simulator and to assess the accuracy of intraoral scanners using the simulator. A box-shaped intraoral environment simulator was designed to simulate two specific intraoral environments. The cast was scanned 10 times with the Identica Blue (MEDIT, Seoul, South Korea), TRIOS (3Shape, Copenhagen, Denmark), and CS3500 (Carestream Dental, Georgia, USA) scanners in the two simulated groups. The distances between the left and right canines (D3), first molars (D6), second molars (D7), and between the left canine and left second molar (D37) were measured. The distance data were analyzed by the Kruskal-Wallis test. The differences between intraoral environments were not statistically significant (P>.05). Between scanners, statistically significant differences (P<.05) were revealed by the Kruskal-Wallis test with regard to D3 and D6. No difference due to the intraoral environment was revealed. The simulator will contribute to improving the accuracy of intraoral scanners in the future.
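
    For reference, a Kruskal-Wallis comparison of the kind used in the study can be run with scipy.stats.kruskal; the ten repeated D3 measurements per scanner below are fabricated illustration values, not the study's data.

        from scipy import stats

        # Ten repeated inter-canine (D3) distance measurements (mm) per
        # scanner; fabricated example values for illustration only.
        d3_trios    = [34.91, 34.88, 34.93, 34.90, 34.87,
                       34.92, 34.89, 34.94, 34.90, 34.91]
        d3_cs3500   = [34.80, 34.78, 34.83, 34.79, 34.81,
                       34.77, 34.82, 34.80, 34.84, 34.79]
        d3_identica = [34.95, 34.96, 34.94, 34.97, 34.95,
                       34.96, 34.94, 34.95, 34.97, 34.96]

        h, p = stats.kruskal(d3_trios, d3_cs3500, d3_identica)
        print(f"H={h:.2f}, p={p:.4f}")  # p < .05 -> scanners differ on D3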

  13. The Simultaneous Production Model; A Model for the Construction, Testing, Implementation and Revision of Educational Computer Simulation Environments.

    ERIC Educational Resources Information Center

    Zillesen, Pieter G. van Schaick

    This paper introduces a hardware- and software-independent model for producing educational computer simulation environments. The model, which is based on the results of 32 studies of educational computer simulation program production, implies that educational computer simulation environments are specified, constructed, tested, implemented, and…

  14. Beating the tyranny of scale with a private cloud configured for Big Data

    NASA Astrophysics Data System (ADS)

    Lawrence, Bryan; Bennett, Victoria; Churchill, Jonathan; Juckes, Martin; Kershaw, Philip; Pepler, Sam; Pritchard, Matt; Stephens, Ag

    2015-04-01

    The Joint Analysis System, JASMIN, consists of five significant hardware components: a batch computing cluster, a hypervisor cluster, bulk disk storage, high performance disk storage, and access to a tape robot. Each of the computing clusters consists of a heterogeneous set of servers supporting a range of possible data analysis tasks, and a unique network environment makes it relatively trivial to migrate servers between the two clusters. The high performance disk storage will include the world's largest (publicly visible) deployment of the Panasas parallel disk system. Initially deployed in April 2012, JASMIN has already undergone two major upgrades, culminating in a system which, by April 2015, will have in excess of 16 PB of disk and 4000 cores. Layered on the basic hardware are a range of services, from managed services, such as the curated archives of the Centre for Environmental Data Archival or the data analysis environment for the National Centres for Atmospheric Science and Earth Observation, to a generic Infrastructure as a Service (IaaS) offering for the UK environmental science community. Here we present examples of some of the big data workloads being supported in this environment, ranging from data management tasks, such as checksumming 3 PB of data held in over one hundred million files, to science tasks, such as re-processing satellite observations with new algorithms or calculating new diagnostics on petascale climate simulation outputs. We will demonstrate how the provision of a cloud environment closely coupled to a batch computing environment, all sharing the same high performance disk system, allows massively parallel processing without the necessity to shuffle data excessively, even as it supports many different virtual communities, each with guaranteed performance. We will discuss the advantages of having a heterogeneous range of servers with available memory from tens of GB at the low end to (currently) two TB at the high end. There are some limitations of the JASMIN environment: the high performance disk storage is not fully available in the IaaS environment, a planned ability to burst compute-heavy jobs into the public cloud is not yet fully available, and there are load balancing and performance issues that need to be understood. We will conclude with projections for future usage, and our plans to meet those requirements.
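
    The checksumming workload mentioned above follows a simple embarrassingly parallel pattern. A sketch at small scale, assuming a local "archive" directory tree; at JASMIN scale the same pattern would be spread across the batch cluster rather than one machine.

        import hashlib
        from concurrent.futures import ProcessPoolExecutor
        from pathlib import Path

        # Hash one file in 1 MiB chunks so memory use stays flat.
        def sha256_of(path: Path) -> tuple[str, str]:
            h = hashlib.sha256()
            with path.open("rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            return str(path), h.hexdigest()

        if __name__ == "__main__":
            files = [p for p in Path("archive").rglob("*") if p.is_file()]
            with ProcessPoolExecutor() as pool:
                for name, digest in pool.map(sha256_of, files):
                    print(digest, name)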

  15. Virtual Observatories for Space Physics Observations and Simulations: New Routes to Efficient Access and Visualization

    NASA Technical Reports Server (NTRS)

    Roberts, Aaron

    2005-01-01

    New tools for data access and visualization promise to make the analysis of space plasma data both more efficient and more powerful, especially for answering questions about the global structure and dynamics of the Sun-Earth system. We will show how new and existing tools (particularly the Virtual Space Physics Observatory, VSPO, and the Visual System for Browsing, Analysis and Retrieval of Data, ViSBARD; look for the acronyms in Google) already provide rapid access to such information as spacecraft orbits, browse plots, and detailed data, as well as visualizations that can quickly unite our view of multispacecraft observations. We will show movies illustrating multispacecraft observations of the solar wind and magnetosphere during a magnetic storm, and of simulated 30-spacecraft observations derived from MHD simulations of the magnetosphere, sampled along likely trajectories of the spacecraft for the MagCon mission. An important issue remaining to be solved is how best to integrate simulation data and services into the Virtual Observatory environment, and this talk will hopefully stimulate further discussion along these lines.

  16. An Efficient and QoS Supported Multichannel MAC Protocol for Vehicular Ad Hoc Networks

    PubMed Central

    Tan, Guozhen; Yu, Chao

    2017-01-01

    Vehicular Ad Hoc Networks (VANETs) employ multiple channels to provide a variety of safety and non-safety (transport efficiency and infotainment) applications, based on the IEEE 802.11p and IEEE 1609.4 protocols. Different types of applications require different levels of Quality-of-Service (QoS) support. Recently, transport efficiency and infotainment applications (e.g., electronic map download and Internet access) have received more and more attention, and this kind of application is expected to become a big market driver in the near future. In this paper, we propose an Efficient and QoS supported Multichannel Medium Access Control (EQM-MAC) protocol for VANETs in a highway environment. The EQM-MAC protocol utilizes the service channel resources for non-safety message transmissions during the whole synchronization interval, and it dynamically adjusts the minimum contention window size for different non-safety services according to traffic conditions. Theoretical model analysis and extensive simulation results show that the EQM-MAC protocol can support QoS services while ensuring high saturation throughput and low transmission delay for non-safety applications. PMID:28991217
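
    The sketch below illustrates the general idea of adapting the minimum contention window per service class to channel load; the specific bounds and the linear scaling rule are our own illustrative assumptions, not the EQM-MAC specification.

        # Assumed per-class (CWmin, CWmax) bounds; illustrative only.
        CW_BOUNDS = {"safety": (3, 15),
                     "efficiency": (7, 63),
                     "infotainment": (15, 255)}

        def cw_min(service: str, channel_busy_ratio: float) -> int:
            lo, hi = CW_BOUNDS[service]
            # heavier load -> larger window to cut collision probability
            cw = int(lo + (hi - lo) * channel_busy_ratio)
            return max(lo, min(hi, cw))

        for load in (0.1, 0.5, 0.9):
            print(load, {s: cw_min(s, load) for s in CW_BOUNDS})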

  17. KENNEDY SPACE CENTER, FLA. -- In the Space Life Sciences (SLS) Lab, Jan Bauer, with Dynamac Corp., places samples of onion tissue in the elemental analyzer, which analyzes for carbon, hydrogen, nitrogen and sulfur.

    NASA Image and Video Library

    2004-01-05

    KENNEDY SPACE CENTER, FLA. -- In the Space Life Sciences (SLS) Lab, Jan Bauer, with Dynamac Corp., places samples of onion tissue in the elemental analyzer, which analyzes for carbon, hydrogen, nitrogen and sulfur. The 100,000 square-foot SLS houses labs for NASA’s ongoing research efforts, microbiology/microbial ecology studies and analytical chemistry labs. Also calling the new lab home are facilities for space flight-experiment and flight-hardware development, new plant growth chambers, and an Orbiter Environment Simulator that will be used to conduct ground control experiments in simulated flight conditions for space flight experiments. The SLS Lab, formerly known as the Space Experiment Research and Processing Laboratory or SERPL, provides space for NASA’s Life Sciences Services contractor Dynamac Corporation, Bionetics Corporation, and researchers from the University of Florida. NASA’s Office of Biological and Physical Research will use the facility for processing life sciences experiments that will be conducted on the International Space Station. The SLS Lab is the magnet facility for the International Space Research Park at KSC being developed in partnership with Florida Space Authority.

  18. KENNEDY SPACE CENTER, FLA. -- Sharon Edney, with Dynamac Corp., measures photosynthesis on Bibb lettuce being grown hydroponically for study in the Space Life Sciences Lab.

    NASA Image and Video Library

    2004-01-05

    KENNEDY SPACE CENTER, FLA. -- Sharon Edney, with Dynamac Corp., measures photosynthesis on Bibb lettuce being grown hydroponically for study in the Space Life Sciences Lab. The 100,000 square-foot facility houses labs for NASA’s ongoing research efforts, microbiology/microbial ecology studies and analytical chemistry labs. Also calling the new lab home are facilities for space flight-experiment and flight-hardware development, new plant growth chambers, and an Orbiter Environment Simulator that will be used to conduct ground control experiments in simulated flight conditions for space flight experiments. The SLS Lab, formerly known as the Space Experiment Research and Processing Laboratory or SERPL, provides space for NASA’s Life Sciences Services contractor Dynamac Corporation, Bionetics Corporation, and researchers from the University of Florida. NASA’s Office of Biological and Physical Research will use the facility for processing life sciences experiments that will be conducted on the International Space Station. The SLS Lab is the magnet facility for the International Space Research Park at KSC being developed in partnership with Florida Space Authority.

  19. KENNEDY SPACE CENTER, FLA. -- Sharon Edney, with Dynamac Corp., checks the roots of green onions being grown hydroponically for study in the Space Life Sciences Lab.

    NASA Image and Video Library

    2004-01-05

    KENNEDY SPACE CENTER, FLA. -- Sharon Edney, with Dynamac Corp., checks the roots of green onions being grown hydroponically for study in the Space Life Sciences Lab. The 100,000 square-foot facility houses labs for NASA’s ongoing research efforts, microbiology/microbial ecology studies and analytical chemistry labs. Also calling the new lab home are facilities for space flight-experiment and flight-hardware development, new plant growth chambers, and an Orbiter Environment Simulator that will be used to conduct ground control experiments in simulated flight conditions for space flight experiments. The SLS Lab, formerly known as the Space Experiment Research and Processing Laboratory or SERPL, provides space for NASA’s Life Sciences Services contractor Dynamac Corporation, Bionetics Corporation, and researchers from the University of Florida. NASA’s Office of Biological and Physical Research will use the facility for processing life sciences experiments that will be conducted on the International Space Station. The SLS Lab is the magnet facility for the International Space Research Park at KSC being developed in partnership with Florida Space Authority.

  20. KENNEDY SPACE CENTER, FLA. -- Lanfang Levine, with Dynamac Corp., helps install a Dionex DX-500 IC/HPLC system in the Space Life Sciences Lab.

    NASA Image and Video Library

    2004-01-05

    KENNEDY SPACE CENTER, FLA. -- Lanfang Levine, with Dynamac Corp., helps install a Dionex DX-500 IC/HPLC system in the Space Life Sciences Lab. The equipment will enable analysis of volatile compounds, such as from plants. The 100,000 square-foot facility houses labs for NASA’s ongoing research efforts, microbiology/microbial ecology studies and analytical chemistry labs. Also calling the new lab home are facilities for space flight-experiment and flight-hardware development, new plant growth chambers, and an Orbiter Environment Simulator that will be used to conduct ground control experiments in simulated flight conditions for space flight experiments. The SLS Lab, formerly known as the Space Experiment Research and Processing Laboratory or SERPL, provides space for NASA’s Life Sciences Services contractor Dynamac Corporation, Bionetics Corporation, and researchers from the University of Florida. NASA’s Office of Biological and Physical Research will use the facility for processing life sciences experiments that will be conducted on the International Space Station. The SLS Lab is the magnet facility for the International Space Research Park at KSC being developed in partnership with Florida Space Authority.

  1. KENNEDY SPACE CENTER, FLA. -- In the Space Life Sciences (SLS) Lab, Jan Bauer, with Dynamac Corp., weighs samples of onion tissue for processing in the elemental analyzer behind it.

    NASA Image and Video Library

    2004-01-05

    KENNEDY SPACE CENTER, FLA. -- In the Space Life Sciences (SLS) Lab, Jan Bauer, with Dynamac Corp., weighs samples of onion tissue for processing in the elemental analyzer behind it. The equipment analyzes for carbon, hydrogen, nitrogen and sulfur. The 100,000 square-foot SLS houses labs for NASA’s ongoing research efforts, microbiology/microbial ecology studies and analytical chemistry labs. Also calling the new lab home are facilities for space flight-experiment and flight-hardware development, new plant growth chambers, and an Orbiter Environment Simulator that will be used to conduct ground control experiments in simulated flight conditions for space flight experiments. The SLS Lab, formerly known as the Space Experiment Research and Processing Laboratory or SERPL, provides space for NASA’s Life Sciences Services contractor Dynamac Corporation, Bionetics Corporation, and researchers from the University of Florida. NASA’s Office of Biological and Physical Research will use the facility for processing life sciences experiments that will be conducted on the International Space Station. The SLS Lab is the magnet facility for the International Space Research Park at KSC being developed in partnership with Florida Space Authority.

  2. KENNEDY SPACE CENTER, FLA. -- Sharon Edney, with Dynamac Corp., checks the growth of radishes being grown hydroponically for study in the Space Life Sciences Lab.

    NASA Image and Video Library

    2004-01-05

    KENNEDY SPACE CENTER, FLA. -- Sharon Edney, with Dynamac Corp., checks the growth of radishes being grown hydroponically for study in the Space Life Sciences Lab. The 100,000 square-foot facility houses labs for NASA’s ongoing research efforts, microbiology/microbial ecology studies and analytical chemistry labs. Also calling the new lab home are facilities for space flight-experiment and flight-hardware development, new plant growth chambers, and an Orbiter Environment Simulator that will be used to conduct ground control experiments in simulated flight conditions for space flight experiments. The SLS Lab, formerly known as the Space Experiment Research and Processing Laboratory or SERPL, provides space for NASA’s Life Sciences Services contractor Dynamac Corporation, Bionetics Corporation, and researchers from the University of Florida. NASA’s Office of Biological and Physical Research will use the facility for processing life sciences experiments that will be conducted on the International Space Station. The SLS Lab is the magnet facility for the International Space Research Park at KSC being developed in partnership with Florida Space Authority.

  3. Cislan-2 extension final document by University of Twente (Netherlands)

    NASA Astrophysics Data System (ADS)

    Niemegeers, Ignas; Baumann, Frank; Beuwer, Wim; Jordense, Marcel; Pras, Aiko; Schutte, Leon; Tracey, Ian

    1992-01-01

    Results of work performed under the so-called Cislan extension contract are presented. The adaptation of the Cislan 2 prototype design to an environment of interconnected Local Area Networks (LANs), instead of a single 802.5 token ring LAN, is considered. In order to extend the network architecture, the Interconnection Function (IF) protocol layer was subdivided into two protocol layers: a new IF layer and, below it, the Medium Enhancement (ME) protocol layer. Some small enhancements to the distributed bandwidth allocation protocol were developed, which are in fact also applicable to the 'normal' Cislan 2 system. The new services and protocols are described, together with some scenarios and requirements for the new internetting Cislan 2 system. How to overcome the degradation of speech quality due to packet loss on the LAN subsystem was studied, and experiments were planned to measure this degradation. Simulations were performed of two Cislan subsystems: the bandwidth allocation protocol and the clock synchronization mechanism. Both simulations were performed on SUN workstations using QNAP as the simulation tool; results are given for the clock synchronization mechanism and for the distributed bandwidth allocation protocol.

  4. El uso de las simulaciones educativas en la ensenanza de conceptos de ciencias y su importancia desde la perspectiva de los estudiantes candidatos a maestros (The use of educational simulations in teaching science concepts and their importance from the perspective of pre-service teachers)

    NASA Astrophysics Data System (ADS)

    Crespo Ramos, Edwin O.

    This research was aimed at establishing the differences, if any, between traditional direct teaching and constructivist teaching through the use of computer simulations, and their effect on pre-service teachers. It also sought feedback from the users of these simulations as providers of constructivist teaching and learning experiences. The experimental framework used a quantitative method with a descriptive focus. The research was guided by two hypotheses and five inquiries. The data were obtained from a group of twenty-nine students, elementary school pre-service teachers at a private metropolitan university in Puerto Rico. They were divided into two sub-groups: experimental and control. Two means were used to collect data: tests and surveys. Quantitative data were analyzed through the paired-samples t-test and the non-parametric Wilcoxon test. The results of the pre- and post-tests do not provide enough evidence to conclude that using the simulations as learning tools was more effective than traditional teaching; the quantitative results obtained were not sufficient to reject hypothesis Ho1. On the other hand, an overall positive attitude towards these simulations was evident from the surveys. The importance of including hands-on activities in daily lesson planning was well recognized among the pre-service teachers. After participating and working with these simulations, the pre-service teachers expressed being convinced that they would definitely use them as teaching tools in the classroom. On the basis of these results, hypothesis Ho2 was rejected. The evidence also showed that pre-service teachers need further professional development to improve their skills in applying these simulations in the classroom environment.

  5. Comparative Study of the Effectiveness of Three Learning Environments: Hyper-Realistic Virtual Simulations, Traditional Schematic Simulations and Traditional Laboratory

    ERIC Educational Resources Information Center

    Martinez, Guadalupe; Naranjo, Francisco L.; Perez, Angel L.; Suero, Maria Isabel; Pardo, Pedro J.

    2011-01-01

    This study compared the educational effects of computer simulations developed in a hyper-realistic virtual environment with the educational effects of either traditional schematic simulations or a traditional optics laboratory. The virtual environment was constructed on the basis of Java applets complemented with a photorealistic visual output.…

  6. Enhancing the Simulation Speed of Sensor Network Applications by Asynchronization of Interrupt Service Routines

    PubMed Central

    Joe, Hyunwoo; Woo, Duk-Kyun; Kim, Hyungshin

    2013-01-01

    Sensor network simulations require high fidelity and timing accuracy to be used as implementation and evaluation tools. The cycle-accurate, instruction-level simulator is the known solution for these purposes. However, this type of simulation incurs a high computation cost, since it has to model not only the instruction-level behavior but also the synchronization between multiple sensors to preserve causality. This paper presents a novel technique that exploits asynchronous simulation of interrupt service routines (ISRs). We can avoid the synchronization overheads when the interrupt service routines are simulated without preemption. If causality errors occur, we devise a rollback procedure to restore the original synchronized simulation. This concept can be extended to any instruction-level sensor network simulator. Evaluation results show our method can enhance the simulation speed by up to 52% in our experiments. For applications with longer interrupt service routines and a smaller number of preemptions, the speedup becomes greater. In addition, our simulator is 2 to 11 times faster than a well-known sensor network simulator. PMID:23966200
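
    The sketch below captures the optimistic pattern in miniature: an ISR runs without prior synchronization against a checkpoint, and a message timestamped before the checkpointed clock triggers a rollback. The node state, timestamps, and ISR body are invented for illustration, not the simulator's actual data structures.

        import copy

        # Stand-in state for one simulated sensor node.
        state = {"clock": 100, "radio_buf": []}

        def run_isr_optimistically(state, incoming_ts):
            checkpoint = copy.deepcopy(state)        # incremental in practice
            state["radio_buf"].append(incoming_ts)   # speculative ISR work
            state["clock"] += 5                      # ISR advances local time
            if incoming_ts < checkpoint["clock"]:    # message from the "past"
                state.clear()
                state.update(checkpoint)             # rollback, resynchronize
                return False
            return True

        print(run_isr_optimistically(state, incoming_ts=120))  # speculation held
        print(run_isr_optimistically(state, incoming_ts=90))   # rolled back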

  7. Cross Support Transfer Service (CSTS) Framework Library

    NASA Technical Reports Server (NTRS)

    Ray, Timothy

    2014-01-01

    Within the Consultative Committee for Space Data Systems (CCSDS), there is an effort to standardize data transfer between ground stations and control centers. CCSDS plans to publish a collection of transfer services that will each address the transfer of a particular type of data (e.g., tracking data). These services will be called Cross Support Transfer Services (CSTSs). All of these services will make use of a common foundation that is called the CSTS Framework. This library implements the User side of the CSTS Framework. "User side" means that the library performs the role that is typically expected of the control center. This library was developed in support of the Goddard Data Standards program. This technology could be applicable for control centers, and possibly for use in control center simulators needed to test ground station capabilities. The main advantages of this implementation are its flexibility and simplicity. It provides the framework capabilities, while allowing the library user to provide a wrapper that adapts the library to any particular environment. The main purpose of this implementation was to support the inter-operability testing required by CCSDS. In addition, it is likely that the implementation will be useful within the Goddard mission community (for use in control centers).

  8. Cloud Based Drive Forensic and DDoS Analysis on Seafile as Case Study

    NASA Astrophysics Data System (ADS)

    Bahaweres, R. B.; Santo, N. B.; Ningsih, A. S.

    2017-01-01

    The rapid development of the Internet, driven by increasing data rates over both broadband cable networks and 4G wireless mobile, makes everyone easily connected to the Internet. Storage as a Service (StaaS) is increasingly popular, and many users want to store their data in one place so that they can easily access it anywhere, any place and anytime in the cloud. The use of such services makes them vulnerable to being used by someone to commit a crime or to mount a Denial of Service (DoS) attack on cloud storage services. Criminals can use cloud storage services to store, upload and download illegal files or documents. In this study, we implement a private cloud storage using Seafile on a Raspberry Pi and perform simulations in Local Area Network and Wi-Fi environments to analyze, forensically, whether a criminal act can be traced and proved. We also identify, collect and analyze the artifacts of the server and client, such as the desktop client registry, the file system, the Seafile logs, the browser cache, and database artifacts.

  9. A novel test method to determine the filter material service life of decentralized systems treating runoff from traffic areas.

    PubMed

    Huber, Maximilian; Welker, Antje; Dierschke, Martina; Drewes, Jörg E; Helmreich, Brigitte

    2016-09-01

    In recent years, there has been a significant increase in the development and application of technical decentralized filter systems for the treatment of runoff from traffic areas. However, there are still many uncertainties regarding the service life and the performance of filter materials that are employed in decentralized treatment systems. These filter media are designed to prevent the transport of pollutants into the environment. A novel pilot-scale test method was developed to determine, within a few days, the service lives and long-term removal efficiencies for dissolved heavy metals in stormwater treatment systems. The proposed method consists of several steps, including preloading the filter media in a pilot-scale model with copper and zinc at a load corresponding to n-1 years of the estimated service life (n). Subsequently, three representative rain events are simulated to evaluate the long-term performance for dissolved copper and zinc during the last year of application. The presented results, which verified the applicability of this method, were obtained for three filter channel systems and six filter shaft systems. The performance of the evaluated systems varied widely for both tested heavy metals and during all three simulated rain events. A validation of the pilot-scale assessment method with field measurements was also performed for two systems. Findings of this study suggest that this novel method provides a standardized and accurate estimation of service intervals of decentralized treatment systems employing various filter materials. The method also provides regulatory authorities, designers, and operators with an objective basis for performance assessment and supports stormwater managers in making decisions about the installation of such decentralized treatment systems.
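
    The arithmetic behind the n-1 year preload can be sketched as annual runoff volume times event mean concentration; the area, runoff depth, and concentration below are assumed example values, not the paper's test parameters.

        # Rough illustration of sizing the n-1 year copper preload.
        area_m2 = 500.0            # connected traffic area
        runoff_mm_per_yr = 600.0   # effective annual runoff depth
        emc_cu_ug_per_l = 50.0     # assumed dissolved Cu event mean concentration
        n_years = 10               # estimated service life

        volume_l_per_yr = area_m2 * runoff_mm_per_yr        # 1 mm over 1 m2 = 1 L
        cu_g_per_yr = volume_l_per_yr * emc_cu_ug_per_l / 1e6
        preload_g = (n_years - 1) * cu_g_per_yr             # applied before testing

        print(f"annual Cu load: {cu_g_per_yr:.1f} g; "
              f"preload (n-1 yr): {preload_g:.1f} g")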

  10. The Development and Evaluation of a Computer-Simulated Science Inquiry Environment Using Gamified Elements

    ERIC Educational Resources Information Center

    Tsai, Fu-Hsing

    2018-01-01

    This study developed a computer-simulated science inquiry environment, called the Science Detective Squad, to engage students in investigating an electricity problem that may happen in daily life. The environment combined the simulation of scientific instruments and a virtual environment, including gamified elements, such as points and a story for…

  11. Around Marshall

    NASA Image and Video Library

    1978-08-24

    Once the United States' space program had progressed from Earth's orbit into outer space, the prospect of building and maintaining a permanent presence in space was realized. To accomplish this feat, NASA launched a temporary workstation, Skylab, to discover the effects of low gravity and weightlessness on the human body, and also to develop tools and equipment that would be needed in the future to build and maintain a more permanent space station. The structures, techniques, and work schedules had to be carefully designed to fit this unique construction site. The components had to be lightweight for transport into orbit, yet durable. The station also had to be made with removable parts for easy servicing and repairs by astronauts. All of the tools necessary for service and repairs had to be designed for easy manipulation by a suited astronaut. And construction methods had to be efficient due to the limited time the astronauts could remain outside their controlled environment. In light of all the specific needs for this project, an environment on Earth had to be developed that could simulate a low gravity atmosphere. A Neutral Buoyancy Simulator (NBS) was constructed by NASA Marshall Space Flight Center (MSFC) in 1968. Since then, NASA scientists have used this facility to understand how humans work best in low gravity and also to provide information about the different kinds of structures that can be built. Another facet of the space station would be the electrical connectors used for powering tools the astronauts would need for construction, maintenance and repairs. Shown is an astronaut training during an underwater electrical connector test in the NBS.

  12. A piloted simulation study of data link ATC message exchange

    NASA Technical Reports Server (NTRS)

    Waller, Marvin C.; Lohr, Gary W.

    1989-01-01

    Data link Air Traffic Control (ATC) and Air Traffic Service (ATS) message and data exchange offers the potential benefits of increased flight safety and efficiency by reducing communication errors and allowing more information to be transferred between aircraft and ground facilities. Digital communication also presents an opportunity to relieve the overloading of ATC radio frequencies, which hampers message exchange during peak traffic hours in many busy terminal areas. A piloted simulation study to develop pilot factor guidelines and assess potential flight crew benefits and liabilities from using data link ATC message exchange was completed. The data link ATC message exchange concept, implemented on an existing navigation computer Control Display Unit (CDU), required maintaining a voice radio telephone link with an appropriate ATC facility. Flight crew comments, scanning behavior, and measurements of time spent in ATC communication activities for data link ATC message exchange were compared to similar measures for simulated conventional voice radio operations. The results show crew preference for the quieter flight deck environment and a perception of lower communication workload.

  13. Neutral Buoyancy Simulator: MSFC-Langley joint test of large space structures component assembly

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Once the United States' space program had progressed from Earth's orbit into outer space, the prospect of building and maintaining a permanent presence in space was realized. To accomplish this feat, NASA launched a temporary workstation, Skylab, to discover the effects of low gravity and weightlessness on the human body, and also to develop tools and equipment that would be needed in the future to build and maintain a more permanent space station. The structures, techniques, and work schedules had to be carefully designed to fit this unique construction site. The components had to be lightweight for transport into orbit, yet durable. The station also had to be made with removable parts for easy servicing and repairs by astronauts. All of the tools necessary for service and repairs had to be designed for easy manipulation by a suited astronaut. And construction methods had to be efficient due to the limited time the astronauts could remain outside their controlled environment. In light of all the specific needs for this project, an environment on Earth had to be developed that could simulate a low gravity atmosphere. A Neutral Buoyancy Simulator (NBS) was constructed by NASA Marshall Space Flight Center (MSFC) in 1968. Since then, NASA scientists have used this facility to understand how humans work best in low gravity and also to provide information about the different kinds of structures that can be built. With the help of the NBS, building a space station became more of a reality. In a joint venture between NASA/Langley Research Center in Hampton, VA, and MSFC, the Assembly Concept for Construction of Erectable Space Structures (ACCESS) was developed and demonstrated at MSFC's NBS. The primary objective of this experiment was to test the ACCESS structural assembly concept for suitability as the framework for larger space structures and to identify ways to improve the productivity of space construction. Pictured is a demonstration of ACCESS.

  14. Around Marshall

    NASA Image and Video Library

    1979-08-13

    Once the United States' space program had progressed from Earth's orbit into outer space, the prospect of building and maintaining a permanent presence in space was realized. To accomplish this feat, NASA launched a temporary workstation, Skylab, to discover the effects of low gravity and weightlessness on the human body, and also to develop tools and equipment that would be needed in the future to build and maintain a more permanent space station. The structures, techniques, and work schedules had to be carefully designed to fit this unique construction site. The components had to be lightweight for transport into orbit, yet durable. The station also had to be made with removable parts for easy servicing and repairs by astronauts. All of the tools necessary for service and repairs had to be designed for easy manipulation by a suited astronaut. And construction methods had to be efficient due to the limited time the astronauts could remain outside their controlled environment. In light of all the specific needs for this project, an environment on Earth had to be developed that could simulate a low gravity atmosphere. A Neutral Buoyancy Simulator (NBS) was constructed by NASA Marshall Space Flight Center (MSFC) in 1968. Since then, NASA scientists have used this facility to understand how humans work best in low gravity and also to provide information about the different kinds of structures that can be built. Included in the plans for the space station was a space telescope. This telescope would be attached to the space station and directed toward outer space. Astronomers hoped that the space telescope would provide a look at space that is impossible to see from Earth because of Earth's atmosphere and other man-made influences. In an effort to make replacement and repairs easier on astronauts, the space telescope was designed to be modular. Practice makes perfect, as demonstrated in this photo: an astronaut practices moving modular pieces of the space telescope in the Neutral Buoyancy Simulator (NBS) at MSFC. The space telescope was later deployed in April 1990 as the Hubble Space Telescope.

  15. Neutral Buoyancy Simulator-NB32-Large Space Structure Assembly

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Once the United States' space program had progressed from Earth's orbit into outer space, the prospect of building and maintaining a permanent presence in space was realized. To accomplish this feat, NASA launched a temporary workstation, Skylab, to discover the effects of low gravity and weightlessness on the human body, and also to develop tools and equipment that would be needed in the future to build and maintain a more permanent space station. The structures, techniques, and work schedules had to be carefully designed to fit this unique construction site. The components had to be lightweight for transport into orbit, yet durable. The station also had to be made with removable parts for easy servicing and repairs by astronauts. All of the tools necessary for service and repairs had to be designed for easy manipulation by a suited astronaut. Construction methods had to be efficient due to the limited time the astronauts could remain outside their controlled environment. In light of all the specific needs for this project, an environment on Earth had to be developed that could simulate a low gravity atmosphere. A Neutral Buoyancy Simulator (NBS) was constructed by NASA Marshall Space Flight Center (MSFC) in 1968. Since then, NASA scientists have used this facility to understand how humans work best in low gravity and also to provide information about the different kinds of structures that can be built. As part of this experimentation, the Experimental Assembly of Structures in Extravehicular Activity (EASE) project was developed as a joint effort between MSFC and the Massachusetts Institute of Technology (MIT). The EASE experiment required that crew members assemble small components to form larger components, working from the payload bay of the space shuttle. Pictured is an entire unit that has been constructed and is sitting in the bottom of a mock-up shuttle cargo bay pallet.

  16. To Create Space on Earth: The Space Environment Simulation Laboratory and Project Apollo

    NASA Technical Reports Server (NTRS)

    Walters, Lori C.

    2003-01-01

    Few undertakings in the history of humanity can compare to the great technological achievement known as Project Apollo. Among those who witnessed Armstrong's flickering television image were thousands of people who had directly contributed to this historic moment. Amongst those in this vast anonymous cadre were the personnel of the Space Environment Simulation Laboratory (SESL) at the Manned Spacecraft Center (MSC) in Houston, Texas. SESL houses two large thermal-vacuum chambers with solar simulation capabilities. At a time when NASA engineers had a limited understanding of the effects of extremes of space on hardware and crews, SESL was designed to literally create the conditions of space on Earth. With interior dimensions of 90 feet in height and a 55-foot diameter, Chamber A dwarfed the Apollo command/service module (CSM) it was constructed to test. The chamber's vacuum pumping capacity of 1 × 10⁻⁶ torr can simulate an altitude greater than 130 miles above the Earth. A "lunar plane" capable of rotating a 150,000-pound test vehicle 180 deg replicates the revolution of a craft in space. To reproduce the temperature extremes of space, interior chamber walls cool to -280 °F as two banks of carbon arc modules simulate the unfiltered solar light/heat of the Sun. With capabilities similar to those of Chamber A, early Chamber B tests included the Gemini modular maneuvering unit, the Apollo EVA mobility unit, and the lunar module. Since Gemini astronaut Charles Bassett first ventured into the chamber in 1966, Chamber B has assisted astronauts in testing hardware and preparing them for work in the harsh extremes of space.

  17. OpenKnowledge for peer-to-peer experimentation in protein identification by MS/MS

    PubMed Central

    2011-01-01

    Background Traditional scientific workflow platforms usually run individual experiments with little evaluation and analysis of performance, as required by automated experimentation in which scientists are allowed to access numerous applicable workflows rather than being committed to a single one. Experimental protocols and data in a peer-to-peer environment could potentially be shared freely without any single point of authority to dictate how experiments should be run. In such an environment it is necessary to have mechanisms by which each individual scientist (peer) can assess, locally, how he or she wants to be involved with others in experiments. This study aims to implement and demonstrate simple peer ranking under the OpenKnowledge peer-to-peer infrastructure through both simulated and real-world bioinformatics experiments involving multi-agent interactions. Methods A simulated experiment environment with a peer-ranking capability was specified in the Lightweight Coordination Calculus (LCC) and automatically executed under the OpenKnowledge infrastructure. Peers such as MS/MS protein identification services (including web-enabled and independent programs) were made accessible as OpenKnowledge Components (OKCs) for automated execution as peers in the experiments. The performance of the peers in these automated experiments was monitored and evaluated by simple peer-ranking algorithms. Results Peer-ranking experiments with simulated peers exhibited characteristic behaviours, e.g., a power-law effect (a few peers dominate), similar to that observed in the traditional Web. Real-world experiments were run using an interaction model in LCC involving two different types of MS/MS protein identification peers, viz., peptide fragment fingerprinting (PFF) and de novo sequencing, with another peer-ranking algorithm based simply on counting successful and failed runs. This study demonstrated a novel integration and useful evaluation of specific proteomic peers and found MASCOT to be a dominant peer as judged by peer ranking. Conclusion The simulated and real-world experiments in the present study demonstrated that the OpenKnowledge infrastructure with peer-ranking capability can serve as an evaluative environment for automated experimentation. PMID:22192521
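
    The success/failure counting rule used for the real-world ranking can be illustrated with a short sketch; the class, peer names, and run outcomes below are invented for illustration and are not the OpenKnowledge API.

    ```python
    # Hedged sketch of the simple peer-ranking rule described above: rank peers
    # by counting successful vs. failed runs. Peer names and outcomes are
    # illustrative only.
    from collections import defaultdict

    class PeerRanker:
        def __init__(self):
            self.successes = defaultdict(int)
            self.failures = defaultdict(int)

        def record(self, peer: str, ok: bool) -> None:
            (self.successes if ok else self.failures)[peer] += 1

        def score(self, peer: str) -> float:
            s, f = self.successes[peer], self.failures[peer]
            return s / (s + f) if (s + f) else 0.0

        def ranking(self):
            peers = set(self.successes) | set(self.failures)
            return sorted(peers, key=self.score, reverse=True)

    ranker = PeerRanker()
    for peer, ok in [("MASCOT", True), ("MASCOT", True), ("denovo", False), ("denovo", True)]:
        ranker.record(peer, ok)
    print(ranker.ranking())  # e.g. ['MASCOT', 'denovo']
    ```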

  18. Using a Computerized Classroom Simulation to Prepare Pre-Service Teachers

    ERIC Educational Resources Information Center

    McPherson, Rebekah; Tyler-Wood, Tandra; McEnturff Ellison, Amber; Peak, Pamela

    2011-01-01

    This study at a large midwestern university evaluated the use of a web-based simulated classroom, simSchool, with pre-service and in-service special education students, to determine if use of the simulated classroom influences students' perceptions of inclusion and teacher preparation. The project used a nonequivalent comparison group,…

  19. Degradation Mechanisms of an Advanced Jet Engine Service-Retired TBC Component

    NASA Astrophysics Data System (ADS)

    Wu, Rudder T.; Osawa, Makoto; Yokokawa, Tadaharu; Kawagishi, Kyoko; Harada, Hiroshi

    TBCs in current use are subject to premature spallation failure, mainly due to the formation of thermally grown oxides (TGOs). Although extensive research has been carried out to gain a better understanding of the thermomechanical and thermochemical characteristics of TBCs, laboratory-scale studies and simulation tests are often carried out under conditions that differ significantly from the complex and extreme environment typical of a modern gas-turbine engine, and thus fail to truly model service conditions. In particular, the difference in oxygen partial pressure and the effects of contaminants present in the engine compartment have often been neglected. In this respect, an investigation is carried out to study the in-service degradation of an EB-PVD TBC-coated nozzle-guide vane. Several modes of degradation were observed, attributable to three factors: 1) the presence of residual stresses induced by thermal-expansion mismatches, 2) the evolution of the bond coat microstructure and subsequent formation of oxide spinels, and 3) the deposition of CMAS on the surface of the TBC.

  20. Space-based Networking Technology Developments in the Interplanetary Network Directorate Information Technology Program

    NASA Technical Reports Server (NTRS)

    Clare, Loren; Clement, B.; Gao, J.; Hutcherson, J.; Jennings, E.

    2006-01-01

    This presentation describes recent development of communications protocols, services, and associated tools targeted to reduce risk, reduce cost, and increase the efficiency of IND infrastructure and supported mission operations. The space-based networking technologies developed: a) provide differentiated quality of service (QoS) that gives precedence to traffic that users have selected as having the greatest importance and/or time-criticality; b) improve the total value of information to users through the use of QoS prioritization techniques; c) increase operational flexibility and improve command-response turnaround; d) enable a new class of networked and collaborative science missions; e) simplify application interfaces to communications services; and f) reduce risk and cost through a common object model and automated scheduling and communications protocols. The technologies are described in three general areas: communications scheduling, middleware, and protocols. A simulation environment was also developed, which provides a comprehensive, quantitative understanding of the technologies' performance within the overall, evolving architecture, as well as the ability to refine and optimize specific components.
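
    As an illustration of item a), a differentiated-QoS scheduler can be as simple as a priority queue that always releases the most time-critical traffic first; the priority levels and messages below are invented, and this sketch is not the IND tool set.

    ```python
    # Hedged sketch of differentiated QoS: a priority queue that sends the most
    # important / most time-critical traffic first. Priority levels and message
    # contents are illustrative, not from the presentation.
    import heapq
    import itertools

    class QoSQueue:
        def __init__(self):
            self._heap, self._seq = [], itertools.count()

        def enqueue(self, priority: int, message: str) -> None:
            # lower number = higher precedence; seq keeps FIFO order within a class
            heapq.heappush(self._heap, (priority, next(self._seq), message))

        def dequeue(self) -> str:
            return heapq.heappop(self._heap)[2]

    q = QoSQueue()
    q.enqueue(2, "routine housekeeping telemetry")
    q.enqueue(0, "spacecraft emergency command")
    q.enqueue(1, "science data, near-real-time")
    print(q.dequeue())  # -> "spacecraft emergency command"
    ```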

  1. Environmental testing of terrestrial flat plate photovoltaic modules

    NASA Technical Reports Server (NTRS)

    Hoffman, A.; Griffith, J.

    1979-01-01

    The Low-Cost Solar Array (LSA) Project at the Jet Propulsion Laboratory has as one objective the development and implementation of environmental tests for flat plate photovoltaic modules as part of the Department of Energy's terrestrial photovoltaic program. Modules procured under this program have been subjected to a variety of laboratory tests intended to simulate service environments, and the results of these tests have been compared to available data from actual field service. This comparison indicates that certain tests (notably temperature cycling, humidity cycling, and cyclic pressure loading) are effective indicators of some forms of field failure. Other tests have yielded results useful in formulating module design guidelines. Not all effects noted in field service have been successfully reproduced in the laboratory, however, and work is continuing in order to improve the value of the test program as a tool for evaluating module design and workmanship. This paper contains a review of these ongoing efforts and an assessment of significant test results to date.

  2. A Dynamic Approach to Rebalancing Bike-Sharing Systems

    PubMed Central

    2018-01-01

    Bike-sharing services are flourishing in Smart Cities worldwide. They provide a low-cost and environment-friendly transportation alternative and help reduce traffic congestion. However, these new services are still under development, and several challenges need to be solved. A major problem is the management of rebalancing trucks in order to ensure that bikes and stalls in the docking stations are always available when needed, despite the fluctuations in the service demand. In this work, we propose a dynamic rebalancing strategy that exploits historical data to predict the network conditions and promptly act in case of necessity. We use Birth-Death Processes to model the stations’ occupancy and decide when to redistribute bikes, and graph theory to select the rebalancing path and the stations involved. We validate the proposed framework on the data provided by New York City’s bike-sharing system. The numerical simulations show that a dynamic strategy able to adapt to the fluctuating nature of the network outperforms rebalancing schemes based on a static schedule. PMID:29419771
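
    A minimal sketch of the birth-death idea applied to one docking station follows; the arrival/departure rates, capacity, and rebalancing thresholds are invented placeholders rather than values fitted to the New York City data.

    ```python
    # Hedged sketch: a single docking station modeled as a birth-death process.
    # Arrivals (bike returns) and departures (rentals) occur at assumed Poisson
    # rates; a rebalancing alert fires near empty/full. All rates illustrative.
    import random

    LAM, MU = 4.0, 5.0           # assumed returns/hour and rentals/hour
    CAPACITY, START = 20, 10
    LOW, HIGH = 2, 18            # assumed rebalancing thresholds

    def simulate(hours: float, seed: int = 1):
        random.seed(seed)
        t, bikes, alerts = 0.0, START, []
        while t < hours:
            birth = LAM if bikes < CAPACITY else 0.0   # return blocked if full
            death = MU if bikes > 0 else 0.0           # rental blocked if empty
            rate = birth + death
            t += random.expovariate(rate)              # time to next event
            bikes += 1 if random.random() < birth / rate else -1
            if bikes <= LOW or bikes >= HIGH:
                alerts.append((round(t, 2), bikes))    # would dispatch a truck
        return alerts

    print(simulate(8.0)[:5])
    ```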

  3. Apollo: Giving application developers a single point of access to public health models using structured vocabularies and Web services

    PubMed Central

    Wagner, Michael M.; Levander, John D.; Brown, Shawn; Hogan, William R.; Millett, Nicholas; Hanna, Josh

    2013-01-01

    This paper describes the Apollo Web Services and Apollo-SV, its related ontology. The Apollo Web Services give an end-user application a single point of access to multiple epidemic simulators. An end user can specify an analytic problem—which we define as a configuration and a query of results—exactly once and submit it to multiple epidemic simulators. The end user represents the analytic problem using a standard syntax and vocabulary, not the native languages of the simulators. We have demonstrated the feasibility of this design by implementing a set of Apollo services that provide access to two epidemic simulators and two visualizer services. PMID:24551417
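
    The "specify once, submit to many simulators" pattern might look like the following sketch; the endpoint URL, JSON fields, and simulator identifiers are hypothetical stand-ins, since the real services define their own Apollo-SV vocabulary and syntax.

    ```python
    # Hedged sketch of the single-point-of-access pattern the Apollo Web
    # Services describe. The endpoint URL, JSON fields, and simulator ids are
    # hypothetical; the real services define their own schema.
    import json
    import urllib.request

    ANALYTIC_PROBLEM = {            # one configuration + one query of results
        "pathogen": "influenza",
        "controlMeasures": [{"type": "vaccination", "coverage": 0.3}],
        "query": "daily_incidence",
    }

    def run_on(simulator_id: str) -> dict:
        req = urllib.request.Request(
            f"https://example.org/apollo/simulators/{simulator_id}/runs",  # hypothetical
            data=json.dumps(ANALYTIC_PROBLEM).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    # The same problem, expressed once, submitted to multiple epidemic simulators:
    results = {sim: run_on(sim) for sim in ("simulator-A", "simulator-B")}
    ```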

  4. Apollo: giving application developers a single point of access to public health models using structured vocabularies and Web services.

    PubMed

    Wagner, Michael M; Levander, John D; Brown, Shawn; Hogan, William R; Millett, Nicholas; Hanna, Josh

    2013-01-01

    This paper describes the Apollo Web Services and Apollo-SV, its related ontology. The Apollo Web Services give an end-user application a single point of access to multiple epidemic simulators. An end user can specify an analytic problem, which we define as a configuration and a query of results, exactly once and submit it to multiple epidemic simulators. The end user represents the analytic problem using a standard syntax and vocabulary, not the native languages of the simulators. We have demonstrated the feasibility of this design by implementing a set of Apollo services that provide access to two epidemic simulators and two visualizer services.

  5. HST Multi Layer Insulation Failure Review Board Findings

    NASA Technical Reports Server (NTRS)

    Townsend, Jacqueline; Hansen, Patricia

    1998-01-01

    The mechanical and optical properties of the thermal control materials on the Hubble Space Telescope (HST) have degraded over the nearly seven years the telescope has been in orbit. Astronaut observations and photographs from the Second Servicing Mission (SM2) revealed large cracks in the metallized Teflon fluorinated ethylene propylene (FEP), the outer layer of the multi-layer insulation (MLI), in many locations around the telescope. Also, the absorptance of the bonded metallized Teflon FEP radiator surfaces of the telescope has increased over time. A Failure Review Board (FRB) was established to determine the damage mechanism and to identify a replacement material. Samples of the top layer of the MLI and radiator material were retrieved during SM2, and a thorough investigation into the degradation followed in order to determine the primary cause of the damage. Mapping of the cracks on HST and ground testing showed that thermal cycling combined with deep-layer damage from electron and proton radiation is necessary to cause the observed embrittlement. Further, strong evidence was found indicating that chain scission (reduced molecular weight) is the dominant form of damage to the metallized Teflon FEP. Given the damage to the MLI outer layer that was apparent during SM2, the decision was made to replace the outer layer during subsequent servicing missions. The replacement material had to meet the stringent thermal requirements of the spacecraft and maintain structural integrity for at least ten years. Ten candidate materials were exposed to simulated orbital environments and a replacement material was selected. This presentation summarizes the FRB results: in particular, the analysis of the retrieved specimens, the results of the simulated environmental exposures, and the selection of the replacement material. The NASA Space Environments and Effects community needs to hear these results because they reveal that Teflon FEP films should not be used in LEO as routinely as they are today.

  6. Simulation fails to replicate stress in trainees performing a technical procedure in the clinical environment.

    PubMed

    Baker, B G; Bhalla, A; Doleman, B; Yarnold, E; Simons, S; Lund, J N; Williams, J P

    2017-01-01

    Simulation-based training (SBT) has become an increasingly important method by which doctors learn. Stress has an impact upon learning, performance, and technical and non-technical skills. However, there are currently no studies that compare stress in the clinical and simulated environments. We aimed to compare objective (heart rate variability, HRV) and subjective (state trait anxiety inventory, STAI) measures of stress in theatre with those in a simulated environment. HRV recordings were obtained from eight anesthetic trainees performing an uncomplicated rapid sequence induction, at pre-determined procedural steps, using a wireless Polar RS800CX monitor in an emergency theatre setting. This was repeated in the simulated environment. Participants completed an STAI before and after the procedure. Eight trainees completed the study. The theatre environment caused an increase in objective stress vs. baseline (p = .004). There was no significant difference in average objective stress levels across all time points between environments (p = .20). However, there was a significant interaction between the variables of objective stress and environment (p = .045). There was no significant difference in subjective stress between environments (p = .27). Simulation was unable to accurately replicate the stress of the technical procedure. This is the first study to compare stress during SBT with that in the theatre environment, and it has implications for the assessment of simulated environments for use in examinations, the rating of technical and non-technical skills, and stress management training.

  7. Development of an Interactive Augmented Environment and Its Application to Autonomous Learning for Quadruped Robots

    NASA Astrophysics Data System (ADS)

    Kobayashi, Hayato; Osaki, Tsugutoyo; Okuyama, Tetsuro; Gramm, Joshua; Ishino, Akira; Shinohara, Ayumi

    This paper describes an interactive experimental environment for autonomous soccer robots: a soccer field augmented by utilizing camera input and projector output. This environment, in a sense, plays an intermediate role between simulated environments and real environments. We can simulate some parts of real environments, e.g., real objects such as robots or a ball, and reflect simulated data into the real environments, e.g., to visualize positions on the field, so as to create a situation that allows easy debugging of robot programs. The significant point compared with analogous work is that, owing to the projectors, virtual objects are touchable in this system. We also show a portable version of our system that does not require ceiling cameras. As an application in the augmented environment, we address the learning of goalie strategies on real quadruped robots in penalty kicks. We make our robots utilize virtual balls in order to perform only quadruped locomotion in real environments, which is quite difficult to simulate accurately. In our augmented environment, our robots autonomously learn and acquire more beneficial strategies, without human intervention, than they do in a fully simulated environment.

  8. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Sheffler, K. D.; Demasi, J. T.

    1985-01-01

    A methodology was established to predict thermal barrier coating life in an environment simulative of that experienced by gas turbine airfoils. Specifically, work is being conducted to determine failure modes of thermal barrier coatings in the aircraft engine environment. Analytical studies coupled with appropriate physical and mechanical property determinations are being employed to derive coating life prediction model(s) for the important failure mode(s). An initial review of experimental and flight service components indicates that the predominant mode of TBC failure involves thermomechanical spallation of the ceramic coating layer. This ceramic spallation involves the formation of a dominant crack in the ceramic coating parallel to and closely adjacent to the metal-ceramic interface. Initial results from a laboratory test program designed to study the influence of various driving forces, such as temperature, thermal cycle frequency, environment, and coating thickness, on ceramic coating spalling life suggest that bond coat oxidation damage at the metal-ceramic interface contributes significantly to thermomechanical cracking in the ceramic layer. Low cycle rate furnace testing in air and in argon clearly shows a dramatic increase in spalling life in the non-oxidizing environments.

  9. Technology Developments Integrating a Space Network Communications Testbed

    NASA Technical Reports Server (NTRS)

    Kwong, Winston; Jennings, Esther; Clare, Loren; Leang, Dee

    2006-01-01

    As future manned and robotic space exploration missions involve more complex systems, it is essential to verify, validate, and optimize such systems through simulation and emulation in a low-cost testbed environment. The goal of such a testbed is to perform detailed testing of advanced space and ground communications networks, technologies, and client applications that are essential for future space exploration missions. We describe the development of new technologies enhancing our Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE) that enable its integration in a distributed space communications testbed. MACHETE combines orbital modeling, link analysis, and protocol and service modeling to quantify system performance based on comprehensive considerations of different aspects of space missions. It can simulate entire networks and can interface with external (testbed) systems. The key technology developments enabling the integration of MACHETE into a distributed testbed are the Monitor and Control module and the QualNet IP Network Emulator module. Specifically, the Monitor and Control module establishes a standard interface mechanism to centralize the management of each testbed component. The QualNet IP Network Emulator module allows externally generated network traffic to be passed through MACHETE to experience simulated network behaviors such as propagation delay, data loss, orbital effects, and other communications characteristics, including entire network behaviors. We report a successful integration of MACHETE with a space communication testbed modeling a lunar exploration scenario. This document is the viewgraph slides of the presentation.
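
    A toy illustration of the emulation idea follows, in which externally generated traffic passes through a model that imposes simulated propagation delay and loss; this is a conceptual miniature, not the QualNet or MACHETE interface, and the delay and loss figures are assumptions.

    ```python
    # Hedged miniature of the emulation idea: externally generated packets pass
    # through a model that imposes simulated propagation delay and random loss.
    import random

    ONE_WAY_DELAY_S = 1.28      # assumed Earth-Moon propagation delay (s)
    LOSS_PROBABILITY = 0.02     # assumed link loss rate

    def through_emulator(packets, seed=7):
        """packets: list of (send_time_s, payload); returns delivered packets."""
        random.seed(seed)
        delivered = []
        for send_time, payload in packets:
            if random.random() < LOSS_PROBABILITY:
                continue                         # packet lost on the simulated link
            delivered.append((send_time + ONE_WAY_DELAY_S, payload))
        return delivered

    pkts = [(0.0, "telemetry frame 1"), (0.5, "telemetry frame 2")]
    print(through_emulator(pkts))                # arrival times include the delay
    ```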

  10. CORBASec Used to Secure Distributed Aerospace Propulsion Simulations

    NASA Technical Reports Server (NTRS)

    Blaser, Tammy M.

    2003-01-01

    The NASA Glenn Research Center and its industry partners are developing a Common Object Request Broker (CORBA) Security (CORBASec) test bed to secure their distributed aerospace propulsion simulations. Glenn has been working with its aerospace propulsion industry partners to deploy the Numerical Propulsion System Simulation (NPSS) object-based technology. NPSS is a program focused on reducing the cost and time in developing aerospace propulsion engines. It was developed by Glenn and is being managed by the NASA Ames Research Center as the lead center reporting directly to NASA Headquarters' Aerospace Technology Enterprise. Glenn is an active domain member of the Object Management Group: an open membership, not-for-profit consortium that produces and manages computer industry specifications (i.e., CORBA) for interoperable enterprise applications. When NPSS is deployed, it will assemble a distributed aerospace propulsion simulation scenario from proprietary analytical CORBA servers and execute them with security afforded by the CORBASec implementation. The NPSS CORBASec test bed was initially developed with the TPBroker Security Service product (Hitachi Computer Products (America), Inc., Waltham, MA) using the Object Request Broker (ORB), which is based on the TPBroker Basic Object Adaptor, and using NPSS software across different firewall products. The test bed has been migrated to the Portable Object Adaptor architecture using the Hitachi Security Service product based on the VisiBroker 4.x ORB (Borland, Scotts Valley, CA) and on the Orbix 2000 ORB (Dublin, Ireland, with U.S. headquarters in Waltham, MA). Glenn, GE Aircraft Engines, and Pratt & Whitney Aircraft are the initial industry partners contributing to the NPSS CORBASec test bed. The test bed uses Security SecurID (RSA Security Inc., Bedford, MA) two-factor token-based authentication together with Hitachi Security Service digital-certificate-based authentication to validate the various NPSS users. The test bed is expected to demonstrate NPSS CORBASec-specific policy functionality, confirm adequate performance, and validate the required Internet configuration in a distributed collaborative aerospace propulsion environment.

  11. Freshwater Detention by Oyster Reefs: Quantifying a Keystone Ecosystem Service

    PubMed Central

    Olabarrieta, Maitane; Frederick, Peter; Valle-Levinson, Arnoldo

    2016-01-01

    Oyster reefs provide myriad ecosystem services, including water quality improvement, fisheries and other faunal support, shoreline protection from erosion and storm surge, and economic productivity. However, their role in directing flow during non-storm conditions has been largely neglected. In regions where oyster reefs form near the mouth of estuarine rivers, they likely alter ocean-estuary exchange by acting as fresh water “dams”. We hypothesize that these reefs have the potential to detain fresh water and influence salinity over extensive areas, thus providing a “keystone” ecosystem service by supporting estuarine functions that rely on the maintenance of estuarine (i.e., brackish) conditions in the near-shore environment. In this work, we investigated the effects of shore-parallel reefs on estuarine salinity using field data and hydrodynamic modeling in a degraded reef complex in the northeastern Gulf of Mexico. Results suggested that freshwater detention by long linear chains of oyster reefs plays an important role in modulating salinities, not only in the oysters’ local environment, but over extensive estuarine areas (tens of square kilometers). Field data confirmed the presence of salinity differences between landward and seaward sides of the reef, with long-term mean salinity differences of >30% between sides. Modeled results expanded experimental findings by illustrating how oyster reefs affect the lateral and offshore extent of freshwater influence. In general, the effects of simulated reefs were most pronounced when they were highest in elevation, without gaps, and when riverine discharge was low. Taken together, these results describe a poorly documented ecosystem service provided by oyster reefs; provide an estimate of the magnitude and spatial extent of this service; and offer quantitative information to help guide future oyster reef restoration. PMID:27936184

  12. Oyster Reefs Support Coastal Resilience by Altering Nearshore Salinity: An Observational and Modeling Study to Quantify a "Keystone" Ecosystem Service

    NASA Astrophysics Data System (ADS)

    Kaplan, D. A.; Olabarrieta, M.; Frederick, P.; Valle-Levinson, A.

    2016-12-01

    Oyster reefs provide myriad ecosystem services, including water quality improvement, fisheries and other faunal support, shoreline protection from erosion and storm surge, and economic productivity. However, their role in directing flow during non-storm conditions has been largely neglected. In regions where oyster reefs form near the mouth of estuarine rivers, they likely alter ocean-estuary exchange by acting as fresh water "dams". We hypothesize that these reefs have the potential to detain fresh water and influence salinity over extensive areas, thus providing a "keystone" ecosystem service by supporting estuarine functions that rely on the maintenance of estuarine (i.e., brackish) conditions in the near-shore environment. In this work, we investigated the effects of shore-parallel reefs on near-shore salinity using field data and hydrodynamic modeling in a degraded reef complex in Suwannee Sound (Florida, USA). Results suggested that freshwater detention by long linear chains of oyster reefs plays an important role in modulating salinities, not only in the oysters' local environment, but over extensive estuarine areas (tens of square kilometers). Field data confirmed the presence of salinity differences between landward and seaward sides of the reef, with long-term mean salinity differences of >30% between sides. Modeled results expanded experimental findings by illustrating how oyster reefs affect the lateral and offshore extent of freshwater influence. In general, the effects of simulated reefs were most pronounced when they were highest in elevation, without gaps, and when riverine discharge was low. Taken together, these results describe a poorly documented ecosystem service provided by oyster reefs; provide an estimate of the magnitude and spatial extent of this service; and offer quantitative information to help guide future oyster reef restoration.

  13. Freshwater Detention by Oyster Reefs: Quantifying a Keystone Ecosystem Service.

    PubMed

    Kaplan, David A; Olabarrieta, Maitane; Frederick, Peter; Valle-Levinson, Arnoldo

    2016-01-01

    Oyster reefs provide myriad ecosystem services, including water quality improvement, fisheries and other faunal support, shoreline protection from erosion and storm surge, and economic productivity. However, their role in directing flow during non-storm conditions has been largely neglected. In regions where oyster reefs form near the mouth of estuarine rivers, they likely alter ocean-estuary exchange by acting as fresh water "dams". We hypothesize that these reefs have the potential to detain fresh water and influence salinity over extensive areas, thus providing a "keystone" ecosystem service by supporting estuarine functions that rely on the maintenance of estuarine (i.e., brackish) conditions in the near-shore environment. In this work, we investigated the effects of shore-parallel reefs on estuarine salinity using field data and hydrodynamic modeling in a degraded reef complex in the northeastern Gulf of Mexico. Results suggested that freshwater detention by long linear chains of oyster reefs plays an important role in modulating salinities, not only in the oysters' local environment, but over extensive estuarine areas (tens of square kilometers). Field data confirmed the presence of salinity differences between landward and seaward sides of the reef, with long-term mean salinity differences of >30% between sides. Modeled results expanded experimental findings by illustrating how oyster reefs affect the lateral and offshore extent of freshwater influence. In general, the effects of simulated reefs were most pronounced when they were highest in elevation, without gaps, and when riverine discharge was low. Taken together, these results describe a poorly documented ecosystem service provided by oyster reefs; provide an estimate of the magnitude and spatial extent of this service; and offer quantitative information to help guide future oyster reef restoration.

  14. How to implement live video recording in the clinical environment: A practical guide for clinical services.

    PubMed

    Lloyd, Adam; Dewar, Alistair; Edgar, Simon; Caesar, Dave; Gowens, Paul; Clegg, Gareth

    2017-06-01

    The use of video in healthcare is becoming more common, particularly in simulation and educational settings. However, video recording live episodes of clinical care is far less routine. Our aim is to provide a practical guide for clinical services seeking to embed live video recording. Using Kotter's 8-step process for leading change, we provide a 'how to' guide to navigating the challenges involved in implementing a continuous video-audit system, based on our experience of video recording in our emergency department resuscitation rooms. The most significant hurdles in installing continuous video audit in a busy clinical area involve change management rather than equipment. Clinicians are faced with considerable ethical, legal, and data protection challenges, which are the primary barriers for services that pursue video recording of patient care. Existing accounts of video use rarely acknowledge the organisational and cultural dimensions that are key to the success of establishing a video system. This article outlines core implementation issues that need to be addressed if video is to become part of routine care delivery. By focussing on issues such as staff acceptability, departmental culture, and organisational readiness, we provide a roadmap that can be pragmatically adapted by all clinical environments, locally and internationally, that seek to utilise video recording as an approach to improving clinical care.

  15. Real-time co-simulation of adjustable-speed pumped storage hydro for transient stability analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohanpurkar, Manish; Ouroua, Abdelhamid; Hovsapian, Rob

    Pumped storage hydro (PSH) based generation of electricity is a proven grid-level storage technique. A new configuration, an adjustable-speed PSH (AS-PSH) power plant, is modeled and discussed in this paper. Hydrodynamic models are created using partial differential equations, and the governor topology is adopted from an existing, operational AS-PSH unit. Physics-based simulation of both hydrodynamics and power system dynamics has been studied individually in the past. This article demonstrates co-simulation of an AS-PSH unit, coupling penstock hydrodynamics and power system events in a real-time environment. Co-simulation provides insight into the dynamic and transient operation of AS-PSH connected to a bulk power system network. The two modes of AS-PSH operation presented in this paper are turbine and pump modes. The general operating philosophy in the field is to run in turbine mode when the prices of electricity are high and in pumping mode when prices are low. However, there is recently renewed interest in operating PSH to also provide ancillary services. A real-time co-simulation in the sub-second regime of AS-PSH connected to the IEEE 14-bus test system is performed using a digital real-time simulator, and the results are discussed.

  16. Real-time co-simulation of adjustable-speed pumped storage hydro for transient stability analysis

    DOE PAGES

    Mohanpurkar, Manish; Ouroua, Abdelhamid; Hovsapian, Rob; ...

    2017-09-12

    Pumped storage hydro (PSH) based generation of electricity is a proven grid-level storage technique. A new configuration, an adjustable-speed PSH (AS-PSH) power plant, is modeled and discussed in this paper. Hydrodynamic models are created using partial differential equations, and the governor topology is adopted from an existing, operational AS-PSH unit. Physics-based simulation of both hydrodynamics and power system dynamics has been studied individually in the past. This article demonstrates co-simulation of an AS-PSH unit, coupling penstock hydrodynamics and power system events in a real-time environment. Co-simulation provides insight into the dynamic and transient operation of AS-PSH connected to a bulk power system network. The two modes of AS-PSH operation presented in this paper are turbine and pump modes. The general operating philosophy in the field is to run in turbine mode when the prices of electricity are high and in pumping mode when prices are low. However, there is recently renewed interest in operating PSH to also provide ancillary services. A real-time co-simulation in the sub-second regime of AS-PSH connected to the IEEE 14-bus test system is performed using a digital real-time simulator, and the results are discussed.
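
    The co-simulation loop can be sketched as a fixed-step data exchange between a hydrodynamic model and a power-system model; the toy dynamics and every parameter below are invented for illustration and do not reproduce the paper's models.

    ```python
    # Hedged sketch of the co-simulation pattern: at each exchange step the
    # hydrodynamic (penstock) model supplies mechanical power to the power-system
    # model, which integrates a toy swing equation. All parameters are invented.
    DT = 0.01                 # data-exchange step (s), sub-second regime

    def hydro_step(gate: float, head: float) -> float:
        """Toy penstock model: mechanical power (MW) from gate opening and head."""
        flow = gate * 100.0                             # assumed max flow 100 m^3/s
        return 1000 * 9.81 * flow * head * 0.9 / 1e6    # rho*g*Q*H*eta, in MW

    def grid_step(p_mech: float, p_elec: float, speed: float) -> float:
        """Toy per-unit swing equation on an assumed 100 MVA base."""
        H = 4.0                                         # assumed inertia constant (s)
        return speed + DT * ((p_mech - p_elec) / 100.0) / (2 * H)

    speed, gate, head, p_elec = 1.0, 0.8, 100.0, 60.0
    for _ in range(int(1.0 / DT)):                      # one simulated second
        p_mech = hydro_step(gate, head)                 # hydro side -> grid side
        speed = grid_step(p_mech, p_elec, speed)        # grid side updates speed
    print(f"machine speed after 1 s: {speed:.4f} pu")
    ```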

  17. Dispersal and fallout simulations for urban consequences management (u)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grinstein, Fernando F; Wachtor, Adam J; Nelson, Matt

    2010-01-01

    Hazardous chemical, biological, or radioactive releases from leaks, spills, fires, or blasts may occur (intentionally or accidentally) in urban environments during warfare or as part of terrorist attacks on military bases or other facilities. The associated contaminant dispersion is complex and semi-chaotic. Urban predictive simulation capabilities can have direct impact in many threat-reduction areas of interest, including urban sensor placement and threat analysis, contaminant transport (CT) effects on the surrounding civilian population (dosages, evacuation, shelter-in-place), and education and training of rescue teams and services. Detailed simulations of the various processes involved are in principle possible, but generally not fast. Predicting urban airflow accompanied by CT presents extremely challenging requirements. Crucial technical issues include simulating turbulent fluid and particulate transport; initial and boundary condition modeling incorporating a consistent stratified urban boundary layer with realistic wind fluctuations; and post-processing of the simulation results for practical consequences management. Relevant fluid dynamic processes to be simulated include detailed energetic and contaminant sources, complex building vortex shedding and flows in recirculation zones, and modeling of particle distributions, including particulate fallout as well as deposition, re-suspension, and evaporation. Other issues include modeling building damage effects due to eventual blasts and addressing appropriate regional and atmospheric data reduction.

  18. Development of a High-Fidelity Simulation Environment for Shadow-Mode Assessments of Air Traffic Concepts

    NASA Technical Reports Server (NTRS)

    Robinson, John E., III; Lee, Alan; Lai, Chok Fung

    2017-01-01

    This paper describes the Shadow-Mode Assessment Using Realistic Technologies for the National Airspace System (SMART-NAS) Test Bed. The SMART-NAS Test Bed is an air traffic simulation platform being developed by the National Aeronautics and Space Administration (NASA). The SMART-NAS Test Bed's core purpose is to conduct high-fidelity, real-time, human-in-the-loop and automation-in-the-loop simulations of current and proposed future air traffic concepts for the United States' Next Generation Air Transportation System, called NextGen. The setup, configuration, coordination, and execution of real-time, human-in-the-loop air traffic management simulations are complex, tedious, time-intensive, and expensive. The SMART-NAS Test Bed framework is an alternative to the current approach and will provide services throughout the simulation workflow pipeline to help alleviate these shortcomings. The principal concepts to be simulated include advanced gate-to-gate, trajectory-based operations, widespread integration of novel aircraft such as unmanned vehicles, and real-time safety assurance technologies to enable autonomous operations. To make this possible, the SMART-NAS Test Bed will utilize Web-based technologies, cloud resources, and real-time, scalable communication middleware. This paper describes the SMART-NAS Test Bed's vision, purpose, and concept of use, along with its potential benefits, key capabilities, high-level requirements, architecture, software design, and usage.

  19. A comparison of the accuracy of intraoral scanners using an intraoral environment simulator

    PubMed Central

    Park, Hye-Nan; Lim, Young-Jun; Yi, Won-Jin

    2018-01-01

    PURPOSE The aim of this study was to design an intraoral environment simulator and to assess the accuracy of intraoral scanners using the simulator. MATERIALS AND METHODS A box-shaped intraoral environment simulator was designed to simulate two specific intraoral environments. The cast was scanned 10 times by the Identica Blue (MEDIT, Seoul, South Korea), TRIOS (3Shape, Copenhagen, Denmark), and CS3500 (Carestream Dental, Georgia, USA) scanners in the two simulated groups. The distances between the left and right canines (D3), first molars (D6), and second molars (D7), and between the left canine and left second molar (D37), were measured. The distance data were analyzed by the Kruskal-Wallis test. RESULTS The differences between intraoral environments were not statistically significant (P>.05). Between scanners, the Kruskal-Wallis test revealed statistically significant differences (P<.05) with regard to D3 and D6. CONCLUSION No difference due to the intraoral environment was revealed. The simulator will contribute to improving the accuracy of intraoral scanners in the future. PMID:29503715
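
    The distance comparisons can be reproduced with a standard Kruskal-Wallis test, for example with SciPy; the measurement arrays below are invented, and only the analysis pattern follows the study.

    ```python
    # Hedged sketch: comparing one distance (e.g., D3) across three scanners with
    # the Kruskal-Wallis test, as in the study. The values below are invented.
    from scipy.stats import kruskal

    d3_identica = [34.98, 35.01, 35.00, 34.97, 35.02]   # mm, illustrative
    d3_trios    = [35.05, 35.07, 35.04, 35.06, 35.08]
    d3_cs3500   = [34.95, 34.96, 34.94, 34.97, 34.95]

    stat, p = kruskal(d3_identica, d3_trios, d3_cs3500)
    print(f"H = {stat:.2f}, p = {p:.4f}")   # p < .05 -> scanners differ on D3
    ```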

  20. A Simulation-as-a-Service Framework Facilitating WebGIS-Based Installation Planning

    NASA Astrophysics Data System (ADS)

    Zheng, Z.; Chang, Z. Y.; Fei, Y. F.

    2017-09-01

    Installation planning is constrained by both natural and social conditions, especially for spatially sparse but functionally connected facilities. Simulation is important for properly deploying facilities in space and configuring their functions so that they form a cohesive and mutually supportive system that meets users' operational needs. Based on a requirements analysis, we propose a framework that combines GIS and agent-based simulation to overcome the shortcomings of traditional GIS in temporal analysis and task simulation. In this framework, agent-based simulation runs as a service on the server and exposes basic simulation functions, such as scenario configuration, simulation control, and simulation data retrieval, to installation planners. At the same time, the simulation service is able to utilize various kinds of geoprocessing services in the agents' process logic to make sophisticated spatial inferences and analyses. This simulation-as-a-service framework has many potential benefits, such as ease of use, on-demand access, shared understanding, and improved performance. Finally, we present a preliminary implementation of this concept using the ArcGIS JavaScript API 4.0 and ArcGIS for Server, showing how trip planning and driving can be carried out by agents.
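
    A minimal sketch of the simulation-as-a-service interface described above follows, exposing scenario configuration, simulation control, and data retrieval as web endpoints; the URLs and JSON fields are hypothetical, not the paper's implementation.

    ```python
    # Hedged sketch of a simulation service exposing the three functions named
    # above as web endpoints. All URLs and JSON fields are hypothetical.
    import json
    import urllib.request

    BASE = "https://example.org/sim-service"   # hypothetical service root

    def call(path: str, payload: dict | None = None) -> dict:
        data = json.dumps(payload).encode() if payload is not None else None
        req = urllib.request.Request(BASE + path, data=data,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    # 1) configure a scenario (facility sites chosen in the WebGIS client)
    scenario = call("/scenarios", {"facilities": [[118.1, 32.0], [118.3, 32.1]]})
    # 2) control the simulation run
    call(f"/scenarios/{scenario['id']}/start", {})
    # 3) retrieve agent trajectories for display on the map
    tracks = call(f"/scenarios/{scenario['id']}/results?layer=trips")
    ```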

  1. Supporting Shared Resource Usage for a Diverse User Community: the OSG Experience and Lessons Learned

    NASA Astrophysics Data System (ADS)

    Garzoglio, Gabriele; Levshina, Tanya; Rynge, Mats; Sehgal, Chander; Slyz, Marko

    2012-12-01

    The Open Science Grid (OSG) supports a diverse community of new and existing users in adopting and making effective use of the Distributed High Throughput Computing (DHTC) model. The LHC user community has deep local support within the experiments. For other, smaller communities and individual users, the OSG provides consulting and technical services through the User Support area. We describe these sometimes successful and sometimes not so successful experiences and analyze the lessons learned that are helping us improve our services. The services offered include forums to enable shared learning and mutual support, tutorials and documentation for new technology, and troubleshooting of problematic or systemic failure modes. For new communities and users, we bootstrap their use of the distributed high throughput computing technologies and resources available on the OSG by following a phased approach. We first adapt the application and run a small production campaign on a subset of “friendly” sites. Only then do we move the user to run full production campaigns across the many remote sites on the OSG, adding community resources of up to hundreds of thousands of CPU hours per day. This scaling up generates new challenges, such as nondeterministic job completion times and diverse errors arising from the heterogeneity of configurations and environments, so some attention is needed to get good results. We cover recent experiences with image simulation for the Large Synoptic Survey Telescope (LSST), small-file large-volume data movement for the Dark Energy Survey (DES), civil engineering simulation with the Network for Earthquake Engineering Simulation (NEES), and accelerator modeling with the Electron Ion Collider group at BNL. We will categorize and analyze the use cases and describe how our processes are evolving based on lessons learned.

  2. A Novel Petri Nets-Based Modeling Method for the Interaction between the Sensor and the Geographic Environment in Emerging Sensor Networks

    PubMed Central

    Zhang, Feng; Xu, Yuetong; Chou, Jarong

    2016-01-01

    Sensor device services in Emerging Sensor Networks (ESNs) are an extension of traditional Web services. Through the sensor network, a sensor device service can communicate directly with entities in the geographic environment, and can even impact a geographic entity directly. The interaction between the sensor device in ESNs and the geographic environment is very complex, and modeling this interaction is a challenging problem. This paper proposes a novel Petri Nets-based method for modeling the interaction between the sensor device and the geographic environment. Sensor device services in ESNs are more easily affected by the geographic environment than traditional Web services, so response time, fault-tolerance, and resource consumption become important factors in the performance of the whole sensor application system. This paper therefore classifies IoT services as Sensing services and Controlling services, according to the interaction between the IoT service and the geographic entity, and classifies GIS services as data services and processing services. It then designs and analyzes a service algebra and a Colored Petri Nets model to represent geo-features, IoT services, GIS services, and the interaction process between the sensor and the geographic environment. Finally, the modeling process is illustrated with examples. PMID:27681730
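
    The flavor of the place/transition modeling can be conveyed with a tiny Petri net in which a Sensing service observes a geographic entity and a Controlling service then acts on it; the places, transitions, and token counts are invented, and this sketch omits the colors of a full Colored Petri Net.

    ```python
    # Hedged sketch: a tiny place/transition Petri net in the spirit of the
    # paper's interaction model. Place and transition names are invented.
    places = {"entity_state": 1, "observation": 0, "command": 0, "entity_updated": 0}
    transitions = {
        # name: (pre-conditions, post-conditions) as {place: token count}
        "sense":   ({"entity_state": 1}, {"entity_state": 1, "observation": 1}),
        "decide":  ({"observation": 1}, {"command": 1}),
        "control": ({"command": 1, "entity_state": 1}, {"entity_updated": 1}),
    }

    def enabled(name):
        pre, _ = transitions[name]
        return all(places[p] >= n for p, n in pre.items())

    def fire(name):
        pre, post = transitions[name]
        for p, n in pre.items():
            places[p] -= n            # consume input tokens
        for p, n in post.items():
            places[p] = places.get(p, 0) + n  # produce output tokens

    for t in ("sense", "decide", "control"):
        assert enabled(t), t
        fire(t)
    print(places)   # token now in entity_updated: the environment was changed
    ```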

  3. [Investigation of cost and medical service fee for pharmaceutical management in home medical care].

    PubMed

    Honma, Katsuaki; Sakai, Ritsuko; Takeshima, Akiko; Shimamori, Yoshimitsu; Hayase, Yukitoshi

    2004-10-01

    Due to the aging of society and the steep rise in medical costs, the environment surrounding the medical care industry has been changing remarkably. For this reason, it has become both necessary and fundamental for a community pharmacist to participate in home medical care through the pharmaceutical management service. We have studied the associated costs and medical service fees for pharmaceutical management in home medical care. The costs and medical service fees were calculated based on the pharmaceutical management service data collected during the three years from November 1998 to October 2001. The medical service fees were first calculated using the old system, which lasted until March 2002; this system took into account 550 points per visit, for up to two visits per month. Under the new system, which started in April 2002, up to four visits a month are taken into account, at 500 points for the first visit and 300 points for each visit from the second through the fourth. We then simulated a break-even point (BEP). It is clear that it is difficult for any community pharmacy to specialize in home medical care. In order for pharmacists to participate actively in home medical care in the future, it is necessary to further improve the system.
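
    The two fee schedules compare as follows in a small arithmetic sketch; the point values are those quoted above, while the visit counts are illustrative.

    ```python
    # Hedged arithmetic sketch of the fee schedules described above: up to two
    # 550-point visits per month under the old system; up to four visits under
    # the new system at 500 points for the first and 300 points thereafter.
    def monthly_points_old(visits: int) -> int:
        return 550 * min(visits, 2)

    def monthly_points_new(visits: int) -> int:
        billable = min(visits, 4)
        return 0 if billable == 0 else 500 + 300 * (billable - 1)

    for v in range(1, 5):
        print(v, monthly_points_old(v), monthly_points_new(v))
    # 1 visit -> 550 vs 500; 2 -> 1100 vs 800; 4 -> 1100 vs 1400
    ```

    Under these illustrative visit counts, the new schedule pays less for one or two visits but more for four, which is consistent with the break-even discussion above.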

  4. A Cost-Effective Virtual Environment for Simulating and Training Powered Wheelchairs Manoeuvres.

    PubMed

    Headleand, Christopher J; Day, Thomas; Pop, Serban R; Ritsos, Panagiotis D; John, Nigel W

    2016-01-01

    Control of a powered wheelchair is often not intuitive, making training of new users a challenging and sometimes hazardous task. Collisions due to a lack of experience can result in injury for the user and other individuals. By conducting training activities in virtual reality (VR), we can potentially improve driving skills whilst avoiding the risks inherent in the real world. However, until recently VR technology has been expensive, which has limited the commercial feasibility of a general training solution. We describe Wheelchair-Rift, a cost-effective prototype simulator that makes use of the Oculus Rift head-mounted display and the Leap Motion hand-tracking device. It has been assessed for face validity by a panel of experts from a local Posture and Mobility Service. Initial results augur well for our cost-effective training solution.

  5. Simulator platform for fast reactor operation and safety technology demonstration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vilim, R. B.; Park, Y. S.; Grandy, C.

    2012-07-30

    A simulator platform for visualization and demonstration of innovative concepts in fast reactor technology is described. The objective is to make the workings of fast reactor technology innovations more accessible, and to do so in a human-factors environment that uses state-of-the-art visualization technologies. In this work the computer codes in use at Argonne National Laboratory (ANL) for the design of fast reactor systems are being integrated to run on this platform. This includes linking reactor systems codes with mechanical structures codes and using advanced graphics to depict the thermo-hydraulic-structure interactions that give rise to an inherently safe response to upsets. It also includes visualization of mechanical systems operation, including advanced concepts that make use of robotics for operations, in-service inspection, and maintenance.

  6. Flight service evaluation of composite components on the Bell Helicopter model 206L: Design, fabrication and testing

    NASA Technical Reports Server (NTRS)

    Zinberg, H.

    1982-01-01

    The design, fabrication, and testing phases of a program to obtain long-term flight service experience on representative helicopter airframe structural components operating in typical commercial environments are described. The aircraft chosen is the Bell Helicopter Model 206L. The structural components are the forward fairing, litter door, baggage door, and vertical fin. The advanced composite components were designed to replace the production parts in the field and were certified by the FAA to be operable through the full flight envelope of the 206L. A description of the fabrication process that was used for each of the components is given. Static failing-load tests were performed on all components. In addition, fatigue tests were run on four specimens that simulated the attachment of the vertical fin to the helicopter's tail boom.

  7. Energy Systems Test Area (ESTA) Pyrotechnic Operations: User Test Planning Guide

    NASA Technical Reports Server (NTRS)

    Hacker, Scott

    2012-01-01

    The Johnson Space Center (JSC) has created and refined innovative analysis, design, development, and testing techniques that have been demonstrated in all phases of spaceflight. JSC is uniquely positioned to apply this expertise to components, systems, and vehicles that operate in remote or harsh environments. We offer a highly skilled workforce, unique facilities, flexible project management, and a proven management system. The purpose of this guide is to acquaint Test Requesters with the requirements for test, analysis, or simulation services at JSC. The guide includes facility services and capabilities, inputs required by the facility, major milestones, a roadmap of the facility's process, and roles and responsibilities of the facility and the requester. Samples of deliverables, facility interfaces, and inputs necessary to define the cost and schedule are included as appendices to the guide.

  8. Design and simulation of EVA tools for first servicing mission of HST

    NASA Technical Reports Server (NTRS)

    Naik, Dipak; Dehoff, P. H.

    1994-01-01

    The Hubble Space Telescope (HST) was launched into near-earth orbit by the Space Shuttle Discovery on April 24, 1990. The payload of two cameras, two spectrographs, and a high-speed photometer is supplemented by three fine-guidance sensors that can be used for astronomy as well as for star tracking. A widely reported spherical aberration in the primary mirror caused HST to produce images of much lower quality than intended. A Space Shuttle repair mission in December 1993 installed small corrective mirrors that restored the full intended optical capability of the HST. The First Servicing Mission (FSM) involved considerable Extra Vehicular Activity (EVA), and special EVA tools were designed and developed specifically for this purpose. In an earlier report, the details of the Data Acquisition System developed to test the performance of the various EVA tools in ambient as well as simulated space environments were presented. The general schematic of the test setup is reproduced in this report for continuity. Although the data acquisition system was used extensively to test a number of fasteners, only the results of one test each, carried out on the various fasteners and the Power Ratchet Tool, are included in this report.

  9. Stochastic simulation for the propagation of high-frequency acoustic waves through a random velocity field

    NASA Astrophysics Data System (ADS)

    Lu, B.; Darmon, M.; Leymarie, N.; Chatillon, S.; Potel, C.

    2012-05-01

    In-service inspection of Sodium-Cooled Fast Reactors (SFR) requires the development of non-destructive techniques adapted to the harsh environmental conditions and the complexity of the examination. From past experience, ultrasonic techniques are considered suitable candidates. Ultrasonic telemetry is a technique used to constantly ensure the safe functioning of reactor inner components by determining their exact position: it consists in measuring the time of flight of the ultrasonic response obtained after propagation of a pulse emitted by a transducer and its interaction with the targets. While in service, the sodium flow creates turbulence that leads to temperature inhomogeneities, which translate into ultrasonic velocity inhomogeneities. These velocity variations can directly impact the accuracy of target locating by introducing time-of-flight variations. A stochastic simulation model has been developed to calculate the propagation of ultrasonic waves in such an inhomogeneous medium. In this approach, the travel time is randomly generated by a stochastic process whose inputs are the statistical moments of the travel times, which are known analytically. The stochastic model predicts beam deviations due to velocity inhomogeneities similar to those provided by a deterministic method, such as the ray method.
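    A minimal sketch of the stochastic travel-time idea described above, under stated assumptions: each simulated echo draws a random time of flight whose mean and standard deviation stand in for the analytically known moments, and the spread of the resulting range estimates shows how velocity inhomogeneities degrade target locating. The Gaussian law and all numerical values are illustrative, not the model from the paper.

```python
import random

# Assumed inputs: moments of the two-way travel time (illustrative values)
MEAN_TOF_S = 2.0e-4    # mean time of flight for a target ~0.25 m away
STD_TOF_S = 1.0e-6     # spread induced by temperature inhomogeneities
C_SODIUM_M_S = 2500.0  # approximate sound speed in liquid sodium

def simulate_range_estimates(n_pulses: int) -> list[float]:
    """Draw random travel times and convert each into a target range estimate."""
    estimates = []
    for _ in range(n_pulses):
        tof = random.gauss(MEAN_TOF_S, STD_TOF_S)
        estimates.append(C_SODIUM_M_S * tof / 2.0)  # two-way path -> one-way range
    return estimates

if __name__ == "__main__":
    ranges = simulate_range_estimates(10_000)
    mean_r = sum(ranges) / len(ranges)
    std_r = (sum((r - mean_r) ** 2 for r in ranges) / len(ranges)) ** 0.5
    print(f"mean range {mean_r * 1e3:.2f} mm, std {std_r * 1e6:.1f} um")
```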

  10. Simulated Solar Flare X-Ray and Thermal Cycling Durability Evaluation of Hubble Space Telescope Thermal Control Candidate Replacement Materials

    NASA Technical Reports Server (NTRS)

    deGroh, Kim K.; Banks, Bruce A.; Sechkar, Edward A.; Scheiman, David A.

    1998-01-01

    During the Hubble Space Telescope (HST) second servicing mission (SM2), astronauts noticed that the multilayer insulation (MLI) covering the telescope was damaged. Large pieces of the outer layer of MLI (aluminized Teflon fluorinated ethylene propylene (Al-FEP)) were torn in several locations around the telescope. A piece of curled-up Al-FEP retrieved by the astronauts was found to be severely embrittled, as confirmed by ground testing. Goddard Space Flight Center (GSFC) organized an HST MLI Failure Review Board (FRB) to determine the damage mechanism of FEP in the HST environment and to recommend replacement insulation material to be installed on HST during the third servicing mission (SM3) in 1999. Candidate thermal control replacement materials were chosen by the FRB and tested for environmental durability under various exposures and durations. This paper describes durability testing of candidate materials which were exposed to charged particle radiation, simulated solar flare x-ray radiation, and thermal cycling under load. Samples were evaluated for changes in solar absorptance and tear resistance. Descriptions of environmental exposures and durability evaluations of these materials are presented.

  11. Processing ARM VAP data on an AWS cluster

    NASA Astrophysics Data System (ADS)

    Martin, T.; Macduff, M.; Shippert, T.

    2017-12-01

    The Atmospheric Radiation Measurement (ARM) Data Management Facility (DMF) manages over 18,000 processes and 1.3 TB of data each day. This includes many Value-Added Products (VAPs) that make use of multiple instruments to produce derived products that are scientifically relevant. A thermodynamic and cloud profile VAP is being developed to provide input to the ARM Large-Eddy Simulation (LES) ARM Symbiotic Simulation and Observation (LASSO) project (https://www.arm.gov/capabilities/vaps/lasso-122). This algorithm is CPU intensive, and its processing requirements exceeded the available DMF computing capacity. Amazon Web Services (AWS) together with CfnCluster was investigated to see how it would perform. This cluster environment is cost-effective and scales dynamically based on demand. We were able to take advantage of autoscaling, which allowed the cluster to grow and shrink based on the size of the processing queue. We were also able to use the AWS spot market to further reduce cost. Our test was very successful, and we found that cloud resources can be used to process time-series data efficiently and effectively. This poster will present the resources and methodology used to successfully run the algorithm.
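    The queue-driven autoscaling behaviour described above can be sketched as a simple control loop: the desired node count follows the batch queue depth, clipped to configured bounds, so the cluster grows under load and shrinks when idle. This is an illustration of the policy only, not the actual CfnCluster/AWS implementation; `pending_jobs` and `set_node_count` are hypothetical stand-ins for calls to the scheduler and the cloud API.

```python
import math
import time
from typing import Callable

MIN_NODES, MAX_NODES = 0, 64  # assumed cluster bounds
JOBS_PER_NODE = 4             # assumed job slots per compute node

def desired_nodes(backlog: int) -> int:
    """Scale with the queue: enough nodes for the backlog, within bounds."""
    return max(MIN_NODES, min(MAX_NODES, math.ceil(backlog / JOBS_PER_NODE)))

def autoscale_loop(pending_jobs: Callable[[], int],
                   set_node_count: Callable[[int], None],
                   poll_s: float = 60.0) -> None:
    """Poll the queue and resize the cluster whenever the target changes."""
    current = -1
    while True:
        target = desired_nodes(pending_jobs())
        if target != current:
            set_node_count(target)  # e.g., resize an EC2 Auto Scaling group
            current = target
        time.sleep(poll_s)

if __name__ == "__main__":
    for backlog in (0, 3, 10, 500):
        print(backlog, "pending jobs ->", desired_nodes(backlog), "nodes")
```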

  12. Distributed Denial of Service Attack Source Detection Using Efficient Traceback Technique (ETT) in Cloud-Assisted Healthcare Environment.

    PubMed

    Latif, Rabia; Abbas, Haider; Latif, Seemab; Masood, Ashraf

    2016-07-01

    Security and privacy are the first and foremost concerns that should be given special attention when dealing with Wireless Body Area Networks (WBANs). As WBAN sensors operate in an unattended environment and carry critical patient health information, the Distributed Denial of Service (DDoS) attack is one of the major attacks in the WBAN environment: it not only exhausts the available resources but also influences the reliability of the information being transmitted. This research work is an extension of our previous work, in which a machine-learning-based attack detection algorithm was proposed to detect DDoS attacks in the WBAN environment. However, in order to avoid complexity, no consideration was given to the traceback mechanism. During traceback, the challenge lies in reconstructing the attack path so as to identify the attack source. Among existing traceback techniques, the Probabilistic Packet Marking (PPM) approach is the most commonly used in conventional IP-based networks. However, since the marking probability assignment has a significant effect on both the convergence time and the performance of a scheme, PPM is not directly applicable in the WBAN environment due to its high convergence time and the overhead it places on intermediate nodes. Therefore, in this paper we propose a new scheme called the Efficient Traceback Technique (ETT), based on the Dynamic Probability Packet Marking (DPPM) approach, which uses the MAC header in place of the IP header. Instead of a fixed marking probability, the proposed scheme uses a variable marking probability based on the number of hops travelled by a packet to reach the target node. Finally, path reconstruction algorithms are proposed to trace back an attacker. Evaluation and simulation results indicate that the proposed solution outperforms fixed PPM in terms of convergence time and computational overhead on nodes.
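    A minimal sketch of the dynamic-marking idea, under stated assumptions: in fixed PPM every forwarding node marks a packet with the same probability, so marks from distant nodes are frequently overwritten, whereas a marking probability that decays with the hop count (1/d below) lets marks from near and far nodes survive to the target at comparable rates. The 1/d law and the packet/node structures are illustrative, not the exact ETT scheme.

```python
import random
from dataclasses import dataclass

@dataclass
class Packet:
    hops: int = 0            # hops travelled so far (carried in the MAC header)
    mark: str | None = None  # identifier of the last node that marked

def forward(packet: Packet, node_id: str, fixed_p: float | None = None) -> None:
    """One forwarding step: mark with a fixed probability (PPM) or 1/d (DPPM)."""
    packet.hops += 1
    p = fixed_p if fixed_p is not None else 1.0 / packet.hops
    if random.random() < p:
        packet.mark = node_id

def mark_rates(path: list[str], trials: int, fixed_p: float | None) -> dict[str, float]:
    """Empirical probability that each node's mark survives to the target."""
    counts = dict.fromkeys(path, 0)
    for _ in range(trials):
        pkt = Packet()
        for node in path:
            forward(pkt, node, fixed_p)
        if pkt.mark is not None:
            counts[pkt.mark] += 1
    return {n: c / trials for n, c in counts.items()}

if __name__ == "__main__":
    path = [f"n{i}" for i in range(1, 9)]  # an 8-hop attack path
    print("PPM,  p=0.2:", mark_rates(path, 50_000, 0.2))
    print("DPPM, p=1/d:", mark_rates(path, 50_000, None))  # near-uniform rates
```

With p = 1/d, the survival probability of every node's mark works out to exactly 1/n on an n-hop path, which is why a dynamic scheme converges faster than fixed PPM for nodes far from the target.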

  13. Artificial intelligence framework for simulating clinical decision-making: a Markov decision process approach.

    PubMed

    Bennett, Casey C; Hauser, Kris

    2013-01-01

    In the modern healthcare system, rapidly expanding costs/complexity, the growing myriad of treatment options, and exploding information streams that often do not effectively reach the front lines hinder the ability to choose optimal treatment decisions over time. The goal in this paper is to develop a general purpose (non-disease-specific) computational/artificial intelligence (AI) framework to address these challenges. This framework serves two potential functions: (1) a simulation environment for exploring various healthcare policies, payment methodologies, etc., and (2) the basis for clinical artificial intelligence - an AI that can "think like a doctor". This approach combines Markov decision processes and dynamic decision networks to learn from clinical data and develop complex plans via simulation of alternative sequential decision paths while capturing the sometimes conflicting, sometimes synergistic interactions of various components in the healthcare system. It can operate in partially observable environments (in the case of missing observations or data) by maintaining belief states about patient health status and functions as an online agent that plans and re-plans as actions are performed and new observations are obtained. This framework was evaluated using real patient data from an electronic health record. The results demonstrate the feasibility of this approach; such an AI framework easily outperforms the current treatment-as-usual (TAU) case-rate/fee-for-service models of healthcare. The cost per unit of outcome change (CPUC) was $189 vs. $497 for AI vs. TAU (where lower is considered optimal) - while at the same time the AI approach could obtain a 30-35% increase in patient outcomes. Tweaking certain AI model parameters could further enhance this advantage, obtaining approximately 50% more improvement (outcome change) for roughly half the costs. Given careful design and problem formulation, an AI simulation framework can approximate optimal decisions even in complex and uncertain environments. Future work is described that outlines potential lines of research and integration of machine learning algorithms for personalized medicine. Copyright © 2012 Elsevier B.V. All rights reserved.
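    A toy value-iteration sketch of the sequential-decision core described above: states are coarse health statuses, actions are treatment choices with costs, and the plan maximizes outcome value minus cost over time. All transition probabilities, costs, and outcome values are invented for illustration; the paper's framework additionally combines dynamic decision networks and belief states over partially observed patient health.

```python
# States are patient health levels; actions are treatment options.
STATES = ["poor", "fair", "good"]
ACTIONS = {"wait": 0.0, "treat": 200.0}  # action -> cost in dollars (illustrative)

# Transition model: P[action][state] -> list of (next_state, probability)
P = {
    "wait":  {"poor": [("poor", 0.8), ("fair", 0.2)],
              "fair": [("poor", 0.3), ("fair", 0.5), ("good", 0.2)],
              "good": [("fair", 0.2), ("good", 0.8)]},
    "treat": {"poor": [("poor", 0.3), ("fair", 0.5), ("good", 0.2)],
              "fair": [("fair", 0.3), ("good", 0.7)],
              "good": [("good", 1.0)]},
}
OUTCOME = {"poor": 0.0, "fair": 50.0, "good": 100.0}  # value of reaching a state

def value_iteration(gamma: float = 0.9, eps: float = 1e-6):
    """Return optimal state values and the corresponding treatment policy."""
    V = {s: 0.0 for s in STATES}
    while True:
        V_new, policy = {}, {}
        for s in STATES:
            # Expected discounted value of each action, net of its cost
            q = {a: sum(p * (OUTCOME[s2] - ACTIONS[a] + gamma * V[s2])
                        for s2, p in P[a][s]) for a in ACTIONS}
            policy[s] = max(q, key=q.get)
            V_new[s] = q[policy[s]]
        if max(abs(V_new[s] - V[s]) for s in STATES) < eps:
            return V_new, policy
        V = V_new

if __name__ == "__main__":
    values, policy = value_iteration()
    print(policy)  # e.g., {'poor': 'treat', 'fair': 'treat', 'good': 'wait'}
```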

  14. SIMEDIS: a Discrete-Event Simulation Model for Testing Responses to Mass Casualty Incidents.

    PubMed

    Debacker, Michel; Van Utterbeeck, Filip; Ullrich, Christophe; Dhondt, Erwin; Hubloue, Ives

    2016-12-01

    It is recognized that the study of the disaster medical response (DMR) is a relatively new field. To date, there is no evidence-based literature that clearly defines the best medical response principles, concepts, structures, and processes in a disaster setting. Much of what is known about the DMR results from descriptive studies and expert opinion. No experimental studies regarding the effects of DMR interventions on the health outcomes of disaster survivors have been carried out. Traditional analytic methods cannot fully capture the flow of disaster victims through a complex disaster medical response system (DMRS). Computer modelling and simulation make it possible to study and test operational assumptions in a virtual but controlled experimental environment. The SIMEDIS (Simulation for the assessment and optimization of medical disaster management) simulation model consists of 3 interacting components: the victim creation model; the victim monitoring model, where the health state of each victim is monitored and adapted to the evolving clinical conditions of the victims; and the medical response model, where the victims interact with the environment and the resources at the disposal of the healthcare responders. Since the main aim of the DMR is to minimize as much as possible the mortality and morbidity of the survivors, we designed a victim-centred model in which the casualties pass through the different components and processes of a DMRS. The specificity of the SIMEDIS simulation model is that the victim entities evolve in parallel through both the victim monitoring model and the medical response model. The interaction between the two models is ensured through a time trigger or a medical intervention trigger. At each service point, a triage is performed, together with a decision on the disposition of the victims regarding treatment and/or evacuation, based on a priority code assigned to the victim and on the availability of resources at the service point. The aim of the case study is to apply the SIMEDIS model to the DMRS of an international airport and to test the medical response plan against a simulated airplane crash at the airport. In order to identify good response options, the model was then used to study the effect of a number of interventional factors on the performance of the DMRS. Our study reflects the potential of SIMEDIS to model complex systems, to test different aspects of DMR, and to be used as a tool in experimental research that might make a substantial contribution to providing the evidence base for the effectiveness and efficiency of disaster medical management.
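    The victim-centred, discrete-event structure can be sketched with a simple event queue: each victim carries a health state that deteriorates over time unless a service point treats it, triage decides the order of treatment, and events are driven by either time or medical-intervention triggers. The priority rules and timing constants below are illustrative assumptions, not SIMEDIS parameters.

```python
import heapq
import itertools
import random

random.seed(1)
_seq = itertools.count()  # tie-breaker so the heap never compares victims

class Victim:
    def __init__(self, vid: int, severity: int):
        self.vid, self.severity = vid, severity  # severity 1 (worst) .. 3
        self.alive, self.treated = True, False

def push(events, time, kind, victim):
    heapq.heappush(events, (time, next(_seq), kind, victim))

def run(n_victims: int = 20, teams: int = 2, horizon: float = 240.0) -> None:
    victims = [Victim(i, random.randint(1, 3)) for i in range(n_victims)]
    events, queue, free = [], [], teams
    for v in victims:
        push(events, random.uniform(0, 30), "arrive", v)  # minutes after impact

    def start_treatments(now: float) -> None:
        nonlocal free
        queue.sort(key=lambda v: v.severity)  # triage: most severe first
        while free and queue:
            v = queue.pop(0)
            free -= 1
            push(events, now + 20.0 * v.severity, "done", v)  # treatment time

    while events:
        now, _, kind, v = heapq.heappop(events)
        if now > horizon:
            break
        if kind == "arrive":
            if v.severity == 1:  # victim-monitoring trigger: may die if untreated
                push(events, now + 60.0, "deteriorate", v)
            queue.append(v)
            start_treatments(now)
        elif kind == "deteriorate" and not v.treated:
            v.alive = False
            if v in queue:
                queue.remove(v)
        elif kind == "done":
            free += 1
            if v.alive:
                v.treated = True
            start_treatments(now)

    print(sum(v.treated for v in victims), "treated,",
          sum(not v.alive for v in victims), "died")

if __name__ == "__main__":
    run()
```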

  15. An Analysis of Pre-Service Science Teachers' Moral Considerations about Environment and Their Attitudes towards Sustainable Environment

    ERIC Educational Resources Information Center

    Alpak-Tunç, Gizem; Yenice, Nilgün

    2017-01-01

    This study aims at analysing the moral considerations of pre-service science teachers about environment and their attitudes towards sustainable environment. It was carried out during the school year of 2014-2015 with 1438 pre-service science teachers attending public universities in the Aegean region of Turkey. The data of the study were collected…

  16. Analyzing Cyber-Physical Threats on Robotic Platforms.

    PubMed

    Ahmad Yousef, Khalil M; AlMajali, Anas; Ghalyon, Salah Abu; Dweik, Waleed; Mohd, Bassam J

    2018-05-21

    Robots are increasingly involved in our daily lives. Fundamental to robots are the communication link (or stream) and the applications that connect the robots to their clients or users. Such communication links and applications are usually supported through a client/server network connection. This networking system is amenable to attack and vulnerable to security threats. Ensuring security and privacy for robotic platforms is thus critical, as failures and attacks could have devastating consequences. In this paper, we examine several cyber-physical security threats that are unique to robotic platforms, specifically the communication link and the applications. The threats target the integrity, availability, and confidentiality security requirements of robotic platforms that use the MobileEyes/arnlServer client/server applications. A robot attack tool (RAT) was developed to perform specific security attacks, and an impact-oriented approach was adopted to analyze the assessment results of the attacks. Tests and experiments were conducted both in a simulation environment and physically on the robot. The simulation environment was based on MobileSim, a software tool for simulating, debugging, and experimenting on MobileRobots/ActivMedia platforms and their environments. The PeopleBot(TM) robot platform was used for the physical experiments. The analysis and testing results show that certain attacks succeeded in breaching the robot security. Integrity attacks modified commands and manipulated the robot behavior. Availability attacks caused Denial-of-Service (DoS), leaving the robot unresponsive to MobileEyes commands. Integrity and availability attacks allowed sensitive information on the robot to be hijacked. To mitigate these security threats, we provide possible mitigation techniques and suggestions to raise awareness of threats on robotic platforms, especially when robots are involved in critical missions or applications.

  17. Analyzing Cyber-Physical Threats on Robotic Platforms †

    PubMed Central

    2018-01-01

    Robots are increasingly involved in our daily lives. Fundamental to robots are the communication link (or stream) and the applications that connect the robots to their clients or users. Such communication links and applications are usually supported through a client/server network connection. This networking system is amenable to attack and vulnerable to security threats. Ensuring security and privacy for robotic platforms is thus critical, as failures and attacks could have devastating consequences. In this paper, we examine several cyber-physical security threats that are unique to robotic platforms, specifically the communication link and the applications. The threats target the integrity, availability, and confidentiality security requirements of robotic platforms that use the MobileEyes/arnlServer client/server applications. A robot attack tool (RAT) was developed to perform specific security attacks, and an impact-oriented approach was adopted to analyze the assessment results of the attacks. Tests and experiments were conducted both in a simulation environment and physically on the robot. The simulation environment was based on MobileSim, a software tool for simulating, debugging, and experimenting on MobileRobots/ActivMedia platforms and their environments. The PeopleBot(TM) robot platform was used for the physical experiments. The analysis and testing results show that certain attacks succeeded in breaching the robot security. Integrity attacks modified commands and manipulated the robot behavior. Availability attacks caused Denial-of-Service (DoS), leaving the robot unresponsive to MobileEyes commands. Integrity and availability attacks allowed sensitive information on the robot to be hijacked. To mitigate these security threats, we provide possible mitigation techniques and suggestions to raise awareness of threats on robotic platforms, especially when robots are involved in critical missions or applications. PMID:29883403

  18. Smart learning services based on smart cloud computing.

    PubMed

    Kim, Svetlana; Song, Su-Mi; Yoon, Yong-Ik

    2011-01-01

    Context-aware technologies can make e-learning services smarter and more efficient, since context-aware services are based on the user's behavior. To add those technologies into existing e-learning services, a service architecture model is needed to transform the existing e-learning environment, which is situation-aware, into an environment that understands context as well. Context-awareness in e-learning may include awareness of the user profile and the terminal context. In this paper, we propose a new notion of service that provides context-awareness to smart learning content in a cloud computing environment. We suggest the elastic four smarts (E4S)--smart pull, smart prospect, smart content, and smart push--concept for cloud services so that smart learning services become possible. The E4S focuses on meeting users' needs by collecting and analyzing their behavior, prospecting future services, building corresponding contents, and delivering the contents through the cloud computing environment. Users' behavior can be collected through mobile devices such as smart phones that have built-in sensors. As a result, the proposed smart e-learning model in a cloud computing environment provides personalized and customized learning services to its users.

  19. Smart Learning Services Based on Smart Cloud Computing

    PubMed Central

    Kim, Svetlana; Song, Su-Mi; Yoon, Yong-Ik

    2011-01-01

    Context-aware technologies can make e-learning services smarter and more efficient, since context-aware services are based on the user’s behavior. To add those technologies into existing e-learning services, a service architecture model is needed to transform the existing e-learning environment, which is situation-aware, into an environment that understands context as well. Context-awareness in e-learning may include awareness of the user profile and the terminal context. In this paper, we propose a new notion of service that provides context-awareness to smart learning content in a cloud computing environment. We suggest the elastic four smarts (E4S)—smart pull, smart prospect, smart content, and smart push—concept for cloud services so that smart learning services become possible. The E4S focuses on meeting users’ needs by collecting and analyzing their behavior, prospecting future services, building corresponding contents, and delivering the contents through the cloud computing environment. Users’ behavior can be collected through mobile devices such as smart phones that have built-in sensors. As a result, the proposed smart e-learning model in a cloud computing environment provides personalized and customized learning services to its users. PMID:22164048

  20. Measuring the food service environment: development and implementation of assessment tools.

    PubMed

    Minaker, Leia M; Raine, Kim D; Cash, Sean B

    2009-01-01

    The food environment is increasingly being implicated in the obesity epidemic, though few reported measures of it exist. In order to assess the impact of the food environment on food intake, valid measures must be developed and tested. The current study describes the development of a food service environment assessment tool and its implementation in a community setting. A descriptive study with mixed qualitative and quantitative methods at a large, North American university campus was undertaken. Measures were developed on the basis of a conceptual model of nutrition environments. Measures of community nutrition environment were the number, type and hours of operation of each food service outlet on campus. Measures of consumer nutrition environment were food availability, food affordability, food promotion and nutrition information availability. Seventy-five food service outlets within the geographic boundaries were assessed. Assessment tools could be implemented in a reasonable amount of time and showed good face and content validity. The food environments were described and measures were grouped so that food service outlet types could be compared in terms of purchasing convenience, cost/value, healthy food promotion and health. Food service outlet types that scored higher in purchasing convenience and cost/value tended to score lower in healthy food promotion and health. This study adds evidence that food service outlet types that are convenient to consumers and supply high value (in terms of calories per dollar) tend to be less health-promoting. Results from this study also suggest the possibility of characterizing the food environment according to the type of food service outlet observed.

  1. Simulated learning environments in speech-language pathology: an Australian response.

    PubMed

    MacBean, Naomi; Theodoros, Deborah; Davidson, Bronwyn; Hill, Anne E

    2013-06-01

    The rising demand for health professionals to service the Australian population is placing pressure on traditional approaches to clinical education in the allied health professions. Existing research suggests that simulated learning environments (SLEs) have the potential to increase student placement capacity while providing quality learning experiences with comparable or superior outcomes to traditional methods. This project investigated the current use of SLEs in Australian speech-language pathology curricula, and the potential future applications of SLEs to the clinical education curricula through an extensive consultative process with stakeholders (all 10 Australian universities offering speech-language pathology programs in 2010, Speech Pathology Australia, members of the speech-language pathology profession, and current student body). Current use of SLEs in speech-language pathology education was found to be limited, with additional resources required to further develop SLEs and maintain their use within the curriculum. Perceived benefits included: students' increased clinical skills prior to workforce placement, additional exposure to specialized areas of speech-language pathology practice, inter-professional learning, and richer observational experiences for novice students. Stakeholders perceived SLEs to have considerable potential for clinical learning. A nationally endorsed recommendation for SLE development and curricula integration was prepared.

  2. A methodology towards virtualisation-based high performance simulation platform supporting multidisciplinary design of complex products

    NASA Astrophysics Data System (ADS)

    Ren, Lei; Zhang, Lin; Tao, Fei; (Luke) Zhang, Xiaolong; Luo, Yongliang; Zhang, Yabin

    2012-08-01

    Multidisciplinary design of complex products leads to an increasing demand for high performance simulation (HPS) platforms. One great challenge is how to achieve highly efficient utilisation of large-scale simulation resources in distributed and heterogeneous environments. This article reports a virtualisation-based methodology for realising an HPS platform. The research is driven by issues concerning large-scale simulation resource deployment and complex simulation environment construction, efficient and transparent utilisation of fine-grained simulation resources, and highly reliable simulation with fault tolerance. A framework of a virtualisation-based simulation platform (VSIM) is first proposed. The article then investigates and discusses key approaches in VSIM, including simulation resource modelling, a method for automatically deploying simulation resources for dynamic construction of the system environment, and a live migration mechanism in case of faults in run-time simulation. Furthermore, the proposed methodology is applied to a multidisciplinary design system for aircraft virtual prototyping, and some experiments are conducted. The experimental results show that the proposed methodology can (1) significantly improve the utilisation of fine-grained simulation resources, (2) greatly reduce deployment time and increase flexibility in simulation environment construction, and (3) achieve fault-tolerant simulation.

  3. UPSS and G2

    NASA Technical Reports Server (NTRS)

    Dito, Scott J.

    2014-01-01

    The Universal Propellant Servicing System (UPSS) is a dedicated mobile launcher propellant delivery method that will minimize danger and complexity in order to allow vehicles to be serviced and ultimately launched from a variety of locations previously not seen fit for space launch. The UPSS/G2 project is the development of a model, a simulation, and ultimately a working application that will control and monitor the cryogenic fluid delivery to the rocket for testing purposes. To accomplish this, the project is using the programming language/environment Gensym G2. The environment is an all-inclusive application that allows development, testing, modeling, and finally operation of the unique application through graphical and programmatic methods. We have learned G2 through classes and trial and error, and we are now in the process of building the application, which will soon be tested on apparatuses here at Kennedy Space Center and eventually on the actual unit. The UPSS will bring near-autonomous control of launches to those that need it; it will also be a great addition to NASA's and KSC's operational viability, offering the opportunity to bring space launches to parts of the world, and within time constraints, once not thought possible.

  4. Virtual Wireless Sensor Networks: Adaptive Brain-Inspired Configuration for Internet of Things Applications

    PubMed Central

    Toyonaga, Shinya; Kominami, Daichi; Murata, Masayuki

    2016-01-01

    Many researchers are devoting attention to the so-called “Internet of Things” (IoT), and wireless sensor networks (WSNs) are regarded as a critical technology for realizing the communication infrastructure of the future, including the IoT. Against this background, virtualization is a crucial technique for the integration of multiple WSNs. Designing virtualized WSNs for actual environments will require further detailed studies. Within the IoT environment, physical networks can undergo dynamic change, and so, many problems exist that could prevent applications from running without interruption when using the existing approaches. In this paper, we show an overall architecture that is suitable for constructing and running virtual wireless sensor network (VWSN) services within a VWSN topology. Our approach provides users with a reliable VWSN network by assigning redundant resources according to each user’s demand and providing a recovery method to incorporate environmental changes. We tested this approach by simulation experiment, with the results showing that the VWSN network is reliable in many cases, although physical deployment of sensor nodes and the modular structure of the VWSN will be quite important to the stability of services within the VWSN topology. PMID:27548177

  5. Virtual Wireless Sensor Networks: Adaptive Brain-Inspired Configuration for Internet of Things Applications.

    PubMed

    Toyonaga, Shinya; Kominami, Daichi; Murata, Masayuki

    2016-08-19

    Many researchers are devoting attention to the so-called "Internet of Things" (IoT), and wireless sensor networks (WSNs) are regarded as a critical technology for realizing the communication infrastructure of the future, including the IoT. Against this background, virtualization is a crucial technique for the integration of multiple WSNs. Designing virtualized WSNs for actual environments will require further detailed studies. Within the IoT environment, physical networks can undergo dynamic change, and so, many problems exist that could prevent applications from running without interruption when using the existing approaches. In this paper, we show an overall architecture that is suitable for constructing and running virtual wireless sensor network (VWSN) services within a VWSN topology. Our approach provides users with a reliable VWSN network by assigning redundant resources according to each user's demand and providing a recovery method to incorporate environmental changes. We tested this approach by simulation experiment, with the results showing that the VWSN network is reliable in many cases, although physical deployment of sensor nodes and the modular structure of the VWSN will be quite important to the stability of services within the VWSN topology.
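    A small sketch of the redundancy-and-recovery idea described in this pair of records: each virtual service requests some number of sensor nodes, the allocator over-provisions according to the user's demand, and when a physical node fails the recovery step promotes a spare so the virtual service keeps running. The allocator structure and the redundancy rule are assumptions for illustration, not the paper's algorithm.

```python
class VwsnAllocator:
    """Assign physical sensor nodes to virtual WSN services, with spares."""

    def __init__(self, physical_nodes: set[str]):
        self.free = set(physical_nodes)
        self.assigned: dict[str, set[str]] = {}  # service name -> its nodes

    def allocate(self, service: str, demand: int, redundancy: int = 1) -> set[str]:
        """Reserve demand + redundancy nodes so the service tolerates failures."""
        want = demand + redundancy
        if len(self.free) < want:
            raise RuntimeError("not enough physical nodes for this demand")
        nodes = {self.free.pop() for _ in range(want)}
        self.assigned[service] = nodes
        return nodes

    def recover(self, failed_node: str) -> None:
        """Environmental change: replace a failed node from the free pool."""
        for nodes in self.assigned.values():
            if failed_node in nodes:
                nodes.discard(failed_node)
                if self.free:
                    nodes.add(self.free.pop())  # promote a spare node
                return

if __name__ == "__main__":
    alloc = VwsnAllocator({f"s{i}" for i in range(10)})
    nodes = alloc.allocate("temperature-map", demand=3, redundancy=2)
    alloc.recover(next(iter(nodes)))  # simulate a physical node failure
    print(alloc.assigned)
```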

  6. 3D positioning scheme exploiting nano-scale IR-UWB orthogonal pulses.

    PubMed

    Kim, Nammoon; Kim, Youngok

    2011-10-04

    In recent years, the development of positioning technology for realizing ubiquitous environments has become one of the most important issues. The Global Positioning System (GPS) is a well-known positioning scheme, but it is not suitable for positioning in indoor/building environments because it is difficult to maintain line-of-sight conditions between the satellites and a GPS receiver. To address this problem, various positioning methods such as RFID, WLAN, ZigBee, and Bluetooth have been developed for indoor positioning. However, the majority of positioning schemes focus on two-dimensional positioning, even though three-dimensional (3D) positioning information is more useful, especially in indoor applications such as smart spaces, U-health services, context-aware services, etc. In this paper, a 3D positioning system based on mutually orthogonal nano-scale impulse radio ultra-wideband (IR-UWB) signals and a cross array antenna is proposed. The proposed scheme uses nano-scale IR-UWB signals providing fine time resolution and a high-resolution multiple signal classification (MUSIC) algorithm for time-of-arrival and angle-of-arrival estimation. The performance is evaluated over various IEEE 802.15.4a channel models, and simulation results show the effectiveness of the proposed scheme.
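    A minimal sketch of the 3D positioning step from times of arrival: with four or more anchors at known positions, subtracting one range equation from the others cancels the quadratic terms, and least squares yields the target position. This standard multilateration formulation is shown for illustration only; the paper's scheme additionally estimates angles of arrival with a cross array antenna and a high-resolution (MUSIC-type) algorithm.

```python
import numpy as np

C = 3.0e8  # assumed propagation speed of the UWB pulse (m/s)

def locate(anchors: np.ndarray, toas: np.ndarray) -> np.ndarray:
    """Least-squares 3D position from TOAs to four or more known anchors.

    For anchor i, |x - p_i|^2 = r_i^2.  Subtracting the first equation
    from the rest leaves the linear system  2 (p_i - p_0) . x = b_i.
    """
    r = C * toas  # measured ranges
    p0, r0 = anchors[0], r[0]
    A = 2.0 * (anchors[1:] - p0)
    b = r0**2 - r[1:]**2 + np.sum(anchors[1:]**2, axis=1) - np.sum(p0**2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

if __name__ == "__main__":
    anchors = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0],
                        [0.0, 10.0, 0.0], [0.0, 0.0, 10.0],
                        [10.0, 10.0, 10.0]])
    target = np.array([3.0, 4.0, 2.0])
    toas = np.linalg.norm(anchors - target, axis=1) / C  # noiseless TOAs
    print(locate(anchors, toas))  # recovers ~ [3. 4. 2.]
```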

  7. QuakeSim: a Web Service Environment for Productive Investigations with Earth Surface Sensor Data

    NASA Astrophysics Data System (ADS)

    Parker, J. W.; Donnellan, A.; Granat, R. A.; Lyzenga, G. A.; Glasscoe, M. T.; McLeod, D.; Al-Ghanmi, R.; Pierce, M.; Fox, G.; Grant Ludwig, L.; Rundle, J. B.

    2011-12-01

    The QuakeSim science gateway environment includes a visually rich portal interface, web service access to data and data processing operations, and the QuakeTables ontology-based database of fault models and sensor data. The integrated tools and services are designed to assist investigators by covering the entire earthquake cycle of strain accumulation and release. The Web interface now includes Drupal-based access to diverse and changing content, with a new ability to access data and data processing directly from the public page, as well as the traditional project management areas that require password access. The system is designed to make initial browsing of fault models and deformation data particularly engaging for new users. Popular data and data processing include GPS time series with data mining techniques to find anomalies in time and space, experimental forecasting methods based on catalogue seismicity, faulted deformation models (both half-space and finite element), and model-based inversion of sensor data. The fault models include the CGS and UCERF 2.0 faults of California and are easily augmented with self-consistent fault models from other regions. The QuakeTables deformation data include the comprehensive set of UAVSAR interferograms as well as a growing collection of satellite InSAR data. Fault interaction simulations based on Virtual California are also being incorporated in the web environment. A sample usage scenario is presented which follows an investigation of UAVSAR data from viewing as an overlay in Google Maps, to selection of an area of interest via a polygon tool, to fast extraction of the relevant correlation and phase information from large data files, to a model inversion of fault slip followed by calculation and display of a synthetic model interferogram.

  8. A Serious Game for Massive Training and Assessment of French Soldiers Involved in Forward Combat Casualty Care (3D-SC1): Development and Deployment.

    PubMed

    Pasquier, Pierre; Mérat, Stéphane; Malgras, Brice; Petit, Ludovic; Queran, Xavier; Bay, Christian; Boutonnet, Mathieu; Jault, Patrick; Ausset, Sylvain; Auroy, Yves; Perez, Jean Paul; Tesnière, Antoine; Pons, François; Mignon, Alexandre

    2016-05-18

    The French Military Health Service has standardized its military prehospital care policy in a "Sauvetage au Combat" (SC) program (Forward Combat Casualty Care). A major part of the SC training program relies on simulations, which are challenging and costly when dealing with more than 80,000 soldiers. In 2014, the French Military Health Service decided to develop and deploy 3D-SC1, a serious game (SG) intended to train and assess soldiers managing the early steps of SC. The purpose of this paper is to describe the creation and production of 3D-SC1 and to present its deployment. A group of 10 experts and the Paris Descartes University Medical Simulation Department spin-off, Medusims, coproduced 3D-SC1. Medusims are virtual medical experiences using 3D real-time videogame technology (creation of an environment and avatars in different scenarios) designed for educational purposes (training and assessment) to simulate medical situations. These virtual situations have been created based on real cases and tested on mannequins by experts. Trainees are asked to manage specific situations according to best practices recommended by SC, and receive a score and personalized feedback regarding their performance. The scenario simulated in the SG is an attack on a patrol of 3 soldiers with an improvised explosive device explosion, as a result of which one soldier dies, one soldier is slightly stunned, and the third soldier experiences a leg amputation and other injuries. This scenario was first tested with mannequins in military simulation centers, before being transformed into a virtual 3D real-time scenario using a multi-support, multi-operating system platform, Unity. Processes of gamification and scoring were applied, with 2 levels of difficulty. A personalized debriefing was integrated at the end of the simulations. The design and production of the SG took 9 months. The deployment, performed in 3 months, has reached 84 of 96 (88%) French Army units, with a total of 818 hours of connection in the first 3 months. The development of 3D-SC1 involved a collaborative platform with interdisciplinary actors from the French Health Service, a university, and the videogame industry. Training each French soldier with simulation exercises and mannequins is challenging and costly. Implementation of SGs into the training program could offer a unique opportunity at a lower cost to improve training and subsequently the real-time performance of soldiers when managing combat casualties; ideally, these should be combined with physical simulations.

  9. Astronauts Greg Harbaugh and Joe Tanner suit up for training in WETF

    NASA Image and Video Library

    1996-06-11

    S96-12830 (10 June 1996) --- Astronaut Joseph R. Tanner, STS-82 mission specialist assigned to extravehicular activity (EVA) involved with the servicing of the Hubble Space Telescope (HST), dons the gloves for his extravehicular mobility unit (EMU) space suit. He is about to be submerged in a 25-ft. deep pool at the Johnson Space Center's weightless environment training facility (WET-F) to participate in simulations for some of the EVA work. Out of frame, astronaut Gregory J. Harbaugh was on the other side of the platform, waiting to join Tanner in the spacewalk rehearsal.

  10. A QoS Framework with Traffic Request in Wireless Mesh Network

    NASA Astrophysics Data System (ADS)

    Fu, Bo; Huang, Hejiao

    In this paper, we consider major issues in ensuring greater Quality of Service (QoS) in Wireless Mesh Networks (WMNs), specifically with regard to reliability and delay. To this end, we use traffic requests to record the QoS requirements of data flows. In order to achieve the required QoS for all data flows efficiently and with high portability, we develop a Network State Update Algorithm. All assumptions, definitions, and algorithms are made exclusively with WMNs in mind, guaranteeing the portability of our framework to various WMN environments. The simulation results confirm the correctness of our framework.

  11. Mission Simulation Toolkit

    NASA Technical Reports Server (NTRS)

    Pisaich, Gregory; Flueckiger, Lorenzo; Neukom, Christian; Wagner, Mike; Buchanan, Eric; Plice, Laura

    2007-01-01

    The Mission Simulation Toolkit (MST) is a flexible software system for autonomy research. It was developed as part of the Mission Simulation Facility (MSF) project, which was started in 2001 to facilitate the development of autonomous planetary robotic missions. Autonomy is a key enabling factor for robotic exploration. There has been a large gap between autonomy software at the research level and software that is ready for insertion into near-term space missions. The MST bridges this gap by providing a simulation framework and a suite of tools for supporting research and maturation of autonomy. MST uses a distributed framework based on the High Level Architecture (HLA) standard. A key feature of the MST framework is the ability to plug in new models to replace existing ones with the same services. This enables significant simulation flexibility, particularly the mixing and control of fidelity level, as sketched below. In addition, the MST provides automatic code generation from robot interfaces defined with the Unified Modeling Language (UML), methods for maintaining synchronization across distributed simulation systems, XML-based robot description, and an environment server. Finally, the MSF supports a number of third-party products, including dynamic models and terrain databases. Although the communication objects and some of the simulation components that are provided with this toolkit are specifically designed for terrestrial surface rovers, the MST can be applied to any other domain, such as aerial, aquatic, or space.
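    The plug-in idea, swapping one model for another that exposes the same services in order to mix fidelity levels, can be sketched with a simple service interface; the class names and the two terrain models below are hypothetical illustrations, not MST's actual API.

```python
import math
from abc import ABC, abstractmethod

class TerrainModel(ABC):
    """Service contract that every plug-in terrain model must provide."""

    @abstractmethod
    def elevation(self, x: float, y: float) -> float: ...

class FlatTerrain(TerrainModel):
    """Low-fidelity stand-in: constant elevation everywhere."""
    def elevation(self, x: float, y: float) -> float:
        return 0.0

class RollingTerrain(TerrainModel):
    """Higher-fidelity stand-in: smooth rolling terrain."""
    def elevation(self, x: float, y: float) -> float:
        return 2.0 * math.sin(0.1 * x) * math.cos(0.1 * y)

class RoverSim:
    """The simulation depends only on the service interface, not the model,
    so fidelity can be mixed and controlled by swapping plug-ins."""
    def __init__(self, terrain: TerrainModel):
        self.terrain = terrain

    def step(self, x: float, y: float) -> float:
        return self.terrain.elevation(x, y)

if __name__ == "__main__":
    for model in (FlatTerrain(), RollingTerrain()):  # plug in either model
        print(type(model).__name__, RoverSim(model).step(5.0, 5.0))
```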

  12. Creating the Thermal Environment for Safely Testing the James Webb Space Telescope at the Johnson Space Center's Chamber A

    NASA Technical Reports Server (NTRS)

    Homan, Jonathan L.; Lauterbach, John; Garcia, Sam

    2016-01-01

    Chamber A is the largest thermal vacuum chamber at the Johnson Space Center and is one of the largest space environment chambers in the world. The chamber is 19.8 m (65 ft) in diameter and 36.6 m (120 ft) tall and is equipped with cryogenic liquid nitrogen panels (shrouds) and gaseous helium shrouds to create a simulated space environment. The chamber was originally built to support testing of the Apollo Service and Command Module for lunar missions, but underwent major modifications to be able to test the James Webb Space Telescope in a simulated deep space environment. To date, seven tests have been performed in preparation for testing the flight optics of the James Webb Space Telescope (JWST). Each test has had a unique thermal profile and set of thermal requirements for cooling down and warming up, controlling contamination, and releasing condensed air. These ranged over temperatures from 335 K to 15 K, with tight uniformity and controllability for maintaining thermal stability and pressure control. One unique requirement for two of the tests was structurally proof loading hardware by creating thermal gradients at specific temperatures. This paper will discuss the thermal requirements and goals of the tests, the original requirements of the chamber thermal systems for planned operation, and how the new requirements were met by the team using the hardware, system flexibility, and engineering creativity. It will also discuss the mistakes and successes in meeting the unique goals, especially the thermal proof loads.

  13. 40 CFR 65.108 - Standards: Connectors in gas/vapor service and in light liquid service.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Standards: Connectors in gas/vapor service and in light liquid service. (a) Compliance schedule. Except as... 40 Protection of Environment 16 2014-07-01 2014-07-01 false Standards: Connectors in gas/vapor service and in light liquid service. 65.108 Section 65.108 Protection of Environment ENVIRONMENTAL...

  14. 40 CFR 65.108 - Standards: Connectors in gas/vapor service and in light liquid service.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Standards: Connectors in gas/vapor service and in light liquid service. (a) Compliance schedule. Except as... 40 Protection of Environment 16 2012-07-01 2012-07-01 false Standards: Connectors in gas/vapor service and in light liquid service. 65.108 Section 65.108 Protection of Environment ENVIRONMENTAL...

  15. 40 CFR 65.108 - Standards: Connectors in gas/vapor service and in light liquid service.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Standards: Connectors in gas/vapor service and in light liquid service. (a) Compliance schedule. Except as... 40 Protection of Environment 15 2010-07-01 2010-07-01 false Standards: Connectors in gas/vapor service and in light liquid service. 65.108 Section 65.108 Protection of Environment ENVIRONMENTAL...

  16. 40 CFR 65.108 - Standards: Connectors in gas/vapor service and in light liquid service.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Standards: Connectors in gas/vapor service and in light liquid service. (a) Compliance schedule. Except as... 40 Protection of Environment 15 2011-07-01 2011-07-01 false Standards: Connectors in gas/vapor service and in light liquid service. 65.108 Section 65.108 Protection of Environment ENVIRONMENTAL...

  17. 40 CFR 65.108 - Standards: Connectors in gas/vapor service and in light liquid service.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Standards: Connectors in gas/vapor service and in light liquid service. (a) Compliance schedule. Except as... 40 Protection of Environment 16 2013-07-01 2013-07-01 false Standards: Connectors in gas/vapor service and in light liquid service. 65.108 Section 65.108 Protection of Environment ENVIRONMENTAL...

  18. A Collaborative Extensible User Environment for Simulation and Knowledge Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freedman, Vicky L.; Lansing, Carina S.; Porter, Ellen A.

    2015-06-01

    In scientific simulation, scientists use measured data to create numerical models, execute simulations, and analyze results from advanced simulators executing on high performance computing platforms. This process usually requires a team of scientists collaborating on data collection, model creation and analysis, and on authorship of publications and data. This paper shows that scientific teams can benefit from a user environment called Akuna that permits subsurface scientists in disparate locations to collaborate on numerical modeling and analysis projects. The Akuna user environment is built on the Velo framework, which provides both a rich client environment for conducting and analyzing simulations and a Web environment for data sharing and annotation. Akuna is an extensible toolset that integrates with Velo and is designed to support any type of simulator. This is achieved through data-driven user interface generation, use of a customizable knowledge management platform, and an extensible framework for simulation execution, monitoring, and analysis. This paper describes how the customized Velo content management system and the Akuna toolset are used to integrate and enhance an effective collaborative research and application environment. The extensible architecture of Akuna is also described, and its usage is demonstrated through the creation and execution of a 3D subsurface simulation.

  19. Discrete event simulation modelling of patient service management with Arena

    NASA Astrophysics Data System (ADS)

    Guseva, Elena; Varfolomeyeva, Tatyana; Efimova, Irina; Movchan, Irina

    2018-05-01

    This paper describes a simulation modeling methodology intended to aid in solving practical problems in researching and analysing complex systems. The paper gives a review of simulation platforms and an example of simulation model development with Arena 15.0 (Rockwell Automation). The provided example of a simulation model for patient service management helps to evaluate the workload of the clinic's doctors, determine the number of general practitioners, surgeons, traumatologists, and other specialized doctors required for patient service, and develop recommendations to ensure timely delivery of medical care and improve the efficiency of the clinic's operation.

  20. A Simulation Approach to Decision Making in IT Service Strategy

    PubMed Central

    2014-01-01

    We propose to use simulation modeling to support decision making within the scope of IT service strategy. Our main contribution is a simulation model that helps service providers analyze how changes in both the service capacity assigned to their customers and the trend in service requests received affect the fulfillment of a business rule associated with the strategic goal of customer satisfaction. This business rule is set in the SLAs that the service provider and its customers agree to, which determine the maximum percentage of service requests that may be abandoned because they have exceeded the allowed waiting time. To illustrate the use and applications of the model, we include some of the experiments conducted and describe our conclusions. PMID:24790583
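    The business rule described above, where requests are abandoned once their wait exceeds the allowed time and the SLA caps the abandoned percentage, can be explored with a small queue simulation. The arrival and service rates, patience threshold, and SLA limit below are illustrative assumptions, not values from the paper.

```python
import heapq
import random

random.seed(7)

def abandoned_fraction(agents: int, arrival_rate: float, service_rate: float,
                       patience: float, n_requests: int = 50_000) -> float:
    """Fraction of requests abandoned because their wait exceeded the
    allowed time (an M/M/c queue with deterministic reneging)."""
    t = 0.0
    free_at = [0.0] * agents  # time at which each agent next becomes free
    heapq.heapify(free_at)
    abandoned = 0
    for _ in range(n_requests):
        t += random.expovariate(arrival_rate)   # next request arrives
        start = max(t, free_at[0])              # earliest possible service start
        if start - t > patience:
            abandoned += 1                      # waited too long: abandoned
            continue
        heapq.heapreplace(free_at, start + random.expovariate(service_rate))
    return abandoned / n_requests

if __name__ == "__main__":
    SLA_MAX_ABANDONED = 0.05  # at most 5% of requests may be abandoned
    for agents in range(4, 9):
        frac = abandoned_fraction(agents, arrival_rate=5.0,
                                  service_rate=1.0, patience=0.5)
        verdict = "meets SLA" if frac <= SLA_MAX_ABANDONED else "violates SLA"
        print(f"{agents} agents: {frac:.3f} abandoned -> {verdict}")
```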

  1. Creating the Deep Space Environment for Testing the James Webb Space Telescope (JWST) at NASA Johnson Space Center's Chamber A

    NASA Technical Reports Server (NTRS)

    Homan, Jonathan L.; Cerimele, Mary P.; Montz, Michael E.; Bachtel, Russell; Speed, John; O'Rear, Patrick

    2013-01-01

    Chamber A is the largest thermal vacuum chamber at the Johnson Space Center and is one of the largest space environment chambers in the world. The chamber is 19.8 m (65 ft) in diameter and 36.6 m (120 ft) tall and is equipped with cryogenic liquid nitrogen panels (shrouds) and gaseous helium shrouds to create a simulated space environment. It was originally designed and built in the mid-1960s to test the Apollo Command and Service Module, and several manned tests were conducted on that spacecraft, contributing to the success of the program. The chamber has been used since that time to test spacecraft active thermal control systems, Shuttle DTO, DOD, and ESA hardware in simulated Low Earth Orbit (LEO) conditions. NASA is now moving from LEO towards exploration of locations with environments approaching those of deep space. Therefore, Chamber A has undergone major modifications to enable it to simulate these deeper space environments. Environmental requirements were driven, and modifications were funded, by the James Webb Space Telescope program, and this telescope, which will orbit Solar/Earth L2, will be the first test article to benefit from the chamber's new capabilities. To accommodate JWST, the Chamber A high vacuum system has been modernized, additional LN2 shrouds have been installed, the liquid nitrogen system has been modified to remove dependency on electrical power and increase its reliability, a new helium shroud/refrigeration system has been installed to create a colder, more stable and uniform heat sink, and the controls have been updated to increase the level of automation and improve operator interfaces. Testing of these major modifications was conducted in August of 2012, and this initial test was very successful, with all major systems exceeding their performance requirements. This paper will outline the changes in overall environmental requirements, discuss the technical design data that was used in the decisions leading to the extensive modifications, and describe the new capabilities of the chamber.

  2. Creating the Deep Space Environment for Testing the James Webb Space Telescope at NASA Johnson Space Center's Chamber A

    NASA Technical Reports Server (NTRS)

    Homan, Jonathan L.; Cerimele, Mary P.; Montz, Michael E.; Bachtel, Russell; Speed, John; O'Rear, Patrick

    2013-01-01

    Chamber A is the largest thermal vacuum chamber at the Johnson Space Center and is one of the largest space environment chambers in the world. The chamber is 19.8 m (65 ft.) in diameter and 36.6 m (120 ft.) tall and is equipped with cryogenic liquid nitrogen panels (shrouds) and gaseous helium shrouds to create a simulated space environment. It was originally designed and built in the mid-1960s to test the Apollo Command and Service Module, and several manned tests were conducted on that spacecraft, contributing to the success of the program. The chamber has been used since that time to test spacecraft active thermal control systems, Shuttle DTO, DOD, and ESA hardware in simulated Low Earth Orbit (LEO) conditions. NASA is now moving from LEO towards exploration of locations with environments approaching those of deep space. Therefore, Chamber A has undergone major modifications to enable it to simulate these deeper space environments. Environmental requirements were driven, and modifications were funded, by the James Webb Space Telescope program, and this telescope, which will orbit Solar/Earth L2, will be the first test article to benefit from the chamber's new capabilities. To accommodate JWST, the Chamber A high vacuum system has been modernized, additional LN2 shrouds have been installed, the liquid nitrogen system has been modified to minimize dependency on electrical power and increase its reliability, a new helium shroud/refrigeration system has been installed to create a colder, more stable and uniform heat sink, and the controls have been updated to increase the level of automation and improve operator interfaces. Testing of these major modifications was conducted in August of 2012, and this initial test was very successful, with all major systems exceeding their performance requirements. This paper will outline the changes in overall environmental requirements, discuss the technical design data that was used in the decisions leading to the extensive modifications, and describe the new capabilities of the chamber.

  3. Creating the Deep Space Environment for Testing the James Webb Space Telescope at the Johnson Space Center's Chamber A

    NASA Technical Reports Server (NTRS)

    Homan, Jonathan L.; Cerimele, Mary P.; Montz, Michael E.

    2012-01-01

    Chamber A is the largest thermal vacuum chamber at the Johnson Space Center and is one of the largest space environment chambers in the world. The chamber is 19.8 m (65 ft) in diameter and 36.6 m (120 ft) tall and is equipped with cryogenic liquid nitrogen panels (shrouds) and gaseous helium shrouds to create a simulated space environment. It was originally designed and built in the mid-1960s to test the Apollo Command and Service Module, and several manned tests were conducted on that spacecraft, contributing to the success of the program. The chamber has been used since that time to test spacecraft active thermal control systems, Shuttle DTO, DOD, and ESA hardware in simulated Low Earth Orbit (LEO) conditions. NASA is now moving from LEO towards exploration of locations with environments approaching those of deep space. Therefore, Chamber A has undergone major modifications to enable it to simulate these deeper space environments. Environmental requirements were driven, and the modifications were funded, by the James Webb Space Telescope program, and this telescope, which will orbit Solar/Earth L2, will be the first test article to benefit from the chamber's new capabilities. To accommodate JWST, the Chamber A high vacuum system has been modernized, additional LN2 shrouds have been installed, the liquid nitrogen system has been modified to remove dependency on electrical power and increase its reliability, a new helium shroud/refrigeration system has been installed to create a colder, more stable and uniform heat sink, and the controls have been updated to increase the level of automation and improve operator interfaces. Testing of these major modifications was conducted in August 2012, and this initial test was very successful, with all major systems exceeding their performance requirements. This paper will outline the changes in the overall environmental requirements, discuss the technical design data that was used in the decisions leading to the extensive modifications, and describe the new capabilities of the chamber.

  4. Secondary Pre-Service Teachers' Perceptions of an Ideal Classroom Environment

    ERIC Educational Resources Information Center

    Bartelheim, Frederick J.; Conn, Daniel R.

    2014-01-01

    The classroom environment can impact students' motivation and engagement, and can influence students' academic learning. In some cases, pre-service teachers' influence on the classroom environment may not always be conducive for student learning. This exploratory study investigated pre-service teachers' perceptions of an ideal classroom…

  5. The Relationship between Pre-Service Science Teachers' Epistemological Beliefs and Preferences for Creating a Constructivist Learning Environment

    ERIC Educational Resources Information Center

    Saylan, Asli; Armagan, Fulya Öner; Bektas, Oktay

    2016-01-01

    The present study investigated the relationship between pre-service science teachers' epistemological beliefs and perceptions of a constructivist learning environment. The Turkish version of Constructivist Learning Environment Survey and Schommer's Epistemological Belief Questionnaire were administered to 531 pre-service science teachers attending…

  6. Virtualized Multi-Mission Operations Center (vMMOC) and its Cloud Services

    NASA Technical Reports Server (NTRS)

    Ido, Haisam Kassim

    2017-01-01

    This presentation will cover the current and future, technical and organizational, opportunities and challenges of virtualizing a multi-mission operations center. The full deployment of Goddard Space Flight Center's (GSFC) Virtualized Multi-Mission Operations Center (vMMOC) is nearly complete. The Space Science Mission Operations (SSMO) organization's spacecraft ACE, Fermi, LRO, MMS (4), OSIRIS-REx, SDO, SOHO, Swift, and Wind are in the process of being fully migrated to the vMMOC. The benefits of the vMMOC will be the normalization and standardization of IT services, mission operations, maintenance, and development, as well as ancillary services and policies such as collaboration tools, change management systems, and IT security. The vMMOC will also provide operational efficiencies regarding hardware, IT domain expertise, training, maintenance, and support. The presentation will also cover SSMO's secure Situational Awareness Dashboard, presented in an integrated, fleet-centric, cloud-based web services fashion. Additionally, the SSMO Telemetry as a Service (TaaS) will be covered, which allows authorized users and processes to access telemetry for the entire SSMO fleet, and for the entirety of each spacecraft's history. Both services leverage cloud services in a secure FISMA High and FedRAMP environment, and also leverage distributed object stores to house and provide the telemetry. The services are also in the process of leveraging the cloud computing services' elasticity and horizontal scalability. In the design phase is Navigation as a Service (NaaS), which will provide a standardized, efficient, and normalized service for the fleet's space flight dynamics operations. Additional future services that may be considered are Ground Segment as a Service (GSaaS), Telemetry and Command as a Service (TCaaS), Flight Software Simulation as a Service, etc.
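
    Purely as an illustration of how a fleet-wide "Telemetry as a Service" query might look from a client's point of view, the sketch below invents an endpoint, path, parameters, and response shape; the abstract does not publish the actual SSMO TaaS interface, so every name here is a placeholder.

        import json
        import urllib.request

        # Hypothetical illustration only: the endpoint, path, query parameters,
        # and response shape are invented; the real SSMO TaaS interface is not
        # described in this abstract.
        BASE = "https://taas.example.nasa.gov/v1"  # placeholder URL

        def fetch_telemetry(spacecraft, mnemonic, start, end):
            """Request one mnemonic's history for one spacecraft over a time range."""
            url = f"{BASE}/telemetry/{spacecraft}/{mnemonic}?start={start}&end={end}"
            with urllib.request.urlopen(url) as resp:  # authentication omitted
                return json.load(resp)

        # e.g. fetch_telemetry("SDO", "BATT_V",
        #                      "2017-01-01T00:00:00Z", "2017-01-02T00:00:00Z")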

  7. Are Cloud Environments Ready for Scientific Applications?

    NASA Astrophysics Data System (ADS)

    Mehrotra, P.; Shackleford, K.

    2011-12-01

    Cloud computing environments are becoming widely available in both the commercial and government sectors. They provide flexibility to rapidly provision resources in order to meet dynamic and changing computational needs without the customers incurring capital expenses and/or requiring technical expertise. Clouds also provide reliable access to resources even though the end-user may not have in-house expertise for acquiring or operating such resources. Consolidation and pooling in a cloud environment allow organizations to achieve economies of scale in provisioning or procuring computing resources and services. Because of these and other benefits, many businesses and organizations are migrating their business applications (e.g., websites, social media, and business processes) to cloud environments, as evidenced by the commercial success of offerings such as the Amazon EC2. In this paper, we focus on the feasibility of utilizing cloud environments for scientific workloads and workflows of particular interest to NASA scientists and engineers. There is a wide spectrum of such technical computations. These applications range from small workstation-level computations to mid-range computing requiring small clusters to high-performance simulations requiring supercomputing systems with high-bandwidth/low-latency interconnects. Data-centric applications manage and manipulate large data sets such as satellite observational data and/or data previously produced by high-fidelity modeling and simulation computations. Most of the applications are run in batch mode with static resource requirements. However, there do exist situations that have dynamic demands, particularly ones with public-facing interfaces providing information to the general public, collaborators and partners, as well as to internal NASA users. In the last few months we have been studying the suitability of cloud environments for NASA's technical and scientific workloads. We have ported several applications to multiple cloud environments, including NASA's Nebula environment, Amazon's EC2, Magellan at NERSC, and SGI's Cyclone system. We critically examined the performance of the applications on these systems. We also collected information on the usability of these cloud environments. In this talk, we will present the results of our study, focusing on the efficacy of using clouds for NASA's scientific applications.

  8. Ground Simulation of an Autonomous Satellite Rendezvous and Tracking System Using Dual Robotic Systems

    NASA Technical Reports Server (NTRS)

    Trube, Matthew J.; Hyslop, Andrew M.; Carignan, Craig R.; Easley, Joseph W.

    2012-01-01

    A hardware-in-the-loop ground system was developed for simulating a robotic servicer spacecraft tracking a target satellite at short range. A relative navigation sensor package, "Argon", is mounted on the end-effector of a Fanuc 430 manipulator, which functions as the base platform of the robotic spacecraft servicer. Machine vision algorithms estimate the pose of the target spacecraft, mounted on a Rotopod R-2000 platform, and relay the solution to a simulation of the servicer spacecraft running in "Freespace", which performs guidance, navigation, and control functions, integrates the dynamics, and issues motion commands to a Fanuc platform controller so that it tracks the simulated servicer spacecraft. Results will be reviewed for several satellite motion scenarios at different ranges. Key words: robotics, satellite, servicing, guidance, navigation, tracking, control, docking.
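
    The cycle described above reduces to a sense-simulate-command loop. The schematic sketch below shows that data flow with stub functions; all names are placeholders, not the actual Argon or Freespace interfaces.

        # Schematic sketch of the hardware-in-the-loop cycle described above.
        # All function names are placeholders; the actual Argon sensor and
        # Freespace simulation interfaces are not described in this abstract.

        def estimate_pose(image):
            """Machine-vision stand-in: return target pose (x, y, z, roll, pitch, yaw)."""
            return (0.0, 0.0, 5.0, 0.0, 0.0, 0.0)

        def guidance_navigation_control(pose):
            """Dynamics-simulation stand-in: command that closes toward a 1 m standoff."""
            x, y, z, _roll, _pitch, _yaw = pose
            return (0.1 * x, 0.1 * y, 0.1 * (z - 1.0))

        def send_to_manipulator(command):
            # In the real facility this would drive the Fanuc platform controller.
            print("motion command:", command)

        for frame in range(3):  # one iteration per sensor frame
            pose = estimate_pose(image=None)
            send_to_manipulator(guidance_navigation_control(pose))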

  9. PELS (Planetary Environmental Liquid Simulator): a new type of simulation facility to study extraterrestrial aqueous environments.

    PubMed

    Martin, Derek; Cockell, Charles S

    2015-02-01

    Investigations of other planetary bodies, including Mars and icy moons such as Enceladus and Europa, show that they may have hosted aqueous environments in the past and may do so even today. Therefore, a major challenge in astrobiology is to build facilities that will allow us to study the geochemistry and habitability of these extraterrestrial environments. Here, we describe a simulation facility (PELS: Planetary Environmental Liquid Simulator) with the capability for liquid input and output that allows for the study of such environments. The facility, containing six separate sample vessels, allows for statistical replication of samples. Control of pressure, gas composition, UV irradiation conditions, and temperature allows for the precise replication of aqueous conditions, including subzero brines under martian atmospheric conditions. A sample acquisition system allows for the collection of both liquid and solid samples from within the chamber without breaking the atmospheric conditions, enabling detailed studies of the geochemical evolution and habitability of past and present extraterrestrial environments. The facility we describe represents a new frontier in planetary simulation: continuous flow-through simulation of extraterrestrial aqueous environments.

  10. Methodology for testing infrared focal plane arrays in simulated nuclear radiation environments

    NASA Astrophysics Data System (ADS)

    Divita, E. L.; Mills, R. E.; Koch, T. L.; Gordon, M. J.; Wilcox, R. A.; Williams, R. E.

    1992-07-01

    This paper summarizes a test methodology for focal plane array (FPA) testing that can be used for benign (clear) and radiation environments, and describes the use of custom dewars and integrated test equipment in an example environment. The test methodology, consistent with American Society for Testing and Materials (ASTM) standards, is presented for the total accumulated gamma dose, transient dose rate, gamma flux, and neutron fluence environments. The merits and limitations of using cobalt-60 for gamma environment simulations and of using various fast-neutron reactors and neutron sources for neutron simulations are presented. Test result examples are presented to demonstrate test data acquisition and FPA parameter performance under different measurement conditions and environmental simulations.

  11. 40 CFR 63.173 - Standards: Agitators in gas/vapor service and in light liquid service.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Equipment Leaks § 63.173 Standards: Agitators in gas/vapor service and in light liquid service. (a)(1) Each... 40 Protection of Environment 9 2011-07-01 2011-07-01 false Standards: Agitators in gas/vapor service and in light liquid service. 63.173 Section 63.173 Protection of Environment ENVIRONMENTAL...

  12. 40 CFR 63.1028 - Agitators in gas and vapor service and in light liquid service standards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Standards § 63.1028 Agitators in gas and vapor service and in light liquid service standards. (a) Compliance... 40 Protection of Environment 11 2013-07-01 2013-07-01 false Agitators in gas and vapor service and in light liquid service standards. 63.1028 Section 63.1028 Protection of Environment ENVIRONMENTAL...

  13. 40 CFR 63.1028 - Agitators in gas and vapor service and in light liquid service standards.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Standards § 63.1028 Agitators in gas and vapor service and in light liquid service standards. (a) Compliance... 40 Protection of Environment 11 2014-07-01 2014-07-01 false Agitators in gas and vapor service and in light liquid service standards. 63.1028 Section 63.1028 Protection of Environment ENVIRONMENTAL...

  14. 40 CFR 63.1008 - Connectors in gas and vapor service and in light liquid service standards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... § 63.1008 Connectors in gas and vapor service and in light liquid service standards. (a) Compliance... 40 Protection of Environment 11 2013-07-01 2013-07-01 false Connectors in gas and vapor service and in light liquid service standards. 63.1008 Section 63.1008 Protection of Environment ENVIRONMENTAL...

  15. 40 CFR 63.173 - Standards: Agitators in gas/vapor service and in light liquid service.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Equipment Leaks § 63.173 Standards: Agitators in gas/vapor service and in light liquid service. (a)(1) Each... 40 Protection of Environment 10 2012-07-01 2012-07-01 false Standards: Agitators in gas/vapor service and in light liquid service. 63.173 Section 63.173 Protection of Environment ENVIRONMENTAL...

  16. 40 CFR 264.1057 - Standards: Valves in gas/vapor service or in light liquid -service.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...: Valves in gas/vapor service or in light liquid -service. (a) Each valve in gas/vapor or light liquid... 40 Protection of Environment 26 2014-07-01 2014-07-01 false Standards: Valves in gas/vapor service or in light liquid -service. 264.1057 Section 264.1057 Protection of Environment ENVIRONMENTAL...

  17. 40 CFR 63.1025 - Valves in gas and vapor service and in light liquid service standards.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Standards § 63.1025 Valves in gas and vapor service and in light liquid service standards. (a) Compliance... 40 Protection of Environment 11 2014-07-01 2014-07-01 false Valves in gas and vapor service and in light liquid service standards. 63.1025 Section 63.1025 Protection of Environment ENVIRONMENTAL...

  18. 40 CFR 264.1057 - Standards: Valves in gas/vapor service or in light liquid -service.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...: Valves in gas/vapor service or in light liquid -service. (a) Each valve in gas/vapor or light liquid... 40 Protection of Environment 26 2011-07-01 2011-07-01 false Standards: Valves in gas/vapor service or in light liquid -service. 264.1057 Section 264.1057 Protection of Environment ENVIRONMENTAL...

  19. 40 CFR 63.1028 - Agitators in gas and vapor service and in light liquid service standards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Standards § 63.1028 Agitators in gas and vapor service and in light liquid service standards. (a) Compliance... 40 Protection of Environment 10 2011-07-01 2011-07-01 false Agitators in gas and vapor service and in light liquid service standards. 63.1028 Section 63.1028 Protection of Environment ENVIRONMENTAL...

  20. 40 CFR 63.1008 - Connectors in gas and vapor service and in light liquid service standards.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... § 63.1008 Connectors in gas and vapor service and in light liquid service standards. (a) Compliance... 40 Protection of Environment 11 2014-07-01 2014-07-01 false Connectors in gas and vapor service and in light liquid service standards. 63.1008 Section 63.1008 Protection of Environment ENVIRONMENTAL...

  1. 40 CFR 65.106 - Standards: Valves in gas/vapor service and in light liquid service.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Standards: Valves in gas/vapor service and in light liquid service. (a) Compliance schedule. (1) The owner... 40 Protection of Environment 15 2010-07-01 2010-07-01 false Standards: Valves in gas/vapor service and in light liquid service. 65.106 Section 65.106 Protection of Environment ENVIRONMENTAL PROTECTION...

  2. 40 CFR 63.1025 - Valves in gas and vapor service and in light liquid service standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Standards § 63.1025 Valves in gas and vapor service and in light liquid service standards. (a) Compliance... 40 Protection of Environment 10 2010-07-01 2010-07-01 false Valves in gas and vapor service and in light liquid service standards. 63.1025 Section 63.1025 Protection of Environment ENVIRONMENTAL...

  3. 40 CFR 65.109 - Standards: Agitators in gas/vapor service and in light liquid service.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Standards: Agitators in gas/vapor service and in light liquid service. (a) Compliance schedule. The owner or... 40 Protection of Environment 16 2012-07-01 2012-07-01 false Standards: Agitators in gas/vapor service and in light liquid service. 65.109 Section 65.109 Protection of Environment ENVIRONMENTAL...

  4. 40 CFR 264.1057 - Standards: Valves in gas/vapor service or in light liquid -service.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...: Valves in gas/vapor service or in light liquid -service. (a) Each valve in gas/vapor or light liquid... 40 Protection of Environment 25 2010-07-01 2010-07-01 false Standards: Valves in gas/vapor service or in light liquid -service. 264.1057 Section 264.1057 Protection of Environment ENVIRONMENTAL...

  5. 40 CFR 63.173 - Standards: Agitators in gas/vapor service and in light liquid service.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Equipment Leaks § 63.173 Standards: Agitators in gas/vapor service and in light liquid service. (a)(1) Each... 40 Protection of Environment 10 2014-07-01 2014-07-01 false Standards: Agitators in gas/vapor service and in light liquid service. 63.173 Section 63.173 Protection of Environment ENVIRONMENTAL...

  6. 40 CFR 63.1009 - Agitators in gas and vapor service and in light liquid service standards.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... § 63.1009 Agitators in gas and vapor service and in light liquid service standards. (a) Compliance... 40 Protection of Environment 11 2014-07-01 2014-07-01 false Agitators in gas and vapor service and in light liquid service standards. 63.1009 Section 63.1009 Protection of Environment ENVIRONMENTAL...

  7. 40 CFR 265.1057 - Standards: Valves in gas/vapor service or in light liquid service.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ....1057 Standards: Valves in gas/vapor service or in light liquid service. (a) Each valve in gas/vapor or... 40 Protection of Environment 26 2011-07-01 2011-07-01 false Standards: Valves in gas/vapor service or in light liquid service. 265.1057 Section 265.1057 Protection of Environment ENVIRONMENTAL...

  8. 40 CFR 264.1057 - Standards: Valves in gas/vapor service or in light liquid -service.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...: Valves in gas/vapor service or in light liquid -service. (a) Each valve in gas/vapor or light liquid... 40 Protection of Environment 27 2012-07-01 2012-07-01 false Standards: Valves in gas/vapor service or in light liquid -service. 264.1057 Section 264.1057 Protection of Environment ENVIRONMENTAL...

  9. 40 CFR 63.1009 - Agitators in gas and vapor service and in light liquid service standards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... § 63.1009 Agitators in gas and vapor service and in light liquid service standards. (a) Compliance... 40 Protection of Environment 10 2011-07-01 2011-07-01 false Agitators in gas and vapor service and in light liquid service standards. 63.1009 Section 63.1009 Protection of Environment ENVIRONMENTAL...

  10. 40 CFR 265.1057 - Standards: Valves in gas/vapor service or in light liquid service.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ....1057 Standards: Valves in gas/vapor service or in light liquid service. (a) Each valve in gas/vapor or... 40 Protection of Environment 25 2010-07-01 2010-07-01 false Standards: Valves in gas/vapor service or in light liquid service. 265.1057 Section 265.1057 Protection of Environment ENVIRONMENTAL...

  11. 40 CFR 63.1025 - Valves in gas and vapor service and in light liquid service standards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Standards § 63.1025 Valves in gas and vapor service and in light liquid service standards. (a) Compliance... 40 Protection of Environment 10 2011-07-01 2011-07-01 false Valves in gas and vapor service and in light liquid service standards. 63.1025 Section 63.1025 Protection of Environment ENVIRONMENTAL...

  12. 40 CFR 63.1008 - Connectors in gas and vapor service and in light liquid service standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... § 63.1008 Connectors in gas and vapor service and in light liquid service standards. (a) Compliance... 40 Protection of Environment 10 2010-07-01 2010-07-01 false Connectors in gas and vapor service and in light liquid service standards. 63.1008 Section 63.1008 Protection of Environment ENVIRONMENTAL...

  13. 40 CFR 65.109 - Standards: Agitators in gas/vapor service and in light liquid service.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Standards: Agitators in gas/vapor service and in light liquid service. (a) Compliance schedule. The owner or... 40 Protection of Environment 15 2011-07-01 2011-07-01 false Standards: Agitators in gas/vapor service and in light liquid service. 65.109 Section 65.109 Protection of Environment ENVIRONMENTAL...

  14. 40 CFR 63.1028 - Agitators in gas and vapor service and in light liquid service standards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Standards § 63.1028 Agitators in gas and vapor service and in light liquid service standards. (a) Compliance... 40 Protection of Environment 11 2012-07-01 2012-07-01 false Agitators in gas and vapor service and in light liquid service standards. 63.1028 Section 63.1028 Protection of Environment ENVIRONMENTAL...

  15. 40 CFR 65.109 - Standards: Agitators in gas/vapor service and in light liquid service.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Standards: Agitators in gas/vapor service and in light liquid service. (a) Compliance schedule. The owner or... 40 Protection of Environment 15 2010-07-01 2010-07-01 false Standards: Agitators in gas/vapor service and in light liquid service. 65.109 Section 65.109 Protection of Environment ENVIRONMENTAL...

  16. 40 CFR 265.1057 - Standards: Valves in gas/vapor service or in light liquid service.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ....1057 Standards: Valves in gas/vapor service or in light liquid service. (a) Each valve in gas/vapor or... 40 Protection of Environment 27 2012-07-01 2012-07-01 false Standards: Valves in gas/vapor service or in light liquid service. 265.1057 Section 265.1057 Protection of Environment ENVIRONMENTAL...

  17. 40 CFR 63.173 - Standards: Agitators in gas/vapor service and in light liquid service.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Equipment Leaks § 63.173 Standards: Agitators in gas/vapor service and in light liquid service. (a)(1) Each... 40 Protection of Environment 9 2010-07-01 2010-07-01 false Standards: Agitators in gas/vapor service and in light liquid service. 63.173 Section 63.173 Protection of Environment ENVIRONMENTAL...

  18. 40 CFR 265.1057 - Standards: Valves in gas/vapor service or in light liquid service.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ....1057 Standards: Valves in gas/vapor service or in light liquid service. (a) Each valve in gas/vapor or... 40 Protection of Environment 27 2013-07-01 2013-07-01 false Standards: Valves in gas/vapor service or in light liquid service. 265.1057 Section 265.1057 Protection of Environment ENVIRONMENTAL...

  19. 40 CFR 63.174 - Standards: Connectors in gas/vapor service and in light liquid service.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Equipment Leaks § 63.174 Standards: Connectors in gas/vapor service and in light liquid service. (a) The... 40 Protection of Environment 10 2013-07-01 2013-07-01 false Standards: Connectors in gas/vapor service and in light liquid service. 63.174 Section 63.174 Protection of Environment ENVIRONMENTAL...

  20. 40 CFR 65.106 - Standards: Valves in gas/vapor service and in light liquid service.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Standards: Valves in gas/vapor service and in light liquid service. (a) Compliance schedule. (1) The owner... 40 Protection of Environment 16 2014-07-01 2014-07-01 false Standards: Valves in gas/vapor service and in light liquid service. 65.106 Section 65.106 Protection of Environment ENVIRONMENTAL PROTECTION...

  1. 40 CFR 63.174 - Standards: Connectors in gas/vapor service and in light liquid service.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Equipment Leaks § 63.174 Standards: Connectors in gas/vapor service and in light liquid service. (a) The... 40 Protection of Environment 10 2012-07-01 2012-07-01 false Standards: Connectors in gas/vapor service and in light liquid service. 63.174 Section 63.174 Protection of Environment ENVIRONMENTAL...

  2. 40 CFR 65.106 - Standards: Valves in gas/vapor service and in light liquid service.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Standards: Valves in gas/vapor service and in light liquid service. (a) Compliance schedule. (1) The owner... 40 Protection of Environment 15 2011-07-01 2011-07-01 false Standards: Valves in gas/vapor service and in light liquid service. 65.106 Section 65.106 Protection of Environment ENVIRONMENTAL PROTECTION...

  3. 40 CFR 63.1008 - Connectors in gas and vapor service and in light liquid service standards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... § 63.1008 Connectors in gas and vapor service and in light liquid service standards. (a) Compliance... 40 Protection of Environment 11 2012-07-01 2012-07-01 false Connectors in gas and vapor service and in light liquid service standards. 63.1008 Section 63.1008 Protection of Environment ENVIRONMENTAL...

  4. 40 CFR 63.1009 - Agitators in gas and vapor service and in light liquid service standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... § 63.1009 Agitators in gas and vapor service and in light liquid service standards. (a) Compliance... 40 Protection of Environment 10 2010-07-01 2010-07-01 false Agitators in gas and vapor service and in light liquid service standards. 63.1009 Section 63.1009 Protection of Environment ENVIRONMENTAL...

  5. 40 CFR 65.106 - Standards: Valves in gas/vapor service and in light liquid service.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Standards: Valves in gas/vapor service and in light liquid service. (a) Compliance schedule. (1) The owner... 40 Protection of Environment 16 2013-07-01 2013-07-01 false Standards: Valves in gas/vapor service and in light liquid service. 65.106 Section 65.106 Protection of Environment ENVIRONMENTAL PROTECTION...

  6. 40 CFR 63.1009 - Agitators in gas and vapor service and in light liquid service standards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... § 63.1009 Agitators in gas and vapor service and in light liquid service standards. (a) Compliance... 40 Protection of Environment 11 2012-07-01 2012-07-01 false Agitators in gas and vapor service and in light liquid service standards. 63.1009 Section 63.1009 Protection of Environment ENVIRONMENTAL...

  7. 40 CFR 63.1028 - Agitators in gas and vapor service and in light liquid service standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Standards § 63.1028 Agitators in gas and vapor service and in light liquid service standards. (a) Compliance... 40 Protection of Environment 10 2010-07-01 2010-07-01 false Agitators in gas and vapor service and in light liquid service standards. 63.1028 Section 63.1028 Protection of Environment ENVIRONMENTAL...

  8. 40 CFR 63.1025 - Valves in gas and vapor service and in light liquid service standards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Standards § 63.1025 Valves in gas and vapor service and in light liquid service standards. (a) Compliance... 40 Protection of Environment 11 2013-07-01 2013-07-01 false Valves in gas and vapor service and in light liquid service standards. 63.1025 Section 63.1025 Protection of Environment ENVIRONMENTAL...

  9. 40 CFR 63.174 - Standards: Connectors in gas/vapor service and in light liquid service.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Equipment Leaks § 63.174 Standards: Connectors in gas/vapor service and in light liquid service. (a) The... 40 Protection of Environment 10 2014-07-01 2014-07-01 false Standards: Connectors in gas/vapor service and in light liquid service. 63.174 Section 63.174 Protection of Environment ENVIRONMENTAL...

  10. 40 CFR 65.109 - Standards: Agitators in gas/vapor service and in light liquid service.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Standards: Agitators in gas/vapor service and in light liquid service. (a) Compliance schedule. The owner or... 40 Protection of Environment 16 2013-07-01 2013-07-01 false Standards: Agitators in gas/vapor service and in light liquid service. 65.109 Section 65.109 Protection of Environment ENVIRONMENTAL...

  11. 40 CFR 63.174 - Standards: Connectors in gas/vapor service and in light liquid service.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Equipment Leaks § 63.174 Standards: Connectors in gas/vapor service and in light liquid service. (a) The... 40 Protection of Environment 9 2011-07-01 2011-07-01 false Standards: Connectors in gas/vapor service and in light liquid service. 63.174 Section 63.174 Protection of Environment ENVIRONMENTAL...

  12. 40 CFR 63.174 - Standards: Connectors in gas/vapor service and in light liquid service.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Equipment Leaks § 63.174 Standards: Connectors in gas/vapor service and in light liquid service. (a) The... 40 Protection of Environment 9 2010-07-01 2010-07-01 false Standards: Connectors in gas/vapor service and in light liquid service. 63.174 Section 63.174 Protection of Environment ENVIRONMENTAL...

  13. 40 CFR 63.1009 - Agitators in gas and vapor service and in light liquid service standards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... § 63.1009 Agitators in gas and vapor service and in light liquid service standards. (a) Compliance... 40 Protection of Environment 11 2013-07-01 2013-07-01 false Agitators in gas and vapor service and in light liquid service standards. 63.1009 Section 63.1009 Protection of Environment ENVIRONMENTAL...

  14. 40 CFR 265.1057 - Standards: Valves in gas/vapor service or in light liquid service.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ....1057 Standards: Valves in gas/vapor service or in light liquid service. (a) Each valve in gas/vapor or... 40 Protection of Environment 26 2014-07-01 2014-07-01 false Standards: Valves in gas/vapor service or in light liquid service. 265.1057 Section 265.1057 Protection of Environment ENVIRONMENTAL...

  15. 40 CFR 63.173 - Standards: Agitators in gas/vapor service and in light liquid service.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Equipment Leaks § 63.173 Standards: Agitators in gas/vapor service and in light liquid service. (a)(1) Each... 40 Protection of Environment 10 2013-07-01 2013-07-01 false Standards: Agitators in gas/vapor service and in light liquid service. 63.173 Section 63.173 Protection of Environment ENVIRONMENTAL...

  16. 40 CFR 264.1057 - Standards: Valves in gas/vapor service or in light liquid -service.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...: Valves in gas/vapor service or in light liquid -service. (a) Each valve in gas/vapor or light liquid... 40 Protection of Environment 27 2013-07-01 2013-07-01 false Standards: Valves in gas/vapor service or in light liquid -service. 264.1057 Section 264.1057 Protection of Environment ENVIRONMENTAL...

  17. 40 CFR 65.106 - Standards: Valves in gas/vapor service and in light liquid service.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Standards: Valves in gas/vapor service and in light liquid service. (a) Compliance schedule. (1) The owner... 40 Protection of Environment 16 2012-07-01 2012-07-01 false Standards: Valves in gas/vapor service and in light liquid service. 65.106 Section 65.106 Protection of Environment ENVIRONMENTAL PROTECTION...

  18. 40 CFR 63.1008 - Connectors in gas and vapor service and in light liquid service standards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... § 63.1008 Connectors in gas and vapor service and in light liquid service standards. (a) Compliance... 40 Protection of Environment 10 2011-07-01 2011-07-01 false Connectors in gas and vapor service and in light liquid service standards. 63.1008 Section 63.1008 Protection of Environment ENVIRONMENTAL...

  19. 40 CFR 63.1025 - Valves in gas and vapor service and in light liquid service standards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Standards § 63.1025 Valves in gas and vapor service and in light liquid service standards. (a) Compliance... 40 Protection of Environment 11 2012-07-01 2012-07-01 false Valves in gas and vapor service and in light liquid service standards. 63.1025 Section 63.1025 Protection of Environment ENVIRONMENTAL...

  20. 40 CFR 65.109 - Standards: Agitators in gas/vapor service and in light liquid service.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Standards: Agitators in gas/vapor service and in light liquid service. (a) Compliance schedule. The owner or... 40 Protection of Environment 16 2014-07-01 2014-07-01 false Standards: Agitators in gas/vapor service and in light liquid service. 65.109 Section 65.109 Protection of Environment ENVIRONMENTAL...

  1. Theory of quantized systems: formal basis for DEVS/HLA distributed simulation environment

    NASA Astrophysics Data System (ADS)

    Zeigler, Bernard P.; Lee, J. S.

    1998-08-01

    In the context of a DARPA ASTT project, we are developing an HLA-compliant distributed simulation environment based on the DEVS formalism. This environment will provide a user-friendly, high-level tool-set for developing interoperable discrete and continuous simulation models. One application is the study of contract-based predictive filtering. This paper presents a new approach to predictive filtering based on a process called 'quantization' to reduce state update transmission. Quantization, which generates state updates only at quantum level crossings, abstracts a sender model into a DEVS representation. This affords an alternative, efficient approach to embedding continuous models within distributed discrete event simulations. Applications of quantization to message traffic reduction are discussed. The theory has been validated by DEVSJAVA simulations of test cases. It will be subject to further testing in actual distributed simulations using the DEVS/HLA modeling and simulation environment.
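
    As a minimal sketch of the quantization idea (assuming a simple scalar state and a fixed quantum; this is not the paper's DEVS/HLA implementation), an update is transmitted only when the state crosses a quantum-level boundary:

        # Quantization-based update filtering: emit a state update only at
        # quantum level crossings, reducing message traffic.

        def quantize(value, quantum):
            """Return the index of the quantum level containing `value`."""
            return int(value // quantum)

        def filter_updates(states, quantum=1.0):
            """Yield (step, state) pairs only when the quantum level changes."""
            last_level = None
            for step, state in enumerate(states):
                level = quantize(state, quantum)
                if level != last_level:  # boundary crossing: emit an update
                    last_level = level
                    yield step, state

        # A slowly varying trajectory generates far fewer updates than steps.
        trajectory = [0.1 * t for t in range(50)]
        updates = list(filter_updates(trajectory, quantum=1.0))
        print(f"{len(updates)} updates sent for {len(trajectory)} steps")

    For this toy trajectory only five updates are transmitted across fifty steps, which is the kind of message-traffic reduction the abstract describes.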

  2. Investigation of Propagation in Foliage Using Simulation Techniques

    DTIC Science & Technology

    2011-12-01

    The simulation models provide a rough approximation to radiowave propagation in an actual rainforest environment, and the simulated results are used to characterize propagation behavior in that environment.

  3. Evaluating Implementations of Service Oriented Architecture for Sensor Network via Simulation

    DTIC Science & Technology

    2011-04-01

    Thesis (Computer Science), Rensselaer Polytechnic Institute, Troy, New York, April 2011; adviser: Boleslaw Szymanski. The simulation supports distributed and centralized composition with a type hierarchy and multiple-service, statically-located nodes in a two-dimensional space.

  4. Three-dimensional simulation and auto-stereoscopic 3D display of the battlefield environment based on the particle system algorithm

    NASA Astrophysics Data System (ADS)

    Ning, Jiwei; Sang, Xinzhu; Xing, Shujun; Cui, Huilong; Yan, Binbin; Yu, Chongxiu; Dou, Wenhua; Xiao, Liquan

    2016-10-01

    The army's combat training is very important, and simulation of the real battlefield environment is of great significance. Two-dimensional information can no longer meet current demands. With the development of virtual reality technology, three-dimensional (3D) simulation of the battlefield environment is possible. In the simulation of a 3D battlefield environment, in addition to the terrain, combat personnel, and combat tools, the simulation of explosions, fire, smoke, and other effects is also very important, since these effects enhance the senses of realism and immersion in the 3D scene. However, these special effects are irregular objects, which makes them difficult to simulate with general geometry. Therefore, the simulation of irregular objects has long been a difficult and active research topic in computer graphics. Here, the particle system algorithm is used to simulate irregular objects. We design simulations of explosions, fire, and smoke based on the particle system and apply them to the battlefield 3D scene. In addition, the battlefield 3D scene is presented on a glasses-free 3D display using an algorithm based on a GPU 4K super-multiview real-time 3D video transformation method. At the same time, with a human-computer interaction function, we ultimately realized a glasses-free 3D display of a more realistic and immersive simulated 3D battlefield environment.
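
    A minimal particle-system sketch in the spirit of the approach described above is shown below; the emitter behavior, lifetimes, and forces are illustrative assumptions rather than the authors' implementation.

        import random

        # Minimal particle system: each particle carries position, velocity,
        # and lifetime; expired particles are respawned at the emitter.

        class Particle:
            def __init__(self, emitter=(0.0, 0.0, 0.0)):
                self.pos = list(emitter)
                # Random initial velocity produces the expanding, irregular shape
                self.vel = [random.uniform(-1, 1),
                            random.uniform(0, 2),
                            random.uniform(-1, 1)]
                self.life = random.uniform(0.5, 2.0)  # seconds until respawn

        def step(particles, dt=0.016, gravity=-9.8):
            for p in particles:
                p.life -= dt
                if p.life <= 0:
                    p.__init__()  # respawn at the emitter
                    continue
                p.vel[1] += gravity * dt  # swap sign for buoyant smoke
                for i in range(3):
                    p.pos[i] += p.vel[i] * dt

        particles = [Particle() for _ in range(1000)]
        for _ in range(60):  # simulate one second at 60 frames per second
            step(particles)

    In a renderer, each particle would typically be drawn as a textured point sprite whose color and opacity fade with remaining lifetime.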

  5. 48 CFR 237.102-71 - Limitation on service contracts for military flight simulators.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... contracts for military flight simulators. 237.102-71 Section 237.102-71 Federal Acquisition Regulations... flight simulators. (a) Definitions. As used in this subsection— (1) Military flight simulator means any... Law 110-181, DoD is prohibited from entering into a service contract to acquire a military flight...

  6. 48 CFR 237.102-71 - Limitation on service contracts for military flight simulators.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... contracts for military flight simulators. 237.102-71 Section 237.102-71 Federal Acquisition Regulations... flight simulators. (a) Definitions. As used in this subsection— (1) Military flight simulator means any... 110-181, DoD is prohibited from entering into a service contract to acquire a military flight...

  7. 48 CFR 237.102-71 - Limitation on service contracts for military flight simulators.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... contracts for military flight simulators. 237.102-71 Section 237.102-71 Federal Acquisition Regulations... flight simulators. (a) Definitions. As used in this subsection— (1) Military flight simulator means any... 110-181, DoD is prohibited from entering into a service contract to acquire a military flight...

  8. 48 CFR 237.102-71 - Limitation on service contracts for military flight simulators.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... contracts for military flight simulators. 237.102-71 Section 237.102-71 Federal Acquisition Regulations... flight simulators. (a) Definitions. As used in this subsection— (1) Military flight simulator means any... 110-181, DoD is prohibited from entering into a service contract to acquire a military flight...

  9. 48 CFR 237.102-71 - Limitation on service contracts for military flight simulators.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... contracts for military flight simulators. 237.102-71 Section 237.102-71 Federal Acquisition Regulations... flight simulators. (a) Definitions. As used in this subsection— (1) Military flight simulator means any... 110-181, DoD is prohibited from entering into a service contract to acquire a military flight...

  10. A Review of Computer Simulations in Teacher Education

    ERIC Educational Resources Information Center

    Bradley, Elizabeth Gates; Kendall, Brittany

    2014-01-01

    Computer simulations can provide guided practice for a variety of situations that pre-service teachers would not frequently experience during their teacher education studies. Pre-service teachers can use simulations to turn the knowledge they have gained in their coursework into real experience. Teacher simulation training has come a long way over…

  11. Cascaded neural networks for sequenced propagation estimation, multiuser detection, and adaptive radio resource control of third-generation wireless networks for multimedia services

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    1999-03-01

    A hybrid neural network approach is presented to estimate radio propagation characteristics and multiuser interference and to evaluate their combined impact on throughput, latency and information loss in third-generation (3G) wireless networks. The latter three performance parameters influence the quality of service (QoS) for multimedia services under consideration for 3G networks. These networks, based on a hierarchical architecture of overlaying macrocells on top of micro- and picocells, are planned to operate in mobile urban and indoor environments with service demands emanating from circuit-switched, packet-switched and satellite-based traffic sources. Candidate radio interfaces for these networks employ a form of wideband CDMA in 5-MHz and wider-bandwidth channels, with possible asynchronous operation of the mobile subscribers. The proposed neural network (NN) architecture allocates network resources to optimize QoS metrics. Parameters of the radio propagation channel are estimated, followed by control of an adaptive antenna array at the base station to minimize interference, and then joint multiuser detection is performed at the base station receiver. These adaptive processing stages are implemented as a sequence of NN techniques that provide their estimates as inputs to a final-stage Kohonen self-organizing feature map (SOFM). The SOFM optimizes the allocation of available network resources to satisfy QoS requirements for variable-rate voice, data and video services. As the first stage of the sequence, a modified feed-forward multilayer perceptron NN is trained on the pilot signals of the mobile subscribers to estimate the parameters of shadowing, multipath fading and delays on the uplinks. A recurrent NN (RNN) forms the second stage to control base stations' adaptive antenna arrays to minimize intra-cell interference. The third stage is based on a Hopfield NN (HNN), modified to detect multiple users on the uplink radio channels to mitigate multiaccess interference, control carrier-sense multiple-access (CSMA) protocols, and refine call handoff procedures. In the final stage, the Kohonen SOFM, operating in a hybrid continuous and discrete space, adaptively allocates the resources of antenna-based cell sectorization, activity monitoring, variable-rate coding, power control, handoff and caller admission to meet user demands for various multimedia services at minimum QoS levels. The performance of the NN cascade is evaluated through simulation of a candidate 3G wireless network using W-CDMA parameters in a small-cell environment. The simulated network consists of a representative number of cells. Mobile users with typical movement patterns are assumed. QoS requirements for different classes of multimedia services are considered. The proposed method is shown to provide relatively low probability of new call blocking and handoff dropping, while maintaining efficient use of the network's radio resources.
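
    A minimal Kohonen SOFM, the technique named as the cascade's final stage, can be sketched as follows; the toy dimensions, one-dimensional lattice, and training schedule are assumptions, not the paper's configuration.

        import numpy as np

        # Minimal Kohonen self-organizing feature map (SOFM) sketch.
        # Toy setup: 16 resource-allocation cells on a 1-D lattice, 4 QoS inputs.
        rng = np.random.default_rng(0)
        n_nodes, n_features = 16, 4
        weights = rng.random((n_nodes, n_features))
        grid = np.arange(n_nodes)  # 1-D lattice coordinates

        def train(samples, epochs=100, lr0=0.5, radius0=4.0):
            global weights
            for t in range(epochs):
                lr = lr0 * (1 - t / epochs)              # decaying learning rate
                radius = radius0 * (1 - t / epochs) + 1e-9  # shrinking neighborhood
                for x in samples:
                    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
                    dist = np.abs(grid - bmu)            # lattice distance to BMU
                    h = np.exp(-(dist ** 2) / (2 * radius ** 2))
                    weights += lr * h[:, None] * (x - weights)

        # Each sample stands in for a normalized (rate, delay, loss, power) demand.
        train(rng.random((200, n_features)))

    After training, each input demand vector maps to its best-matching node, which can be associated with a discrete resource-allocation decision.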

  12. The Lewis Research Center geomagnetic substorm simulation facility

    NASA Technical Reports Server (NTRS)

    Berkopec, F. D.; Stevens, N. J.; Sturman, J. C.

    1977-01-01

    A simulation facility was established to determine the response of typical spacecraft materials to the geomagnetic substorm environment and to evaluate instrumentation that will be used to monitor spacecraft system response to this environment. Space environment conditions simulated include the thermal-vacuum conditions of space, solar simulation, geomagnetic substorm electron fluxes and energies, and the low energy plasma environment. Measurements for spacecraft material tests include sample currents, sample surface potentials, and the cumulative number of discharges. Discharge transients are measured by means of current probes and oscilloscopes and are verified by a photomultiplier. Details of this facility and typical operating procedures are presented.

  13. Evaluation of the effects of solar radiation on glass. [space environment simulation

    NASA Technical Reports Server (NTRS)

    Firestone, R. F.; Harada, Y.

    1979-01-01

    The degradation of glass used on space structures due to electromagnetic and particulate radiation in a space environment was evaluated. The space environment was defined and a simulated space exposure apparatus was constructed. Four optical materials were exposed to simulated solar and particulate radiation in a space environment. Sapphire and fused silica experienced little change in transmittance, while optical crown glass and ultra low expansion glass darkened appreciably. Specimen selection and preparation, exposure conditions, and the effect of simulated exposure are discussed. A selective bibliography of the effect of radiation on glass is included.

  14. Service-Oriented Security Framework for Remote Medical Services in the Internet of Things Environment

    PubMed Central

    Lee, Jae Dong; Yoon, Tae Sik; Chung, Seung Hyun

    2015-01-01

    Objectives Remote medical services have been expanding globally, and this expansion is steadily increasing. It has had many positive effects, including medical access convenience, timeliness of service, and cost reduction. The speed of research and development in remote medical technology has been gradually accelerating. Therefore, it is expected to expand to enable various high-tech information and communications technology (ICT)-based remote medical services. However, the current state lacks an appropriate security framework that can resolve security issues centered on the Internet of things (IoT) environment that will be utilized significantly in telemedicine. Methods This study developed a medical service-oriented framework for secure remote medical services, possessing flexibility regarding new service and security elements through its service-oriented structure. First, the common architecture of remote medical services is defined. Next, medical-oriented security threats and requirements within the IoT environment are identified. Finally, we propose a "service-oriented security framework for remote medical services" based on previous work and requirements for secure remote medical services in the IoT. Results The proposed framework is a secure framework based on service-oriented cases in the medical environment. A comparative analysis focusing on the security elements (confidentiality, integrity, availability, privacy) was conducted, and the analysis results demonstrate the security of the proposed framework for remote medical services with IoT. Conclusions The proposed framework has a service-oriented structure. It can support dynamic security elements in accordance with demands related to new remote medical services which will be diversely generated in the IoT environment. We anticipate that it will enable secure services to be provided that can guarantee confidentiality, integrity, and availability for all, including patients, non-patients, and medical staff. PMID:26618034

  15. Service-Oriented Security Framework for Remote Medical Services in the Internet of Things Environment.

    PubMed

    Lee, Jae Dong; Yoon, Tae Sik; Chung, Seung Hyun; Cha, Hyo Soung

    2015-10-01

    Remote medical services have been expanding globally, and this expansion is steadily increasing. It has had many positive effects, including medical access convenience, timeliness of service, and cost reduction. The speed of research and development in remote medical technology has been gradually accelerating. Therefore, it is expected to expand to enable various high-tech information and communications technology (ICT)-based remote medical services. However, the current state lacks an appropriate security framework that can resolve security issues centered on the Internet of things (IoT) environment that will be utilized significantly in telemedicine. This study developed a medical service-oriented framework for secure remote medical services, possessing flexibility regarding new service and security elements through its service-oriented structure. First, the common architecture of remote medical services is defined. Next, medical-oriented security threats and requirements within the IoT environment are identified. Finally, we propose a "service-oriented security framework for remote medical services" based on previous work and requirements for secure remote medical services in the IoT. The proposed framework is a secure framework based on service-oriented cases in the medical environment. A comparative analysis focusing on the security elements (confidentiality, integrity, availability, privacy) was conducted, and the analysis results demonstrate the security of the proposed framework for remote medical services with IoT. The proposed framework has a service-oriented structure. It can support dynamic security elements in accordance with demands related to new remote medical services which will be diversely generated in the IoT environment. We anticipate that it will enable secure services to be provided that can guarantee confidentiality, integrity, and availability for all, including patients, non-patients, and medical staff.

  16. Virtual operating room for team training in surgery.

    PubMed

    Abelson, Jonathan S; Silverman, Elliott; Banfelder, Jason; Naides, Alexandra; Costa, Ricardo; Dakin, Gregory

    2015-09-01

    We proposed to develop a novel virtual reality (VR) team training system. The objective of this study was to determine the feasibility of creating a VR operating room to simulate a surgical crisis scenario and evaluate the simulator for construct and face validity. We modified ICE STORM (Integrated Clinical Environment; Systems, Training, Operations, Research, Methods), a VR-based system capable of modeling a variety of health care personnel and environments. ICE STORM was used to simulate a standardized surgical crisis scenario, whereby participants needed to correct 4 elements responsible for loss of laparoscopic visualization. The construct and face validity of the environment were measured. Thirty-three participants completed the VR simulation. Attendings completed the simulation in less time than trainees (271 vs 201 seconds, P = .032). Participants felt the training environment was realistic and had a favorable impression of the simulation. All participants felt the workload of the simulation was low. Creation of a VR-based operating room for team training in surgery is feasible and can afford a realistic team training environment.

  17. An object oriented Python interface for atomistic simulations

    NASA Astrophysics Data System (ADS)

    Hynninen, T.; Himanen, L.; Parkkinen, V.; Musso, T.; Corander, J.; Foster, A. S.

    2016-01-01

    Programmable simulation environments allow one to monitor and control calculations efficiently and automatically before, during, and after runtime. Environments directly accessible from a programming language can be interfaced with powerful external analysis tools and extensions to enhance the functionality of the core program, and by incorporating a flexible object-based structure, such environments make building and analysing computational setups intuitive. In this work, we present a classical atomistic force field with an interface written in the Python language. The program is an extension for an existing object-based atomistic simulation environment.
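
    A hypothetical sketch of what an object-based atomistic setup can look like is given below; the class names and the toy Lennard-Jones calculator are illustrative and are not the actual API of the package described above.

        import math

        # Illustrative object-based atomistic setup (hypothetical class names).

        class Atom:
            def __init__(self, symbol, position):
                self.symbol = symbol
                self.position = position

        class LennardJones:
            """Toy pair potential: V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
            def __init__(self, epsilon=1.0, sigma=1.0):
                self.epsilon, self.sigma = epsilon, sigma

            def energy(self, atoms):
                total = 0.0
                for i in range(len(atoms)):
                    for j in range(i + 1, len(atoms)):
                        r = math.dist(atoms[i].position, atoms[j].position)
                        sr6 = (self.sigma / r) ** 6
                        total += 4 * self.epsilon * (sr6 ** 2 - sr6)
                return total

        # Building and analysing a setup stays a few lines of ordinary Python.
        system = [Atom("Ar", (0.0, 0.0, 0.0)), Atom("Ar", (0.0, 0.0, 1.12))]
        print(LennardJones().energy(system))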

  18. Psychological and physiological human responses to simulated and real environments: A comparison between Photographs, 360° Panoramas, and Virtual Reality.

    PubMed

    Higuera-Trujillo, Juan Luis; López-Tarruella Maldonado, Juan; Llinares Millán, Carmen

    2017-11-01

    Psychological research into human factors frequently uses simulations to study the relationship between human behaviour and the environment. Their validity depends on their similarity with the physical environments. This paper aims to validate three environmental-simulation display formats: photographs, 360° panoramas, and virtual reality. To do this, we compared the psychological and physiological responses evoked by the simulated environment set-ups to those from a physical environment set-up; we also assessed the users' sense of presence. Analyses show that 360° panoramas offer the closest-to-reality results according to the participants' psychological responses, and virtual reality according to the physiological responses. Correlations between the feeling of presence and physiological and other psychological responses were also observed. These results may be of interest to researchers using currently available environmental-simulation technologies to replicate the experience of physical environments.

  19. An integrative assessment of the commercial air transportation system via adaptive agents

    NASA Astrophysics Data System (ADS)

    Lim, Choon Giap

    The overarching research objective is to address the tightly-coupled interactions between the demand-side and supply-side components of the United States Commercial Air Transportation System (CATS) in a time-variant environment. A system-of-systems perspective is adopted, where the scope is extended beyond the National Airspace System (NAS) level to the National Transportation System (NTS) level to capture the intermodal and multimodal relationships between the NTS stakeholders. The Agent-Based Modeling and Simulation technique is employed, where the NTS/NAS is treated as an integrated Multi-Agent System comprising consumer and service-provider agents, representing the demand-side and supply-side components, respectively. Successful calibration and validation of both model components against observable real-world data resulted in a CATS simulation tool where the aviation demand is estimated from socioeconomic and demographic properties of the population instead of merely based on enplanement growth multipliers. This valuable achievement enabled a 20-year outlook simulation study to investigate the implications of a global fuel price hike on the airline industry and the U.S. CATS at large. Simulation outcomes revealed insights into airline competitive behaviors and the subsequent responses from transportation consumers.
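
    A toy sketch of the demand-supply coupling described above is shown below; the agent rules, fares, and thresholds are illustrative assumptions, not the study's calibrated model.

        import random

        # Consumer agents choose among service-provider agents; providers then
        # adjust fares in response to realized demand, closing the feedback loop.

        class Airline:
            def __init__(self, fare):
                self.fare, self.passengers = fare, 0

        class Traveler:
            def __init__(self, budget):
                self.budget = budget
            def choose(self, airlines):
                affordable = [a for a in airlines if a.fare <= self.budget]
                if affordable:
                    min(affordable, key=lambda a: a.fare).passengers += 1

        airlines = [Airline(fare) for fare in (200, 250, 300)]
        travelers = [Traveler(random.uniform(150, 400)) for _ in range(1000)]

        for year in range(20):  # 20-year outlook, one step per year
            for a in airlines:
                a.passengers = 0
            for t in travelers:
                t.choose(airlines)
            for a in airlines:  # supply-side response: raise fares when demand is high
                a.fare *= 1.05 if a.passengers > 400 else 0.95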

  20. Surgical skills simulation in trauma and orthopaedic training.

    PubMed

    Stirling, Euan R B; Lewis, Thomas L; Ferran, Nicholas A

    2014-12-19

    Changing patterns of health care delivery and the rapid evolution of orthopaedic surgical techniques have made it increasingly difficult for trainees to develop expertise in their craft. Working-hour restrictions and a drive towards senior-led care demand that proficiency be gained in a shorter period of time, whilst requiring a greater skill set than in the past. The resulting conflict between service provision and training has necessitated the development of alternative methods in order to compensate for the reduction in 'hands-on' experience. Simulation training provides the opportunity to develop surgical skills in a controlled environment whilst minimising risks to patient safety, operating theatre usage and financial expenditure. Many options for simulation exist within orthopaedics, from cadaveric or prosthetic models, to arthroscopic simulators, to advanced virtual reality and three-dimensional software tools. There are limitations to this form of training, but it has significant potential for trainees to achieve competence in procedures prior to real-life practice. The evidence for its direct transferability to operating theatre performance is limited, but there are clear benefits such as increasing trainee confidence and familiarity with equipment. With progressively improving methods of simulation available, it is likely to become more important in the ongoing and future training and assessment of orthopaedic surgeons.

  1. Using IMPRINT to Guide Experimental Design with Simulated Task Environments

    DTIC Science & Technology

    2015-06-18

    Using IMPRINT to Guide Experimental Design with Simulated Task Environments. Thesis, AFIT-ENG-MS-15-J-052, Air Force Institute of Technology, June 2015. Distribution Statement A: approved for public release; distribution unlimited.

  2. An integrative review and evidence-based conceptual model of the essential components of pre-service education.

    PubMed

    Johnson, Peter; Fogarty, Linda; Fullerton, Judith; Bluestone, Julia; Drake, Mary

    2013-08-28

    With decreasing global resources, a pervasive critical shortage of skilled health workers, and a growing disease burden in many countries, the need to maximize the effectiveness and efficiency of pre-service education (PSE) in low- and middle-income countries has never been greater. We performed an integrative review of the literature to analyse factors contributing to quality pre-service education and created a conceptual model that shows the links between essential elements of quality pre-service education and desired outcomes. The literature contains a rich discussion of factors that contribute to quality pre-service education, including the following: (1) targeted recruitment of qualified students from rural and low-resource settings appears to be a particularly effective strategy for retaining students in vulnerable communities after graduation; (2) evidence supports a competency-based curriculum, but there is no clear evidence supporting specific curricular models such as problem-based learning; (3) the health workforce must be well prepared to address national health priorities; (4) the role of the preceptor and preceptors' skills in clinical teaching, identifying student learning needs, assessing student learning, and prioritizing and time management are particularly important; (5) modern, Internet-enabled medical libraries, skills and simulation laboratories, and computer laboratories to support computer-aided instruction are elements of infrastructure meriting strong consideration; and (6) all students must receive sufficient clinical practice opportunities in high-quality clinical learning environments in order to graduate with the competencies required for effective practice. Few studies make a link between PSE and impact on the health system. Nevertheless, it is logical that the production of a trained and competent staff through high-quality pre-service education and continuing professional development activities is the foundation required to achieve the desired health outcomes. Professional regulation, deployment practices, workplace environment upon graduation, and other service delivery contextual factors were analysed as influencing factors that affect educational outcomes and health impact. Our model for pre-service education reflects the investments that must be made by countries into programmes capable of leading to graduates who are competent for the health occupations and professions at the time of their entry into the workforce.

  3. Around Marshall

    NASA Image and Video Library

    1978-07-21

    Once the United States' space program had progressed from Earth's orbit into outer space, the prospect of building and maintaining a permanent presence in space was realized. To accomplish this feat, NASA launched a temporary workstation, Skylab, to discover the effects of low gravity and weightlessness on the human body, and also to develop tools and equipment that would be needed in the future to build and maintain a more permanent space station. The structures, techniques, and work schedules had to be carefully designed to fit this unique construction site. The components had to be lightweight for transport into orbit, yet durable. The station also had to be made with removable parts for easy servicing and repairs by astronauts. All of the tools necessary for service and repairs had to be designed for easy manipulation by a suited astronaut. Construction methods had to be efficient due to the limited time the astronauts could remain outside their controlled environment. In light of all the specific needs for this project, an environment on Earth had to be developed that could simulate a low gravity atmosphere. A Neutral Buoyancy Simulator (NBS) was constructed by NASA's Marshall Space Flight Center (MSFC) in 1968. Since then, NASA scientists have used this facility to understand how humans work best in low gravity and also to provide information about the different kinds of structures that can be built. Included in the plans for the space station was a space telescope. This telescope would be attached to the space station and directed towards outer space. Astronomers hoped that the space telescope would provide a look at space that is impossible to see from Earth because of Earth's atmosphere and other man-made influences. Pictured is a large structure that is being used as the antenna base for the space telescope.

  4. Around Marshall

    NASA Image and Video Library

    1980-05-06

    Once the United States' space program had progressed from Earth's orbit into outer space, the prospect of building and maintaining a permanent presence in space was realized. To accomplish this feat, NASA launched a temporary workstation, Skylab, to discover the effects of low gravity and weightlessness on the human body, and also to develop tools and equipment that would be needed in the future to build and maintain a more permanent space station. The structures, techniques, and work schedules had to be carefully designed to fit this unique construction site. The components had to be lightweight for transport into orbit, yet durable. The station also had to be made with removable parts for easy servicing and repairs by astronauts. All of the tools necessary for service and repairs had to be designed for easy manipulation by a suited astronaut. Construction methods had to be efficient due to the limited time the astronauts could remain outside their controlled environment. In light of all the specific needs for this project, an environment on Earth had to be developed that could simulate a low gravity atmosphere. A Neutral Buoyancy Simulator (NBS) was constructed by NASA's Marshall Space Flight Center (MSFC) in 1968. Since then, NASA scientists have used this facility to understand how humans work best in low gravity and also to provide information about the different kinds of structures that can be built. As part of this experimentation, the Experimental Assembly of Structures in Extravehicular Activity (EASE) project was developed as a joint effort between MSFC and the Massachusetts Institute of Technology (MIT). The EASE experiment required that crew members assemble small components to form larger components, working from the payload bay of the space shuttle. Pictured is an entire unit that has been constructed and is sitting in the bottom of a mock-up shuttle cargo bay pallet.

  5. Around Marshall

    NASA Image and Video Library

    1980-01-07

    Once the United States' space program had progressed from Earth's orbit into outer space, the prospect of building and maintaining a permanent presence in space was realized. To accomplish this feat, NASA launched a temporary workstation, Skylab, to discover the effects of low gravity and weightlessness on the human body, and also to develop tools and equipment that would be needed in the future to build and maintain a more permanent space station. The structures, techniques, and work schedules had to be carefully designed to fit this unique construction site. The components had to be lightweight for transport into orbit, yet durable. The station also had to be made with removable parts for easy servicing and repairs by astronauts. All of the tools necessary for service and repairs had to be designed for easy manipulation by a suited astronaut. Construction methods had to be efficient due to the limited time the astronauts could remain outside their controlled environment. In light of all the specific needs for this project, an environment on Earth had to be developed that could simulate a low gravity atmosphere. A Neutral Buoyancy Simulator (NBS) was constructed by NASA's Marshall Space Flight Center (MSFC) in 1968. Since then, NASA scientists have used this facility to understand how humans work best in low gravity and also to provide information about the different kinds of structures that can be built. Pictured is a Massachusetts Institute of Technology (MIT) student working in a spacesuit on the Experimental Assembly of Structures in Extravehicular Activity (EASE) project, which was developed as a joint effort between MSFC and MIT. The EASE experiment required that crew members assemble small components to form larger components, working from the payload bay of the space shuttle. The MIT student in this photo is assembling two six-beam tetrahedrons.

  6. Around Marshall

    NASA Image and Video Library

    1980-02-27

    Once the United States' space program had progressed from Earth's orbit into outer space, the prospect of building and maintaining a permanent presence in space was realized. To accomplish this feat, NASA launched a temporary workstation, Skylab, to discover the effects of low gravity and weightlessness on the human body, and also to develop tools and equipment that would be needed in the future to build and maintain a more permanent space station. The structures, techniques, and work schedules had to be carefully designed to fit this unique construction site. The components had to be lightweight for transport into orbit, yet durable. The station also had to be made with removable parts for easy servicing and repairs by astronauts. All of the tools necessary for service and repairs had to be designed for easy manipulation by a suited astronaut. Construction methods had to be efficient due to the limited time the astronauts could remain outside their controlled environment. In light of all the specific needs for this project, an environment on Earth had to be developed that could simulate a low gravity atmosphere. A Neutral Buoyancy Simulator (NBS) was constructed by NASA's Marshall Space Flight Center (MSFC) in 1968. Since then, NASA scientists have used this facility to understand how humans work best in low gravity and also to provide information about the different kinds of structures that can be built. Pictured is a Massachusetts Institute of Technology (MIT) student working in a spacesuit on the Experimental Assembly of Structures in Extravehicular Activity (EASE) project, which was developed as a joint effort between MSFC and MIT. The EASE experiment required that crew members assemble small components to form larger components, working from the payload bay of the space shuttle. The MIT student in this photo is assembling two six-beam tetrahedrons.

  7. Around Marshall

    NASA Image and Video Library

    1980-07-08

    Once the United States' space program had progressed from Earth's orbit into outer space, the prospect of building and maintaining a permanent presence in space was realized. To accomplish this feat, NASA launched a temporary workstation, Skylab, to discover the effects of low gravity and weightlessness on the human body, and also to develop tools and equipment that would be needed in the future to build and maintain a more permanent space station. The structures, techniques, and work schedules had to be carefully designed to fit this unique construction site. The components had to be lightweight for transport into orbit, yet durable. The station also had to be made with removable parts for easy servicing and repairs by astronauts. All of the tools necessary for service and repairs had to be designed for easy manipulation by a suited astronaut. Construction methods had to be efficient due to the limited time the astronauts could remain outside their controlled environment. In light of all the specific needs for this project, an environment on Earth had to be developed that could simulate a low gravity atmosphere. A Neutral Buoyancy Simulator (NBS) was constructed by NASA's Marshall Space Flight Center (MSFC) in 1968. Since then, NASA scientists have used this facility to understand how humans work best in low gravity and also to provide information about the different kinds of structures that can be built. Pictured is a Massachusetts Institute of Technology (MIT) student working in a spacesuit on the Experimental Assembly of Structures in Extravehicular Activity (EASE) project, which was developed as a joint effort between MSFC and MIT. The EASE experiment required that crew members assemble small components to form larger components, working from the payload bay of the space shuttle.

  8. Changes in land-uses and ecosystem services under multi-scenarios simulation.

    PubMed

    Liu, Jingya; Li, Jing; Qin, Keyu; Zhou, Zixiang; Yang, Xiaonan; Li, Ting

    2017-05-15

    China's social economy has been developing rapidly for more than 30 years under a succession of reforms and policies. This development has drawn so heavily on many natural resources that the ecosystem can no longer self-regulate, severely damaging the balance of the ecosystem itself and, in turn, degrading people's living environments. Our research is based on a combination of climate scenarios presented in the fifth report of the Intergovernmental Panel on Climate Change (IPCC) and policy scenarios, including the one-child policy and carbon tax policy. We adopted the Land Change Modeler of the IDRISI software to simulate and analyze land-use change under 16 future scenarios in 2050. Carbon sequestration, soil conservation and water yields were quantified, based on those land-use maps and different ecosystem models. We also analyzed trade-offs and synergies among the ecosystem services and discussed why those interactions happened. The results show that: (1) Global climate change has a strong influence on future changes in land-use. (2) Carbon sequestration, water yield and soil conservation have a mutual relationship in the Guanzhong-Tianshui economic region. (3) Climate change and implementation of policy have a conspicuous impact on the changes in ecosystem services in the Guanzhong-Tianshui economic region. This paper can be used as a reference for further related research, and provides a reliable basis for achieving the sustainable development of the ecosystem. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. The Fast Debris Evolution Model

    NASA Astrophysics Data System (ADS)

    Lewis, Hugh G.; Swinerd, Graham; Newland, Rebecca; Saunders, Arrun

    The 'Particles-in-a-box' (PIB) model introduced by Talent (1992) removed the need for computer-intensive Monte Carlo simulation to predict the gross characteristics of an evolving debris environment. The PIB model was described using a differential equation that allows the stability of the low Earth orbit (LEO) environment to be tested by a straightforward analysis of the equation's coefficients. As part of an ongoing research effort to investigate more efficient approaches to evolutionary modelling and to develop a suite of educational tools, a new PIB model has been developed. The model, entitled Fast Debris Evolution (FaDE), employs a first-order differential equation to describe the rate at which new objects (≥ 10 cm) are added to and removed from the environment. Whilst Talent (1992) based the collision theory for the PIB approach on collisions between gas particles and adopted specific values for the parameters of the model from a number of references, the form and coefficients of the FaDE model equations can be inferred from the outputs of future projections produced by high-fidelity models, such as the DAMAGE model. The FaDE model has been implemented as a client-side, web-based service using JavaScript embedded within an HTML document. Due to the simple nature of the algorithm, FaDE can deliver the results of future projections immediately in a graphical format, with complete user control over key simulation parameters. Historical and future projections for the ≥ 10 cm low Earth orbit (LEO) debris environment under a variety of different scenarios are possible, including business as usual, no future launches, post-mission disposal and remediation. A selection of results is presented with comparisons against predictions made using the DAMAGE environment model. The results demonstrate that the FaDE model captures time series of collisions and object counts comparable to those predicted by DAMAGE in several scenarios. Further, and perhaps more importantly, its speed and flexibility allow the user to explore and understand the evolution of the space debris environment.
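
    A minimal sketch of the 'particles-in-a-box' form that FaDE builds on may help: the population of objects evolves as dN/dt = A + B·N + C·N², with a deposition term A, a removal term B·N and a collision term C·N². The coefficients and starting population below are illustrative assumptions, not FaDE's fitted values.

        # Forward-Euler integration of a PIB-style debris equation:
        # dN/dt = A + B*N + C*N**2. All coefficients are illustrative,
        # not the values FaDE infers from DAMAGE projections.

        def pib_step(n, a=300.0, b=-0.02, c=1e-8, dt=1.0):
            """Advance the object count n by one year."""
            return n + (a + b * n + c * n * n) * dt

        population = 12000.0          # assumed starting LEO count (>= 10 cm)
        for year in range(200):       # 200-year projection
            population = pib_step(population)

        print(f"objects after 200 years: {population:.0f}")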

  10. LHC@Home: a BOINC-based volunteer computing infrastructure for physics studies at CERN

    NASA Astrophysics Data System (ADS)

    Barranco, Javier; Cai, Yunhai; Cameron, David; Crouch, Matthew; Maria, Riccardo De; Field, Laurence; Giovannozzi, Massimo; Hermes, Pascal; Høimyr, Nils; Kaltchev, Dobrin; Karastathis, Nikos; Luzzi, Cinzia; Maclean, Ewen; McIntosh, Eric; Mereghetti, Alessio; Molson, James; Nosochkov, Yuri; Pieloni, Tatiana; Reid, Ivan D.; Rivkin, Lenny; Segal, Ben; Sjobak, Kyrre; Skands, Peter; Tambasco, Claudia; Veken, Frederik Van der; Zacharov, Igor

    2017-12-01

    The LHC@Home BOINC project has provided computing capacity for numerical simulations to researchers at CERN since 2004, and since 2011 has been expanded with a wider range of applications. The traditional CERN accelerator physics simulation code SixTrack enjoys continuing volunteer support, and thanks to virtualisation a number of applications from the LHC experiment collaborations and particle theory groups have joined the consolidated LHC@Home BOINC project. This paper addresses the challenges related to traditional and virtualized applications in the BOINC environment, and how volunteer computing has been integrated into the overall computing strategy of the laboratory through the consolidated LHC@Home service. Thanks to the computing power provided by volunteers joining LHC@Home, numerous accelerator beam physics studies have been carried out, yielding an improved understanding of charged particle dynamics in the CERN Large Hadron Collider (LHC) and its future upgrades. The main results are highlighted in this paper.

  11. Protocol Support for a New Satellite-Based Airspace Communication Network

    NASA Technical Reports Server (NTRS)

    Shang, Yadong; Hadjitheodosiou, Michael; Baras, John

    2004-01-01

    We recommend suitable transport protocols for an aeronautical network supporting Internet and data services via satellite. We study the characteristics of an aeronautical satellite hybrid network and focus on the problems that cause dramatically degraded performance of the transport protocol. We discuss various extensions to standard TCP that alleviate some of these performance problems. Through simulation, we identify those TCP implementations that can be expected to perform well. Based on the observation that it is difficult for an end-to-end solution to solve these problems effectively, we propose a new TCP-splitting protocol, termed the Aeronautical Transport Control Protocol (AeroTCP). The main idea of this protocol is to use a fixed window for flow control and a single duplicate acknowledgement (ACK) for fast recovery. Our simulation results show that AeroTCP can maintain higher utilization of the satellite link than end-to-end TCP, especially in high bit-error-rate (BER) environments.
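
    To make the flow-control idea concrete, here is a toy sketch of the two AeroTCP policies named above: a fixed send window and fast retransmit after a single duplicate ACK instead of TCP's usual three. The loss model and all names are assumptions for illustration, not the authors' implementation.

        # Toy model: fixed-window sender over a lossy satellite link that
        # fast-retransmits as soon as one duplicate ACK arrives.
        import random

        random.seed(1)
        TOTAL, WINDOW, LOSS_P = 200, 8, 0.05

        acked, retransmits = 0, 0
        while acked < TOTAL:
            end = min(acked + WINDOW, TOTAL)
            lost = {s for s in range(acked, end) if random.random() < LOSS_P}
            if acked in lost:
                # the receiver duplicate-ACKs its last in-order packet;
                # one duplicate ACK is enough to trigger retransmission
                lost.discard(acked)   # assume the retransmission succeeds
                retransmits += 1
            while acked < end and acked not in lost:
                acked += 1            # cumulative ACK advances to first hole

        print(f"delivered {TOTAL} packets with {retransmits} fast retransmits")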

  12. A study on the relationship between carbon budget and ecosystem service in urban areas according to urbanization

    NASA Astrophysics Data System (ADS)

    Lee, S. J.; Lee, W. K.

    2017-12-01

    This study analyzes the carbon storage capacity of urban green spaces as urban forest increases. Modern cities have experienced rapid economic development since the Industrial Revolution in the 18th century. The rapid economic growth caused an exponential concentration of population in the cities and a decrease of green spaces due to the conversion of forest and agricultural lands to built-up areas with rapid urbanization. As green areas including forests, grasslands, and wetlands provide diverse economic, environmental, and cultural benefits, the decrease of green areas might be a huge loss. Also, the process of urbanization put pressure on the urban environment beyond its natural capacity, which accelerates global climate change. This study examines the relationship between carbon budget and ecosystem services in the course of urbanization. For calculating carbon dynamics, this study used the VISIT (Vegetation Integrated Simulator for Trace gases) model. The value that the ecosystem provides is explained with the concept of ecosystem services and calculated by the InVEST model. Study sites are urban and peri-urban areas in Northeast Asia. From the results of the study, the effect of urbanization can be understood in regard to carbon storage and ecosystem services.

  13. Soil Carbon Recovery of Degraded Steppe Ecosystems of the Mongolian Plateau

    NASA Astrophysics Data System (ADS)

    Ojima, D. S.; Togtohyn, C.; Qi, J.

    2013-12-01

    Mongolian steppe grassland systems are a critical source of ecosystem services for societal groups in temperate East Asia. These systems are characterized by their arid and semiarid environments, where rainfall tends to be too variable, or evaporative losses reduce water availability too much, to reliably support cropping systems or substantial forest cover. These steppe ecosystems have supported land use practices that accommodate the variable rainfall patterns and the seasonal and spatial patterns of forage production, as displayed by the nomadic pastoral systems practiced across Asia. These pastoral systems are dependent on grassland ecosystem services, including forage production, wool, skins, meat and dairy products, and in many systems provide critical biodiversity and land and water protection services which serve to maintain pastoral livelihoods. Precipitation variability and the associated drought conditions experienced frequently in these grassland systems are key drivers of these systems. However, during the past several decades, climate change, grazing, and land use conversion have resulted in degradation of ecosystem services and loss of soil organic matter. Recent efforts in China and Mongolia are investigating different grazing management practices to restore soil organic matter in these degraded systems. Simulation modeling is being applied to evaluate the long-term benefits of different grazing management regimes under various climate scenarios.

  14. Quality of service policy control in virtual private networks

    NASA Astrophysics Data System (ADS)

    Yu, Yiqing; Wang, Hongbin; Zhou, Zhi; Zhou, Dongru

    2004-04-01

    This paper studies the QoS of VPNs in an environment where the public network prices connection-oriented services based on source, destination and grade of service, and advertises these prices to its VPN customers (users). As different QoS technologies produce different QoS, there are correspondingly different traffic classification rules and priority rules. The Internet service provider (ISP) may need to build complex mechanisms separately for each node. In order to reduce the burden of network configuration, we need to design policy control technologies. We consider mainly the directory server, policy server, policy manager and policy enforcers. The policy decision point (PDP) decides on control actions according to policy rules; in the network, the policy enforcement point (PEP) applies them to its controlled network unit. For IntServ and DiffServ, we adopt different policy control methods, as follows: (1) in IntServ, traffic uses the resource reservation protocol (RSVP) to guarantee network resources; (2) in DiffServ, the policy server controls the DiffServ code points and per-hop behaviors (PHBs), and its PDP distributes information to each network node. The policy server functions as follows: information searching; decision mechanism; decision delivery; auto-configuration. In order to demonstrate the effectiveness of QoS policy control, we carry out a corresponding simulation.
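
    As a concrete illustration of the PDP/PEP split described above, the sketch below shows a policy decision point mapping (traffic class, grade of service) pairs to DiffServ code points; the rule values are assumptions for illustration, not the paper's configuration.

        # A policy decision point (PDP) answering marking queries from
        # policy enforcement points (PEPs). Rules are illustrative only.
        DSCP_EF, DSCP_AF21, DSCP_BE = 46, 18, 0   # standard DSCP values

        class PolicyDecisionPoint:
            def __init__(self, rules):
                self.rules = rules    # (traffic class, grade) -> DSCP

            def decide(self, traffic_class, grade):
                # fall back to best effort when no rule matches
                return self.rules.get((traffic_class, grade), DSCP_BE)

        pdp = PolicyDecisionPoint({("voice", "gold"): DSCP_EF,
                                   ("video", "silver"): DSCP_AF21})
        print(pdp.decide("voice", "gold"))    # 46 -> expedited forwarding PHB
        print(pdp.decide("bulk", "bronze"))   # 0  -> best effort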

  15. JSpOC Mission System Application Development Environment

    NASA Astrophysics Data System (ADS)

    Luce, R.; Reele, P.; Sabol, C.; Zetocha, P.; Echeverry, J.; Kim, R.; Golf, B.

    2012-09-01

    The Joint Space Operations Center (JSpOC) Mission System (JMS) is the program of record tasked with replacing the legacy Space Defense Operations Center (SPADOC) and Astrodynamics Support Workstation (ASW) capabilities by the end of FY2015 as well as providing additional Space Situational Awareness (SSA) and Command and Control (C2) capabilities post-FY2015. To meet the legacy replacement goal, the JMS program is maturing a government Service Oriented Architecture (SOA) infrastructure that supports the integration of mission applications while acquiring mature industry and government mission applications. Future capabilities required by the JSpOC after 2015 will require development of new applications and procedures as well as the exploitation of new SSA data sources. To support the post FY2015 efforts, the JMS program is partnering with the Air Force Research Laboratory (AFRL) to build a JMS application development environment. The purpose of this environment is to: 1) empower the research & development community, through access to relevant tools and data, to accelerate technology development, 2) allow the JMS program to communicate user capability priorities and requirements to the developer community, 3) provide the JMS program with access to state-of-the-art research, development, and computing capabilities, and 4) support market research efforts by identifying outstanding performers that are available to shepherd into the formal transition process. The application development environment will consist of both unclassified and classified environments that can be accessed over common networks (including the Internet) to provide software developers, scientists, and engineers everything they need (e.g., building block JMS services, modeling and simulation tools, relevant test scenarios, documentation, data sources, user priorities/requirements, and SOA integration tools) to develop and test mission applications. The developed applications will be exercised in these relevant environments with representative data sets to help bridge the gap between development and integration into the operational JMS enterprise.

  16. Home care and technology: a case study.

    PubMed

    Stroulia, Eleni; Nikolaidis, Ioanis; Liu, Lili; King, Sharla; Lessard, Lysanne

    2012-01-01

    Health care aides (HCAs) are the backbone of the home care system and provide a range of services to people who, for various reasons related to chronic conditions and aging, are not able to take care of themselves independently. The demand for HCA services will increase and the current HCA supply will likely not keep up with this increasing demand without fundamental changes in the current environment. Information and communication technology (ICT) can address some of the workflow challenges HCAs face. In this project, we conducted an ethnographic study to document and analyse HCAs' workflows and team interactions. Based on our findings, we designed an ICT tool suite, integrating easily available existing and newly developed (by our team) technologies to address these issues. Finally, we simulated the deployment of our technologies, to assess the potential impact of these technological solutions on the workflow and productivity of HCAs, their healthcare teams and client care.

  17. Internet-Based Laboratory Immersion: When The Real Deal is Not Available

    NASA Astrophysics Data System (ADS)

    Meisner, Gerald; Hoffman, Harol

    2004-11-01

    Do you want all of your students to investigate equilibrium conditions in the physics lab, but don't have time for lab investigations? Do your under-prepared students need basic, careful and detailed remedial work to help them succeed? LAAPhysics provides an answer to these questions by means of robust online physics courseware based on: (1) a sound, research-based pedagogy; (2) a rich laboratory environment with skills and operational knowledge transferable to the 'wet lab'; and (3) a paradigm which is economically scalable. LAAPhysics provides both synchronous and asynchronous learning experiences for an introductory, algebra-based course for students (undergraduate, AP high school, seekers of a second degree), those seeking career changes, and pre-service and in-service teachers. We have developed a simulated physics laboratory comprised of virtual lab equipment and instruments, associated curriculum modules and virtual guidance for real-time feedback, formative assessment and collaborative learning.

  18. Thin film strain gage development program

    NASA Technical Reports Server (NTRS)

    Grant, H. P.; Przybyszewski, J. S.; Anderson, W. L.; Claing, R. G.

    1983-01-01

    Sputtered thin-film dynamic strain gages of 2 millimeter (0.08 in) gage length and 10 micrometer (0.0004 in) thickness were fabricated on turbojet engine blades and tested in a simulated compressor environment. Four designs were developed, two for service to 600 K (600 F) and two for service to 900 K (1200 F). The program included a detailed study of guidelines for formulating strain-gage alloys to achieve superior dynamic and static gage performance. The tests included gage factor, fatigue, temperature cycling, spin to 100,000 G, and erosion. Since the installations are 30 times thinner than conventional wire strain gage installations, and any alteration of the aerodynamic, thermal, or structural performance of the blade is correspondingly reduced, dynamic strain measurement accuracy higher than that attained with conventional gages is expected. The low profile and good adherence of the thin-film elements are expected to result in improved durability over conventional gage elements in engine tests.
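
    The gage factor measurements above rest on the standard strain-gage relation GF = (ΔR/R)/ε; a minimal sketch follows, with illustrative numbers rather than values from the program.

        # Strain from a measured resistance change, using GF = (dR/R)/strain.
        def strain_from_resistance(delta_r, r0, gage_factor=2.0):
            """Strain for resistance change delta_r on a gage of
            unstrained resistance r0 (illustrative gage factor)."""
            return (delta_r / r0) / gage_factor

        # a 0.12 ohm change on a 120 ohm gage with GF = 2 -> 500 microstrain
        print(strain_from_resistance(0.12, 120.0))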

  19. A Simulated Learning Environment for Teaching Medicine Dispensing Skills

    PubMed Central

    Styles, Kim; Sewell, Keith; Trinder, Peta; Marriott, Jennifer; Maher, Sheryl; Naidu, Som

    2016-01-01

    Objective. To develop an authentic simulation of the professional practice dispensary context for students to develop their dispensing skills in a risk-free environment. Design. A development team used an Agile software development method to create MyDispense, a web-based simulation. Modeled on elements of virtual learning environments, the software employed widely available standards-based technologies to create a virtual community pharmacy environment. Assessment. First-year pharmacy students who used the software in their tutorials were surveyed at the end of the second semester on their prior dispensing experience and their perceptions of MyDispense as a tool to learn dispensing skills. Conclusion. The dispensary simulation is an effective tool for helping students develop dispensing competency and knowledge in a safe environment. PMID:26941437

  20. Environmental and body contamination from cleaning vomitus in a health care setting: A simulation study.

    PubMed

    Phan, Linh; Su, Yu-Min; Weber, Rachel; Fritzen-Pedicini, Charissa; Edomwande, Osayuwamen; Jones, Rachael M

    2018-04-01

    Environmental service workers may be exposed to pathogens during the cleaning of pathogen-containing bodily fluids. Participants with experience cleaning hospital environments were asked to clean simulated, fluorescein-containing vomitus using normal practices in a simulated patient room. Fluorescein was visualized in the environment and on participants under black lights. Fluorescein was quantitatively measured on the floor, in the air, and on gloves and shoe covers. In all 21 trials involving 7 participants, fluorescein was found on the floor after cleaning and on participants' gloves. Lower levels of floor contamination were associated with the use of towels to remove bulk fluid (ρ = -0.56, P = .01). Glove contamination was not associated with the number or frequency of contacts with environmental surfaces, suggesting contamination occurs with specific events, such as picking up contaminated towels. Fluorescein contamination on shoe covers was measured in 19 trials. Fluorescein was not observed on participants' facial personal protective equipment, if worn, or faces. Contamination on other body parts, primarily the legs, was observed in 8 trials. Fluorescein was infrequently quantified in the air. Using towels to remove bulk fluid prior to mopping is part of the recommended cleaning protocol and should be used to minimize residual contamination. Contamination on shoes and the floor may serve as reservoirs for pathogens. Copyright © 2018 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
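
    The association reported above (ρ = -0.56, P = .01) is a rank correlation; the sketch below shows how such a statistic is computed with SciPy, using invented data in place of the study's measurements.

        # Spearman rank correlation between towel use and residual floor
        # contamination; both series are invented for illustration.
        from scipy.stats import spearmanr

        towel_use_score   = [0, 0, 1, 1, 1, 2, 2, 2, 3, 3]
        floor_fluorescein = [9.1, 7.5, 6.8, 7.0, 5.2, 4.9, 5.5, 3.8, 2.1, 2.4]

        rho, p = spearmanr(towel_use_score, floor_fluorescein)
        print(f"rho = {rho:.2f}, p = {p:.3f}")  # negative: more towel use, less residue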

  1. EO/IR scene generation open source initiative for real-time hardware-in-the-loop and all-digital simulation

    NASA Astrophysics Data System (ADS)

    Morris, Joseph W.; Lowry, Mac; Boren, Brett; Towers, James B.; Trimble, Darian E.; Bunfield, Dennis H.

    2011-06-01

    The US Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) and the Redstone Test Center (RTC) have formed the Scene Generation Development Center (SGDC) to support the Department of Defense (DoD) open source EO/IR scene generation initiative for real-time hardware-in-the-loop and all-digital simulation. Various branches of the DoD have invested significant resources in the development of advanced scene and target signature generation codes. The SGDC goal is to maintain unlimited government rights and controlled access to government open source scene generation and signature codes. In addition, the SGDC provides development support to a multi-service community of test and evaluation (T&E) users, developers, and integrators in a collaborative environment. The SGDC has leveraged the DoD Defense Information Systems Agency (DISA) ProjectForge (https://Project.Forge.mil), which provides a collaborative development and distribution environment for the DoD community. The SGDC will develop and maintain several codes for tactical and strategic simulation, such as the Joint Signature Image Generator (JSIG), the Multi-spectral Advanced Volumetric Real-time Imaging Compositor (MAVRIC), and Office of the Secretary of Defense (OSD) Test and Evaluation Science and Technology (T&E/S&T) thermal modeling and atmospherics packages, such as EOView, CHARM, and STAR. Other utility packages included are the ContinuumCore for real-time messaging and data management and IGStudio for run-time visualization and scenario generation.

  2. Computer Simulation of Human Service Program Evaluations.

    ERIC Educational Resources Information Center

    Trochim, William M. K.; Davis, James E.

    1985-01-01

    Describes uses of computer simulations for the context of human service program evaluation. Presents simple mathematical models for most commonly used human service outcome evaluation designs (pretest-posttest randomized experiment, pretest-posttest nonequivalent groups design, and regression-discontinuity design). Translates models into single…
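
    In the spirit of the models the entry above describes, the sketch below simulates the simplest of the three designs, a pretest-posttest randomized experiment; the score model and effect size are assumptions for illustration.

        # Simulated pretest-posttest randomized experiment: random
        # assignment, a true effect of 5 points, gain-score estimate.
        import random

        random.seed(42)
        N, EFFECT = 200, 5.0

        pre  = [random.gauss(50, 10) for _ in range(N)]
        grp  = [random.random() < 0.5 for _ in range(N)]   # treatment flag
        post = [p + (EFFECT if g else 0.0) + random.gauss(0, 5)
                for p, g in zip(pre, grp)]

        def mean_gain(flag):
            g = [po - pr for po, pr, t in zip(post, pre, grp) if t == flag]
            return sum(g) / len(g)

        # difference in mean pre-to-post gains estimates the program effect
        print(f"estimated effect: {mean_gain(True) - mean_gain(False):.2f}")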

  3. Exploring Pre-Service Elementary Teachers' Mental Models of the Environment

    ERIC Educational Resources Information Center

    Taskin-Ekici, Fatma; Ekici, Erhan; Cokadar, Hulusi

    2015-01-01

    This study aims to explore pre-service elementary teachers' understandings of the environment. A survey method was carried out in this study. A close-ended questionnaire and Draw-An-Environment Test (DAET) are administered to pre-service teachers (N = 255) after instruction of an Environmental Education course. A rubric (DAET-R) is used for…

  4. Simulation of Smoke-Haze Dispersion from Wildfires in South East Asia with a Lagrangian Particle Model

    NASA Astrophysics Data System (ADS)

    Hertwig, D.; Burgin, L.; Gan, C.; Hort, M.; Jones, A. R.; Shaw, F.; Witham, C. S.; Zhang, K.

    2014-12-01

    Biomass burning, often related to agricultural deforestation, not only affects local pollution levels but periodically deteriorates air quality in many South East Asian megacities through the transboundary transport of smoke-haze. In June 2013, Singapore experienced the worst wildfire-related air-pollution event on record, following the escalation of peatland fires in Sumatra. An extended dry period together with anomalous westerly winds resulted in severe and unhealthy pollution levels in Singapore that lasted for more than two weeks. Reacting to this event, the Met Office and the Meteorological Service Singapore have explored how to adequately simulate haze-pollution dispersion, with the aim of providing a reliable operational forecast for Singapore. Simulations with the Lagrangian particle model NAME (Numerical Atmospheric-dispersion Modelling Environment), running on numerical weather prediction data from the Met Office and the Meteorological Service Singapore and emission data derived from satellite observations of fire radiative power, are validated against PM10 observations in South East Asia. Comparisons of simulated concentrations with hourly averages of PM10 measurements in Singapore show that the model captures well the severe smoke-haze event in June 2013 and a minor episode in March 2014. Different quantitative satellite-derived emissions have been tested, with one source demonstrating a consistent factor-of-two under-prediction for Singapore. Confidence in the skill of the model system has been substantiated by further comparisons with data from monitoring sites in Malaysia, Brunei and Thailand. Following the validation study, operational smoke-haze pollution forecasts with NAME were launched in Singapore, in time for the 2014 fire season. Real-time bias correction and verification of this forecast will be discussed.
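
    Where the abstract mentions a consistent factor-of-two under-prediction and real-time bias correction, a simple multiplicative correction of the kind sketched below is one option; the PM10 numbers are invented for illustration.

        # Multiplicative bias correction: scale the raw forecast by the
        # ratio of mean observed to mean forecast PM10 over a recent window.
        def bias_factor(observed, forecast):
            return sum(observed) / sum(forecast)

        obs_pm10  = [80.0, 120.0, 150.0, 90.0]   # recent hourly observations
        fcst_pm10 = [42.0, 58.0, 80.0, 44.0]     # co-located raw forecasts

        k = bias_factor(obs_pm10, fcst_pm10)     # ~2 for a factor-2 low bias
        print(f"bias factor {k:.2f}; corrected forecast "
              f"{k * 60.0:.0f} ug/m3 for a raw value of 60")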

  5. [Environment capacity of eco-tourism resort].

    PubMed

    Sun, Y; Wang, R

    2000-08-01

    The results of quantitative analysis of the tourist load, service-environment capacity, eco-environment capacity, and their relations in the Five-finger Mountain eco-tourism resort indicate that the tourist load under normal and extreme conditions was 1918 and 2301 visitor-hours per day, respectively, while the service-environment capacity and eco-environment capacity were 6000 and 2400 visitor-hours per day, respectively. The eco-environment capacity was smaller than the service-environment capacity and would become the first limiting factor on the growth of tourist numbers, mainly due to the ecological fragility of the resort, the low resistance of its biological communities to disturbance, and the slow speed of ecosystem restoration after damage.

  6. Effectiveness of AODV Protocol under Hidden Node Environment

    NASA Astrophysics Data System (ADS)

    Garg, Ruchi; Sharma, Himanshu; Kumar, Sumit

    IEEE 802.11 is a standard for mobile ad hoc networks (MANETs), implemented with various protocols. Ad hoc On-demand Distance Vector routing (AODV) is one of the routing protocols used over IEEE 802.11, intended to keep various Quality of Service (QoS) parameters within acceptable ranges. To avoid collision and interference, the MAC protocol has only two mechanisms: sensing the physical carrier, and the RTS/CTS handshake. Even with these methods, AODV, like several other protocols, is not free from the hidden node problem. In a hidden node environment, the performance of AODV depends on various factors; the position of the receiver and sender among the other nodes is crucial and affects performance. AODV is simulated under various situations with NS2, and the outcomes are discussed.
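
    A toy Monte Carlo estimate makes the hidden-node effect concrete: two senders that cannot hear each other pick transmission start times at random, and any overlap at the receiver is a collision. The parameters are invented for illustration.

        # Collision probability for two hidden senders transmitting once,
        # uniformly at random, within a shared contention window.
        import random

        random.seed(0)
        FRAME, WINDOW, TRIALS = 1.0, 20.0, 100_000

        collisions = sum(
            abs(random.uniform(0, WINDOW) - random.uniform(0, WINDOW)) < FRAME
            for _ in range(TRIALS))

        # analytic check: 1 - (1 - FRAME/WINDOW)**2 = 0.0975
        print(f"estimated collision probability: {collisions / TRIALS:.4f}")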

  7. Fundamental concepts of problem-based learning for the new facilitator.

    PubMed Central

    Kanter, S L

    1998-01-01

    Problem-based learning (PBL) is a powerful small group learning tool that should be part of the armamentarium of every serious educator. Classic PBL uses ill-structured problems to simulate the conditions that occur in the real environment. Students play an active role and use an iterative process of seeking new information based on identified learning issues, restructuring the information in light of the new knowledge, gathering additional information, and so forth. Faculty play a facilitatory role, not a traditional instructional role, by posing metacognitive questions to students. These questions serve to assist in organizing, generalizing, and evaluating knowledge; to probe for supporting evidence; to explore faulty reasoning; to stimulate discussion of attitudes; and to develop self-directed learning and self-assessment skills. Professional librarians play significant roles in the PBL environment extending from traditional service provider to resource person to educator. Students and faculty usually find the learning experience productive and enjoyable. PMID:9681175

  8. Impact resistance of composite fan blades. [fiber reinforced graphite and boron epoxy blades for STOL operating conditions

    NASA Technical Reports Server (NTRS)

    Premont, E. J.; Stubenrauch, K. R.

    1973-01-01

    The resistance of current-design Pratt and Whitney Aircraft low-aspect-ratio advanced fiber reinforced epoxy matrix composite fan blades to foreign object damage (FOD) at STOL operating conditions was investigated. Five graphite/epoxy and five boron/epoxy wide-chord fan blades with nickel-plated stainless steel leading edge sheath protection were fabricated and impact tested. The fan blades were individually tested in a vacuum whirlpit under FOD environments typical of those encountered in service operations. The impact objects were ice balls, gravel, starlings and gelatin simulated birds. Results of the damage sustained from each FOD impact are presented for both the graphite- and boron-reinforced blades. Tests showed that the present-design composite fan blades, with wrap-around leading edge protection, have inadequate FOD impact resistance at a 244 m/sec (800 ft/sec) tip speed, a possible STOL operating condition.

  9. DMSK: A practical 2400-bps receiver for the mobile satellite service: An MSAT-X Report

    NASA Technical Reports Server (NTRS)

    Davarian, F.; Simon, M. K.; Sumida, J.

    1985-01-01

    The practical aspects of a 2400-bps differential detection minimum-shift-keying (DMSK) receiver are investigated. Fundamental issues relating to hardware precision, Doppler shift, fading, and frequency offset are examined, and it is concluded that the receiver's implementation at baseband is more advantageous, both in cost and simplicity, than its IF implementation. The DMSK receiver has been fabricated and tested under simulated mobile satellite environment conditions. The measured receiver performance in the presence of anomalies pertinent to the link is presented in this report. Furthermore, the receiver's behavior in a band-limited channel (GMSK) is also investigated. The DMSK receiver performs substantially better than a coherent minimum-shift-keying (MSK) receiver in a heavily fading environment. The DMSK radio is simple and robust, and results in a lower error floor than its coherent counterpart. Moreover, this receiver is suitable for burst-type signals, and its recovery from deep fades is fast.
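
    The core of a one-bit differential MSK detector is the decision statistic sign(Im{r[n]·r*[n-1]}), since each bit shifts the carrier phase by ±π/2. The NumPy toy below illustrates that idea only; it is not the MSAT-X receiver design, and the noise level is an assumption.

        # Differential detection of an MSK-like phase sequence in noise.
        import numpy as np

        rng = np.random.default_rng(7)
        bits = rng.integers(0, 2, 1000)
        dphi = np.where(bits == 1, np.pi / 2, -np.pi / 2)  # per-bit phase step
        r = np.exp(1j * np.concatenate(([0.0], np.cumsum(dphi))))
        r = r + rng.normal(0, 0.2, r.shape) + 1j * rng.normal(0, 0.2, r.shape)

        det = np.imag(r[1:] * np.conj(r[:-1]))   # one-bit differential detector
        decoded = (det > 0).astype(int)
        print("bit errors:", int(np.count_nonzero(decoded != bits)))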

  10. Design and implementation of spatial knowledge grid for integrated spatial analysis

    NASA Astrophysics Data System (ADS)

    Liu, Xiangnan; Guan, Li; Wang, Ping

    2006-10-01

    Supported by the spatial information grid (SIG), the spatial knowledge grid (SKG) for integrated spatial analysis uses middleware technology to construct the spatial information grid computation environment and spatial information service system, develops spatial-entity-oriented spatial data organization technology, and carries out in-depth computation of spatial structure and spatial process patterns on the basis of the Grid GIS infrastructure, the spatial data grid and the spatial information grid (in its specialized definition). At the same time, it realizes complex spatial pattern expression and the simulation of spatial functional processes by taking spatial intelligent agents as the core of proactive spatial computation. Moreover, through the establishment of a virtual geographical environment with man-machine interactivity and blending, complex spatial modeling, networked cooperative work and knowledge-driven spatial community decision-making are achieved. The framework of the SKG is discussed systematically in this paper, and its implementation flow and key technologies are presented with overlay analysis as an example.

  11. Simulation of Spatial and Temporal Radiation Exposures for ISS in the South Atlantic Anomaly

    NASA Technical Reports Server (NTRS)

    Anderson, Brooke M.; Nealy, John E.; Luetke, Nathan J.; Sandridge, Christopher A.; Qualls, Garry D.

    2004-01-01

    The International Space Station (ISS) living areas receive the preponderance of ionizing radiation exposure from Galactic Cosmic Rays (GCR) and geomagnetically trapped protons. Practically all trapped proton exposure occurs when the ISS passes through the South Atlantic Anomaly (SAA) region. The fact that this region is in proximity to a trapping mirror point indicates that the proton flux is highly directional. The inherent shielding provided by the ISS structure is represented by a recently-developed CAD model of the current 11-A configuration. Using modeled environment and configuration, trapped proton exposures have been analytically estimated at selected target points within the Service and Lab Modules. The results indicate that the directional flux may lead to substantially different exposure characteristics than the more common analyses that assume an isotropic environment. Additionally, predictive capability of the computational procedure should allow sensitive validation with corresponding on-board directional dosimeters.

  12. Preliminary design of CERN Future Circular Collider tunnel: first evaluation of the radiation environment in critical areas for electronics

    NASA Astrophysics Data System (ADS)

    Infantino, Angelo; Alía, Rubén García; Besana, Maria Ilaria; Brugger, Markus; Cerutti, Francesco

    2017-09-01

    As part of its post-LHC high energy physics program, CERN is conducting a study for a new proton-proton collider, called the Future Circular Collider (FCC-hh), running at center-of-mass energies of up to 100 TeV in a new 100 km tunnel. The study includes a 90-350 GeV lepton collider (FCC-ee) as well as a lepton-hadron option (FCC-he). In this work, FLUKA Monte Carlo simulation was used extensively to perform a first evaluation of the radiation environment in critical areas for electronics in the FCC-hh tunnel. The model of the tunnel was created based on the original civil engineering studies already performed and further integrated into the existing FLUKA models of the beam line. The radiation levels in critical areas, such as the racks for electronics and cables, power converters, service areas and local tunnel extensions, were evaluated.

  13. Procuring interoperability at the expense of usability: a case study of UK National Programme for IT assurance process.

    PubMed

    Krause, Paul; de Lusignan, Simon

    2010-01-01

    The allure of interoperable systems is that they should improve patient safety and make health services more efficient. The UK's National Programme for IT has made great strides in achieving interoperability through linkage to a national electronic spine. However, there has been criticism of the usability of the applications in the clinical environment. We analyse the procurement and assurance processes to explore whether they predetermine usability. These processes separate developers from users, and test products against theoretical assurance models of use rather than simulating or piloting them in a clinical environment. The current process appears to be effective for back office systems and high risk applications, but too inflexible for developing applications for the clinical setting. For clinical applications, agile techniques are more appropriate. Usability testing should become an integrated part of the contractual process and be introduced earlier in the development process.

  14. An examination of speech reception thresholds measured in a simulated reverberant cafeteria environment.

    PubMed

    Best, Virginia; Keidser, Gitte; Buchholz, Jörg M; Freeston, Katrina

    2015-01-01

    There is increasing demand in the hearing research community for the creation of laboratory environments that better simulate challenging real-world listening environments. The hope is that the use of such environments for testing will lead to more meaningful assessments of listening ability, and better predictions about the performance of hearing devices. Here we present one approach for simulating a complex acoustic environment in the laboratory, and investigate the effect of transplanting a speech test into such an environment. Speech reception thresholds were measured in a simulated reverberant cafeteria, and in a more typical anechoic laboratory environment containing background speech babble. The participants were 46 listeners varying in age and hearing levels, including 25 hearing-aid wearers who were tested with and without their hearing aids. Reliable SRTs were obtained in the complex environment, but led to different estimates of performance and hearing-aid benefit from those measured in the standard environment. The findings provide a starting point for future efforts to increase the real-world relevance of laboratory-based speech tests.

  15. An examination of speech reception thresholds measured in a simulated reverberant cafeteria environment

    PubMed Central

    Best, Virginia; Keidser, Gitte; Buchholz, Jörg M.; Freeston, Katrina

    2016-01-01

    Objective There is increasing demand in the hearing research community for the creation of laboratory environments that better simulate challenging real-world listening environments. The hope is that the use of such environments for testing will lead to more meaningful assessments of listening ability, and better predictions about the performance of hearing devices. Here we present one approach for simulating a complex acoustic environment in the laboratory, and investigate the effect of transplanting a speech test into such an environment. Design Speech reception thresholds were measured in a simulated reverberant cafeteria, and in a more typical anechoic laboratory environment containing background speech babble. Study Sample The participants were 46 listeners varying in age and hearing levels, including 25 hearing-aid wearers who were tested with and without their hearing aids. Results Reliable SRTs were obtained in the complex environment, but led to different estimates of performance and hearing aid benefit from those measured in the standard environment. Conclusions The findings provide a starting point for future efforts to increase the real-world relevance of laboratory-based speech tests. PMID:25853616

  16. 40 CFR 60.482-7a - Standards: Valves in gas/vapor service and in light liquid service.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Title 40 (Protection of Environment), vol. 6, revised 2010-07-01, § 60.482-7a: Standards: Valves in gas/vapor service and in light liquid service. (a)(1) Each valve shall be...

  17. 40 CFR 60.482-7a - Standards: Valves in gas/vapor service and in light liquid service.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    Title 40 (Protection of Environment), vol. 7, revised 2014-07-01, § 60.482-7a: Standards: Valves in gas/vapor service and in light liquid service. (a)(1) Each valve shall be...

  18. 40 CFR 60.482-7a - Standards: Valves in gas/vapor service and in light liquid service.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    Title 40 (Protection of Environment), vol. 7, revised 2013-07-01, § 60.482-7a: Standards: Valves in gas/vapor service and in light liquid service. (a)(1) Each valve shall be...

  19. 40 CFR 60.482-7a - Standards: Valves in gas/vapor service and in light liquid service.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    Title 40 (Protection of Environment), vol. 6, revised 2011-07-01, § 60.482-7a: Standards: Valves in gas/vapor service and in light liquid service. (a)(1) Each valve shall be...

  20. 40 CFR 60.482-7a - Standards: Valves in gas/vapor service and in light liquid service.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    Title 40 (Protection of Environment), vol. 7, revised 2012-07-01, § 60.482-7a: Standards: Valves in gas/vapor service and in light liquid service. (a)(1) Each valve shall be...

  1. Simulated Service and Stress Corrosion Cracking Testing for Friction Stir Welded Spun Form Domes

    NASA Technical Reports Server (NTRS)

    Stewart, Thomas J.; Torres, Pablo D.; Caratus, Andrei A.; Curreri, Peter A.

    2010-01-01

    Damage tolerance testing development was required to help qualify a new spin forming dome fabrication process for the Ares 1 program at Marshall Space Flight Center (MSFC). One challenge of the testing was due to the compound curvature of the dome. The testing was developed on a sub-scale dome with a diameter of approximately 40 inches. The simulated service testing performed was based on the EQTP1102 Rev L 2195 Aluminum Lot Acceptance Simulated Service Test and Analysis Procedure generated by Lockheed Martin for the Space Shuttle External Fuel Tank. This testing is performed on a specimen with an induced flaw of elliptical shape generated by Electrical Discharge Machining (EDM) and subsequent fatigue cycling for crack propagation to a predetermined length and depth. The specimen is then loaded in tension at a constant rate of displacement at room temperature until fracture occurs while recording load and strain. An identical specimen with a similar flaw is then proof tested at room temperature to imminent failure based on the critical offset strain achieved by the previous fracture test. If the specimen survives the proof, it is then subjected to cryogenic cycling with loads that are a percentage of the proof load performed at room temperature. If all cryogenic cycles are successful, the specimen is loaded in tension to failure at the end of the test. This standard was generated for flat plate, so a method of translating this to a specimen of compound curvature was required. This was accomplished by fabricating a fixture that maintained the curvature of the specimen rigidly with the exception of approximately one-half inch in the center of the specimen containing the induced flaw. This in conjunction with placing the center of the specimen in the center of the load train allowed for successful testing with a minimal amount of bending introduced into the system. Stress corrosion cracking (SCC) tests were performed using the typical double beam assembly and with 4-point loaded specimens under alternate immersion conditions in a 3.5% NaCl environment for 90 days. In addition, experiments were conducted to determine the threshold stress intensity factor for SCC (K1SCC) of Al-Li 2195 which to our knowledge has not been determined previously. The successful simulated service and stress corrosion testing helped to provide confidence to continue to Ares 1 scale dome fabrication.
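
    For context on the K1SCC measurement, the governing relation is the standard stress-intensity expression K_I = Y·σ·√(πa) for a flaw of depth a under stress σ with geometry factor Y; the numbers below are illustrative, not Al-Li 2195 test data.

        # Stress intensity for a surface flaw: K_I = Y * sigma * sqrt(pi * a).
        import math

        def stress_intensity(sigma_mpa, depth_m, y=1.12):
            """K_I in MPa*sqrt(m); Y = 1.12 is the usual edge-crack factor."""
            return y * sigma_mpa * math.sqrt(math.pi * depth_m)

        # a 2 mm deep flaw at 300 MPa applied stress
        print(f"K_I = {stress_intensity(300.0, 0.002):.1f} MPa*sqrt(m)")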

  2. Dynamic SLA Negotiation in Autonomic Federated Environments

    NASA Astrophysics Data System (ADS)

    Rubach, Pawel; Sobolewski, Michael

    Federated computing environments offer requestors the ability to dynamically invoke services offered by collaborating providers in the virtual service network. Without efficient resource management that includes dynamic SLA negotiation, however, the assignment of providers to customers' requests cannot be optimized, nor can it offer high reliability backed by relevant SLA guarantees. We propose a new SLA-based SERViceable Metacomputing Environment (SERVME) capable of matching providers based on QoS requirements and performing autonomic provisioning and deprovisioning of services according to dynamic requestor needs. This paper presents the SLA negotiation process, which includes on-demand provisioning and uses an object-oriented SLA model for large-scale service-oriented systems supported by SERVME. An initial reference implementation in the SORCER environment is also described.
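
    A hedged sketch of the provider-matching step may help: a requestor states QoS needs and SERVME-style matching keeps only the providers whose guarantees satisfy them. Class and field names here are hypothetical, not the SERVME API.

        # Filter service offers against a requestor's SLA requirements.
        from dataclasses import dataclass

        @dataclass
        class Offer:
            provider: str
            max_latency_ms: float   # guaranteed upper bound
            availability: float     # e.g. 0.999

        def match(offers, need_latency_ms, need_availability):
            return [o.provider for o in offers
                    if o.max_latency_ms <= need_latency_ms
                    and o.availability >= need_availability]

        offers = [Offer("A", 120, 0.999), Offer("B", 40, 0.99),
                  Offer("C", 35, 0.9995)]
        print(match(offers, 50, 0.999))   # ['C']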

  3. Using Social Simulations to Assess and Train Potential Leaders to Make Effective Decisions in Turbulent Environments

    ERIC Educational Resources Information Center

    Hunsaker, L. Phillip

    2007-01-01

    Purpose: The purpose of this paper is to describe two social simulations created to assess leadership potential and train leaders to make effective decisions in turbulent environments. One is set in the novel environment of a lunar colony and the other in a military combat command. The research generated from these simulations for assessing…

  4. Multidisciplinary research leading to utilization of extraterrestrial resources

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Progress of the research accomplished during fiscal year 1972 is reported. The summaries presented include: (1) background analysis and coordination, (2) surface properties of rock in simulated lunar environment, (3) rock failure processes, strength and elastic properties in simulated lunar environment, (4) thermal fragmentation, and thermophysical and optical properties in simulated lunar environment, and (5) use of explosives on the moon.

  5. Design Patterns for Learning and Assessment: Facilitating the Introduction of a Complex Simulation-Based Learning Environment into a Community of Instructors

    ERIC Educational Resources Information Center

    Frezzo, Dennis C.; Behrens, John T.; Mislevy, Robert J.

    2010-01-01

    Simulation environments make it possible for science and engineering students to learn to interact with complex systems. Putting these capabilities to effective use for learning, and assessing learning, requires more than a simulation environment alone. It requires a conceptual framework for the knowledge, skills, and ways of thinking that are…

  6. Argumentation in Science Teacher Education: The simulated jury as a resource for teaching and learning

    NASA Astrophysics Data System (ADS)

    Drumond Vieira, Rodrigo; da Rocha Bernardo, José Roberto; Evagorou, Maria; Florentino de Melo, Viviane

    2015-05-01

    In this article, we focus on the contributions that a simulated jury-based activity might make for pre-service teachers, especially for their active participation and learning in teacher education. We observed a teacher educator using a series of simulated juries as teaching resources to help pre-service teachers develop their pedagogical knowledge and their argumentation abilities in a physics teaching methods course. For the purposes of this article, we have selected one simulated jury-based activity, comprising two opposing groups of pre-service teachers that presented aspects that hinder teachers' development of professional knowledge (the 'against' group) and aspects that allow this development (the 'favor' group). After the groups' presentations, a group of judges was formed to evaluate the discussion. We applied a multi-level method of discourse analysis, and the results showed that (1) the simulated jury allowed the pre-service teachers to position themselves as active knowledge producers; (2) the teacher acted as 'animator' of the pre-service teachers' actions, showing responsiveness to the emergence of circumstantial teaching and learning opportunities; and (3) the simulated jury culminated in the judges' identification of the pattern 'concrete/obstacles-ideological/possibilities' in the groups' responses, which was elaborated by the teacher for the whole class. Implications from this study include using simulated juries for teaching and learning and for the development of pre-service teachers' argumentative abilities. The potential of simulated juries to improve teaching and learning needs to be further explored in order to inform the uses of, and reflections on, this resource in science education.

  7. Evaluating Discovery Services Architectures in the Context of the Internet of Things

    NASA Astrophysics Data System (ADS)

    Polytarchos, Elias; Eliakis, Stelios; Bochtis, Dimitris; Pramatari, Katerina

    As the "Internet of Things" is expected to grow rapidly in the following years, the need to develop and deploy efficient and scalable Discovery Services in this context is very important for its success. Thus, the ability to evaluate and compare the performance of different Discovery Services architectures is vital if we want to allege that a given design is better at meeting requirements of a specific application. The purpose of this chapter is to provide a paradigm for the evaluation of different Discovery Services for the Internet of Things in terms of efficiency, scalability and performance through the use of simulations. The methodology presented uses the application of Discovery Services to a supply chain with the Service Lookup Service Discovery Service using OMNeT++, an open source network simulation suite. Then, we delve into the simulation design and the details of our findings.

  8. Real-Time and High-Fidelity Simulation Environment for Autonomous Ground Vehicle Dynamics

    NASA Technical Reports Server (NTRS)

    Cameron, Jonathan; Myint, Steven; Kuo, Calvin; Jain, Abhi; Grip, Havard; Jayakumar, Paramsothy; Overholt, Jim

    2013-01-01

    This paper reports on a collaborative project between the U.S. Army TARDEC and the Jet Propulsion Laboratory (JPL) to develop an unmanned ground vehicle (UGV) simulation model using the ROAMS vehicle modeling framework. Besides the physical suspension of the vehicle, the sensing and navigation of the HMMWV vehicle are simulated. Using models of urban and off-road environments, the HMMWV simulation was tested in several ways, including navigation in an urban environment with obstacle avoidance and the performance of a lane-change maneuver.

  9. 34 CFR 303.126 - Early intervention services in natural environments.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 34 Education 2 2012-07-01 2012-07-01 false Early intervention services in natural environments...) OFFICE OF SPECIAL EDUCATION AND REHABILITATIVE SERVICES, DEPARTMENT OF EDUCATION EARLY INTERVENTION... Statewide System Minimum Components of A Statewide System § 303.126 Early intervention services in natural...

  10. 34 CFR 303.126 - Early intervention services in natural environments.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 34 Education 2 2014-07-01 2013-07-01 true Early intervention services in natural environments. 303...) OFFICE OF SPECIAL EDUCATION AND REHABILITATIVE SERVICES, DEPARTMENT OF EDUCATION EARLY INTERVENTION... Statewide System Minimum Components of A Statewide System § 303.126 Early intervention services in natural...

  11. 34 CFR 303.126 - Early intervention services in natural environments.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 34 Education 2 2013-07-01 2013-07-01 false Early intervention services in natural environments...) OFFICE OF SPECIAL EDUCATION AND REHABILITATIVE SERVICES, DEPARTMENT OF EDUCATION EARLY INTERVENTION... Statewide System Minimum Components of A Statewide System § 303.126 Early intervention services in natural...

  12. Effects of service environments on aluminum-brazed titanium (ABTi)

    NASA Technical Reports Server (NTRS)

    Cotton, W. L.

    1978-01-01

    Aluminum-brazed titanium (ABTi) structures were evaluated during prolonged exposure to extreme environments: elevated-temperature exposure to airline service fluids, hydraulic fluid, and seawater, followed by laboratory corrosion tests. Solid-face and perforated-face honeycomb sandwich panel specimens, stressed panel assemblies, and faying-surface brazed joints were tested. The corrosion resistance of ABTi is satisfactory for commercial airline service. Unprotected ABTi proved inherently resistant to attack by all of the extreme aircraft service environments except seawater at 700 K (800 F) and above, dripping phosphate ester hydraulic fluid at 505 K (450 F), and a marine environment at ambient temperature. The natural oxides and deposits present on titanium surfaces in airline service provide protection against hot salt corrosion pitting. Coatings are required to protect titanium from dripping phosphate ester fluid at elevated temperatures and to protect exposed acoustic honeycomb parts against corrosion in a marine environment.

  13. A Model for QoS – Aware Wireless Communication in Hospitals

    PubMed Central

    Alavikia, Zahra; Khadivi, Pejman; Hashemi, Masoud Reza

    2012-01-01

    In the recent decade, research regarding wireless applications in electronic health (e-Health) services has been increasing. The main benefits of using wireless technologies in e-Health applications are simple communications, fast delivery of medical information, reduced treatment cost, and a reduced error rate among medical workers. However, using wireless communications in a sensitive healthcare environment raises the risk of electromagnetic interference (EMI). One of the most effective methods of avoiding the EMI problem is power management. To this end, several methods have been proposed in the literature to reduce EMI effects in healthcare environments. However, these methods may result in inaccurate interference avoidance and may also increase network complexity. To overcome these problems, we introduce two approaches, based on per-user location and hospital sectoring, for power management in sensitive healthcare environments. Although reducing transmission power avoids EMI, it decreases the number of successful message deliveries to the access point and, hence, the quality of service requirements cannot be met. In this paper, we propose the use of relays to decrease the probability of outage in the aforementioned scenario. Relay placement is the main factor in realizing the benefits of relay stations in the network and, therefore, we use a genetic algorithm to compute the optimum positions of a fixed number of relays. We considered delay and maximum blind-point coverage as the two main criteria in the relay placement problem. The performance of the proposed method in outage reduction is investigated through simulations. PMID:23493832

  14. A Model for QoS - Aware Wireless Communication in Hospitals.

    PubMed

    Alavikia, Zahra; Khadivi, Pejman; Hashemi, Masoud Reza

    2012-01-01

    In the recent decade, research regarding wireless applications in electronic health (e-Health) services has been increasing. The main benefits of using wireless technologies in e-Health applications are simple communications, fast delivery of medical information, reduced treatment cost, and a reduced error rate among medical workers. However, using wireless communications in a sensitive healthcare environment raises the risk of electromagnetic interference (EMI). One of the most effective methods of avoiding the EMI problem is power management. To this end, several methods have been proposed in the literature to reduce EMI effects in healthcare environments. However, these methods may result in inaccurate interference avoidance and may also increase network complexity. To overcome these problems, we introduce two approaches, based on per-user location and hospital sectoring, for power management in sensitive healthcare environments. Although reducing transmission power avoids EMI, it decreases the number of successful message deliveries to the access point and, hence, the quality of service requirements cannot be met. In this paper, we propose the use of relays to decrease the probability of outage in the aforementioned scenario. Relay placement is the main factor in realizing the benefits of relay stations in the network and, therefore, we use a genetic algorithm to compute the optimum positions of a fixed number of relays. We considered delay and maximum blind-point coverage as the two main criteria in the relay placement problem. The performance of the proposed method in outage reduction is investigated through simulations.
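
    The relay-placement step lends itself to a compact sketch. Below is a minimal genetic algorithm in Python, assuming a toy fitness that only rewards covering client positions within a fixed radius; the paper's actual delay and blind-point criteria, and its network model, are not reproduced, and all constants are invented for illustration.

        # Toy GA for placing a fixed number of relays; fitness = fraction of
        # clients within RADIUS of some relay (a stand-in objective).
        import math, random

        rng = random.Random(0)
        CLIENTS = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(40)]
        N_RELAYS, RADIUS = 3, 25.0

        def fitness(relays):
            covered = sum(
                any(math.dist(c, r) <= RADIUS for r in relays) for c in CLIENTS
            )
            return covered / len(CLIENTS)

        def random_layout():
            return [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(N_RELAYS)]

        def crossover(a, b):
            return [a[i] if rng.random() < 0.5 else b[i] for i in range(N_RELAYS)]

        def mutate(layout, sigma=5.0):
            return [(x + rng.gauss(0, sigma), y + rng.gauss(0, sigma)) for x, y in layout]

        population = [random_layout() for _ in range(30)]
        for _ in range(100):
            population.sort(key=fitness, reverse=True)
            parents = population[:10]  # elitist selection: keep the best layouts
            children = [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                        for _ in range(len(population) - len(parents))]
            population = parents + children

        print("best coverage:", fitness(max(population, key=fitness)))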

  15. NASA Lewis Nickel Alloy being Poured in the Technical Service Building

    NASA Image and Video Library

    1966-04-21

    A nickel alloy developed at the National Aeronautics and Space Administration (NASA) Lewis Research Center being poured in a shop inside the Technical Services Building. Materials technology is an important element in the successful development of both advanced airbreathing and rocket propulsion systems. An array of dependable materials is needed to build different types of engines for operation in diverse environments. NASA Lewis began investigating the characteristics of different materials shortly after World War II. In 1949 the materials research group was expanded into its own division. The Lewis researchers studied and tested materials in environments that simulated the environment in which they would operate. Lewis created two programs in the early 1960s to create materials for new airbreathing engines. One concentrated on high-temperature alloys and the other on cooling turbine blades. William Klopp, Peter Raffo, Lester Rubenstein, and Walter Witzke developed Tungsten RHC, the highest strength metal at temperatures over 3500 °F. The men received an IR-100 Award for their efforts. Similarly a cobalt-tungsten alloy was developed by the Fatigue and Alloys Research Branch. The result was a combination of high temperature strength and magnetic properties that were applicable for generator rotor application. John Freche invented and patented a nickel alloy while searching for high temperature metals for aerospace use. NASA agreed to a three-year deal which granted Union Carbide exclusive use of the new alloy before it became public property.

  16. Evaluation of ecosystem service based on scenario simulation of land use in Yunnan Province

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Liao, Xiaoli; Zhai, Tianlin

    2018-04-01

    Climate change and rapid urbanization are important factors constraining future land use. Scenario analysis, an important foundation for the optimization of land use, needs to account for both climatic and socio-economic factors. In this paper, the Markov model and the DLS (Simulation of Land System Dynamics) model are combined for the first time, and the land use pattern in 2020 is simulated based on land use data for 2000 and 2010 as well as the climate, soil, topography and socio-economic factors of Yunnan Province. Taking Yunnan Province as the case study area, we selected 12 driving factors by logistic regression; the land use demands and layout of Yunnan Province in 2020 were then forecast and simulated under a business-as-usual (BAU) scenario and a farmland protection (FP) scenario, and the changes in ecosystem service value were calculated. The results show that: (1) after regression analysis and a ROC (Relative Operating Characteristic) test, the 12 factors selected in this paper have a strong ability to explain land use change in Yunnan Province. (2) Under both scenarios, a significant reduction in arable land area is a common feature of future land use change in Yunnan Province, with construction land as the main destination land use type. Under the FP scenario, however, the current encroachment of construction land on arable land will be curbed. Compared with the changes from 2000 to 2010, the trends for arable land, forest land, water area, construction land and unused land will be the same under the two scenarios, whereas the trend for grassland is the opposite. (3) From 2000 to 2020, the value of ecosystem services in Yunnan Province is on the rise, but the ecosystem service value under the FP scenario is higher than under the BAU scenario. In general, land use in 2020 in Yunnan Province continues the pattern of 2010, but there are also significant spatial differences. Under the BAU scenario, new construction land is concentrated mainly south of Lijiang City and in the northeastern part of Kunming. Under the FP scenario, new construction land is concentrated near the Lashi dam in northern Yunnan Province, and the high-quality arable land in the valley will be better protected. The research results can provide a reference for the optimization of the land use pattern in Yunnan Province and a scientific basis for land use management and planning. Given the value of ecosystem services, a policy of strict arable land protection should be implemented, both to ensure food supply and to promote the healthy development of the ecological environment.
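
    A minimal sketch of the Markov step that such scenario studies rest on: project 2020 land-use areas from the 2010 state vector and a 2000-2010 transition matrix. The matrix and areas below are invented for illustration and are not the Yunnan data; the DLS spatial-allocation step is not shown.

        # Markov projection of land-use demand; P[i, j] is the probability
        # that class i in 2010 becomes class j by 2020 (illustrative values).
        import numpy as np

        classes = ["arable", "forest", "grass", "water", "built", "unused"]
        area_2010 = np.array([30.0, 40.0, 15.0, 3.0, 7.0, 5.0])  # e.g. 1e3 km^2

        P = np.array([
            [0.90, 0.02, 0.01, 0.00, 0.06, 0.01],
            [0.01, 0.96, 0.01, 0.00, 0.01, 0.01],
            [0.03, 0.04, 0.90, 0.00, 0.02, 0.01],
            [0.00, 0.00, 0.00, 0.98, 0.01, 0.01],
            [0.00, 0.00, 0.00, 0.00, 1.00, 0.00],
            [0.02, 0.02, 0.02, 0.01, 0.02, 0.91],
        ])
        assert np.allclose(P.sum(axis=1), 1.0)  # rows must be probability vectors

        area_2020 = area_2010 @ P  # projected demand for each class
        for name, a10, a20 in zip(classes, area_2010, area_2020):
            print(f"{name:7s} {a10:6.1f} -> {a20:6.1f}")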

  17. Simulation Model for DVB-SH Systems Based on OFDM for Analyzing Quasi-error-free Communication over Different Channel Models

    NASA Astrophysics Data System (ADS)

    Bačić, Iva; Malarić, Krešimir; Dumić, Emil

    2014-05-01

    Mobile users today expect a wide range of multimedia services to be available in different mobility scenarios, among them mobile TV. Digital Video Broadcasting - Satellite services to Handhelds (DVB-SH) is designed to provide mobile TV, supporting a wide range of mobile multimedia services such as audio and data broadcasting as well as file downloading. In this paper we present our simulation model for the performance evaluation of the DVB-SH system following the ETSI standard EN 302 583. The simulation model includes the complete DVB-SH system, supporting all standardized system modes and parameters. From transmitter to receiver, the information may be sent over different channel models, thus simulating real-world scenarios. To the best of the authors' knowledge, this is the first complete model of the DVB-SH system that includes all standardized system parameters; it may be used for examining real DVB-SH communication as well as for educational purposes.
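
    As a hedged illustration of the kind of link-level experiment such a model runs, the sketch below sends QPSK-modulated OFDM symbols over an AWGN channel and counts bit errors. It omits everything DVB-SH-specific (channel coding, interleaving, the standardized modes of EN 302 583); all parameters are assumptions.

        # QPSK/OFDM over AWGN: uncoded BER at a chosen Eb/N0 (toy values).
        import numpy as np

        rng = np.random.default_rng(0)
        n_carriers, n_symbols, ebn0_db = 1024, 200, 6.0

        bits = rng.integers(0, 2, size=(n_symbols, n_carriers, 2))
        qpsk = ((2 * bits[..., 0] - 1) + 1j * (2 * bits[..., 1] - 1)) / np.sqrt(2)

        tx = np.fft.ifft(qpsk, axis=1) * np.sqrt(n_carriers)  # OFDM modulation

        ebn0 = 10 ** (ebn0_db / 10)
        noise_var = 1 / (2 * ebn0)  # Es = 1, 2 bits/symbol, so Eb = 1/2
        noise = np.sqrt(noise_var / 2) * (rng.standard_normal(tx.shape)
                                          + 1j * rng.standard_normal(tx.shape))
        rx = np.fft.fft(tx + noise, axis=1) / np.sqrt(n_carriers)  # demodulate

        bits_hat = np.stack([(rx.real > 0), (rx.imag > 0)], axis=-1).astype(int)
        print("BER:", np.mean(bits_hat != bits))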

  18. Three-dimensional modelling of the hydrodynamics of the Southern Bight of the North Sea: first results

    NASA Astrophysics Data System (ADS)

    Ivanov, Evgeny; Capet, Arthur; Barth, Alexander; Delhez, Eric; Soetaert, Karline; Grégoire, Marilaure

    2017-04-01

    In the frame of the Belgian research project FaCE-It (Functional biodiversity in a Changing sedimentary Environment: Implications for biogeochemistry and food webs in a managerial setting), the impact of dredging activities and offshore wind farm installation on the spatial distribution of sediment grain size, biodiversity and biogeochemistry will be estimated in the Southern Bight of the North Sea (SBNS), with a focus on the Belgian Coastal Zone (BCZ). To reach this goal, the three-dimensional hydrodynamical model ROMS-COAWST is implemented in the SBNS in order to simulate the complex hydrodynamics and sediment transport. Two levels of nesting are used to reach a resolution of 250 m in the BCZ. The model is forced at the air-sea interface by the 6-hourly ECMWF ERA-Interim atmospheric dataset and at the open boundaries by the coarse-resolution model results available from CMEMS (Copernicus Marine Environment Monitoring Service), and also considers tides and four main rivers (Scheldt, Rhine with Maas, Thames and Seine). Two types of simulations have been performed: a 10-year climatological simulation and a simulation over 2003-2013 to investigate the interannual dynamics. The model skill is evaluated by comparing its outputs to historical data (e.g., salinity, temperature and currents) from remote sensing and in-situ measurements. The sediment transport module will then be implemented and its outputs compared to historical and newly collected (in the frame of FaCE-It) observations of grain size distribution as well as to satellite Suspended Particulate Matter (SPM) images. This will allow assessing the impact of substrate modification due to offshore human activities at local and regional scales.

  19. An analysis of the low-earth-orbit communications environment

    NASA Astrophysics Data System (ADS)

    Diersing, Robert Joseph

    Advances in microprocessor technology and the availability of launch opportunities have caused interest in low-earth-orbit satellite-based communications systems to increase dramatically during the past several years. In this research the capabilities of two low-cost, store-and-forward LEO communications satellites operating in the public domain are examined: PACSAT-1 (operated by the Radio Amateur Satellite Corporation) and UoSAT-3 (operated by the University of Surrey, England, Electrical Engineering Department). The file broadcasting and file transfer facilities are examined in detail and a simulation model of the downlink traffic pattern is developed. The simulator will aid the assessment of changes in design and implementation for other systems. The development of the downlink traffic simulator rests on three major parts. First is a characterization of the low-earth-orbit operating environment, along with preliminary measurements of the PACSAT-1 and UoSAT-3 systems, including: satellite visibility constraints on communications, monitoring equipment configuration, link margin computations, determination of block and bit error rates, and establishment of typical data capture rates for ground stations using computer-pointed directional antennas and fixed omnidirectional antennas. Second, arrival rates for successful and unsuccessful file server connections are established along with transaction service times. Downlink traffic is further characterized by measuring: frame and byte counts for all data-link-layer traffic; 30-second-interval average response time for all traffic and for file server traffic only; file server response time on a per-connection basis; and retry rates for information and supervisory frames. Finally, the model is verified by comparison with measurements of actual traffic not previously used in the model-building process. The simulator is then used to predict operation of the PACSAT-1 satellite with modifications to the original design.
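
    A toy sketch in the spirit of the traffic characterization described: estimate daily data capture for a store-and-forward LEO ground station from pass duration, link rate, and frame error rate. Every numeric value below is an assumption for illustration, not a measurement from the dissertation.

        # Estimate daily data capture from pass geometry and link quality.
        import random

        rng = random.Random(42)
        passes_per_day = 4
        mean_pass_s = 600        # usable time per pass, seconds (assumed)
        link_bps = 9600          # downlink rate (assumed)
        frame_bits = 2048
        frame_error_rate = 0.05  # probability a frame is lost (assumed)

        def bits_captured_in_pass():
            duration = rng.gauss(mean_pass_s, 120)  # pass length varies
            frames = int(max(duration, 0) * link_bps / frame_bits)
            good = sum(rng.random() > frame_error_rate for _ in range(frames))
            return good * frame_bits

        daily = [sum(bits_captured_in_pass() for _ in range(passes_per_day))
                 for _ in range(30)]
        print("mean capture per day: %.1f kB" % (sum(daily) / len(daily) / 8 / 1000))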

  20. Nursing Unit Environment Associated with Provision of Language Services in Pediatric Hospices.

    PubMed

    Lindley, Lisa C; Held, Mary L; Henley, Kristen M; Miller, Kathryn A; Pedziwol, Katherine E; Rumley, Laurie E

    2017-04-01

    Provision of language services in pediatric hospice enables nurses to communicate effectively with patients who have limited English proficiency. Language barriers contribute to ethnic disparities in health care. While language service use corresponds with improved patient comprehension of illness and care options, we lack an understanding of how the nurse work environment affects the provision of these services. Data were obtained from the 2007 National Home and Hospice Care Survey and included a study sample of 1251 pediatric hospice agencies. Variable selection was guided by structural contingency theory, which posits that organizational effectiveness is dependent upon how well an organization's structure relates to its context. Using multivariate logistic regression, we analyzed the extent to which nursing unit environment predicted provision of translation services and interpreter services. The majority of hospices provided translation services (74.9 %) and interpreter services (87.1 %). Four variables predicted translation services: registered nurse (RN) unit size, RN leadership, RN medical expertise, and for-profit status. RN medical expertise and having a safety climate within the hospice corresponded with provision of interpreter services. Findings indicate that nursing unit environment predicts provision of language services. Hospices with more specialized RNs and a stronger safety climate might include staff who are dedicated to best care provision, including language services. This study provides valuable data on the nurse work environment as a predictor of language services provision, which can better serve patients with limited English proficiency and ultimately reduce ethnic disparities in end-of-life care for children and their families.

  1. Nursing unit environment associated with provision of language services in pediatric hospices

    PubMed Central

    Lindley, Lisa C.; Held, Mary L.; Henley, Kristen M.; Miller, Kathryn A.; Pedziwol, Katherine E.; Rumley, Laurie E.

    2016-01-01

    Background Provision of language services in pediatric hospice enables nurses to communicate effectively with patients who have limited English proficiency. Language barriers contribute to ethnic disparities in health care. While language service use corresponds with improved patient comprehension of illness and care options, we lack an understanding of how the nurse work environment affects the provision of these services. Methods Data were obtained from the 2007 National Home and Hospice Care Survey and included a study sample of 1,251 pediatric hospice agencies. Variable selection was guided by Structural Contingency Theory, which posits that organizational effectiveness is dependent upon how well an organization’s structure relates to its context. Using multivariate logistic regression, we analyzed the extent to which nursing unit environment predicted provision of translation services and interpreter services. Results The majority of hospices provided translation services (74.9%) and interpreter services (87.1%). Four variables predicted translation services: registered nurse (RN) unit size, RN leadership, RN medical expertise, and for-profit status. RN medical expertise and having a safety climate within the hospice corresponded with provision of interpreter services. Conclusions Findings indicate that nursing unit environment predicts provision of language services. Hospices with more specialized RNs and a stronger safety climate might include staff who are dedicated to best care provision, including language services. This study provides valuable data on the nurse work environment as a predictor of language services provision, which can better serve patients with limited English proficiency, and ultimately reduce ethnic disparities in end-of-life care for children and their families. PMID:27059050

  2. Virtual environments simulation in research reactor

    NASA Astrophysics Data System (ADS)

    Muhamad, Shalina Bt. Sheik; Bahrin, Muhammad Hannan Bin

    2017-01-01

    Virtual-reality-based simulations are interactive and engaging, and have useful potential for improving safety training. Virtual reality technology can be used to train workers who are unfamiliar with the physical layout of an area. In this study, a simulation program based on a virtual environment of a research reactor was developed. The platform used for the virtual simulation is the 3DVia software, whose rendering capabilities, physics for movement and collision, and interactive navigation features have been taken advantage of. A real research reactor was virtually modelled and simulated, with avatar models adopted to simulate walking. Collision detection algorithms were developed for various parts of the 3D building and the avatars to restrain the avatars to certain regions of the virtual environment. A user can control an avatar to move around inside the virtual environment. This work can thus assist in the training of personnel, for example in evaluating the radiological safety of the research reactor facility.
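
    A minimal sketch of the kind of collision test used to keep an avatar out of restricted regions: axis-aligned bounding boxes (AABB), with a movement step rejected when it would penetrate a wall. The reactor geometry and the 3DVia physics engine are not reproduced; all boxes below are invented.

        # AABB collision test: two boxes overlap iff they overlap on every axis.
        from dataclasses import dataclass

        @dataclass
        class AABB:
            xmin: float
            ymin: float
            zmin: float
            xmax: float
            ymax: float
            zmax: float

            def intersects(self, other: "AABB") -> bool:
                return (self.xmin <= other.xmax and self.xmax >= other.xmin and
                        self.ymin <= other.ymax and self.ymax >= other.ymin and
                        self.zmin <= other.zmax and self.zmax >= other.zmin)

        walls = [AABB(0, 0, 0, 10, 0.2, 3), AABB(0, 0, 0, 0.2, 10, 3)]

        def try_move(avatar: AABB, dx: float, dy: float) -> AABB:
            moved = AABB(avatar.xmin + dx, avatar.ymin + dy, avatar.zmin,
                         avatar.xmax + dx, avatar.ymax + dy, avatar.zmax)
            # reject the step if it would penetrate any wall
            return avatar if any(moved.intersects(w) for w in walls) else moved

        avatar = AABB(5, 5, 0, 5.5, 5.5, 1.8)
        avatar = try_move(avatar, 0.0, -4.9)  # blocked by the wall along y = 0
        print(avatar)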

  3. Hardware-in-the-Loop Rendezvous Tests of a Novel Actuators Command Concept

    NASA Astrophysics Data System (ADS)

    Gomes dos Santos, Willer; Marconi Rocco, Evandro; Boge, Toralf; Benninghoff, Heike; Rems, Florian

    2016-12-01

    Integration, test and validation results, in a real-time environment, of a novel concept for spacecraft control are presented in this paper. The proposed method simultaneously commands a group of actuators, optimizing a given set of objective functions based on a multiobjective optimization technique. Since close-proximity maneuvers play an important role in orbital servicing missions, the entire GNC system has been integrated and tested on a hardware-in-the-loop (HIL) rendezvous and docking simulator known as the European Proximity Operations Simulator (EPOS). During the test campaign at the EPOS facility, a visual camera provided the measurements needed to calculate the relative position with respect to the target satellite during closed-loop simulations. In addition, two different spacecraft control configurations are considered in this paper: a thruster reaction control system and a mixed-actuator mode that includes thrusters, reaction wheels, and magnetic torque rods. At EPOS, results of HIL closed-loop tests have demonstrated that a safe and stable rendezvous approach can be achieved with the proposed GNC loop.

  4. Clean assembly and integration techniques for the Hubble Space Telescope High Fidelity Mechanical Simulator

    NASA Technical Reports Server (NTRS)

    Hughes, David W.; Hedgeland, Randy J.

    1994-01-01

    A mechanical simulator of the Hubble Space Telescope (HST) Aft Shroud was built to perform verification testing of the Servicing Mission Scientific Instruments (SI's) and to provide a facility for astronaut training. All assembly, integration, and test activities occurred under the guidance of a contamination control plan, and all work was reviewed by a contamination engineer prior to implementation. An integrated approach was followed in which materials selection, manufacturing, assembly, subsystem integration, and end product use were considered and controlled to ensure that the use of the High Fidelity Mechanical Simulator (HFMS) as a verification tool would not contaminate mission critical hardware. Surfaces were cleaned throughout manufacturing, assembly, and integration, and reverification was performed following major activities. Direct surface sampling was the preferred method of verification, but access and material constraints led to the use of indirect methods as well. Although surface geometries and coatings often made contamination verification difficult, final contamination sampling and monitoring demonstrated the ability to maintain a class M5.5 environment with surface levels less than 400B inside the HFMS.

  5. A novel method for characterizing the impact response of functionally graded plates

    NASA Astrophysics Data System (ADS)

    Larson, Reid A.

    Functionally graded material (FGM) plates are advanced composites with properties that vary continuously through the thickness of the plate. Metal-ceramic FGM plates have been proposed for use in thermal protection systems where a metal-rich interior surface of the plate gradually transitions to a ceramic-rich exterior surface of the plate. The ability of FGMs to resist impact loads must be demonstrated before using them in high-temperature environments in service. This dissertation presents a novel technique by which the impact response of FGM plates is characterized for low-velocity, low- to medium-energy impact loads. An experiment was designed where strain histories in FGM plates were collected during impact events. These strain histories were used to validate a finite element simulation of the test. A parameter estimation technique was developed to estimate local material properties in the anisotropic, non-homogenous FGM plates to optimize the finite element simulations. The optimized simulations captured the physics of the impact events. The method allows research & design engineers to make informed decisions necessary to implement FGM plates in aerospace platforms.

  6. A new Scheme for ATLAS Trigger Simulation using Legacy Code

    NASA Astrophysics Data System (ADS)

    Galster, Gorm; Stelzer, Joerg; Wiedenmann, Werner

    2014-06-01

    Analyses at the LHC which search for rare physics processes or determine with high precision Standard Model parameters require accurate simulations of the detector response and the event selection processes. The accurate determination of the trigger response is crucial for the determination of overall selection efficiencies and signal sensitivities. For the generation and the reconstruction of simulated event data, the most recent software releases are usually used to ensure the best agreement between simulated data and real data. For the simulation of the trigger selection process, however, ideally the same software release that was deployed when the real data were taken should be used. This potentially requires running software dating many years back. Having a strategy for running old software in a modern environment thus becomes essential when data simulated for past years represent a sizable fraction of the total. We examined the requirements and possibilities for such a simulation scheme within the ATLAS software framework and successfully implemented a proof-of-concept simulation chain. One of the greatest challenges was the choice of a data format which promises long-term compatibility with old and new software releases. Over the time periods envisaged, data format incompatibilities are also likely to emerge in databases and other external support services. Software availability may become an issue when, for example, support for the underlying operating system ends. In this paper we present the encountered problems and developed solutions, and discuss proposals for future development. Some ideas reach beyond the retrospective trigger simulation scheme in ATLAS as they also touch more general aspects of data preservation.

  7. MEETING REPORT: OMG Technical Committee Meeting in Orlando, FL, sees significant enhancement to CORBA

    NASA Astrophysics Data System (ADS)

    1998-06-01

    The Object Management Group (OMG) Platform Technology Committee (PTC) ratified its support for a new asynchronous messaging service for CORBA at OMG's recent Technical Committee Meeting in Orlando, FL. The meeting, held from 8 - 12 June, saw the PTC send the Messaging Service out for a final vote among the OMG membership. The Messaging Service, which will integrate Message Oriented Middleware (MOM) with CORBA, will give CORBA a true asynchronous messaging capability - something of great interest to users and developers. Formal adoption of the specification will most likely occur by the end of the year.

    The Messaging Service

    The Messaging Service, when adopted, will be the world's first standard for Message Oriented Middleware and will give CORBA a true asynchronous messaging capability. Asynchronous messaging allows developers to build simpler, richer client environments. With asynchronous messaging there is less need for multi-threaded clients because the Asynchronous Method Invocation is non-blocking, meaning the client thread can continue work while the application waits for a reply. David Curtis, Director of Platform Technology for OMG, said: `This messaging service is one of the more valuable additions to CORBA. It enhances CORBA's existing asynchronous messaging capabilities which is a feature of many popular message oriented middleware products. This service will allow better integration between ORBs and MOM products. This enhanced messaging capability will only make CORBA more valuable for builders of distributed object systems.' The Messaging Service is one of sixteen technologies currently being worked on by the PTC. Additionally, seventeen Revision Task Forces (RTFs) are working on keeping OMG specifications up to date. The purpose of these Revision Task Forces is to take input from the implementors of OMG specifications and clarify or make necessary changes based on the implementors' input. The RTFs also ensure that the specifications remain up to date with changes in the OMA and with industry advances in general.

    Domain work

    Thirty-eight technology processes are ongoing in the Domain Technology Committee (DTC). These range over a wide variety of industries, including healthcare, telecommunications, life sciences, manufacturing, business objects, electronic commerce, finance, transportation, utilities, and distributed simulation. These processes aim to enhance CORBA's value and provide interoperability for specific vertical industries. At the Orlando meeting, the Domain Technology Committee issued the following requests to industry: Telecom Wireless Access Request For Information (RFI); Statistics RFI; Clinical Image Access Service Request For Proposal (RFP); Distributed Simulation Request For Comment (RFC). The newly formed Statistics group at OMG plans to standardize interfaces for statistical services in CORBA, and its RFI, to which any person or company can respond, asks for input and guidance as it starts this work, which will impact the broad spectrum of industries and processes that use statistics. The Clinical Image Access Service will standardize access to important medical images including digital x-rays, MRI scans, and other formats. The Distributed Simulation RFC, when complete, will establish the Distributed Simulation High-Level Architecture of the US Defense Modeling and Simulation Office as an OMG standard. For the next 90 days any person or company, not only OMG members, may submit comments on the submission.
The OMG looks forward to its next meeting to be held in Helsinki, Finland, on 27 - 31 July and hosted by Nokia. OMG encourages anyone considering OMG membership to attend the meeting as a guest. For more information on attending call +1-508-820-4300 or e-mail info@omg.org. Note: descriptions for all RFPs, RFIs and RFCs in progress are available for viewing on the OMG Website at http://www.omg.org/schedule.htm, or contact OMG for a copy of the `Work in Progress' document. For more information on the OMG Technology Process please call Jeurgen Boldt, OMG Process Manager, at +1-508-820-4300 or email jeurgen@omg.org.
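
    As an aside on the non-blocking pattern described above - the client thread continuing its work while a reply is outstanding - the following sketch illustrates the idea using Python's concurrent.futures as a stand-in; it does not use CORBA or the Messaging Service itself, and remote_invocation is a hypothetical placeholder for a remote call.

        # Non-blocking invocation with a future; the client is not blocked
        # while the "remote" work proceeds on another thread.
        from concurrent.futures import ThreadPoolExecutor
        import time

        def remote_invocation(x):
            time.sleep(1.0)  # stands in for network and server latency
            return x * x

        with ThreadPoolExecutor() as pool:
            future = pool.submit(remote_invocation, 7)  # returns immediately
            print("client thread keeps working...")     # no blocking here
            print("reply:", future.result())            # rendezvous with reply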

  8. Low earth orbit satellite/terrestrial mobile service compatibility

    NASA Technical Reports Server (NTRS)

    Sheriff, R. E.; Gardiner, J. G.

    1993-01-01

    Digital cellular mobile 'second generation' systems are now gradually being introduced into service; one such example is GSM, which will provide a digital voice and data service throughout Europe. Total coverage is not expected to be achieved until the mid-1990s, which has resulted in several proposals for the integration of GSM with a geostationary satellite service. Unfortunately, because terrestrial and space systems have been designed to optimize their performance for their particular environments, integration between a satellite and a terrestrial system is unlikely to develop further than the satellite providing a back-up service. This lack of system compatibility is now being addressed by the designers of third-generation systems. The next generation of mobile systems, referred to as FPLMTS (future public land mobile telecommunication systems) by the CCIR and UMTS (universal mobile telecommunication system) in European research programs, is intended to provide inexpensive, hand-held terminals that can operate in satellite, cellular, or cordless environments. This poses several challenges for system designers, not least in the choice of multiple-access technique and in power requirements. Satellite mobile services have been dominated by the geostationary orbit. Recently, however, a number of low-earth-orbit configurations have been proposed, for example Iridium. These systems are likely to be fully operational by the turn of the century, in time for the implementation of FPLMTS. The developments in LEO mobile satellite service technology were recognized at WARC-92 with the allocation of specific frequency bands for 'big' LEOs, as well as a frequency allocation for FPLMTS that included a specific satellite allocation. When considering integrating a space service into the terrestrial network, LEOs certainly have their attractions: they can provide global coverage, the round-trip delay is of the order of tens of milliseconds, and good visibility to the satellite is usually possible. This has resulted in their detailed investigation in the European COST 227 program and in the work program of the European Telecommunications Standards Institute (ETSI). This paper considers the system implications of integrating a LEO mobile service with a terrestrial service. Results are presented from simulation software to show how a particular orbital configuration affects the performance of the system in terms of area coverage and visibility to a terminal for various locations and minimum elevation angles. Possible network topologies are then proposed for an integrated satellite/terrestrial network.
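
    The geometry behind such coverage and visibility figures is compact enough to sketch: for a circular orbit, the great-circle radius of the coverage footprint follows from the altitude and the minimum elevation angle. The altitudes below are illustrative, not those analyzed in the paper.

        # Coverage footprint radius of a circular-orbit satellite versus
        # minimum elevation angle, from spherical geometry.
        import math

        R_E = 6371.0  # Earth radius, km

        def coverage_radius_km(altitude_km: float, min_elev_deg: float) -> float:
            """Great-circle radius of the coverage footprint."""
            e = math.radians(min_elev_deg)
            # Earth-central angle: lambda = acos(R_E cos(e) / (R_E + h)) - e
            lam = math.acos(R_E * math.cos(e) / (R_E + altitude_km)) - e
            return R_E * lam

        for h in (780, 1400):  # e.g. an Iridium-like altitude and a higher LEO
            for elev in (0, 10, 20):
                print(f"h={h} km, elev>={elev} deg -> "
                      f"{coverage_radius_km(h, elev):7.1f} km")

    Raising the required minimum elevation angle shrinks the footprint sharply, which is why it drives both constellation size and the visibility statistics a terminal sees.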

  9. A novel downlink scheduling strategy for traffic communication system based on TD-LTE technology.

    PubMed

    Chen, Ting; Zhao, Xiangmo; Gao, Tao; Zhang, Licheng

    2016-01-01

    Many existing classical scheduling algorithms can obtain good system throughput and user fairness; however, they are not designed for the traffic transportation environment and cannot consider whether the transmission performance of various information flows meets the combined requirements of traffic safety and delay tolerance. This paper proposes a novel downlink scheduling strategy for a traffic communication system based on TD-LTE technology, which performs two classification mappings for the various information flows in the eNodeB: first, it associates every information flow packet with a traffic safety importance weight according to its relevance to traffic safety; second, it associates every traffic information flow with a service type importance weight according to its quality of service (QoS) requirements. Once a connection is established, at every scheduling moment the scheduler periodically decides the scheduling order of all buffers' head-of-line packets according to the instantaneous value of a scheduling importance weight function calculated by the proposed algorithm. Simulations of different scenarios verify that the proposed algorithm provides superior differentiated transmission service and a reliable QoS guarantee to information flows with different traffic safety levels and service types, making it more suitable for the traffic transportation environment than the popular proportional fair (PF) algorithm. With limited wireless resources, information flows closely related to traffic safety always obtain priority scheduling in a timely manner, which helps make passengers' journeys safer. Moreover, the proposed algorithm not only achieves flow throughput and user fairness almost equal to those of the PF algorithm, without significant differences, but also provides a better real-time transmission guarantee to real-time information flows.
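
    A hypothetical illustration of the two-level weighting idea: each buffered flow carries a traffic-safety weight and a service-type (QoS) weight, and the scheduler serves flows in order of a combined metric built on the proportional-fair (PF) core. The multiplicative combining rule and all weight values below are assumptions, not the paper's actual weight function.

        # Rank head-of-line flows by safety_weight * qos_weight * PF metric.
        FLOWS = [
            # (name, safety_weight, qos_weight, instant_rate, avg_throughput)
            ("collision-warning", 3.0, 2.0, 1.2e6, 0.8e6),
            ("signal-phase-info", 2.0, 1.5, 2.0e6, 1.5e6),
            ("onboard-video",     1.0, 1.2, 6.0e6, 4.0e6),
            ("web-browsing",      1.0, 1.0, 5.0e6, 1.0e6),
        ]

        def priority(safety_w, qos_w, rate, avg_thr):
            pf = rate / avg_thr           # proportional-fair core metric
            return safety_w * qos_w * pf  # safety/QoS weights bias the PF order

        order = sorted(FLOWS, key=lambda f: priority(*f[1:]), reverse=True)
        for name, *_ in order:
            print(name)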

  10. A new airborne laser rangefinder dynamic target simulator for non-stationary environment

    NASA Astrophysics Data System (ADS)

    Ma, Pengge; Pang, Dongdong; Yi, Yang

    2017-11-01

    For non-stationary environment simulation in laser rangefinder product testing, a new dynamic target simulation system is studied. First, the three-pulse laser ranging principle, the composition of the laser target signal, and its mathematical representation are introduced. Then, the actual non-stationary working environment of the laser rangefinder is analyzed, and it is pointed out that real sunlight background clutter and the target shielding effect in the laser echo are the main influencing factors. After that, the dynamic laser target signal simulation method is given. Finally, the implementation of an automatic test system based on an arbitrary waveform generator is described. Practical application shows that the new echo-signal automatic test system can simulate the real ranging environment of a laser rangefinder and is suitable for performance testing of products.
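
    An illustrative synthesis of a laser-echo test waveform - a delayed Gaussian return pulse plus background-light clutter - of the kind an arbitrary waveform generator would replay. All parameters are invented for the sketch; the three-pulse scheme and shielding effects are not modeled.

        # Build a delayed return pulse with additive clutter and locate its peak.
        import numpy as np

        fs = 1e9                   # 1 GS/s sample rate (assumed)
        t = np.arange(0, 20e-6, 1 / fs)
        range_m = 1500.0
        delay = 2 * range_m / 3e8  # round-trip time for the simulated target

        echo = np.exp(-((t - delay) / 10e-9) ** 2)  # 10 ns Gaussian return
        clutter = 0.05 * np.random.default_rng(0).standard_normal(t.size)
        waveform = echo + clutter                    # signal fed to the AWG
        print("peak at %.2f us (expected %.2f us)" %
              (t[np.argmax(waveform)] * 1e6, delay * 1e6))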

  11. Interim Service ISDN Satellite (ISIS) simulator development for advanced satellite designs and experiments

    NASA Technical Reports Server (NTRS)

    Pepin, Gerard R.

    1992-01-01

    The simulation development associated with the network models of both the Interim Service Integrated Services Digital Network (ISDN) Satellite (ISIS) and the Full Service ISDN Satellite (FSIS) architectures is documented. The ISIS Network Model design represents satellite systems like the Advanced Communications Technology Satellite (ACTS) orbiting switch. The FSIS architecture, the ultimate aim of this element of the Satellite Communications Applications Research (SCAR) Program, moves all control and switching functions on-board the next generation ISDN communications satellite. The technical and operational parameters for the advanced ISDN communications satellite design will be obtained from the simulation of ISIS and FSIS engineering software models for their major subsystems. Discrete event simulation experiments will be performed with these models using various traffic scenarios, design parameters, and operational procedures. The data from these simulations will be used to determine the engineering parameters for the advanced ISDN communications satellite.

  12. An Experiential Exercise in Service Environment Design

    ERIC Educational Resources Information Center

    Fowler, Kendra; Bridges, Eileen

    2012-01-01

    A new experiential exercise affords marketing students the opportunity to learn to design service environments. The exercise is appropriate for a variety of marketing courses and is especially beneficial in teaching services marketing because the proposed activity complements two other exercises widely used in this course. Service journal and…

  13. Development of Final Ecosystem Goods and Services Indicators for Estuaries and Coasts

    EPA Science Inventory

    Ecosystem services are those goods and services produced by the environment that benefit people. The concept aims to aid in the assessment of tradeoffs based on goods and services produced by the environment. Over the past seven years EPA has developed a framework for classificat...

  14. LEGEND, a LEO-to-GEO Environment Debris Model

    NASA Technical Reports Server (NTRS)

    Liou, Jer Chyi; Hall, Doyle T.

    2013-01-01

    LEGEND (LEO-to-GEO Environment Debris model) is a three-dimensional orbital debris evolutionary model that is capable of simulating the historical and future debris populations in the near-Earth environment. The historical component in LEGEND adopts a deterministic approach to mimic the known historical populations. Launched rocket bodies, spacecraft, and mission-related debris (rings, bolts, etc.) are added to the simulated environment. Known historical breakup events are reproduced, and fragments down to 1 mm in size are created. The LEGEND future projection component adopts a Monte Carlo approach and uses an innovative pair-wise collision probability evaluation algorithm to simulate the future breakups and the growth of the debris populations. This algorithm is based on a new "random sampling in time" approach that preserves characteristics of the traditional approach and captures the rapidly changing nature of the orbital debris environment. LEGEND is a Fortran 90-based numerical simulation program. It operates in a UNIX/Linux environment.
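
    A schematic of the "random sampling in time" idea: sample random epochs, propagate both objects (here reduced to planar circular motion), and count close approaches. This is a geometric toy, not LEGEND's pair-wise algorithm, and every numeric value below is an assumption.

        # Monte Carlo close-approach fraction for two nearly co-orbital objects.
        import math, random

        rng = random.Random(7)

        def position(sma_km, period_s, phase, t):
            # planar circular orbit, adequate for a toy geometric test
            a = 2 * math.pi * t / period_s + phase
            return (sma_km * math.cos(a), sma_km * math.sin(a))

        def close_fraction(obj1, obj2, threshold_km=1.0,
                           samples=100_000, span_s=86_400):
            hits = 0
            for _ in range(samples):
                t = rng.uniform(0, span_s)  # random epoch in the window
                p1 = position(*obj1, t)
                p2 = position(*obj2, t)
                hits += math.dist(p1, p2) < threshold_km
            return hits / samples           # fraction of time spent "close"

        a = (7000.0, 5828.0, 0.0)  # (semi-major axis km, period s, phase rad)
        b = (7000.5, 5829.0, 0.0)  # slightly different orbit, so they drift apart
        print("close-approach fraction:", close_fraction(a, b))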

  15. Enabling Big Geoscience Data Analytics with a Cloud-Based, MapReduce-Enabled and Service-Oriented Workflow Framework

    PubMed Central

    Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew

    2015-01-01

    Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data is essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amount of data is both computing- and data-intensive, in that data analytics requires complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework, techniques are proposed that leverage cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers; a MapReduce-based algorithm framework is developed to support parallel processing of geoscience data; and a service-oriented workflow architecture is built to support on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing the data processing time as well as simplifying the analytical procedures for geoscientists. PMID:25742012
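
    A conceptual sketch of the MapReduce pattern the framework builds on, applied to toy records; a real deployment runs over HBase/Hadoop in the cloud, which this stand-alone Python illustration does not attempt.

        # Map -> shuffle (group by key) -> reduce, computing per-key means.
        from collections import defaultdict

        # (variable, year, value) records standing in for multi-dimensional data
        records = [("precip", 2001, 3.2), ("precip", 2001, 2.8),
                   ("precip", 2002, 4.1), ("temp", 2001, 14.9)]

        def map_phase(rec):
            variable, year, value = rec
            yield (variable, year), (value, 1)  # emit key -> partial sum

        def reduce_phase(key, partials):
            total = sum(v for v, _ in partials)
            count = sum(n for _, n in partials)
            return key, total / count           # mean per (variable, year)

        shuffled = defaultdict(list)
        for rec in records:
            for key, val in map_phase(rec):
                shuffled[key].append(val)       # shuffle: group by key

        for key, partials in shuffled.items():
            print(reduce_phase(key, partials))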

  16. Information Management for Unmanned Systems: Combining DL-Reasoning with Publish/Subscribe

    NASA Astrophysics Data System (ADS)

    Moser, Herwig; Reichelt, Toni; Oswald, Norbert; Förster, Stefan

    Sharing capabilities and information between collaborating entities by using modern information and communication technology is a core principle in complex distributed civil or military mission scenarios. Previous work proved the suitability of Service-oriented Architectures for modelling and sharing the participating entities' capabilities. Although it provides a satisfactory model for capability sharing, pure service-orientation curtails expressiveness for information exchange compared with dedicated data-centric communication principles. In this paper we introduce an Information Management System which combines OWL ontologies and automated reasoning with Publish/Subscribe systems, providing a shared but decoupled data model. While confirming existing related research results, we emphasise the novel application of, and the lack of practical experience with, Semantic Web technologies in areas other than originally intended, namely aiding decision support and software design in the context of a mission scenario for an unmanned system. Experiments within a complex simulation environment show the immediate benefits of a semantic information-management and -dissemination platform: clear separation of concerns in code and data model, increased service re-usability and extensibility, and regulation of data flow and the corresponding system behaviour through declarative rules.
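
    A minimal sketch of the decoupled publish/subscribe data model described above; DL reasoning over OWL ontologies is replaced here by plain topic-string matching, so only the dissemination side is mirrored. Topic names and payloads are invented.

        # Broker-mediated pub/sub: producers and consumers never reference
        # each other directly, only the shared topic.
        from collections import defaultdict
        from typing import Any, Callable

        class Broker:
            def __init__(self):
                self._subs: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

            def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
                self._subs[topic].append(handler)  # consumer declares interest

            def publish(self, topic: str, payload: Any) -> None:
                for handler in self._subs[topic]:  # producer knows no consumers
                    handler(payload)

        broker = Broker()
        broker.subscribe("track/air", lambda msg: print("UAV-1 got:", msg))
        broker.subscribe("track/air", lambda msg: print("C2 got:", msg))
        broker.publish("track/air", {"id": 42, "pos": (48.1, 11.6)})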

  17. Ambulatory Healthcare Utilization in the United States: A System Dynamics Approach

    NASA Technical Reports Server (NTRS)

    Diaz, Rafael; Behr, Joshua G.; Tulpule, Mandar

    2011-01-01

    Ambulatory health care needs within the United States are served by a wide range of hospitals, clinics, and private practices. The Emergency Department (ED) functions as an important point of supply for ambulatory healthcare services. Growth in our aging populations as well as changes stemming from broader healthcare reform are expected to continue the trend of congestion and increasing demand for ED services. While congestion is, in part, a manifestation of unmatched demand, the state of the alignment between the demand for, and supply of, emergency department services affects quality of care and profitability. The central focus of this research is to explain the salient factors at play within the dynamic demand-supply tensions within which ambulatory care is provided in an Emergency Department. A System Dynamics (SD) simulation model is used to capture the complexities of the intricate balances and conditional effects at play within the demand-supply emergency department environment. Conceptual clarification of the forces driving the elements within the system, quantification of these elements, and empirical capture of the interactions among them provide actionable knowledge for operational and strategic decision-making.
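
    A toy stock-and-flow sketch in the spirit of the System Dynamics approach: one stock (patients waiting in the ED) driven by an arrival inflow and a capacity-limited treatment outflow. The rates are illustrative; the paper's actual model structure is not reproduced.

        # Euler-integrated stock-and-flow model of ED congestion.
        arrivals_per_hr = 12.0
        treatment_capacity_per_hr = 10.0
        dt, hours = 0.25, 48
        waiting = 5.0  # initial stock of waiting patients

        for step in range(int(hours / dt)):
            inflow = arrivals_per_hr
            # outflow saturates at capacity; drains the stock when demand is low
            outflow = min(treatment_capacity_per_hr, waiting / dt)
            waiting += (inflow - outflow) * dt
            if step % int(8 / dt) == 0:  # report every 8 simulated hours
                print(f"t={step * dt:5.1f} h  waiting={waiting:6.1f}")

    With demand persistently above capacity, the waiting stock grows without bound, the basic mismatch dynamic the full SD model elaborates with conditional and feedback effects.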

  18. Enabling big geoscience data analytics with a cloud-based, MapReduce-enabled and service-oriented workflow framework.

    PubMed

    Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew

    2015-01-01

    Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data is essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amount of data is both computing- and data-intensive, in that data analytics requires complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework, techniques are proposed that leverage cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers; a MapReduce-based algorithm framework is developed to support parallel processing of geoscience data; and a service-oriented workflow architecture is built to support on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing the data processing time as well as simplifying the analytical procedures for geoscientists.

  19. Designing Optical Spreadsheets-Technological Pedagogical Content Knowledge Simulation (S-TPACK): A Case Study of Pre-Service Teachers Course

    ERIC Educational Resources Information Center

    Thohir, M. Anas

    2018-01-01

    In the 21st century, the competence of instructional technological design is important for pre-service physics teachers. This case study described the pre-service physics teachers' design of optical spreadsheet simulation and evaluated teaching and learning the task in the classroom. The case study chose three of thirty pre-service teacher's…

  20. Functional resilience of microbial ecosystems in soil: How important is a spatial analysis?

    NASA Astrophysics Data System (ADS)

    König, Sara; Banitz, Thomas; Centler, Florian; Frank, Karin; Thullner, Martin

    2015-04-01

    Microbial life in soil is exposed to fluctuating environmental conditions that influence the performance of microbially mediated ecosystem services such as the biodegradation of contaminants. Because this environment is typically very heterogeneous, spatial aspects can be expected to play a major role in the ability to recover from a stress event. To determine the key processes for functional resilience, simple scenarios with varying stress intensities were simulated with a microbial simulation model and the biodegradation rate in the recovery phase was monitored. Parameters including microbial growth and dispersal rates were varied over a typical range to represent microorganisms with varying properties. Besides an aggregated temporal monitoring, explicit observation of the spatio-temporal dynamics proved essential for understanding the recovery process. For a mechanistic understanding of the model system, scenarios were also simulated with selected processes switched off. Results of the mechanistic and the spatial views show that the key factors for functional recovery with respect to biodegradation after a simple stress event depend on the location of the observed habitats. Near unstressed areas the limiting factors are spatial processes - the mobility of the bacteria as well as substrate diffusion; the greater the distance to the unstressed region, the more important growth becomes. Furthermore, recovery depends on the stress intensity: after a low-intensity stress event the spatial configuration has no influence on the key factors for functional resilience. To confirm these results, we repeated the stress scenarios, this time including an additional dispersal network representing a fungal network in soil. The system benefits from increased spatial performance due to the higher mobility of the degrading microorganisms; however, this effect appears only in scenarios where the spatial distribution of the stressed area plays a role. With these simulations we show that spatial aspects play a major role in recovery after a severe stress event in a highly heterogeneous environment such as soil, and thus that the exact distribution of the stressed area is relevant. Consequently, a spatial-mechanistic view is necessary for examining functional resilience, as the aggregated temporal view alone could not have led to these conclusions. Further research should explore the importance of a spatial view for quantifying the recovery of this ecosystem service after more complex stress regimes as well.
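
    A schematic grid model in the spirit of the study: biomass grows logistically in each cell and disperses to its four neighbours, a stress event empties half the grid, and recovery spreads back in from the unstressed region, which illustrates why dispersal limits recovery far from that region. All rates are invented, not the published model's parameters.

        # Logistic growth plus nearest-neighbour dispersal on a 2D grid.
        import numpy as np

        rng = np.random.default_rng(0)
        n, growth, disperse = 50, 0.3, 0.1
        biomass = np.ones((n, n))

        biomass[:, : n // 2] = 0.0  # stress event wipes out the left half

        for t in range(200):
            # logistic growth toward carrying capacity 1.0 (zero stays zero)
            biomass += growth * biomass * (1.0 - biomass)
            # dispersal: exchange with the four nearest neighbours
            # (np.roll wraps around, i.e. periodic boundaries, fine for a sketch)
            nb = (np.roll(biomass, 1, 0) + np.roll(biomass, -1, 0) +
                  np.roll(biomass, 1, 1) + np.roll(biomass, -1, 1)) / 4.0
            biomass = (1 - disperse) * biomass + disperse * nb
            if t % 50 == 0:
                print(f"t={t:3d}  mean biomass={biomass.mean():.3f}")

    Because emptied cells cannot regrow on their own, recolonization proceeds only through dispersal from the unstressed half, so cells far from the boundary recover last, mirroring the distance effect reported above.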
