Sample records for operating system running

  1. Comparing Two Tools for Mobile-Device Forensics

    DTIC Science & Technology

    2017-09-01

    baseline standard. 2.4 Mobile Operating Systems "A mobile operating system is an operating system that is specifically designed to run on mobile devices" [7]. There are many different types of mobile operating systems and they are constantly changing, which means an operating...to this is that the security features make forensic analysis more difficult [11]. 2.4.2 iPhone "The iPhone runs an operating system called iOS. It is a

  2. Implementation of an Intelligent Control System

    DTIC Science & Technology

    1992-05-01

    therefore implemented in a portable equipment rack. The controls computer consists of a microcomputer running a real-time operating system, interface...circuit boards are mounted in an industry-standard Multibus I chassis. The microcomputer runs the iRMX real-time operating system. This operating system

  3. First Operational Experience With a High-Energy Physics Run Control System Based on Web Technologies

    NASA Astrophysics Data System (ADS)

    Bauer, Gerry; Beccati, Barbara; Behrens, Ulf; Biery, Kurt; Branson, James; Bukowiec, Sebastian; Cano, Eric; Cheung, Harry; Ciganek, Marek; Cittolin, Sergio; Coarasa Perez, Jose Antonio; Deldicque, Christian; Erhan, Samim; Gigi, Dominique; Glege, Frank; Gomez-Reino, Robert; Gulmini, Michele; Hatton, Derek; Hwong, Yi Ling; Loizides, Constantin; Ma, Frank; Masetti, Lorenzo; Meijers, Frans; Meschi, Emilio; Meyer, Andreas; Mommsen, Remigius K.; Moser, Roland; O'Dell, Vivian; Oh, Alexander; Orsini, Luciano; Paus, Christoph; Petrucci, Andrea; Pieri, Marco; Racz, Attila; Raginel, Olivier; Sakulin, Hannes; Sani, Matteo; Schieferdecker, Philipp; Schwick, Christoph; Shpakov, Dennis; Simon, Michal; Sumorok, Konstanty; Yoon, Andre Sungho

    2012-08-01

    Run control systems of modern high-energy particle physics experiments have requirements similar to those of today's Internet applications. The Compact Muon Solenoid (CMS) collaboration at CERN's Large Hadron Collider (LHC) therefore decided to build the run control system for its detector based on web technologies. The system is composed of Java Web Applications distributed over a set of Apache Tomcat servlet containers that connect to a database back-end. Users interact with the system through a web browser. The present paper reports on the successful scaling of the system from a small test setup to the production data acquisition system that comprises around 10,000 applications running on a cluster of about 1,600 hosts. We report on operational aspects during the first phase of operation with colliding beams, including performance, stability, integration with the CMS Detector Control System, and tools to guide the operator.

  4. 40 CFR 264.193 - Containment and detection of releases.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) Designed or operated to prevent run-on or infiltration of precipitation into the secondary containment system unless the collection system has sufficient excess capacity to contain run-on or infiltration... tank within its boundary; (ii) Designed or operated to prevent run-on or infiltration of precipitation...

  5. 40 CFR 264.193 - Containment and detection of releases.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) Designed or operated to prevent run-on or infiltration of precipitation into the secondary containment system unless the collection system has sufficient excess capacity to contain run-on or infiltration... tank within its boundary; (ii) Designed or operated to prevent run-on or infiltration of precipitation...

  6. 40 CFR 264.193 - Containment and detection of releases.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) Designed or operated to prevent run-on or infiltration of precipitation into the secondary containment system unless the collection system has sufficient excess capacity to contain run-on or infiltration... tank within its boundary; (ii) Designed or operated to prevent run-on or infiltration of precipitation...

  7. 40 CFR 258.26 - Run-on/run-off control systems.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Run-on/run-off control systems. 258.26 Section 258.26 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES CRITERIA FOR MUNICIPAL SOLID WASTE LANDFILLS Operating Criteria § 258.26 Run-on/run-off control systems. (a...

  8. 40 CFR 258.26 - Run-on/run-off control systems.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Run-on/run-off control systems. 258.26 Section 258.26 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES CRITERIA FOR MUNICIPAL SOLID WASTE LANDFILLS Operating Criteria § 258.26 Run-on/run-off control systems. (a...

  9. Analyses of requirements for computer control and data processing experiment subsystems. Volume 2: ATM experiment S-056 image data processing system software development

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The IDAPS (Image Data Processing System) is a user-oriented, computer-based language and control system, which provides a framework or standard for implementing image data processing applications, simplifies set-up of image processing runs so that the system may be used without a working knowledge of computer programming or operation, streamlines operation of the image processing facility, and allows multiple applications to be run in sequence without operator interaction. The control system loads the operators, interprets the input, constructs the necessary parameters for each application, and calls the application. The overlay feature of the IBSYS loader (IBLDR) provides the means of running multiple operators which would otherwise overflow core storage.
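
    The control flow described in the abstract (interpret the input, construct the parameters, call the application) can be sketched in miniature. The following is a hypothetical Python illustration of that dispatcher pattern, not IDAPS itself: the registry, the `run_card` control-card syntax, and the contrast operator are all invented for the example.

```python
# Illustrative IDAPS-style dispatcher (hypothetical, not the original system):
# interpret one control card, build the parameter block, call the application.

APPLICATIONS = {}

def application(name):
    """Register an image-processing application under a control-language name."""
    def register(func):
        APPLICATIONS[name] = func
        return func
    return register

@application("CONTRAST")
def clip_contrast(params):
    # toy operator: clip pixel values to the requested range
    lo, hi = params["low"], params["high"]
    return [min(max(v, lo), hi) for v in params["pixels"]]

def run_card(card):
    """Interpret one control card of the form 'NAME key=value ...'."""
    name, *fields = card.split()
    params = {}
    for field in fields:
        key, value = field.split("=")
        params[key.lower()] = int(value) if value.isdigit() else value
    params["pixels"] = [0, 50, 200, 255]   # stand-in for the image data
    return APPLICATIONS[name](params)      # call the application

result = run_card("CONTRAST low=10 high=240")
```

    The registry plus card interpreter is what lets applications run in sequence without operator interaction: each card names the operator and carries its parameters.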

  10. A Concurrent Implementation of the Cascade-Correlation Algorithm, Using the Time Warp Operating System

    NASA Technical Reports Server (NTRS)

    Springer, P.

    1993-01-01

    This paper discusses the method in which the Cascade-Correlation algorithm was parallelized in such a way that it could be run using the Time Warp Operating System (TWOS). TWOS is a special-purpose operating system designed to run parallel discrete-event simulations with maximum efficiency on parallel or distributed computers.

  11. Operating System Abstraction Layer (OSAL)

    NASA Technical Reports Server (NTRS)

    Yanchik, Nicholas J.

    2007-01-01

    This viewgraph presentation reviews the concept of the Operating System Abstraction Layer (OSAL) and its benefits. The OSAL is a small, self-contained layer of software that allows programs to run on many different operating systems and hardware platforms, independent of the underlying OS and hardware. The benefits of the OSAL are that it removes dependencies on any one operating system and promotes portable, reusable flight software. It allows core Flight Software (FSW) to be built for multiple processors and operating systems. The presentation discusses the functionality, the various OSAL releases, and the specifications.
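
    The abstraction-layer idea can be sketched as follows. This is a hedged Python illustration of the pattern only, not NASA's actual OSAL API: the `PosixBackend` class and the `os_task_create`/`os_task_wait` names are assumptions made for the example.

```python
# Sketch of an OS abstraction layer: application code calls only the OSAL,
# and a swappable backend supplies the platform-specific implementations.

import threading

class PosixBackend:
    """One possible backend; an RTOS backend would expose the same two calls."""
    def create_task(self, entry):
        t = threading.Thread(target=entry)
        t.start()
        return t
    def wait(self, task):
        task.join()

class OSAL:
    def __init__(self, backend):
        self._backend = backend          # swap backends, keep application code
    def os_task_create(self, entry):
        return self._backend.create_task(entry)
    def os_task_wait(self, task):
        self._backend.wait(task)

results = []
osal = OSAL(PosixBackend())
task = osal.os_task_create(lambda: results.append("ran"))
osal.os_task_wait(task)
```

    The application above never touches `threading` directly; porting it to another OS means writing one new backend class, which is the portability benefit the presentation describes.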

  12. 40 CFR 267.196 - What are the required devices for secondary containment and what are their design, operating and...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... operated to prevent run-on or infiltration of precipitation into the secondary containment system unless the collection system has sufficient excess capacity to contain run-on or infiltration. The additional...

  13. 40 CFR 267.196 - What are the required devices for secondary containment and what are their design, operating and...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... operated to prevent run-on or infiltration of precipitation into the secondary containment system unless the collection system has sufficient excess capacity to contain run-on or infiltration. The additional...

  14. 40 CFR 267.196 - What are the required devices for secondary containment and what are their design, operating and...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... operated to prevent run-on or infiltration of precipitation into the secondary containment system unless the collection system has sufficient excess capacity to contain run-on or infiltration. The additional...

  15. 40 CFR 267.196 - What are the required devices for secondary containment and what are their design, operating and...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... operated to prevent run-on or infiltration of precipitation into the secondary containment system unless the collection system has sufficient excess capacity to contain run-on or infiltration. The additional...

  16. 40 CFR 267.196 - What are the required devices for secondary containment and what are their design, operating and...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... operated to prevent run-on or infiltration of precipitation into the secondary containment system unless the collection system has sufficient excess capacity to contain run-on or infiltration. The additional...

  17. New operator assistance features in the CMS Run Control System

    NASA Astrophysics Data System (ADS)

    Andre, J.-M.; Behrens, U.; Branson, J.; Brummer, P.; Chaze, O.; Cittolin, S.; Contescu, C.; Craigs, B. G.; Darlea, G.-L.; Deldicque, C.; Demiragli, Z.; Dobson, M.; Doualot, N.; Erhan, S.; Fulcher, J. R.; Gigi, D.; Gładki, M.; Glege, F.; Gomez-Ceballos, G.; Hegeman, J.; Holzner, A.; Janulis, M.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; Morovic, S.; O'Dell, V.; Orsini, L.; Paus, C.; Petrova, P.; Pieri, M.; Racz, A.; Reis, T.; Sakulin, H.; Schwick, C.; Simelevicius, D.; Vougioukas, M.; Zejdl, P.

    2017-10-01

    During Run-1 of the LHC, many operational procedures have been automated in the run control system of the Compact Muon Solenoid (CMS) experiment. When detector high voltages are ramped up or down or upon certain beam mode changes of the LHC, the DAQ system is automatically partially reconfigured with new parameters. Certain types of errors such as errors caused by single-event upsets may trigger an automatic recovery procedure. Furthermore, the top-level control node continuously performs cross-checks to detect sub-system actions becoming necessary because of changes in configuration keys, changes in the set of included front-end drivers or because of potential clock instabilities. The operator is guided to perform the necessary actions through graphical indicators displayed next to the relevant command buttons in the user interface. Through these indicators, consistent configuration of CMS is ensured. However, manually following the indicators can still be inefficient at times. A new assistant to the operator has therefore been developed that can automatically perform all the necessary actions in a streamlined order. If additional problems arise, the new assistant tries to automatically recover from these. With the new assistant, a run can be started from any state of the sub-systems with a single click. An ongoing run may be recovered with a single click, once the appropriate recovery action has been selected. We review the automation features of CMS Run Control and discuss the new assistant in detail including first operational experience.

  18. New Operator Assistance Features in the CMS Run Control System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andre, J.M.; et al.

    During Run-1 of the LHC, many operational procedures have been automated in the run control system of the Compact Muon Solenoid (CMS) experiment. When detector high voltages are ramped up or down or upon certain beam mode changes of the LHC, the DAQ system is automatically partially reconfigured with new parameters. Certain types of errors such as errors caused by single-event upsets may trigger an automatic recovery procedure. Furthermore, the top-level control node continuously performs cross-checks to detect sub-system actions becoming necessary because of changes in configuration keys, changes in the set of included front-end drivers or because of potential clock instabilities. The operator is guided to perform the necessary actions through graphical indicators displayed next to the relevant command buttons in the user interface. Through these indicators, consistent configuration of CMS is ensured. However, manually following the indicators can still be inefficient at times. A new assistant to the operator has therefore been developed that can automatically perform all the necessary actions in a streamlined order. If additional problems arise, the new assistant tries to automatically recover from these. With the new assistant, a run can be started from any state of the sub-systems with a single click. An ongoing run may be recovered with a single click, once the appropriate recovery action has been selected. We review the automation features of CMS Run Control and discuss the new assistant in detail including first operational experience.

  19. Main improvements of LHC Cryogenics Operation during Run 2 (2015-2018)

    NASA Astrophysics Data System (ADS)

    Delprat, L.; Bradu, B.; Brodzinski, K.; Ferlin, G.; Hafi, K.; Herblin, L.; Rogez, E.; Suraci, A.

    2017-12-01

    After the successful Run 1 (2010-2012), the LHC entered its first Long Shutdown period (LS1, 2013-2014). During LS1 the LHC cryogenic system underwent a complete maintenance and consolidation program. Prior to the new physics Run 2 (2015-2018), the LHC was progressively cooled down from ambient temperature to the 1.9 K operating temperature, and it resumed operation with beams in April 2015 at a beam energy increased from 4 TeV to 6.5 TeV. Operational margins on the cryogenic capacity were reduced compared to Run 1, mainly due to the higher than expected electron-cloud heat load resulting from the increased beam energy and intensity. Maintaining and improving the cryogenic availability level required a series of actions to deal with the observed heat loads. This paper describes the results of the process optimization and control-system updates, which allowed adjustment of the non-isothermal heat load at 4.5-20 K and optimized the dynamic behaviour of the cryogenic system against the electron-cloud thermal load. Effects of the new regulation settings applied to the electrical distribution feed-boxes and inner triplets are discussed. The efficiency of the preventive and corrective maintenance, as well as the benefits and issues of the present cryogenic system configuration for the Run 2 operational scenario, are described. Finally, the overall availability results and helium management of the LHC cryogenic system during the 2015-2016 operational period are presented.

  20. A Novel Technique for Running the NASA Legacy Code LAPIN Synchronously With Simulations Developed Using Simulink

    NASA Technical Reports Server (NTRS)

    Vrnak, Daniel R.; Stueber, Thomas J.; Le, Dzu K.

    2012-01-01

    This report presents a method for running a dynamic legacy inlet simulation in concert with another dynamic simulation that uses a graphical interface. The legacy code, NASA's LArge Perturbation INlet (LAPIN) model, was coded in the FORTRAN 77 (The Portland Group, Lake Oswego, OR) programming language to run in a command shell, similar to other applications that used the Microsoft Disk Operating System (MS-DOS) (Microsoft Corporation, Redmond, WA). Simulink (MathWorks, Natick, MA) is a dynamic simulation environment that runs on a modern graphical operating system. The product of this work runs both simulations, LAPIN and Simulink, synchronously on the same computer with periodic data exchanges. Implementing the method described in this paper avoided extensive changes to the legacy code and preserved its basic operating procedure. This paper presents a novel method that promotes inter-task data communication between the synchronously running processes.
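
    The lockstep scheme the abstract describes (two independently developed simulations advancing together with periodic data exchanges) can be sketched abstractly. The two toy models below, a first-order "inlet" and a proportional "controller", are stand-ins invented for the example; they are not LAPIN or any Simulink model, and the step sizes and gains are arbitrary.

```python
# Sketch of synchronous co-simulation: two models step in lockstep and
# exchange data at a fixed exchange interval, as in the LAPIN/Simulink setup.

def inlet_step(pressure, command, dt=0.1):
    # toy "legacy" model: first-order response toward the commanded value
    return pressure + dt * (command - pressure)

def controller_step(pressure, setpoint=1.0, gain=0.5):
    # toy "graphical" model: proportional command toward the setpoint
    return pressure + gain * (setpoint - pressure)

def cosimulate(steps, exchange_every=1):
    pressure, command = 0.0, 0.0
    for i in range(steps):
        pressure = inlet_step(pressure, command)   # legacy code advances
        if (i + 1) % exchange_every == 0:          # periodic data exchange
            command = controller_step(pressure)    # graphical model advances
    return pressure

final = cosimulate(200)
```

    Because each side only sees the other's outputs at the exchange points, neither model's internals need to change, which mirrors the report's goal of avoiding extensive changes to the legacy code.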

  1. Expert Systems on Multiprocessor Architectures. Phase 1

    DTIC Science & Technology

    1988-08-01

    great rate) as early experience indicates what alternative aspect of system operation should have been monitored in any given completed run. The design goals that emerged then were (1) that the simulation system should... Stanford University, Knowledge Systems Laboratory; Rome Air Development...

  2. Five-centimeter diameter ion thruster development

    NASA Technical Reports Server (NTRS)

    Weigand, A. J.

    1972-01-01

    All system components were tested for endurance in steady-state and cyclic operation. The following results were obtained: acceleration system (electrostatic type), 3100 hours continuous running; acceleration system (translation type), 2026 hours continuous running; cathode-isolator-vaporizer assembly, 5000 hours continuous operation and 190 restart cycles with 1750 hours operation; mercury expulsion system, 5000 hours continuous running; and neutralizer, 5100 hours continuous operation. The results of component optimization studies, such as neutralizer position, neutralizer keeper hole, and screen grid geometry, are included. Extensive mapping of the magnetic field within and immediately outside the thruster is shown. A technique of electroplating the molybdenum accelerator grid with copper to study erosion patterns is described. Results of tests being conducted to more fully understand the operation of the hollow cathode are also given. This type of 5-cm thruster will be space-tested on the Communications Technology Satellite in 1975.

  3. Operating system for a real-time multiprocessor propulsion system simulator

    NASA Technical Reports Server (NTRS)

    Cole, G. L.

    1984-01-01

    The success of the Real Time Multiprocessor Operating System (RTMPOS) in the development and evaluation of experimental hardware and software systems for real-time interactive simulation of air-breathing propulsion systems was evaluated. RTMPOS provides the user with a versatile, interactive means for loading, running, debugging and obtaining results from a multiprocessor-based simulator. A front-end processor (FEP) serves as the simulator controller and interface between the user and the simulator. These functions are facilitated by the RTMPOS, which resides on the FEP. The RTMPOS acts in conjunction with the FEP's manufacturer-supplied disk operating system, which provides typical utilities such as an assembler, linkage editor, text editor and file-handling services. Once a simulation is formulated, the RTMPOS provides for engineering-level, run-time operations such as loading, modifying and specifying computation flow of programs, simulator mode control, data handling and run-time monitoring. Run-time monitoring is a powerful feature of RTMPOS that allows the user to record all actions taken during a simulation session and to receive advisories from the simulator via the FEP. The RTMPOS is programmed mainly in PASCAL along with some assembly language routines. The RTMPOS software is easily modified to be applicable to hardware from different manufacturers.

  4. WinHPC System | High-Performance Computing | NREL

    Science.gov Websites

    NREL's WinHPC system is a computing cluster running the Microsoft Windows operating system. It allows users to run jobs requiring a Windows environment, such as ANSYS and MATLAB.

  5. Applications of advanced data analysis and expert system technologies in the ATLAS Trigger-DAQ Controls framework

    NASA Astrophysics Data System (ADS)

    Avolio, G.; Corso Radu, A.; Kazarov, A.; Lehmann Miotto, G.; Magnoni, L.

    2012-12-01

    The Trigger and Data Acquisition (TDAQ) system of the ATLAS experiment is a very complex distributed computing system, composed of more than 20,000 applications running on more than 2,000 computers. The TDAQ Controls system has to guarantee the smooth and synchronous operations of all the TDAQ components and has to provide the means to minimize the downtime of the system caused by runtime failures. During data-taking runs, streams of information messages sent or published by running applications are the main sources of knowledge about correctness of running operations. The huge flow of operational monitoring data produced is constantly monitored by experts in order to detect problems or misbehaviours. Given the scale of the system and the rates of data to be analyzed, the automation of the system functionality in the areas of operational monitoring, system verification, error detection and recovery is a strong requirement. To accomplish its objective, the Controls system includes some high-level components which are based on advanced software technologies, namely rule-based Expert System and Complex Event Processing engines. The chosen techniques make it possible to formalize, store and reuse the knowledge of experts and thus to assist the shifters in the ATLAS control room during data-taking activities.

  6. Characteristics of Operational Space Weather Forecasting: Observations and Models

    NASA Astrophysics Data System (ADS)

    Berger, Thomas; Viereck, Rodney; Singer, Howard; Onsager, Terry; Biesecker, Doug; Rutledge, Robert; Hill, Steven; Akmaev, Rashid; Milward, George; Fuller-Rowell, Tim

    2015-04-01

    In contrast to research observations, models and ground support systems, operational systems are characterized by real-time data streams and run schedules, with redundant backup systems for most elements of the system. We review the characteristics of operational space weather forecasting, concentrating on the key aspects of ground- and space-based observations that feed models of the coupled Sun-Earth system at the NOAA/Space Weather Prediction Center (SWPC). Building on the infrastructure of the National Weather Service, SWPC is working toward a fully operational system based on the GOES weather satellite system (constant real-time operation with back-up satellites), the newly launched DSCOVR satellite at L1 (constant real-time data network with AFSCN backup), and operational models of the heliosphere, magnetosphere, and ionosphere/thermosphere/mesosphere systems run on the Weather and Climate Operational Supercomputing System (WCOSS), one of the world's largest and fastest operational computer systems, which will be upgraded to a dual 2.5 Pflop system in 2016. We review plans for further operational space weather observing platforms being developed in the context of the Space Weather Operations Research and Mitigation (SWORM) task force in the Office of Science and Technology Policy (OSTP) at the White House. We also review the current operational model developments at SWPC, concentrating on the differences between the research codes and the modified real-time versions that must run with zero fault tolerance on the WCOSS systems. Understanding the characteristics and needs of the operational forecasting community is key to producing research into the coupled Sun-Earth system with maximal societal benefit.

  7. ATLAS trigger operations: Upgrades to "Xmon" rate prediction system

    NASA Astrophysics Data System (ADS)

    Myers, Ava; Aukerman, Andrew; Hong, Tae Min; Atlas Collaboration

    2017-01-01

    We present "Xmon," a tool to monitor trigger rates in the Control Room of the ATLAS Experiment. We discuss Xmon's recent (1) updates, (2) upgrades, and (3) operations. (1) Xmon was updated to adapt the tool, written for the three-level trigger architecture of Run-1 (2009-2012), to the new two-level system of Run-2 (2015-current). The tool takes the beam luminosity as input to make a rate prediction, which is compared with incoming rates to detect anomalies that occur both globally throughout a run and locally within a run. Global offsets are more commonly caught by predictions based upon past runs, where offline processing allows for function adjustments and fit quality through outlier rejection. (2) Xmon was upgraded to detect local offsets using on-the-fly predictions, which use a sliding window of in-run rates. (3) Examples of Xmon operations are given. Future work involves further automation of the steps that provide the predictive functions and of alerting shifters.
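
    The on-the-fly, sliding-window prediction can be illustrated with a minimal sketch: fit a line rate = a * luminosity + b over a window of recent in-run samples and flag incoming rates that deviate too far from the prediction. The window size, tolerance, and least-squares fit below are illustrative assumptions, not the actual Xmon implementation.

```python
# Sketch of sliding-window rate prediction for anomaly detection.

from collections import deque

class RateMonitor:
    def __init__(self, window=10, tolerance=0.2):
        self.points = deque(maxlen=window)   # sliding window of (lumi, rate)
        self.tolerance = tolerance           # allowed relative deviation

    def _fit(self):
        # least-squares line rate = a * lumi + b through the window
        n = len(self.points)
        sx = sum(l for l, r in self.points)
        sy = sum(r for l, r in self.points)
        sxx = sum(l * l for l, r in self.points)
        sxy = sum(l * r for l, r in self.points)
        a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        b = (sy - a * sx) / n
        return a, b

    def observe(self, lumi, rate):
        """Return True if `rate` is anomalous relative to the window prediction."""
        anomalous = False
        if len(self.points) >= 2:
            a, b = self._fit()
            predicted = a * lumi + b
            anomalous = abs(rate - predicted) > self.tolerance * max(predicted, 1.0)
        self.points.append((lumi, rate))
        return anomalous

mon = RateMonitor()
flags = [mon.observe(l, 2.0 * l) for l in range(1, 8)]   # rates on the line
spike = mon.observe(8, 40.0)                             # sudden local offset
```

    Because the fit uses only recent in-run samples, a local offset stands out immediately, while the window gradually absorbs slow global drifts, which is why past-run predictions are still needed for global offsets.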

  8. Ada Run Time Support Environments and a common APSE Interface Set. [Ada Programming Support Environment

    NASA Technical Reports Server (NTRS)

    Mckay, C. W.; Bown, R. L.

    1985-01-01

    The paper discusses the importance of linking Ada Run Time Support Environments to the Common Ada Programming Support Environment (APSE) Interface Set (CAIS). A non-stop network operating system scenario is presented to serve as a forum for identifying the important issues. The network operating system exemplifies the issues involved in the NASA Space Station data management system.

  9. The Aerospace Energy Systems Laboratory: A BITBUS networking application

    NASA Technical Reports Server (NTRS)

    Glover, Richard D.; Oneill-Rood, Nora

    1989-01-01

    The NASA Ames-Dryden Flight Research Facility developed a computerized aircraft battery servicing facility called the Aerospace Energy Systems Laboratory (AESL). This system employs distributed processing with communications provided by a 2.4-megabit BITBUS local area network. Customized handlers provide real-time status, remote command, and file transfer protocols between a central system running the iRMX-II operating system and ten slave stations running the iRMX-I operating system. The hardware configuration and software components required to implement this BITBUS application are described.

  10. Store-operate-coherence-on-value

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Dong; Heidelberger, Philip; Kumar, Sameer

    A system, method and computer program product for performing various store-operate instructions in a parallel computing environment that includes a plurality of processors and at least one cache memory device. A queue in the system receives, from a processor, a store-operate instruction that specifies under which condition a cache coherence operation is to be invoked. A hardware unit in the system runs the received store-operate instruction. The hardware unit evaluates whether the result of running the received store-operate instruction satisfies the condition. The hardware unit invokes a cache coherence operation on a cache memory address associated with the received store-operate instruction if the result satisfies the condition. Otherwise, the hardware unit does not invoke the cache coherence operation on the cache memory device.
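
    The conditional-coherence idea can be mimicked in software to make it concrete. The sketch below is purely illustrative (the real mechanism is a hardware unit in the memory system); the `CacheLine` class and the store-add-on-zero condition are invented for the example.

```python
# Software sketch of a store-operate instruction that invokes a cache
# coherence operation only when the operation's result satisfies a condition.

class CacheLine:
    def __init__(self, value=0):
        self.value = value
        self.invalidations = 0     # counts coherence operations invoked

def store_add_coherence_on_zero(line, operand):
    """Store-add that triggers a coherence operation only if the result is zero."""
    line.value += operand          # run the store-operate instruction
    if line.value == 0:            # the condition carried by the instruction
        line.invalidations += 1    # stand-in for invalidating other caches
    return line.value

line = CacheLine(3)
store_add_coherence_on_zero(line, -1)   # result 2: no coherence traffic
store_add_coherence_on_zero(line, -2)   # result 0: coherence operation fires
```

    The point of making coherence conditional is visible even in this toy: updates whose results are uninteresting to other processors generate no coherence traffic at all.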

  11. Workstation-Based Real-Time Mesoscale Modeling Designed for Weather Support to Operations at the Kennedy Space Center and Cape Canaveral Air Station

    NASA Technical Reports Server (NTRS)

    Manobianco, John; Zack, John W.; Taylor, Gregory E.

    1996-01-01

    This paper describes the capabilities and operational utility of a version of the Mesoscale Atmospheric Simulation System (MASS) that has been developed to support operational weather forecasting at the Kennedy Space Center (KSC) and Cape Canaveral Air Station (CCAS). The implementation of local, mesoscale modeling systems at KSC/CCAS is designed to provide detailed short-range (less than 24 h) forecasts of winds, clouds, and hazardous weather such as thunderstorms. Short-range forecasting is a challenge for daily operations and for manned and unmanned launches, since KSC/CCAS is located in central Florida, where the weather during the warm season is dominated by mesoscale circulations like the sea breeze. For this application, MASS has been modified to run on a Stardent 3000 workstation. Workstation-based, real-time numerical modeling requires a compromise between the requirement to run the system fast enough that the output can be used before it expires and the desire to improve the simulations by increasing resolution and using more detailed physical parameterizations. It is now feasible to run high-resolution mesoscale models such as MASS on local workstations to provide timely forecasts at a fraction of the cost required to run these models on mainframe supercomputers. MASS has been running in the Applied Meteorology Unit (AMU) at KSC/CCAS since January 1994 for the purpose of system evaluation. In March 1995, the AMU began sending real-time MASS output to the forecasters and meteorologists at CCAS, the Spaceflight Meteorology Group (Johnson Space Center, Houston, Texas), and the National Weather Service (Melbourne, Florida). However, MASS is not yet an operational system. The final decision whether to transition MASS to operational use will depend on a combination of forecaster feedback, the AMU's final evaluation results, and the life-cycle costs of the operational system.

  12. Parabolic dish test site: History and operating experience

    NASA Technical Reports Server (NTRS)

    Selcuk, M. K. (Compiler)

    1985-01-01

    The parabolic dish test site (PDTS) was established for testing point-focusing solar concentrator systems operating at temperatures approaching 1650 °C. Among the tests run were evaluation and performance characterization of parabolic dish concentrators, receivers, power conversion units, and solar/fossil-fuel hybrid systems. The PDTS was fully operational until its closure in June 1984. The evolution of the test program, a chronological listing of the experiments run, and data summaries for most of the tests conducted are presented.

  13. X-LUNA: Extending Free/Open Source Real Time Executive for On-Board Space Applications

    NASA Astrophysics Data System (ADS)

    Braga, P.; Henriques, L.; Zulianello, M.

    2008-08-01

    In this paper we present xLuna, a system based on the RTEMS [1] Real-Time Operating System that is able to run, on demand, a GNU/Linux Operating System [2] as RTEMS' lowest-priority task. Linux runs in user mode and in a different memory partition. This allows running hard real-time tasks and Linux applications on the same system, sharing the hardware resources while keeping a safe isolation and the real-time characteristics of RTEMS. Communication between the two systems is possible through a loosely coupled mechanism based on message queues. Currently only the SPARC LEON2 processor with a Memory Management Unit (MMU) is supported. The advantage of having two isolated systems is that non-critical components can be developed quickly or simply ported, reducing time-to-market and budget.
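
    The loosely coupled, queue-based communication described above can be sketched at a high level. This is an illustrative model only, not xLuna's actual interface: the queue depth, the non-blocking send policy, and the function names are assumptions made for the example.

```python
# Sketch of loose coupling via bounded message queues: the real-time side
# never blocks on the non-real-time side, so a slow consumer cannot disturb
# the real-time tasks' timing.

import queue

rt_to_linux = queue.Queue(maxsize=8)     # bounded telemetry queue

def rt_task_send(sample):
    """Real-time side: never block; drop the sample if the consumer lags."""
    try:
        rt_to_linux.put_nowait(sample)
        return True
    except queue.Full:
        return False

def linux_drain():
    """Non-real-time side: consume whatever has accumulated."""
    received = []
    while True:
        try:
            received.append(rt_to_linux.get_nowait())
        except queue.Empty:
            return received

sent = [rt_task_send(i) for i in range(10)]   # 8 fit in the queue, 2 dropped
got = linux_drain()
```

    Dropping rather than blocking when the queue is full is one way to preserve the isolation property the paper emphasizes: the real-time side's behavior never depends on how fast Linux is running.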

  14. 40 CFR 63.848 - Emission monitoring requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... primary control system to determine compliance with the applicable emission limit. The owner or operator... with the applicable emission limit. The owner or operator must include all valid runs in the quarterly... from at least three runs to determine compliance with the applicable emission limits. The owner or...

  15. Pyrolaser Operating System

    NASA Technical Reports Server (NTRS)

    Roberts, Floyd E., III

    1994-01-01

    Software provides for control and acquisition of data from an optical pyrometer. There are six individual programs in the PYROLASER package, which provides a quick and easy way to set up, control, and program a standard Pyrolaser. Temperature and emissivity measurements are either collected as if the Pyrolaser were in manual operating mode or displayed on real-time strip charts and stored in standard spreadsheet format for post-test analysis. A shell is supplied to allow test-specific macros to be added to the system easily. Written using LabVIEW software for use on Macintosh-series computers running System 6.0.3 or later, Sun SPARC-series computers running OpenWindows 3.0 or MIT's X Window System (X11R4 or X11R5), and IBM PC or compatible computers running Microsoft Windows 3.1 or later.

  16. Towards Run-time Assurance of Advanced Propulsion Algorithms

    NASA Technical Reports Server (NTRS)

    Wong, Edmond; Schierman, John D.; Schlapkohl, Thomas; Chicatelli, Amy

    2014-01-01

    This paper covers the motivation and rationale for investigating the application of run-time assurance methods as a potential means of providing safety assurance for advanced propulsion control systems. Certification is becoming increasingly infeasible for such systems using current verification practices. Run-time assurance systems hold the promise of certifying these advanced systems by continuously monitoring the state of the feedback system during operation and reverting to a simpler, certified system if anomalous behavior is detected. The discussion will also cover initial efforts underway to apply a run-time assurance framework to NASA's model-based engine control approach. Preliminary experimental results are presented and discussed.
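    The switching logic described above (monitor the advanced controller, revert to a certified fallback on anomalous behavior) can be sketched as follows; the control laws, gains, and safety limit are hypothetical placeholders, not NASA's actual monitors.

```python
def advanced_controller(error):
    # Aggressive, uncertified control law (placeholder gain).
    return 5.0 * error

def baseline_controller(error):
    # Conservative, certified control law (placeholder gain).
    return 1.0 * error

class RunTimeAssurance:
    """Use the advanced controller while its command stays inside the
    safety envelope; latch over to the certified baseline otherwise."""
    def __init__(self, limit):
        self.limit = limit
        self.reverted = False
    def command(self, error):
        if not self.reverted:
            u = advanced_controller(error)
            if abs(u) <= self.limit:
                return u
            self.reverted = True  # anomaly detected: revert permanently
        return baseline_controller(error)

rta = RunTimeAssurance(limit=10.0)
print(rta.command(1.0))  # advanced controller output, inside envelope
print(rta.command(3.0))  # 15.0 exceeds the limit, fallback engages
print(rta.command(1.0))  # stays on the certified baseline
```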

  17. Payload Operations

    NASA Technical Reports Server (NTRS)

    Cissom, R. D.; Melton, T. L.; Schneider, M. P.; Lapenta, C. C.

    1999-01-01

    The objective of this paper is to provide the future ISS scientist and/or engineer a sense of what ISS payload operations are expected to be. This paper uses a real-time operations scenario to convey this message. The real-time operations scenario begins at the initiation of payload operations and runs through post run experiment analysis. In developing this scenario, it is assumed that the ISS payload operations flight and ground capabilities are fully available for use by the payload user community. Emphasis is placed on telescience operations whose main objective is to enable researchers to utilize experiment hardware onboard the International Space Station as if it were located in their terrestrial laboratory. An overview of the Payload Operations Integration Center (POIC) systems and user ground system options is included to provide an understanding of the systems and interfaces users will utilize to perform payload operations. Detailed information regarding POIC capabilities can be found in the POIC Capabilities Document, SSP 50304.

  18. Enhanced Component Performance Study: Motor-Driven Pumps 1998–2014

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schroeder, John Alton

    2016-02-01

    This report presents an enhanced performance evaluation of motor-driven pumps at U.S. commercial nuclear power plants. The data used in this study are based on the operating experience failure reports from fiscal year 1998 through 2014 for the component reliability as reported in the Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES). The motor-driven pump failure modes considered for standby systems are failure to start, failure to run less than or equal to one hour, and failure to run more than one hour; for normally running systems, the failure modes considered are failure to start and failure to run. An eight-hour unreliability estimate is also calculated and trended. The component reliability estimates and the reliability data are trended for the most recent 10-year period, while yearly estimates for reliability are provided for the entire active period. Statistically significant increasing trends were identified in pump run hours per reactor year. Statistically significant decreasing trends were identified for standby systems in the industry-wide frequency of start demands and in run hours per reactor year for runs of less than or equal to one hour.
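    Mission-time unreliability estimates of this kind typically assume an exponential time-to-failure model, U(t) = 1 − exp(−λt), with λ estimated from failure counts over run hours. A sketch with hypothetical counts (not the report's data):

```python
import math

def failure_rate(failures, run_hours):
    # Maximum-likelihood point estimate of the failure rate (per hour).
    return failures / run_hours

def unreliability(lam, mission_hours):
    # Probability of failing before the mission time under an
    # exponential time-to-failure model: U(t) = 1 - exp(-lambda * t).
    return 1.0 - math.exp(-lam * mission_hours)

lam = failure_rate(failures=4, run_hours=200_000)  # hypothetical data
u8 = unreliability(lam, mission_hours=8.0)
print(f"lambda = {lam:.1e}/h, 8-hour unreliability = {u8:.2e}")
```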

  19. JAX Colony Management System (JCMS): an extensible colony and phenotype data management system.

    PubMed

    Donnelly, Chuck J; McFarland, Mike; Ames, Abigail; Sundberg, Beth; Springer, Dave; Blauth, Peter; Bult, Carol J

    2010-04-01

    The Jackson Laboratory Colony Management System (JCMS) is a software application for managing data and information related to research mouse colonies, associated biospecimens, and experimental protocols. JCMS runs directly on computers that run one of the PC Windows operating systems, but can be accessed via web browser interfaces from any computer running a Windows, Macintosh, or Linux operating system. JCMS can be configured for a single user or multiple users in small- to medium-size work groups. The target audience for JCMS includes laboratory technicians, animal colony managers, and principal investigators. The application provides operational support for colony management and experimental workflows, sample and data tracking through transaction-based data entry forms, and date-driven work reports. Flexible query forms allow researchers to retrieve database records based on user-defined criteria. Recent advances in handheld computers with integrated barcode readers, middleware technologies, web browsers, and wireless networks add to the utility of JCMS by allowing real-time access to the database from any networked computer.

  20. Time warp operating system version 2.7 internals manual

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The Time Warp Operating System (TWOS) is an implementation of the Time Warp synchronization method proposed by David Jefferson. In addition, it serves as an actual platform for running discrete event simulations. The code comprising TWOS can be divided into several different sections. TWOS typically relies on an existing operating system to furnish some very basic services. This existing operating system is referred to as the Base OS. The existing operating system varies depending on the hardware TWOS is running on. It is Unix on the Sun workstations, Chrysalis or Mach on the Butterfly, and Mercury on the Mark 3 Hypercube. The base OS could be an entirely new operating system, written to meet the special needs of TWOS, but, to this point, existing systems have been used instead. The base OSs used for TWOS on various platforms are not discussed in detail in this manual, as they are well covered in their own manuals. Appendix G discusses the interface between one such OS, Mach, and TWOS.
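    The optimistic rollback mechanism that Time Warp implements can be sketched in miniature. This toy omits anti-messages and the re-execution of rolled-back events, which a real system such as TWOS handles through its event queue; it only shows state saving and rollback on a straggler.

```python
class TimeWarpProcess:
    """Toy Time Warp logical process: executes events optimistically,
    saves state after each one, and rolls back when a straggler (an
    event timestamped earlier than local virtual time) arrives."""
    def __init__(self):
        self.state = 0
        self.lvt = 0              # local virtual time
        self.history = [(0, 0)]   # saved (timestamp, state) pairs
        self.rollbacks = 0
    def handle(self, timestamp, delta):
        if timestamp < self.lvt:  # straggler: roll back saved state
            self.rollbacks += 1
            while self.history[-1][0] > timestamp:
                self.history.pop()
            self.lvt, self.state = self.history[-1]
        self.state += delta
        self.lvt = timestamp
        self.history.append((timestamp, self.state))

p = TimeWarpProcess()
p.handle(10, 1)   # state 1 at virtual time 10
p.handle(20, 1)   # state 2 at virtual time 20
p.handle(15, 5)   # straggler: roll back past t=20, then apply
print(p.lvt, p.state, p.rollbacks)
```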

  1. PC vs. Mac--Which Way Should You Go?

    ERIC Educational Resources Information Center

    Wodarz, Nan

    1997-01-01

    Outlines the factors in hardware, software, and administration to consider in developing specifications for choosing a computer operating system. Compares Microsoft Windows 95/NT that runs on PC/Intel-based systems and System 7.5 that runs on the Apple-based systems. Lists reasons why the Microsoft platform clearly stands above the Apple platform.…

  2. Alert Notification System Router

    NASA Technical Reports Server (NTRS)

    Gurganus, Joseph; Carey, Everett; Antonucci, Robert; Hitchener, Peter

    2009-01-01

    The Alert Notification System Router (ANSR) software provides satellite operators with notifications of key events through pagers, cell phones, and e-mail. Written in Java, this application is specifically designed to meet the mission-critical standards for mission operations while operating on a variety of hardware environments. ANSR is a software component that runs inside the Mission Operations Center (MOC). It connects to the mission's message bus using the GMSEC [Goddard Space Flight Center (GSFC) Mission Services Evolution Center (GMSEC)] standard. Other components, such as automation and monitoring components, can use ANSR to send directives to notify users or groups. The ANSR system, in addition to notifying users, can check for message acknowledgements from a user and escalate the notification to another user if there is no acknowledgement. When a firewall prevents ANSR from accessing the Internet directly, proxies can be run on the other side of the wall. These proxies can be configured to access the Internet, notify users, and poll for their responses. Multiple ANSRs can be run in parallel, providing a seamless failover capability in the event that one ANSR system becomes incapacitated.
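    The acknowledgement-and-escalation behavior can be sketched as a simple loop over an ordered contact list; the contact names and callbacks below are hypothetical, not ANSR's actual interfaces.

```python
def notify_with_escalation(contacts, send, poll_ack):
    """Try each contact in order; stop at the first one who
    acknowledges the notification. Returns the acknowledging
    contact, or None if the whole chain is exhausted."""
    for contact in contacts:
        send(contact)
        if poll_ack(contact):
            return contact
    return None

sent = []
acks = {"on-call-lead"}  # only the lead acknowledges in this example
winner = notify_with_escalation(
    ["operator-1", "operator-2", "on-call-lead"],
    send=sent.append,
    poll_ack=lambda c: c in acks,
)
print(sent, winner)
```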

  3. Urban Districts Compare Notes on Operation

    ERIC Educational Resources Information Center

    Aarons, Dakarai I.

    2009-01-01

    Urban school systems are large businesses, charged with running a wide range of noninstructional functions that typically do not garner them much national notice. Now, thanks to the work of a coalition of big-city districts, their leaders are gathering data on how those operations are run, in the hope of improving their business practices. The…

  4. Power consumption analysis of operating systems for wireless sensor networks.

    PubMed

    Lajara, Rafael; Pelegrí-Sebastiá, José; Perez Solano, Juan J

    2010-01-01

    In this paper four wireless sensor network operating systems are compared in terms of power consumption. The analysis takes into account the most common operating systems--TinyOS v1.0, TinyOS v2.0, Mantis and Contiki--running on Tmote Sky and MICAz devices. With the objective of ensuring a fair evaluation, a benchmark composed of four applications has been developed, covering the most typical tasks that a Wireless Sensor Network performs. The results show the instant and average current consumption of the devices during the execution of these applications. The experimental measurements provide a good insight into the power mode in which the device components are running at every moment, and they can be used to compare the performance of different operating systems executing the same tasks.
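    The comparison the benchmark enables (average current draw, and from it energy per run) reduces to a simple calculation over sampled current; the trace and supply voltage below are hypothetical, not data from the paper.

```python
def average_current(samples_mA):
    # Mean of the sampled current trace, in mA.
    return sum(samples_mA) / len(samples_mA)

def energy_mJ(samples_mA, supply_V, duration_s):
    # E = V * I_avg * t  (V * mA * s -> mJ)
    return supply_V * average_current(samples_mA) * duration_s

# Hypothetical current trace of a mote during one benchmark run:
# idle, radio active, idle again.
trace = [2.0, 2.0, 19.5, 19.5, 2.0, 2.0]  # mA
print(average_current(trace))
print(energy_mJ(trace, supply_V=3.0, duration_s=10.0))
```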

  5. The engineering design integration (EDIN) system. [digital computer program complex

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Reiners, S. J.

    1974-01-01

    A digital computer program complex for the evaluation of aerospace vehicle preliminary designs is described. The system consists of a Univac 1100 series computer and peripherals using the Exec 8 operating system, a set of demand access terminals of the alphanumeric and graphics types, and a library of independent computer programs. Modification of the partial run streams, data base maintenance and construction, and control of program sequencing are provided by a data manipulation program called the DLG processor. The executive control of library program execution is performed by the Univac Exec 8 operating system through a user established run stream. A combination of demand and batch operations is employed in the evaluation of preliminary designs. Applications accomplished with the EDIN system are described.

  6. ANTP Protocol Suite Software Implementation Architecture in Python

    DTIC Science & Technology

    2011-06-03

    a popular platform for network programming, an area in which C has traditionally dominated. ...visualisation of the running system. For example, using the Google Maps API, the main logging web page can show all the running nodes in the system. By...communication between AeroNP and AeroRP and runs on the operating system as a daemon. Furthermore, it creates an API interface to manage the communication between

  7. The LHCb Run Control

    NASA Astrophysics Data System (ADS)

    Alessio, F.; Barandela, M. C.; Callot, O.; Duval, P.-Y.; Franek, B.; Frank, M.; Galli, D.; Gaspar, C.; Herwijnen, E. v.; Jacobsson, R.; Jost, B.; Neufeld, N.; Sambade, A.; Schwemmer, R.; Somogyi, P.

    2010-04-01

    LHCb has designed and implemented an integrated Experiment Control System. The Control System uses the same concepts and the same tools to control and monitor all parts of the experiment: the Data Acquisition System, the Timing and the Trigger Systems, the High Level Trigger Farm, the Detector Control System, the Experiment's Infrastructure and the interaction with the CERN Technical Services and the Accelerator. LHCb's Run Control, the main interface used by the experiment's operator, provides access in a hierarchical, coherent and homogeneous manner to all areas of the experiment and to all its sub-detectors. It allows for automated (or manual) configuration and control, including error recovery, of the full experiment in its different running modes. Different instances of the same Run Control interface are used by the various sub-detectors for their stand-alone activities: test runs, calibration runs, etc. The architecture and the tools used to build the control system, the guidelines and components provided to the developers, as well as the first experience with the usage of the Run Control will be presented.
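    The hierarchical access described (commands flow down the tree of sub-systems, states are summarized upward) can be sketched as follows; the node names and the toy state-aggregation rule are hypothetical, not LHCb's actual finite-state-machine framework.

```python
class ControlNode:
    """Toy hierarchical control unit: a command propagates to all
    children, and the parent's state summarizes its children."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)
        self.state = "NOT_READY"
    def command(self, action):
        for child in self.children:
            child.command(action)
        if self.children:
            # Parent state = the common child state, else ERROR (toy rule).
            states = {c.state for c in self.children}
            self.state = states.pop() if len(states) == 1 else "ERROR"
        else:
            self.state = {"configure": "READY", "start": "RUNNING"}[action]

daq = ControlNode("DAQ", [ControlNode("ReadoutA"), ControlNode("ReadoutB")])
top = ControlNode("RunControl", [daq, ControlNode("Trigger")])
top.command("configure")
print(top.state)  # READY
top.command("start")
print(top.state)  # RUNNING
```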

  8. Run II of the LHC: The Accelerator Science

    NASA Astrophysics Data System (ADS)

    Redaelli, Stefano

    2015-04-01

    In 2015 the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) starts its Run II operation. After the successful Run I at 3.5 TeV and 4 TeV in the 2010-2013 period, a first long shutdown (LS1) was mainly dedicated to the consolidation of the LHC magnet interconnections, to allow the LHC to operate at its design beam energy of 7 TeV. Other key accelerator systems have also been improved to optimize the performance reach at higher beam energies. After a review of the LS1 activities, the status of the LHC start-up progress is reported, addressing in particular the status of the LHC hardware commissioning and of the training campaign of superconducting magnets that will determine the operation beam energy in 2015. Then, the plans for the Run II operation are reviewed in detail, covering choice of initial machine parameters and strategy to improve the Run II performance. Future prospects of the LHC and its upgrade plans are also presented.

  9. Pathways to designing and running an operational flood forecasting system: an adventure game!

    NASA Astrophysics Data System (ADS)

    Arnal, Louise; Pappenberger, Florian; Ramos, Maria-Helena; Cloke, Hannah; Crochemore, Louise; Giuliani, Matteo; Aalbers, Emma

    2017-04-01

    In the design and building of an operational flood forecasting system, a large number of decisions have to be taken. These include technical decisions related to the choice of the meteorological forecasts to be used as input to the hydrological model, the choice of the hydrological model itself (its structure and parameters), the selection of a data assimilation procedure to run in real-time, the use (or not) of a post-processor, and the computing environment to run the models and display the outputs. Additionally, a number of trans-disciplinary decisions are also involved in the process, such as the way the needs of the users will be considered in the modelling setup and how the forecasts (and their quality) will be efficiently communicated to ensure usefulness and build confidence in the forecasting system. We propose to reflect on the numerous, alternative pathways to designing and running an operational flood forecasting system through an adventure game. In this game, the player is the protagonist of an interactive story driven by challenges, exploration and problem-solving. For this presentation, you will have a chance to play this game, acting as the leader of a forecasting team at an operational centre. Your role is to manage the actions of your team and make sequential decisions that impact the design and running of the system in preparation to and during a flood event, and that deal with the consequences of the forecasts issued. Your actions are evaluated by how much they cost you in time, money and credibility. Your aim is to take decisions that will ultimately lead to a good balance between time and money spent, while keeping your credibility high over the whole process. This game was designed to highlight the complexities behind decision-making in an operational forecasting and emergency response context, in terms of the variety of pathways that can be selected as well as the timescale, cost and timing of effective actions.
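    The game's evaluation of decisions by their cost in time, money, and credibility can be sketched as a running score; the decisions and costs below are invented examples, not the game's actual values.

```python
class ForecastGame:
    """Track the cumulative cost of each decision in time, money,
    and credibility (credibility clamped to 0..100)."""
    def __init__(self):
        self.time_h, self.money, self.credibility = 0, 0, 100
    def decide(self, time_h=0, money=0, credibility=0):
        self.time_h += time_h
        self.money += money
        self.credibility = max(0, min(100, self.credibility + credibility))
        return self

game = ForecastGame()
game.decide(time_h=4, money=2000)        # set up a forecast post-processor
game.decide(credibility=-20)             # a missed flood warning
game.decide(time_h=1, credibility=+5)    # a clear briefing to users
print(game.time_h, game.money, game.credibility)
```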

  10. Commanding and Controlling Satellite Clusters (IEEE Intelligent Systems, November/December 2000)

    DTIC Science & Technology

    2000-01-01

    real-time operating system, a message-passing OS well suited for distributed... [acronym table residue: RTOS, real-time operating system; SCL, space command language; RDMS, rational database management system] ...engineer with Princeton Satellite Systems. She is working with others to develop ObjectAgent software to run on the OSE Real-Time Operating System.

  11. Data acquisition and control system with a programmable logic controller (PLC) for a pulsed chemical oxygen-iodine laser

    NASA Astrophysics Data System (ADS)

    Yu, Haijun; Li, Guofu; Duo, Liping; Jin, Yuqi; Wang, Jian; Sang, Fengting; Kang, Yuanfu; Li, Liucheng; Wang, Yuanhu; Tang, Shukai; Yu, Hongliang

    2015-02-01

    A user-friendly data acquisition and control system (DACS) for a pulsed chemical oxygen-iodine laser (PCOIL) has been developed. It is implemented with an industrial control computer, a PLC, and a distributed input/output (I/O) module, as well as the valves and transmitters. The system is capable of handling 200 analogue/digital channels for performing various operations such as on-line acquisition, display, safety measures, and control of various valves. These operations are controlled either by control switches configured on a PC while not running or by a pre-determined sequence of timings during the run. The system is capable of real-time acquisition and on-line estimation of important diagnostic parameters for optimization of a PCOIL. The DACS has been programmed using a software programmable logic controller (PLC). Using this DACS, more than 200 runs were performed successfully.

  12. 40 CFR 265.193 - Containment and detection of releases.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... of the largest tank within its boundary; (ii) Designed or operated to prevent run-on or infiltration... excess capacity to contain run-on or infiltration. Such additional capacity must be sufficient to contain... infiltration of precipitation into the secondary containment system unless the collection system has sufficient...

  13. 40 CFR 265.193 - Containment and detection of releases.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... of the largest tank within its boundary; (ii) Designed or operated to prevent run-on or infiltration... excess capacity to contain run-on or infiltration. Such additional capacity must be sufficient to contain... infiltration of precipitation into the secondary containment system unless the collection system has sufficient...

  14. 40 CFR 265.193 - Containment and detection of releases.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... of the largest tank within its boundary; (ii) Designed or operated to prevent run-on or infiltration... excess capacity to contain run-on or infiltration. Such additional capacity must be sufficient to contain... infiltration of precipitation into the secondary containment system unless the collection system has sufficient...

  15. Energy Frontier Research With ATLAS: Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butler, John; Black, Kevin; Ahlen, Steve

    2016-06-14

    The Boston University (BU) group is playing key roles across the ATLAS experiment: in detector operations, the online trigger, the upgrade, computing, and physics analysis. Our team has been critical to the maintenance and operations of the muon system since its installation. During Run 1 we led the muon trigger group, and that responsibility continues into Run 2. BU maintains and operates the ATLAS Northeast Tier 2 computing center. We are actively engaged in the analysis of ATLAS data from Run 1 and Run 2. Physics analyses we have contributed to include Standard Model measurements (W and Z cross sections, t\bar{t} differential cross sections, WWW^* production), evidence for the Higgs decaying to \tau^+\tau^-, and searches for new phenomena (technicolor, Z' and W', vector-like quarks, dark matter).

  16. A Menu-Driven Interface to Unix-Based Resources

    PubMed Central

    Evans, Elizabeth A.

    1989-01-01

    Unix has often been overlooked in the past as a viable operating system for anyone other than computer scientists. Its terseness, the non-mnemonic nature of its commands, and the lack of user-friendly software to run under it are but a few of the user-related reasons which have been cited. It is, nevertheless, the operating system of choice in many cases. This paper describes a menu-driven interface to Unix which provides friendlier access to the software resources available on the computers running under Unix.

  17. Rotary Kiln Gasification of Solid Waste for Base Camps

    DTIC Science & Technology

    2017-10-02

    ...cup after a full-day run. 3.3 Feedstock Handling System: Garbage bags containing waste feedstock are placed into feed bin FB-101. Ram feeder RF-102...Environmental Science and Technology using the Factory Talk SCADA software running on a laptop computer. A wireless Ethernet router that is located within the...pyrolysis oil produced required consistent draining from the system during operation and became a liquid-waste disposal problem. A 5-hour test run could

  18. Power Consumption Analysis of Operating Systems for Wireless Sensor Networks

    PubMed Central

    Lajara, Rafael; Pelegrí-Sebastiá, José; Perez Solano, Juan J.

    2010-01-01

    In this paper four wireless sensor network operating systems are compared in terms of power consumption. The analysis takes into account the most common operating systems—TinyOS v1.0, TinyOS v2.0, Mantis and Contiki—running on Tmote Sky and MICAz devices. With the objective of ensuring a fair evaluation, a benchmark composed of four applications has been developed, covering the most typical tasks that a Wireless Sensor Network performs. The results show the instant and average current consumption of the devices during the execution of these applications. The experimental measurements provide a good insight into the power mode in which the device components are running at every moment, and they can be used to compare the performance of different operating systems executing the same tasks. PMID:22219688

  19. Planning And Reasoning For A Telerobot

    NASA Technical Reports Server (NTRS)

    Peters, Stephen F.; Mittman, David S.; Collins, Carol E.; O'Meara Callahan, Jacquelyn S.; Rokey, Mark J.

    1992-01-01

    Document discusses research and development of the Telerobot Interactive Planning System (TIPS). The goal in development of TIPS is to enable it to accept instructions from an operator, then command the run-time controller to execute the operations that carry out those instructions. Challenges in transferring technology from testbed to operational system are discussed.

  20. Instrument front-ends at Fermilab during Run II

    NASA Astrophysics Data System (ADS)

    Meyer, T.; Slimmer, D.; Voy, D.

    2011-11-01

    The optimization of an accelerator relies on the ability to monitor the behavior of the beam in an intelligent and timely fashion. The use of processor-driven front-ends allowed for the deployment of smart systems in the field for improved data collection and analysis during Run II. This paper describes the implementation of the two main systems used: National Instruments LabVIEW running on PCs, and WindRiver's VxWorks real-time operating system running in a VME crate processor. Work supported by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the United States Department of Energy.

  1. Screwworm Eradication Data System (SEDS) operational manual, part 3

    NASA Technical Reports Server (NTRS)

    1976-01-01

    All phases of SEDS operation as well as utility routines, error messages, and system disk maintenance procedures are described. Display layouts and examples of runs are included as additional explanation to SEDS program procedures.

  2. Improving capacity to monitor and support sustainability of mental health peer-run organizations.

    PubMed

    Ostrow, Laysha; Leaf, Philip J

    2014-02-01

    Peer-run mental health organizations are managed and staffed by people with lived experience of the mental health system. These understudied organizations are increasingly recognized as an important component of the behavioral health care and social support systems. This Open Forum describes the National Survey of Peer-Run Organizations, which was conducted in 2012 to gather information about peer-run organizations and programs, organizational operations, policy perspectives, and service systems. A total of 895 entities were identified and contacted as potential peer-run organizations. Information was obtained for 715 (80%) entities, and 380 of the 715 responding entities met the criteria for a peer-run organization. Implementation of the Affordable Care Act may entail benefits and unintended consequences for peer-run organizations. It is essential that we understand this population of organizations and continue to monitor changes associated with policies intended to provide better access to care that promotes wellness and recovery.

  3. Study on the Effect of a Cogeneration System Capacity on its CO2 Emissions

    NASA Astrophysics Data System (ADS)

    Fonseca, J. G. S., Jr.; Asano, Hitoshi; Fujii, Terushige; Hirasawa, Shigeki

    With the global warming problem aggravating and subsequent implementation of the Kyoto Protocol, CO2 emissions are becoming an important factor when verifying the usability of cogeneration systems. Considering this, the purpose of this work is to study the effect of the capacity of a cogeneration system on its CO2 emissions under two kinds of operation strategies: one focused on exergetic efficiency and another on running cost. The system meets the demand pattern typical of a hospital in Japan, operating during one year with an average heat-to-power ratio of 1.3. The main equipments of the cogeneration system are: a gas turbine with waste heat boiler, a main boiler and an auxiliary steam turbine. Each of these equipments was characterized with partial load models, and the turbine efficiencies at full load changed according to the system capacity. Still, it was assumed that eventual surplus of electricity generated could be sold. The main results showed that for any of the capacities simulated, an exergetic efficiency-focused operational strategy always resulted in higher CO2 emissions reduction when compared to the running cost-focused strategy. Furthermore, the amount of reduction in emissions decreased when the system capacity decreased, reaching a value of 1.6% when the system capacity was 33% of the maximum electricity demand with a heat-to-power ratio of 4.1. When the system operated focused on running cost, the economic savings increased with the capacity and reached 42% for a system capacity of 80% of maximum electricity demand and with a heat-to-power ratio of 2.3. In such conditions however, there was an increase in emissions of 8.5%. Still for the same capacity, an exergetic efficiency operation strategy presented the best balance between cost and emissions, generating economic savings of 29% with a decrease in CO2 emissions of 7.1%. 
The results showed the importance of an exergy-focused operational strategy and also indicated that lower capacities resulted in smaller gains in both CO2 emissions and running-cost reduction.

  4. Implementing Audio Digital Feedback Loop Using the National Instruments RIO System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, G.; Byrd, J. M.

    2006-11-20

    Development of systems for high-precision RF distribution and laser synchronization at Berkeley Lab has been ongoing for several years. Successful operation of these systems requires multiple audio-bandwidth feedback loops running at relatively high gains. Stable operation of the feedback loops requires careful design of the feedback transfer function. To allow for a flexible and compact implementation, we have developed digital feedback loops on the National Instruments Reconfigurable Input/Output (RIO) platform. This platform uses an FPGA and multiple I/Os that can provide eight parallel channels running different filters. We present the design and preliminary experimental results of this system.
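    One of the per-channel digital filters such a loop might run can be sketched as a first-order IIR low-pass; the coefficient is a hypothetical placeholder, not the actual loop design.

```python
def lowpass(samples, alpha):
    """First-order IIR low-pass: y[n] = y[n-1] + alpha * (x[n] - y[n-1]).
    One such filter per channel could shape an audio-band loop response."""
    y, out = 0.0, []
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

step = [1.0] * 5  # unit-step input
print(lowpass(step, alpha=0.5))  # [0.5, 0.75, 0.875, 0.9375, 0.96875]
```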

  5. 242A Distributed Control System Year 2000 Acceptance Test Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    TEATS, M.C.

    1999-08-31

    This report documents acceptance test results for the 242-A Evaporator distributed control system upgrade to D/3 version 9.0-2 for year 2000 compliance. It documents the test results obtained by acceptance testing as directed by procedure HNF-2695. This verification procedure will document the initial testing and evaluation of potential 242-A Distributed Control System (DCS) operating difficulties across the year 2000 boundary and the calendar adjustments needed for the leap year. Baseline system performance data will be recorded using current, as-is operating system software. Data will also be collected for operating system software that has been modified to correct year 2000 problems. This verification procedure is intended to be generic such that it may be performed on any D/3 (trademark of GSE Process Solutions, Inc.) distributed control system that runs with the VMS (trademark of Digital Equipment Corporation) operating system. This test may be run on simulation or production systems depending upon facility status. On production systems, DCS outages will occur nine times throughout performance of the test. These outages are expected to last about 10 minutes each.

  6. Ada 9X Project Revision Request Report. Supplement 1

    DTIC Science & Technology

    1990-01-01

    Non-portable use of operating system primitives or of Ada run-time system internals. POSSIBLE SOLUTIONS: Mandate that compilers recognize tasks that...complex than a simple operating system file, the compiler vendor must provide routines to manipulate it (create, copy, move, etc.) as a single entity...system, to support fault tolerance, load sharing, change of system operating mode, etc. It is highly desirable that such important software be written in

  7. Simplified programming and control of automated radiosynthesizers through unit operations.

    PubMed

    Claggett, Shane B; Quinn, Kevin M; Lazari, Mark; Moore, Melissa D; van Dam, R Michael

    2013-07-15

    Many automated radiosynthesizers for producing positron emission tomography (PET) probes provide a means for the operator to create custom synthesis programs. The programming interfaces are typically designed with the engineer rather than the radiochemist in mind, requiring lengthy programs to be created from sequences of low-level, non-intuitive hardware operations. In some cases, the user is even responsible for adding steps to update the graphical representation of the system. In light of these unnecessarily complex approaches, we have created software to perform radiochemistry on the ELIXYS radiosynthesizer with the goal of being intuitive and easy to use. Radiochemists were consulted, and a wide range of radiosyntheses were analyzed to determine a comprehensive set of basic chemistry unit operations. Based around these operations, we created a software control system with a client-server architecture. In an attempt to maximize flexibility, the client software was designed to run on a variety of portable multi-touch devices. The software was used to create programs for the synthesis of several 18F-labeled probes on the ELIXYS radiosynthesizer, with [18F]FDG detailed here. To gauge the user-friendliness of the software, program lengths were compared to those from other systems. A small sample group with no prior radiosynthesizer experience was tasked with creating and running a simple protocol. The software was successfully used to synthesize several 18F-labeled PET probes, including [18F]FDG, with synthesis times and yields comparable to literature reports. The resulting programs were significantly shorter and easier to debug than programs from other systems. The sample group of naive users created and ran a simple protocol within a couple of hours, revealing a very short learning curve. The client-server architecture provided reliability, enabling continuity of the synthesis run even if the computer running the client software failed. 
The architecture enabled a single user to control the hardware while others observed the run in progress or created programs for other probes. We developed a novel unit operation-based software interface to control automated radiosynthesizers that reduced the program length and complexity and also exhibited a short learning curve. The client-server architecture provided robustness and flexibility.

  8. Simplified programming and control of automated radiosynthesizers through unit operations

    PubMed Central

    2013-01-01

    Background Many automated radiosynthesizers for producing positron emission tomography (PET) probes provide a means for the operator to create custom synthesis programs. The programming interfaces are typically designed with the engineer rather than the radiochemist in mind, requiring lengthy programs to be created from sequences of low-level, non-intuitive hardware operations. In some cases, the user is even responsible for adding steps to update the graphical representation of the system. In light of these unnecessarily complex approaches, we have created software to perform radiochemistry on the ELIXYS radiosynthesizer with the goal of being intuitive and easy to use. Methods Radiochemists were consulted, and a wide range of radiosyntheses were analyzed to determine a comprehensive set of basic chemistry unit operations. Based around these operations, we created a software control system with a client–server architecture. In an attempt to maximize flexibility, the client software was designed to run on a variety of portable multi-touch devices. The software was used to create programs for the synthesis of several 18F-labeled probes on the ELIXYS radiosynthesizer, with [18F]FDG detailed here. To gauge the user-friendliness of the software, program lengths were compared to those from other systems. A small sample group with no prior radiosynthesizer experience was tasked with creating and running a simple protocol. Results The software was successfully used to synthesize several 18F-labeled PET probes, including [18F]FDG, with synthesis times and yields comparable to literature reports. The resulting programs were significantly shorter and easier to debug than programs from other systems. The sample group of naive users created and ran a simple protocol within a couple of hours, revealing a very short learning curve. 
The client–server architecture provided reliability, enabling continuity of the synthesis run even if the computer running the client software failed. The architecture enabled a single user to control the hardware while others observed the run in progress or created programs for other probes. Conclusions We developed a novel unit operation-based software interface to control automated radiosynthesizers that reduced the program length and complexity and also exhibited a short learning curve. The client–server architecture provided robustness and flexibility. PMID:23855995
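    The unit-operation idea described above can be sketched in a few lines. This is an illustrative Python sketch only, not the actual ELIXYS software; the operation names, parameters, and reagents are assumptions chosen to resemble a simplified [18F]FDG-like sequence.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class UnitOp:
    """One chemistry-level step, hiding the underlying valve/motor commands."""
    name: str
    params: dict

def add_reagent(reactor: int, reagent: str, volume_ml: float) -> UnitOp:
    return UnitOp("AddReagent", {"reactor": reactor, "reagent": reagent, "volume_ml": volume_ml})

def evaporate(reactor: int, temp_c: float, minutes: float) -> UnitOp:
    return UnitOp("Evaporate", {"reactor": reactor, "temp_c": temp_c, "minutes": minutes})

def react(reactor: int, temp_c: float, minutes: float) -> UnitOp:
    return UnitOp("React", {"reactor": reactor, "temp_c": temp_c, "minutes": minutes})

# A four-step fragment: readable chemistry steps instead of hundreds of
# low-level hardware operations.
program: List[UnitOp] = [
    add_reagent(1, "K222/K2CO3", 1.0),
    evaporate(1, 110.0, 5.0),
    add_reagent(1, "precursor in MeCN", 1.0),
    react(1, 90.0, 5.0),
]

for step in program:
    print(step.name, step.params)
```

    In a client-server design like the one described, the client would serialize such a program and submit it to the server, which executes it against the hardware; this is how the run can continue even if the client machine fails.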

  9. ATD-1 Operational Integration Assessment Final Report

    NASA Technical Reports Server (NTRS)

    Witzberger, Kevin E.; Sharma, Shivanjli; Martin, Lynn Hazel; Wynnyk, Mitch; McGarry, Katie

    2015-01-01

    The FAA and NASA conducted an Operational Integration Assessment (OIA) of a prototype Terminal Sequencing and Spacing (formerly TSS, now TSAS) system at the FAA's William J. Hughes Technical Center (WJHTC). The OIA took approximately one year to plan and execute, culminating in a formal data collection, referred to as the Run for Record, from May 12-21, 2015. This report presents quantitative and qualitative results from the Run for Record.

  10. Development and Design of a User Interface for a Computer Automated Heating, Ventilation, and Air Conditioning System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, B.; /Fermilab

    1999-10-08

    A user interface is created to monitor and operate the heating, ventilation, and air conditioning system. The interface is networked to the system's programmable logic controller. The controller maintains automated control of the system. Through the interface, the user is able to see the status of the system and override or adjust the automatic control features. The interface is programmed to show digital readouts of system equipment as well as visual cues of system operational statuses. It also provides information for system design and component interaction. The interface is made easier to read by simple designs, color coordination, and graphics. Fermi National Accelerator Laboratory (Fermilab) conducts high energy particle physics research. Part of this research involves collision experiments with protons and anti-protons. These interactions are contained within one of two massive detectors along Fermilab's largest particle accelerator, the Tevatron. The D-Zero Assembly Building houses one of these detectors. At this time detector systems are being upgraded for a second experiment run, titled Run II. Unlike the previous run, systems at D-Zero must be computer automated so operators do not have to continually monitor and adjust these systems during the run. Human intervention should only be necessary for system start up and shut down, and equipment failure. Part of this upgrade includes the heating, ventilation, and air conditioning (HVAC) system. The HVAC system is responsible for controlling two subsystems: the air temperatures of the D-Zero Assembly Building and associated collision hall, as well as six separate water systems used in the heating and cooling of the air and detector components. The HVAC system is automated by a programmable logic controller. In order to provide system monitoring and operator control, a user interface is required. This paper will address methods and strategies used to design and implement an effective user interface. Background material pertinent to the HVAC system will cover the separate water and air subsystems and their purposes. In addition, programming and system automation will be covered.

  11. Trigger Menu-aware Monitoring for the ATLAS experiment

    NASA Astrophysics Data System (ADS)

    Hoad, Xanthe; ATLAS Collaboration

    2017-10-01

    We present a “trigger menu-aware” monitoring system designed for the Run-2 data-taking of the ATLAS experiment at the LHC. Unlike Run-1, where a change in the trigger menu had to be matched by the installation of a new software release at Tier-0, the new monitoring system aims to simplify the ATLAS operational workflows. This is achieved by integrating monitoring updates in a quick and flexible manner via an Oracle DB interface. We present the design and the implementation of the menu-aware monitoring, along with lessons from the operational experience of the new system with the 2016 collision data.

  12. Operating a wide-area remote observing system for the W. M. Keck Observatory

    NASA Astrophysics Data System (ADS)

    Wirth, Gregory D.; Kibrick, Robert I.; Goodrich, Robert W.; Lyke, James E.

    2008-07-01

    For over a decade, the W. M. Keck Observatory's two 10-meter telescopes have been operated remotely from its Waimea headquarters. Over the last 6 years, WMKO remote observing has expanded to allow teams at dedicated sites in California to observe either in collaboration with colleagues in Waimea or entirely from the U.S. mainland. Once an experimental effort, the Observatory's mainland observing capability is now fully operational, supported on all science instruments (except the interferometer) and regularly used by astronomers at eight mainland sites. Establishing a convenient and secure observing capability from those sites required careful planning to ensure that they are properly equipped and configured. It also entailed a significant investment in hardware and software, including both custom scripts to simplify launching the instrument interface at remote sites and automated routers employing ISDN backup lines to ensure continuation of observing during Internet outages. Observers often wait until shortly before their runs to request use of the mainland facilities. Scheduling these requests and ensuring proper system operation prior to observing requires close coordination between personnel at WMKO and the mainland sites. An established protocol for approving requests and carrying out pre-run checkout has proven useful in ensuring success. The Observatory anticipates enhancing and expanding its remote observing system. Future plans include deploying dedicated summit computers for running VNC server software, implementing a web-based tracking system for mainland-based observing requests, expanding the system to additional mainland sites, and converting to full-time VNC operation for all instruments.

  13. Hardware-In-The-Loop Power Extraction Using Different Real-Time Platforms (Postprint)

    DTIC Science & Technology

    2008-11-01

    each real-time operating system. However, discrepancies in test results obtained from the NI system can be resolved. This paper briefly details...same model in native Simulink. These results show that each real-time operating system can be configured to accurately run transient Simulink models

  14. From Operating-System Correctness to Pervasively Verified Applications

    NASA Astrophysics Data System (ADS)

    Daum, Matthias; Schirmer, Norbert W.; Schmidt, Mareike

    Though program verification is known and has been used for decades, the verification of a complete computer system still remains a grand challenge. Part of this challenge is the interaction of application programs with the operating system, which is usually entrusted with retrieving input data from and transferring output data to peripheral devices. In this scenario, the correct operation of the applications inherently relies on operating-system correctness. Based on the formal correctness of our real-time operating system Olos, this paper describes an approach to pervasively verify applications running on top of the operating system.

  15. The Data Acquisition System for the AAO 2-Degree Field Project

    NASA Astrophysics Data System (ADS)

    Shortridge, K.; Farrell, T. J.; Bailey, J. A.

    1993-01-01

    The software system being produced by AAO to control the new 2-degree field fibre positioner and spectrographs is described. The system has to mesh cleanly with the ADAM systems used at AAO for CCD data acquisition, and has to run on a network of disparate machines including VMS Vaxes, UNIX workstations, and VME systems running VxWorks. The basis of the new system is a task control layer that operates by sending self-defining hierarchically-structured and machine-independent messages.
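    A self-defining, hierarchically structured, machine-independent message of the kind described can be approximated with any structured serialization. This Python sketch uses JSON as a stand-in encoding; the task and field names are purely illustrative, not taken from the 2dF system.

```python
import json

# Illustrative command message: the structure describes itself, so any
# receiver (VMS, UNIX, or VxWorks in the 2dF case) can decode it without
# sharing compiled message definitions.
message = {
    "task": "POSITIONER",
    "action": "MOVE_FIBRE",
    "argument": {
        "fibre": 137,
        "target": {"x_microns": 12500, "y_microns": -8400},
    },
}

wire = json.dumps(message)      # machine-independent encoding for the network
decoded = json.loads(wire)      # reconstructed as a nested structure
print(decoded["argument"]["target"]["x_microns"])   # 12500
```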

  16. Preliminary Findings of the South Africa Power System Capacity Expansion and Operational Modelling Study: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reber, Timothy J; Chartan, Erol Kevin; Brinkman, Gregory L

    Wind and solar power contract prices have recently become cheaper than many conventional new-build alternatives in South Africa, and trends suggest a continued increase in the share of variable renewable energy (vRE) on South Africa's power system, with coal technology seeing the greatest reduction in capacity (see 'Figure 6: Percentage share by Installed Capacity (MW)' in [1]). Hence it is essential to perform a state-of-the-art grid integration study examining the effects of these high penetrations of vRE on South Africa's power system. Under the 21st Century Power Partnership (21CPP), funded by the U.S. Department of Energy, the National Renewable Energy Laboratory (NREL) has significantly augmented existing models of the South African power system to investigate future vRE scenarios. NREL, in collaboration with Eskom's Planning Department, further developed, tested, and ran a combined capacity expansion and operational model of the South African power system, including spatially disaggregated detail and geographical representation of system resources. New software to visualize and interpret modelling outputs has been developed, and scenario analysis of stepwise vRE build targets reveals new insight into associated planning and operational impacts and costs. The model, built using PLEXOS, is split into two components: a capacity expansion model and a unit commitment and economic dispatch model. The capacity expansion model optimizes new generation decisions to achieve the lowest cost, with a full understanding of capital cost and an approximated understanding of operational costs. The operational model has a greater set of detailed operational constraints and is run at daily resolution. Both are run from 2017 through 2050. This investigation suggests that running both models in tandem may be the most effective means to plan the least-cost South African power system, as build plans that the capacity expansion model deems more expensive than optimal can produce greater operational cost savings that are visible only in the operational model.

  17. Client-Server: What Is It and Are We There Yet?

    ERIC Educational Resources Information Center

    Gershenfeld, Nancy

    1995-01-01

    Discusses client-server architecture in dumb terminals, personal computers, local area networks, and graphical user interfaces. Focuses on functions offered by client personal computers: individualized environments; flexibility in running operating systems; advanced operating system features; multiuser environments; and centralized data…

  18. A brief opportunity to run does not function as a reinforcer for mice selected for high daily wheel-running rates.

    PubMed

    Belke, Terry W; Garland, Theodore

    2007-09-01

    Mice from replicate lines, selectively bred based on high daily wheel-running rates, run more total revolutions and at higher average speeds than do mice from nonselected control lines. Based on this difference it was assumed that selected mice would find the opportunity to run in a wheel a more efficacious consequence. To assess this assumption within an operant paradigm, mice must be trained to make a response to produce the opportunity to run as a consequence. In the present study an autoshaping procedure was used to compare the acquisition of lever pressing reinforced by the opportunity to run for a brief opportunity (i.e., 90 s) between selected and control mice and then, using an operant procedure, the effect of the duration of the opportunity to run on lever pressing was assessed by varying reinforcer duration over values of 90 s, 30 min, and 90 s. The reinforcement schedule was a ratio schedule (FR 1 or VR 3). Results from the autoshaping phase showed that more control mice met a criterion of responses on 50% of trials. During the operant phase, when reinforcer duration was 90 s, almost all control, but few selected mice completed a session of 20 reinforcers; however, when reinforcer duration was increased to 30 min almost all selected and control mice completed a session of 20 reinforcers. Taken together, these results suggest that selective breeding based on wheel-running rates over 24 hr may have altered the motivational system in a way that reduces the reinforcing value of shorter running durations. The implications of this finding for these mice as a model for attention deficit hyperactivity disorder (ADHD) are discussed. It also is proposed that there may be an inherent trade-off in the motivational system for activities of short versus long duration.

  19. A Brief Opportunity to Run Does Not Function as a Reinforcer for Mice Selected for High Daily Wheel-running Rates

    PubMed Central

    Belke, Terry W; Garland, Theodore, Jr

    2007-01-01

    Mice from replicate lines, selectively bred based on high daily wheel-running rates, run more total revolutions and at higher average speeds than do mice from nonselected control lines. Based on this difference it was assumed that selected mice would find the opportunity to run in a wheel a more efficacious consequence. To assess this assumption within an operant paradigm, mice must be trained to make a response to produce the opportunity to run as a consequence. In the present study an autoshaping procedure was used to compare the acquisition of lever pressing reinforced by the opportunity to run for a brief opportunity (i.e., 90 s) between selected and control mice and then, using an operant procedure, the effect of the duration of the opportunity to run on lever pressing was assessed by varying reinforcer duration over values of 90 s, 30 min, and 90 s. The reinforcement schedule was a ratio schedule (FR 1 or VR 3). Results from the autoshaping phase showed that more control mice met a criterion of responses on 50% of trials. During the operant phase, when reinforcer duration was 90 s, almost all control, but few selected mice completed a session of 20 reinforcers; however, when reinforcer duration was increased to 30 min almost all selected and control mice completed a session of 20 reinforcers. Taken together, these results suggest that selective breeding based on wheel-running rates over 24 hr may have altered the motivational system in a way that reduces the reinforcing value of shorter running durations. The implications of this finding for these mice as a model for attention deficit hyperactivity disorder (ADHD) are discussed. It also is proposed that there may be an inherent trade-off in the motivational system for activities of short versus long duration. PMID:17970415

  20. The Real-Time ObjectAgent Software Architecture for Distributed Satellite Systems

    DTIC Science & Technology

    2001-01-01

    real-time operating system selection are also discussed. The fourth section describes a simple demonstration of real-time ObjectAgent. Finally, the...experience with C++. After selecting the programming language, it was necessary to select a target real-time operating system (RTOS) and embedded...ObjectAgent software to run on the OSE Real Time Operating System. In addition, she is responsible for the integration of ObjectAgent

  1. Enhanced Component Performance Study: Turbine-Driven Pumps 1998–2014

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schroeder, John Alton

    2015-11-01

    This report presents an enhanced performance evaluation of turbine-driven pumps (TDPs) at U.S. commercial nuclear power plants. The data used in this study are based on the operating experience failure reports from fiscal year 1998 through 2014 for the component reliability as reported in the Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES). The TDP failure modes considered are failure to start (FTS), failure to run less than or equal to one hour (FTR≤1H), failure to run more than one hour (FTR>1H), and, for normally running systems, FTS and failure to run (FTR). The component reliability estimates and the reliability data are trended for the most recent 10-year period, while yearly estimates for reliability are provided for the entire active period. Statistically significant increasing trends were identified for TDP unavailability, for frequency of start demands for standby TDPs, and for run hours in the first hour after start. Statistically significant decreasing trends were identified for start demands for normally running TDPs, and for run hours per reactor critical year for normally running TDPs.
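    Failure modes like those trended above are typically summarized as Bayesian failure-on-demand probabilities. The sketch below shows the common Jeffreys-prior form of such an estimate; the counts are invented for illustration and are not taken from this report.

```python
def jeffreys_mean(failures: int, demands: int) -> float:
    """Posterior mean of the failure probability under a Jeffreys
    Beta(0.5, 0.5) prior, a standard choice in component reliability studies."""
    return (failures + 0.5) / (demands + 1.0)

# Hypothetical counts: 4 failures to start in 2000 standby pump demands.
p_fts = jeffreys_mean(failures=4, demands=2000)
print(f"FTS probability per demand ~ {p_fts:.2e}")
```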

  2. Some design constraints required for the use of generic software in embedded systems: Packages which manage abstract dynamic structures without the need for garbage collection

    NASA Technical Reports Server (NTRS)

    Johnson, Charles S.

    1986-01-01

    The embedded systems running real-time applications, for which Ada was designed, require their own mechanisms for the management of dynamically allocated storage. There is a need for packages which manage their own internal structures to control their deallocation as well, due to the performance implications of garbage collection by the KAPSE. This places a requirement upon the design of generic packages which manage generically structured private types built up from application-defined input types. These kinds of generic packages should figure greatly in the development of lower-level software such as operating systems, schedulers, controllers, and device drivers, and will manage structures such as queues, stacks, linked lists, files, and binary and multary (hierarchical) trees. A study was made of the use of limited private types, which are controlled to prevent the inadvertent de-designation of dynamic elements implicit in the assignment operation, in solving the problems of controlling the accumulation of anonymous, detached objects in running systems. The use of deallocator procedures for the run-down of application-defined input types during deallocation operations is also discussed.

  3. Robotics On-Board Trainer (ROBoT)

    NASA Technical Reports Server (NTRS)

    Johnson, Genevieve; Alexander, Greg

    2013-01-01

    ROBoT is an on-orbit version of the ground-based Dynamics Skills Trainer (DST) that astronauts use for training on a frequent basis. This software consists of two primary software groups. The first series of components is responsible for displaying the graphical scenes. The remaining components are responsible for simulating the Mobile Servicing System (MSS), the Japanese Experiment Module Remote Manipulator System (JEMRMS), and the H-II Transfer Vehicle (HTV) Free Flyer Robotics Operations. The MSS simulation software includes: Robotic Workstation (RWS) simulation, a simulation of the Space Station Remote Manipulator System (SSRMS), a simulation of the ISS Command and Control System (CCS), and a portion of the Portable Computer System (PCS) software necessary for MSS operations. These components all run under the CentOS 4.5 Linux operating system. The JEMRMS simulation software includes real-time HIL dynamics, manipulator multi-body dynamics, and a moving-object contact model with Trick's discrete-time scheduling. The JEMRMS DST will be used as a functional proficiency and skills trainer for flight crews. The HTV Free Flyer Robotics Operations simulation software adds a functional simulation of HTV vehicle controllers, sensors, and data to the MSS simulation software. These components are intended to support HTV ISS visiting vehicle analysis and training. The scene generation software will use DOUG (Dynamic On-orbit Ubiquitous Graphics) to render the graphical scenes. DOUG runs on a laptop running the CentOS 4.5 Linux operating system. DOUG is an OpenGL-based 3D computer graphics rendering package. It uses pre-built three-dimensional models of on-orbit ISS and space shuttle systems elements, and provides real-time views of various station and shuttle configurations.

  4. New Process Controls for the Hera Cryogenic Plant

    NASA Astrophysics Data System (ADS)

    Böckmann, T.; Clausen, M.; Gerke, Chr.; Prüß, K.; Schoeneburg, B.; Urbschat, P.

    2010-04-01

    The cryogenic plant built for the HERA accelerator at DESY in Hamburg (Germany) has now been in operation for more than two decades, as has the commercial process control system for the plant. Since then, the operator stations, the control network, and the CPU boards in the process controllers have gone through several upgrade stages; only the centralized input/output system was kept unchanged. Many components have been running beyond their expected lifetime. The control system for one of the three parts of the cryogenic plant has recently been replaced by a distributed I/O system. The I/O nodes are connected to several Profibus-DP field buses. Profibus provides the infrastructure to attach intelligent sensors and actuators directly to the process controllers, which run the open-source process control software EPICS. This paper describes the modification process on all levels, from cabling through I/O configuration and the process control software up to the operator displays.

  5. Development of advanced Czochralski growth process to produce low-cost 150 kG silicon ingots from a single crucible for technology readiness

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Construction, installation, and checkout of the modified CG2000 crystal grower were completed. Process development checkout proceeded with several dry runs and one growth run. Several machine calibration and functional problems were discovered and corrected. Exhaust gas analysis system alternatives were evaluated, and an integrated system was approved and ordered. Several growth runs on a development CG2000 RC grower show that complete neck, crown, and body automated growth can be achieved with only one operator input.

  6. Scaling NS-3 DCE Experiments on Multi-Core Servers

    DTIC Science & Technology

    2016-06-15

    that work well together. 3.2 Simulation Server Details We ran the simulations on a Dell® PowerEdge M520 blade server[8] running Ubuntu Linux 14.04...To minimize the amount of time needed to complete all of the simulations, we planned to run multiple simulations at the same time on a blade server...MacBook was running the simulation inside a virtual machine (Ubuntu 14.04), while the blade server was running the same operating system directly on

  7. Performance Evaluation of a Firm Real-Time DataBase System

    DTIC Science & Technology

    1995-01-01

    after its deadline has passed. StarBase differs from previous real-time database work in that a) it relies on a real-time operating system which...StarBase, running on a real-time operating system kernel, RT-Mach. We discuss how performance was evaluated in StarBase using the StarBase workload

  8. The Design and Implementation of INGRES.

    ERIC Educational Resources Information Center

    Stonebraker, Michael; And Others

    The currently operational version of the INGRES data base management system gives a relational view of data, supports two high level, non-procedural data sublanguages, and runs as a collection of user processes on top of a UNIX operating system. The authors stress the design decisions and tradeoffs in relation to (1) structuring the system into…

  9. Digital ultrasonics signal processing: Flaw data post processing use and description

    NASA Technical Reports Server (NTRS)

    Buel, V. E.

    1981-01-01

    A modular system composed of two sets of tasks which interpret the flaw data and allow compensation of the data due to transducer characteristics is described. The hardware configuration consists of two main units. A DEC LSI-11 processor, running under the RT-11 single-job, version 2C-02 operating system, controls the scanner hardware and the ultrasonic unit. A DEC PDP-11/45 processor, also running under the RT-11, version 2C-02, operating system, stores, processes, and displays the flaw data. The software developed, the Ultrasonics Evaluation System, is divided into two categories: transducer characterization and flaw classification. Each category is divided further into two functional tasks: a data acquisition task and a postprocessing task. The flaw characterization task collects data, compresses it, and writes it to a disk file. The data is then processed by the flaw classification postprocessing task. The use and operation of the flaw data postprocessor are described.

  10. Virtual time and time warp on the JPL hypercube. [operating system implementation for distributed simulation

    NASA Technical Reports Server (NTRS)

    Jefferson, David; Beckman, Brian

    1986-01-01

    This paper describes the concept of virtual time and its implementation in the Time Warp Operating System at the Jet Propulsion Laboratory. Virtual time is a distributed synchronization paradigm that is appropriate for distributed simulation, database concurrency control, real time systems, and coordination of replicated processes. The Time Warp Operating System is targeted toward the distributed simulation application and runs on a 32-node JPL Mark II Hypercube.
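    The rollback mechanism at the heart of Time Warp can be sketched compactly. The toy Python model below keeps state snapshots and rolls back when a straggler message arrives in the process's virtual past; it omits anti-messages, global virtual time, and fossil collection from the full Jefferson/Beckman design, and all names are illustrative.

```python
import heapq

class TimeWarpProcess:
    def __init__(self):
        self.lvt = 0                  # local virtual time
        self.state = 0
        self.queue = []               # pending (timestamp, delta) events
        self.processed = []           # executed events, kept for re-execution
        self.saved = [(0, 0)]         # (lvt, state) snapshots for rollback

    def receive(self, ts, delta):
        if ts < self.lvt:
            # Straggler: restore the last snapshot strictly before ts ...
            while self.saved[-1][0] >= ts:
                self.saved.pop()
            self.lvt, self.state = self.saved[-1]
            # ... and re-enqueue every event undone by the rollback.
            redo = [e for e in self.processed if e[0] > self.lvt]
            self.processed = [e for e in self.processed if e[0] <= self.lvt]
            for e in redo:
                heapq.heappush(self.queue, e)
        heapq.heappush(self.queue, (ts, delta))
        self._run()

    def _run(self):
        # Optimistically execute everything pending, in timestamp order.
        while self.queue:
            ts, delta = heapq.heappop(self.queue)
            self.lvt = ts
            self.state += delta
            self.processed.append((ts, delta))
            self.saved.append((self.lvt, self.state))

p = TimeWarpProcess()
p.receive(10, 1)
p.receive(20, 2)
p.receive(15, 4)          # straggler: rolls back past t=20, then re-executes
print(p.lvt, p.state)     # 20 7
```

    The same final state is reached as if the events had arrived in timestamp order, which is the correctness property that lets Time Warp dispense with conservative blocking.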

  11. Experiences running NASTRAN on the Microvax 2 computer

    NASA Technical Reports Server (NTRS)

    Butler, Thomas G.; Mitchell, Reginald S.

    1987-01-01

    The MicroVAX operates NASTRAN so well that the only detectable difference in its operation compared to an 11/780 VAX is in the execution time. On the modest installation described here, the engineer has all of the tools he needs to do an excellent job of analysis. System configuration decisions, system sizing, preparation of the system disk, definition of user quotas, installation, monitoring of system errors, and operation policies are discussed.

  12. Joint Precision Approach and Landing System Nunn-McCurdy Breach Root Cause Analysis and Portfolio Assessment Metrics for DoD Weapons Systems. Volume 8

    DTIC Science & Technology

    2015-01-01

    system that would help in adverse weather conditions. U.S. operations in Bosnia, which were run from a relatively austere airfield with limited air... operations beginning in 2013 (CVN21, Joint Strike Fighter, Joint Unmanned Combat Air System). According to multiple FAA official planning documents...Positioning System Next Generation Operational Control System HMS Handheld, Manpack and Small Form Fit HUD Head-Up Display IAMD Integrated Air and

  13. HS06 Benchmark for an ARM Server

    NASA Astrophysics Data System (ADS)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM Cortex-A9-based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as its operating system, and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  14. Operation of the intensity monitors in beam transport lines at Fermilab during Run II

    DOE PAGES

    Crisp, J.; Fellenz, B.; Fitzgerald, J.; ...

    2011-10-06

    The intensity of charged particle beams at Fermilab must be kept within pre-determined safety and operational envelopes, in part by assuring that all beam, to within a few percent, has been transported from any source to its destination. Beam intensity monitors with toroidal pickups provide such beam intensity measurements in the transport lines between accelerators at FNAL. During Run II, much effort was made to continually improve the resolution and accuracy of the system.

  15. Effect of sucrose availability and pre-running on the intrinsic value of wheel running as an operant and a reinforcing consequence.

    PubMed

    Belke, Terry W; Pierce, W David

    2014-03-01

    The current study investigated the effect of motivational manipulations on operant wheel running for sucrose reinforcement and on wheel running as a behavioral consequence for lever pressing, within the same experimental context. Specifically, rats responded on a two-component multiple schedule of reinforcement in which lever pressing produced the opportunity to run in a wheel in one component of the schedule (reinforcer component) and wheel running produced the opportunity to consume sucrose solution in the other component (operant component). Motivational manipulations involved removal of sucrose contingent on wheel running and providing 1h of pre-session wheel running. Results showed that, in opposition to a response strengthening view, sucrose did not maintain operant wheel running. The motivational operations of withdrawing sucrose or providing pre-session wheel running, however, resulted in different wheel-running rates in the operant and reinforcer components of the multiple schedule; this rate discrepancy revealed the extrinsic reinforcing effects of sucrose on operant wheel running, but also indicated the intrinsic reinforcement value of wheel running across components. Differences in wheel-running rates between components were discussed in terms of arousal, undermining of intrinsic motivation, and behavioral contrast. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Memory Forensics: Review of Acquisition and Analysis Techniques

    DTIC Science & Technology

    2013-11-01

    Management Overview Processes running on modern multitasking operating systems operate on an abstraction of RAM, called virtual memory [7]. In these systems...information such as user names, email addresses and passwords [7]. Analysts also use tools such as WinHex to identify headers or other suspicious data within

  17. HDM/PASCAL Verification System User's Manual

    NASA Technical Reports Server (NTRS)

    Hare, D.

    1983-01-01

    The HDM/Pascal verification system is a tool for proving the correctness of programs written in PASCAL and specified in the Hierarchical Development Methodology (HDM). This document assumes an understanding of PASCAL, HDM, program verification, and the STP system. The steps toward verification which this tool provides are parsing programs and specifications, checking the static semantics, and generating verification conditions. Some support functions are provided such as maintaining a data base, status management, and editing. The system runs under the TOPS-20 and TENEX operating systems and is written in INTERLISP. However, no knowledge is assumed of these operating systems or of INTERLISP. The system requires three executable files, HDMVCG, PARSE, and STP. Optionally, the editor EMACS should be on the system in order for the editor to work. The file HDMVCG is invoked to run the system. The files PARSE and STP are used as lower forks to perform the functions of parsing and proving.
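
    The verification-condition step described above can be illustrated with a toy weakest-precondition calculator (a minimal sketch in Python; HDM/Pascal's actual parser, specification language, and VC algorithm are not reproduced here):

```python
# Toy verification-condition generator in the spirit of the tool's
# parse -> check -> generate-VCs pipeline (illustrative only).

def wp(stmt, post):
    """Weakest precondition of a statement w.r.t. a postcondition.
    Statements are tuples: ('assign', var, expr) or ('seq', [stmts])."""
    kind = stmt[0]
    if kind == 'assign':
        _, var, expr = stmt
        # Substitute expr for var in the postcondition
        # (string-based toy; real tools substitute over parsed terms).
        return post.replace(var, f'({expr})')
    if kind == 'seq':
        for s in reversed(stmt[1]):
            post = wp(s, post)
        return post
    raise ValueError(f'unknown statement kind: {kind}')

def verification_condition(pre, stmt, post):
    """The Hoare triple {pre} stmt {post} reduces to: pre -> wp(stmt, post)."""
    return f'{pre} -> {wp(stmt, post)}'

# {x >= 0} y := x + 1; z := y * 2 {z >= 2}
prog = ('seq', [('assign', 'y', 'x + 1'), ('assign', 'z', 'y * 2')])
print(verification_condition('x >= 0', prog, 'z >= 2'))
```

    For this triple the sketch emits the implication `x >= 0 -> ((x + 1) * 2) >= 2`, which in a real system would then be discharged by a prover such as STP.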

  18. Real-Time Imaging with a Pulsed Coherent CO2 Laser Radar

    DTIC Science & Technology

    1997-01-01

    30 joule) transmitted energy levels has just begun. The FLD program will conclude in 1997 with the demonstration of a full-up, real-time operating system. This...The master system and VMEbus controller is an off-the-shelf controller based on the Motorola 68040 processor running the VxWorks real-time operating system. Application

  19. The battle between Unix and Windows NT.

    PubMed

    Anderson, H J

    1997-02-01

    For more than a decade, Unix has been the dominant back-end operating system in health care. But that prominent position is being challenged by Windows NT, touted by its developer, Microsoft Corp., as the operating system of the future. CIOs and others are attempting to figure out which system is the best choice in the long run.

  20. Performance of high intensity fed-batch mammalian cell cultures in disposable bioreactor systems.

    PubMed

    Smelko, John Paul; Wiltberger, Kelly Rae; Hickman, Eric Francis; Morris, Beverly Janey; Blackburn, Tobias James; Ryll, Thomas

    2011-01-01

    The adoption of disposable bioreactor technology as an alternative to traditional nondisposable technology is gaining momentum in the biotechnology industry. Evaluation of current disposable bioreactor systems to sustain high intensity fed-batch mammalian cell culture processes needs to be explored. In this study, an assessment was performed comparing single-use bioreactor (SUB) systems of 50-, 250-, and 1,000-L operating scales with traditional stainless steel (SS) and glass vessels using four distinct mammalian cell culture processes. This comparison focuses on expansion and production stage performance. The SUB performance was evaluated based on three main areas: operability, process scalability, and process performance. The process performance and operability aspects were assessed over time and product quality performance was compared at the day of harvest. Expansion stage results showed disposable bioreactors mirror traditional bioreactors in terms of cellular growth and metabolism. Set-up and disposal times were dramatically reduced using the SUB systems when compared with traditional systems. Production stage runs for both Chinese hamster ovary and NS0 cell lines in the SUB system were able to model SS bioreactor runs at 100-, 200-, 2,000-, and 15,000-L scales. A single 1,000-L SUB run applying a high intensity fed-batch process was able to generate 7.5 kg of antibody with comparable product quality. Copyright © 2011 American Institute of Chemical Engineers (AIChE).

  1. Ada (Tradename) Compiler Validation Summary Report. International Business Machines Corporation. IBM Development System for the Ada Language for VM/CMS, Version 1.0. IBM 4381 (IBM System/370) under VM/CMS.

    DTIC Science & Technology

    1986-04-29

    COMPILER VALIDATION SUMMARY REPORT: International Business Machines Corporation IBM Development System for the Ada Language for VM/CMS, Version 1.0 IBM 4381...tested using command scripts provided by International Business Machines Corporation. These scripts were reviewed by the validation team. Tests were run...s): IBM 4381 (System/370) Operating System: VM/CMS, release 3.6 International Business Machines Corporation has made no deliberate extensions to the

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Preston, Leiph

    Although using standard Taylor series coefficients for finite-difference operators is optimal in the sense that in the limit of infinitesimal space and time discretization, the solution approaches the correct analytic solution to the acousto-dynamic system of differential equations, other finite-difference operators may provide optimal computational run time given certain error bounds or source bandwidth constraints. This report describes the results of investigation of alternative optimal finite-difference coefficients based on several optimization/accuracy scenarios and provides recommendations for minimizing run time while retaining error within given error bounds.
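
    As a baseline for the comparison described above, the standard Taylor-series stencil can be sketched as follows (illustrative Python only; the report's optimized coefficients and error bounds are not reproduced here):

```python
import math

# Standard Taylor-series central-difference stencil for the second derivative,
# the baseline that optimized coefficients are compared against.

def second_derivative(f, x, h, coeffs=(1.0, -2.0, 1.0)):
    """Approximate f''(x) with a 3-point stencil; `coeffs` is where
    optimized values tuned for a bandwidth/error target would go."""
    c_m, c_0, c_p = coeffs
    return (c_m * f(x - h) + c_0 * f(x) + c_p * f(x + h)) / h**2

# For f = sin, f''(x) = -sin(x); the Taylor stencil converges as O(h^2).
for h in (0.1, 0.05, 0.025):
    err = abs(second_derivative(math.sin, 1.0, h) - (-math.sin(1.0)))
    print(f'h={h}: error={err:.2e}')
```

    Swapping other coefficient sets into `coeffs` is how one would trade formal Taylor accuracy for larger stable step sizes, and hence shorter run times, within a stated error bound.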

  3. Integrating a Trusted Computing Base Extension Server and Secure Session Server into the LINUX Operating System

    DTIC Science & Technology

    2001-09-01

    Readily Available Linux has been copyrighted under the terms of the GNU General Public License (GPL). This is a license written by the Free...GNOME and KDE. d. Portability Linux is highly compatible with many common operating systems. For...using suitable libraries, Linux is able to run programs written for other operating systems. [Ref. 8] The GNU Project is coordinated by the

  4. The vacuum platform

    NASA Astrophysics Data System (ADS)

    McNab, A.

    2017-10-01

    This paper describes GridPP’s Vacuum Platform for managing virtual machines (VMs), which has been used to run production workloads for WLCG and other HEP experiments. The platform provides a uniform interface between VMs and the sites they run at, whether the site is organised as an Infrastructure-as-a-Service cloud system such as OpenStack, or an Infrastructure-as-a-Client system such as Vac. The paper describes our experience in using this platform, in developing and operating VM lifecycle managers Vac and Vcycle, and in interacting with VMs provided by LHCb, ATLAS, ALICE, CMS, and the GridPP DIRAC service to run production workloads.

  5. A Multi-Season Study of the Effects of MODIS Sea-Surface Temperatures on Operational WRF Forecasts at NWS Miami, FL

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; Santos, Pablo; Lazarus, Steven M.; Splitt, Michael E.; Haines, Stephanie L.; Dembek, Scott R.; Lapenta, William M.

    2008-01-01

    Studies at the Short-term Prediction Research and Transition (SPORT) Center have suggested that the use of Moderate Resolution Imaging Spectroradiometer (MODIS) sea-surface temperature (SST) composites in regional weather forecast models can have a significant positive impact on short-term numerical weather prediction in coastal regions. Recent work by LaCasse et al. (2007, Monthly Weather Review) highlights lower atmospheric differences in regional numerical simulations over the Florida offshore waters using 2-km SST composites derived from the MODIS instrument aboard the polar-orbiting Aqua and Terra Earth Observing System satellites. To help quantify the value of this impact on NWS Weather Forecast Offices (WFOs), the SPORT Center and the NWS WFO at Miami, FL (MIA) are collaborating on a project to investigate the impact of using the high-resolution MODIS SST fields within the Weather Research and Forecasting (WRF) prediction system. The project's goal is to determine whether more accurate specification of the lower-boundary forcing within WRF will result in improved land/sea fluxes and hence, more accurate evolution of coastal mesoscale circulations and the associated sensible weather elements. The NWS MIA is currently running WRF in real-time to support daily forecast operations, using the National Centers for Environmental Prediction Nonhydrostatic Mesoscale Model dynamical core within the NWS Science and Training Resource Center's Environmental Modeling System (EMS) software. Twenty-seven-hour forecasts are run daily, initialized at 0300, 0900, 1500, and 2100 UTC on a domain with 4-km grid spacing covering the southern half of Florida and adjacent waters of the Gulf of Mexico and Atlantic Ocean. Each model run is initialized using the Local Analysis and Prediction System (LAPS) analyses available in AWIPS.
The SSTs are initialized with the NCEP Real-Time Global (RTG) analyses at 1/12deg resolution (approx. 9 km); however, the RTG product does not exhibit fine-scale details consistent with its grid resolution. SPORT is conducting parallel WRF EMS runs identical to the operational runs at NWS MIA except for the use of MODIS SST composites in place of the RTG product as the initial and boundary conditions over water. The MODIS SST composites for initializing the SPORT WRF runs are generated on a 2-km grid four times daily at 0400, 0700, 1600, and 1900 UTC, based on the times of the overhead passes of the Aqua and Terra satellites. The incorporation of the MODIS SST data into the SPORT WRF runs is staggered such that SSTs are updated with a new composite every six hours in each of the WRF runs. From mid-February to July 2007, over 500 parallel WRF simulations have been collected for analysis and verification. This paper will present verification results comparing the NWS MIA operational WRF runs to the SPORT experimental runs, and highlight any substantial differences noted in the predicted mesoscale phenomena for specific cases.
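
The staggered pairing of composite times with model cycles could be sketched as follows (a hypothetical selection rule in Python, assuming each run simply takes the newest composite available at its initialization time; the actual SPORT staggering scheme may differ):

```python
# Composite generation and model initialization times from the abstract (UTC).
COMPOSITE_HOURS = (4, 7, 16, 19)   # daily MODIS SST composite times
CYCLE_HOURS = (3, 9, 15, 21)       # WRF initialization times

def latest_composite(init_hour):
    """Return (day_offset, hour) of the newest composite at or before init_hour."""
    candidates = [h for h in COMPOSITE_HOURS if h <= init_hour]
    if candidates:
        return (0, max(candidates))
    return (-1, max(COMPOSITE_HOURS))  # fall back to the previous day

for cycle in CYCLE_HOURS:
    day, hour = latest_composite(cycle)
    print(f'{cycle:02d} UTC run -> composite from {hour:02d} UTC (day {day})')
```

Under this assumed rule the 0300 UTC run would fall back to the previous day's 1900 UTC composite, while the 0900 and 1500 UTC runs would both draw on the 0700 UTC composite.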

  6. HAL/S-360 compiler system specification

    NASA Technical Reports Server (NTRS)

    Johnson, A. E.; Newbold, P. N.; Schulenberg, C. W.; Avakian, A. E.; Varga, S.; Helmers, P. H.; Helmers, C. T., Jr.; Hotz, R. L.

    1974-01-01

    A three phase language compiler is described which produces IBM 360/370 compatible object modules and a set of simulation tables to aid in run time verification. A link edit step augments the standard OS linkage editor. A comprehensive run time system and library provide the HAL/S operating environment, error handling, a pseudo real time executive, and an extensive set of mathematical, conversion, I/O, and diagnostic routines. The specifications of the information flow and content for this system are also considered.

  7. Thermal Behavior of Aerospace Spur Gears in Normal and Loss-of-Lubrication Conditions

    NASA Technical Reports Server (NTRS)

    Handschuh, Robert F.

    2015-01-01

    Testing of instrumented spur gears operating at aerospace rotorcraft conditions was conducted. The instrumented gears were operated in a normal and in a loss-of-lubrication environment. Thermocouples were utilized to measure the temperature at various locations on the test gears, and one test utilized a full-field, high-speed infrared thermal imaging system. Data from the thermocouples were recorded during all testing at 1 Hz. One test had the gears shrouded and a second test was run without the shrouds to permit the infrared thermal imaging system to take data during loss-of-lubrication operation. Both tests using instrumented spur gears were run in normal and loss-of-lubrication conditions. Results from four other loss-of-lubrication tests are also presented. In these tests two different torque levels were used while operating at the same rotational speed (10,000 revolutions per minute).

  9. Methods, media and systems for managing a distributed application running in a plurality of digital processing devices

    DOEpatents

    Laadan, Oren; Nieh, Jason; Phung, Dan

    2012-10-02

    Methods, media and systems for managing a distributed application running in a plurality of digital processing devices are provided. In some embodiments, a method includes running one or more processes associated with the distributed application in virtualized operating system environments on a plurality of digital processing devices, suspending the one or more processes, and saving network state information relating to network connections among the one or more processes. The method further include storing process information relating to the one or more processes, recreating the network connections using the saved network state information, and restarting the one or more processes using the stored process information.
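
    The suspend/save/recreate/restart cycle claimed above can be caricatured in a few lines (a Python sketch using pickle on plain dicts; the patent's virtualized operating system environments and live network handling are far more involved):

```python
import io
import pickle

# Minimal sketch of checkpoint/restart: capture process state plus network
# connection metadata into one blob, then restore both on restart.

def checkpoint(processes, connections):
    """Suspend: save per-process state and network state information."""
    buf = io.BytesIO()
    pickle.dump({'processes': processes, 'connections': connections}, buf)
    return buf.getvalue()

def restart(blob):
    """Restore: recreate connections first, then resume processes.
    A real implementation would re-establish each socket before resuming."""
    state = pickle.loads(blob)
    return state['processes'], state['connections']

procs = [{'pid': 101, 'step': 42}, {'pid': 102, 'step': 40}]
conns = [('101:5000', '102:5001')]
blob = checkpoint(procs, conns)
restored_procs, restored_conns = restart(blob)
print(restored_procs == procs and restored_conns == conns)  # state round-trips
```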

  10. Evaluating Real-Time Platforms for Aircraft Prognostic Health Management Using Hardware-In-The-Loop

    DTIC Science & Technology

    2008-08-01

    obtained when using HIL and a simulated load. Initially, noticeable differences are seen when comparing the results from each real-time operating system. However...same model in native Simulink. These results show that each real-time operating system can be configured to accurately run transient Simulink

  11. Implementation of a dynamic data entry system for the PHENIX gas system

    NASA Astrophysics Data System (ADS)

    Hagiwara, Masako

    2003-10-01

    The PHENIX detector at the BNL RHIC facility uses multiple detector technologies that require a precise gas delivery system, including flammable gases that require additional monitoring. During operation of the detector, it is crucial to maintain stable and safe operating conditions by carefully monitoring flows, pressures, and various other gas properties. These systems are monitored during running periods on a continuous basis. For the most part, these records were kept by hand, filling out a paper logsheet every four hours. A dynamic data entry system was needed to replace the paper logsheets. The solution created was to use a PDA or laptop computer with a wireless connection to enter the data directly into a MySQL database. The system uses PHP to dynamically create and update the data entry pages. The data entered can be viewed in graphs as well as tables. As a result, the data recorded will be easily accessible during PHENIX's next running period. It also allows for long term archiving, making the data available during the analysis phase, providing knowledge of the operating conditions of the gas system.
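
    The logsheet-replacement idea reduces to inserting timestamped readings into a relational table and querying them back for tables and graphs. A minimal sketch (Python with sqlite3 standing in for the MySQL back-end and PHP front-end; the table and column names are invented, not PHENIX's schema):

```python
import datetime
import sqlite3

# In-memory stand-in for the MySQL archive behind the data entry pages.
db = sqlite3.connect(':memory:')
db.execute("""CREATE TABLE gas_log (
    ts TEXT, subsystem TEXT, flow_sccm REAL, pressure_torr REAL)""")

def record_reading(subsystem, flow, pressure):
    """What the PDA/laptop entry page would do on submit."""
    ts = datetime.datetime.now(datetime.timezone.utc).isoformat()
    db.execute('INSERT INTO gas_log VALUES (?, ?, ?, ?)',
               (ts, subsystem, flow, pressure))

record_reading('drift-chamber', 12.5, 745.0)
record_reading('drift-chamber', 12.7, 744.8)

# Archived data stays queryable for the analysis phase.
rows = db.execute('SELECT COUNT(*), AVG(flow_sccm) FROM gas_log').fetchone()
print(rows)
```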

  12. 77 FR 46185 - United States v. United Technologies Corporation and Goodrich Corporation; Proposed Final...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-02

    ... for generating power for all the in-flight systems that run on electricity, including pumping breathable air into the fuselage, operating the lights, and running the navigation and communication... turning a propeller blade on a turboprop engine, a rotor shaft on a turboshaft engine, or a fan in front...

  13. Silicon web process development

    NASA Technical Reports Server (NTRS)

    Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Hill, F. E.; Skutch, M. E.; Driggers, J. M.; Hopkins, R. H.

    1980-01-01

    A barrier crucible design which consistently maintains melt stability over long periods of time was successfully tested and used in long growth runs. The pellet feeder for melt replenishment was operated continuously for growth runs of up to 17 hours. The liquid level sensor, comprising a laser/sensor system, performed well and met the requirements for maintaining liquid level height during growth and melt replenishment. An automated feedback loop connecting the feed mechanism and the liquid level sensing system was designed, constructed, and operated successfully for 3.5 hours, demonstrating the feasibility of semi-automated dendritic web growth. The sensitivity of sheet cost to variations in capital equipment cost and dendrite recycling was calculated, and it was shown that these factors have relatively little impact on sheet cost. Dendrites from web which had gone all the way through the solar cell fabrication process, when melted and grown into web, produce crystals which show no degradation in cell efficiency. Material quality remains high and cells made from web grown at the start, during, and the end of a run from a replenished melt show comparable efficiencies.
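
    The feeder/level-sensor feedback loop can be caricatured as a proportional controller (a Python sketch with invented gains, rates, and units; the actual control law is not documented in the abstract):

```python
# Toy proportional feedback loop in the spirit of the feeder/level-sensor
# coupling: growth withdraws melt, the feeder replenishes against the error.

TARGET_LEVEL = 100.0   # arbitrary units of melt height (invented)
GAIN = 0.5             # feed response per unit of level error (invented)

def step(level, withdrawal_rate):
    """One control cycle: web growth lowers the melt; the pellet feeder
    replenishes in proportion to the sensed level error."""
    error = TARGET_LEVEL - level
    feed = max(0.0, GAIN * error)
    return level - withdrawal_rate + feed

level = 95.0
for _ in range(20):
    level = step(level, withdrawal_rate=1.0)
print(round(level, 2))  # settles near 98.0: steady state with a small offset
```

With a purely proportional term the loop settles slightly below the target (here at 98.0, where feed exactly balances withdrawal), which is the classic steady-state offset an integral term would remove.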

  14. Prototype methodology for obtaining cloud seeding guidance from HRRR model data

    NASA Astrophysics Data System (ADS)

    Dawson, N.; Blestrud, D.; Kunkel, M. L.; Waller, B.; Ceratto, J.

    2017-12-01

    Weather model data, along with real time observations, are critical to determine whether atmospheric conditions are prime for super-cooled liquid water during cloud seeding operations. Cloud seeding groups can either use operational forecast models, or run their own model on a computer cluster. A custom weather model provides the most flexibility, but is also expensive. For programs with smaller budgets, openly-available operational forecasting models are the de facto method for obtaining forecast data. The new High-Resolution Rapid Refresh (HRRR) model (3 x 3 km grid size), developed by the Earth System Research Laboratory (ESRL), provides hourly model runs with 18 forecast hours per run. While the model cannot be fine-tuned for a specific area or edited to provide cloud-seeding-specific output, model output is openly available on a near-real-time basis. This presentation focuses on a prototype methodology for using HRRR model data to create maps which aid in near-real-time cloud seeding decision making. The R programming language is utilized to run a script on a Windows® desktop/laptop computer either on a schedule (such as every half hour) or manually. The latest HRRR model run is downloaded from NOAA's Operational Model Archive and Distribution System (NOMADS). A GRIB-filter service, provided by NOMADS, is used to obtain surface and mandatory pressure level data for a subset domain which greatly cuts down on the amount of data transfer. Then, a set of criteria, identified by the Idaho Power Atmospheric Science Group, is used to create guidance maps. These criteria include atmospheric stability (lapse rates), dew point depression, air temperature, and wet bulb temperature. The maps highlight potential areas where super-cooled liquid water may exist, reasons as to why cloud seeding should not be attempted, and wind speed at flight level.
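
    The per-grid-point screening amounts to combining threshold tests on the listed fields. A sketch (in Python rather than the group's R, with placeholder threshold values, since the Idaho Power criteria values are not given in the abstract):

```python
# Placeholder screening criteria; the actual Idaho Power thresholds are
# not published in the abstract, so these numbers are illustrative only.

def seedable(lapse_rate_c_per_km, dewpoint_depression_c, temp_c, wind_kt):
    """Flag a grid point as a candidate for super-cooled liquid water.
    lapse_rate is dT/dz, so more negative means less stable."""
    unstable = lapse_rate_c_per_km <= -6.0      # sufficiently steep lapse rate
    moist = dewpoint_depression_c <= 2.0        # near saturation
    cold_enough = -20.0 <= temp_c <= 0.0        # super-cooled range
    flyable = wind_kt <= 50.0                   # acceptable flight-level wind
    return unstable and moist and cold_enough and flyable

grid = [
    {'lapse': -7.0, 'dd': 1.0, 't': -8.0,  'wind': 30.0},   # candidate
    {'lapse': -4.0, 'dd': 1.0, 't': -8.0,  'wind': 30.0},   # too stable
    {'lapse': -7.5, 'dd': 5.0, 't': -12.0, 'wind': 30.0},   # too dry
]
flags = [seedable(c['lapse'], c['dd'], c['t'], c['wind']) for c in grid]
print(flags)  # [True, False, False]
```

    A guidance map is then just this flag (or the reason a point fails) rendered over the subset domain for each forecast hour.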

  15. Software Architecture to Support the Evolution of the ISRU RESOLVE Engineering Breadboard Unit 2 (EBU2)

    NASA Technical Reports Server (NTRS)

    Moss, Thomas; Nurge, Mark; Perusich, Stephen

    2011-01-01

    The In-Situ Resource Utilization (ISRU) Regolith & Environmental Science and Oxygen & Lunar Volatiles Extraction (RESOLVE) software provides operation of the physical plant from a remote location with a high-level interface that can access and control the data from external software applications of other subsystems. This software allows autonomous control over the entire system with manual computer control of individual system/process components. It gives non-programmer operators the capability to easily modify the high-level autonomous sequencing while the software is in operation, as well as the ability to modify the low-level, file-based sequences prior to the system operation. Local automated control in a distributed system is also enabled where component control is maintained during the loss of network connectivity with the remote workstation. This innovation also minimizes network traffic. The software architecture commands and controls the latest generation of RESOLVE processes used to obtain, process, and quantify lunar regolith. The system is grouped into six sub-processes: Drill, Crush, Reactor, Lunar Water Resource Demonstration (LWRD), Regolith Volatiles Characterization (RVC) (see example), and Regolith Oxygen Extraction (ROE). Some processes are independent, some are dependent on other processes, and some are independent but run concurrently with other processes. The first goal is to analyze the volatiles emanating from lunar regolith, such as water, carbon monoxide, carbon dioxide, ammonia, hydrogen, and others. This is done by heating the soil and analyzing and capturing the volatilized product. The second goal is to produce water by reducing the soil at high temperatures with hydrogen. This is done by raising the reactor temperature in the range of 800 to 900 C, causing the reaction to progress by adding hydrogen, and then capturing the water product in a desiccant bed. 
The software needs to run the entire unit and all sub-processes; however, throughout testing, many variables and parameters need to be changed as more is learned about the system operation. The Master Events Controller (MEC) is run on a standard laptop PC using Windows XP. This PC runs in parallel to another laptop that monitors the GC, and a third PC that monitors the drilling/ crushing operation. These three PCs interface to the process through a CompactRIO, OPC Servers, and modems.

  16. EOS: A project to investigate the design and construction of real-time distributed Embedded Operating Systems

    NASA Technical Reports Server (NTRS)

    Campbell, R. H.; Essick, Ray B.; Johnston, Gary; Kenny, Kevin; Russo, Vince

    1987-01-01

    Project EOS is studying the problems of building adaptable real-time embedded operating systems for the scientific missions of NASA. Choices (A Class Hierarchical Open Interface for Custom Embedded Systems) is an operating system designed and built by Project EOS to address the following specific issues: the software architecture for adaptable embedded parallel operating systems, the achievement of high-performance and real-time operation, the simplification of interprocess communications, the isolation of operating system mechanisms from one another, and the separation of mechanisms from policy decisions. Choices is written in C++ and runs on a ten processor Encore Multimax. The system is intended for use in constructing specialized computer applications and research on advanced operating system features including fault tolerance and parallelism.
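
    The separation of mechanisms from policy decisions via a class hierarchy can be shown schematically (Python rather than the C++ of Choices; the class names are invented, not Choices' actual interfaces):

```python
# Mechanism/policy split: the base class owns the queueing mechanism and
# dispatch machinery, while subclasses supply only the policy decision.

class Scheduler:
    """Mechanism: maintains a ready queue and dispatches tasks."""
    def __init__(self):
        self.ready = []
    def add(self, task):
        self.ready.append(task)
    def pick(self):
        raise NotImplementedError  # policy decision left to subclasses
    def dispatch(self):
        task = self.pick()
        self.ready.remove(task)
        return task

class FIFOScheduler(Scheduler):
    def pick(self):
        return self.ready[0]

class PriorityScheduler(Scheduler):
    def pick(self):
        return min(self.ready, key=lambda t: t['priority'])

s = PriorityScheduler()
s.add({'name': 'telemetry', 'priority': 2})
s.add({'name': 'control-loop', 'priority': 1})
print(s.dispatch()['name'])  # control-loop
```

    Specializing an application then means subclassing the mechanism and overriding only the policy hook, which is the customization style the Choices hierarchy is built around.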

  17. Transient Turbine Engine Modeling with Hardware-in-the-Loop Power Extraction (PREPRINT)

    DTIC Science & Technology

    2008-07-01

    Furthermore, it must be compatible with a real-time operating system that is capable of running the simulation. For some models, especially those that use...problem of interfacing the engine/control model to a real-time operating system and associated lab hardware becomes a problem of interfacing these...model in real-time. This requires the use of a real-time operating system and a compatible I/O (input/output) board. Figure 1 illustrates the HIL

  18. Model Analyst’s Toolkit User Guide, Version 7.1.0

    DTIC Science & Technology

    2015-08-01

    Help > About)  Environment details (operating system)  metronome.log file, located in your MAT 7.1.0 installation folder  Any log file that...requirements to run the Model Analyst’s Toolkit:  Windows XP operating system (or higher) with Service Pack 2 and all critical Windows updates installed...application icon on your desktop  Create a Quick Launch icon – Creates a MAT application icon on the taskbar for operating systems released

  19. Establishing and running a trauma and dissociation unit: a contemporary experience.

    PubMed

    Middleton, Warwick; Higson, David

    2004-12-01

    To evaluate the functioning of a trauma and dissociation unit that has run for the past 8 years in a private hospital, with particular regard to operating philosophy, operating parameters, challenges encountered, research and educational initiatives, and the applicability of the treatment model to other settings. Despite the challenges associated with significant difficulties in the corporate management of a private health-care system, it has been possible to operate an inpatient and day hospital programme tailored to the needs of patients in the dissociative spectrum, and the lessons learnt from this experience are valid considerations in the future planning of mental health services overall.

  20. Evaluation of the Tropical Pacific Observing System from the Data Assimilation Perspective

    DTIC Science & Technology

    2014-01-01

    hereafter, SIDA systems) have the capacity to assimilate salinity profiles imposing a multivariate (mainly T-S) balance relationship (summarized in...Fujii et al., 2011). Current SIDA systems in operational centers generally use Ocean General Circulation Models (OGCM) with resolution typically 1...long-term (typically 20-30 years) ocean DA runs are often performed with SIDA systems in operational centers for validation and calibration of SI

  1. Evaluation of Computational Codes for Underwater Hull Analysis Model Applications

    DTIC Science & Technology

    2014-02-05

    desirable that the code can be run on a Windows operating system on a laptop, desktop, or workstation. The focus on Windows machines allows for...transition to such systems as operated on the Navy-Marine Corps Intranet (NMCI). For each code the initial cost and yearly maintenance are identified...suggestions for reducing this burden to Department of Defense, Washington Headquarters Services, Directorate for Information Operations and Reports

  2. Semi-Automated Identification of Rocks in Images

    NASA Technical Reports Server (NTRS)

    Bornstein, Benjamin; Castano, Andres; Anderson, Robert

    2006-01-01

    Rock Identification Toolkit Suite is a computer program that assists users in identifying and characterizing rocks shown in images returned by the Mars Exploration Rover mission. Included in the program are components for automated finding of rocks, interactive adjustments of outlines of rocks, active contouring of rocks, and automated analysis of shapes in two dimensions. The program assists users in evaluating the surface properties of rocks and soil and reports basic properties of rocks. The program requires either the Mac OS X operating system running on a G4 (or more capable) processor or a Linux operating system running on a Pentium (or more capable) processor, plus at least 128 MB of random-access memory.

  3. Development of advanced Czochralski growth process to produce low cost 150 kg silicon ingots from a single crucible for technology readiness. [crystal growth

    NASA Technical Reports Server (NTRS)

    Lane, R. L.

    1981-01-01

    Six growth runs used the Kayex-Hamco Automatic Games Logic (AGILE) computer based system for growth from larger melts in the Mod CG2000. The implementation of the melt pyrometer sensor allowed for dip temperature monitoring and usage by the operator/AGILE system. Use of AGILE during recharge operations was successfully evaluated. The tendency of crystals to lose cylindrical shape (spiraling) continued to be a problem. The hygrometer was added to the Furnace Gas Analysis System and used on several growth runs. The gas chromatograph, including the integrator, was also used for more accurate carbon monoxide concentration measurements. Efforts continued for completing the automation of the total Gas Analysis System. An economic analysis, based on revised achievable straight growth rate, is presented.

  4. A Distributed Data Base Version of INGRES.

    ERIC Educational Resources Information Center

    Stonebraker, Michael; Neuhold, Eric

    Extensions are required to the currently operational INGRES data base system for it to manage a data base distributed over multiple machines in a computer network running the UNIX operating system. Three possible user views include: (1) each relation in a unique machine, (2) a user interaction with the data base which can only span relations at a…

  5. 40 CFR 86.340-79 - Gasoline-fueled engine dynamometer test run.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) Observe pre-test procedures; § 86.339; (3) Start cooling system; (4) Start engine and operate in... be 5 minutes ±30 seconds. Sample flow may begin during the warm-up; (5) Read and record all pre-test... test run. 86.340-79 Section 86.340-79 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY...

  6. 40 CFR 86.340-79 - Gasoline-fueled engine dynamometer test run.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) Observe pre-test procedures; § 86.339; (3) Start cooling system; (4) Start engine and operate in... be 5 minutes ±30 seconds. Sample flow may begin during the warm-up; (5) Read and record all pre-test... test run. 86.340-79 Section 86.340-79 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY...

  7. 40 CFR 86.340-79 - Gasoline-fueled engine dynamometer test run.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) Observe pre-test procedures; § 86.339; (3) Start cooling system; (4) Start engine and operate in... be 5 minutes ±30 seconds. Sample flow may begin during the warm-up; (5) Read and record all pre-test... test run. 86.340-79 Section 86.340-79 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY...

  8. 40 CFR 86.340-79 - Gasoline-fueled engine dynamometer test run.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) Observe pre-test procedures; § 86.339; (3) Start cooling system; (4) Start engine and operate in... be 5 minutes ±30 seconds. Sample flow may begin during the warm-up; (5) Read and record all pre-test... test run. 86.340-79 Section 86.340-79 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY...

  9. PYROLASER - PYROLASER OPTICAL PYROMETER OPERATING SYSTEM

    NASA Technical Reports Server (NTRS)

    Roberts, F. E.

    1994-01-01

    The PYROLASER package is an operating system for the Pyrometer Instrument Company's Pyrolaser. There are 6 individual programs in the PYROLASER package: two main programs, two lower level subprograms, and two programs which, although independent, function predominantly as macros. The package provides a quick and easy way to setup, control, and program a standard Pyrolaser. Temperature and emissivity measurements may be either collected as if the Pyrolaser were in the manual operations mode, or displayed on real time strip charts and stored in standard spreadsheet format for post-test analysis. A shell is supplied to allow macros, which are test-specific, to be easily added to the system. The Pyrolaser Simple Operation program provides full on-screen remote operation capabilities, thus allowing the user to operate the Pyrolaser from the computer just as it would be operated manually. The Pyrolaser Simple Operation program also allows the use of "quick starts". Quick starts provide an easy way to permit routines to be used as setup macros for specific applications or tests. The specific procedures required for a test may be ordered in a sequence structure and then the sequence structure can be started with a simple button in the cluster structure provided. One quick start macro is provided for continuous Pyrolaser operation. A subprogram, Display Continuous Pyr Data, is used to display and store the resulting data output. Using this macro, the system is set up for continuous operation and the subprogram is called to display the data in real time on strip charts. The data is simultaneously stored in a spreadsheet format. The resulting spreadsheet file can be opened in any one of a number of commercially available spreadsheet programs. The Read Continuous Pyrometer program is provided as a continuously run subprogram for incorporation of the Pyrolaser software into a process control or feedback control scheme in a multi-component system. 
The program requires the Pyrolaser to be set up using the Pyrometer String Transfer macro. It requires no inputs and provides temperature and emissivity as outputs. The Read Continuous Pyrometer program can be run continuously, and the data can be sampled as often or as seldom as updates of temperature and emissivity are required. PYROLASER is written using the LabVIEW software for use on Macintosh series computers running System 6.0.3 or later, Sun Sparc series computers running OpenWindows 3.0 or MIT's X Window System (X11R4 or X11R5), and IBM PC or compatibles running Microsoft Windows 3.1 or later. LabVIEW requires a minimum of 5MB of RAM on a Macintosh, 24MB of RAM on a Sun, and 8MB of RAM on an IBM PC or compatible. The LabVIEW software is a product of National Instruments (Austin, TX; 800-433-3488) and is not included with this program. The standard distribution medium for PYROLASER is a 3.5 inch 800K Macintosh format diskette. It is also available on a 3.5 inch 720K MS-DOS format diskette, a 3.5 inch diskette in UNIX tar format, and a .25 inch streaming magnetic tape cartridge in UNIX tar format. An electronic copy of the documentation in Macintosh WordPerfect version 2.0.4 format is included on the distribution medium. Printed documentation is included in the price of the program. PYROLASER was developed in 1992.
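The sample-on-demand pattern of the Read Continuous Pyrometer subprogram can be sketched in Python; the real program is a LabVIEW VI, so the reader function below is a simulated stand-in and all names and values are illustrative only.

```python
import random
import time

def read_continuous_pyrometer():
    """Stand-in for the Read Continuous Pyrometer subprogram: returns the
    latest (temperature_K, emissivity) pair. Values are simulated."""
    temperature = 1500.0 + random.uniform(-5.0, 5.0)   # kelvin
    emissivity = 0.30 + random.uniform(-0.01, 0.01)
    return temperature, emissivity

def sample(n_samples, interval_s=0.0):
    """Sample the continuously running reader as often (or as seldom) as
    updates are required, collecting rows for spreadsheet-style storage."""
    rows = []
    for _ in range(n_samples):
        rows.append(read_continuous_pyrometer())
        time.sleep(interval_s)
    return rows

rows = sample(5)
```

A feedback-control loop would call `sample` (or the reader directly) at whatever rate the control scheme requires, which is the design point the abstract describes.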

  10. 4273π: Bioinformatics education on low cost ARM hardware

    PubMed Central

    2013-01-01

    Background: Teaching bioinformatics at universities is complicated by typical computer classroom settings. As well as running software locally and online, students should gain experience of systems administration. For a future career in biology or bioinformatics, the installation of software is a useful skill. We propose that this may be taught by running the course on GNU/Linux running on inexpensive Raspberry Pi computer hardware, for which students may be granted full administrator access. Results: We release 4273π, an operating system image for Raspberry Pi based on Raspbian Linux. This includes minor customisations for classroom use and includes our Open Access bioinformatics course, 4273π Bioinformatics for Biologists. This is based on the final-year undergraduate module BL4273, run on Raspberry Pi computers at the University of St Andrews, Semester 1, academic year 2012–2013. Conclusions: 4273π is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost. PMID:23937194

  11. 4273π: bioinformatics education on low cost ARM hardware.

    PubMed

    Barker, Daniel; Ferrier, David Ek; Holland, Peter Wh; Mitchell, John Bo; Plaisier, Heleen; Ritchie, Michael G; Smart, Steven D

    2013-08-12

    Teaching bioinformatics at universities is complicated by typical computer classroom settings. As well as running software locally and online, students should gain experience of systems administration. For a future career in biology or bioinformatics, the installation of software is a useful skill. We propose that this may be taught by running the course on GNU/Linux running on inexpensive Raspberry Pi computer hardware, for which students may be granted full administrator access. We release 4273π, an operating system image for Raspberry Pi based on Raspbian Linux. This includes minor customisations for classroom use and includes our Open Access bioinformatics course, 4273π Bioinformatics for Biologists. This is based on the final-year undergraduate module BL4273, run on Raspberry Pi computers at the University of St Andrews, Semester 1, academic year 2012-2013. 4273π is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost.

  12. Terascale Cluster for Advanced Turbulent Combustion Simulations

    DTIC Science & Technology

    2008-07-25

    the system ... We have given the name CATS (for Combustion And Turbulence Simulator) to the terascale system that was obtained through this grant. CATS ... InfiniBand interconnect. CATS includes an interactive login node and a file server, each holding in excess of 1 terabyte of file storage. The 35 active ... compute nodes of CATS enable us to run up to 140-core parallel MPI batch jobs; one node is reserved to run the scheduler. CATS is operated and

  13. How to keep the Grid full and working with ATLAS production and physics jobs

    NASA Astrophysics Data System (ADS)

    Pacheco Pagés, A.; Barreiro Megino, F. H.; Cameron, D.; Fassi, F.; Filipcic, A.; Di Girolamo, A.; González de la Hoz, S.; Glushkov, I.; Maeno, T.; Walker, R.; Yang, W.; ATLAS Collaboration

    2017-10-01

    The ATLAS production system provides the infrastructure to process millions of events collected during LHC Run 1 and the first two years of Run 2 using grid, cloud and high-performance computing resources. In this contribution we address the strategies and improvements that have been implemented in the production system to optimise performance and achieve the highest efficiency of available resources from an operational perspective, focusing on recent developments.

  14. Safety management of a complex R and D ground operating system

    NASA Technical Reports Server (NTRS)

    Connors, J. F.; Maurer, R. A.

    1975-01-01

    A perspective on safety program management was developed for a complex R&D operating system, such as the NASA-Lewis Research Center. Using a systems approach, hazardous operations are subjected to third-party reviews by designated-area safety committees and are maintained under safety permit controls. To insure personnel alertness, emergency containment forces and employees are trained in dry-run emergency simulation exercises. The keys to real safety effectiveness are top management support and visibility of residual risks.

  15. Safety management of a complex R&D ground operating system

    NASA Technical Reports Server (NTRS)

    Connors, J. F.; Maurer, R. A.

    1975-01-01

    A perspective on safety program management has been developed for a complex R&D operating system, such as the NASA-Lewis Research Center. Using a systems approach, hazardous operations are subjected to third-party reviews by designated area safety committees and are maintained under safety permit controls. To insure personnel alertness, emergency containment forces and employees are trained in dry-run emergency simulation exercises. The keys to real safety effectiveness are top management support and visibility of residual risks.

  16. Asia Pacific Research Initiative for Sustainable Energy Systems 2011 (APRISES11)

    DTIC Science & Technology

    2017-09-29

    created during a single run, highlighting rapid prototyping capabilities. NRL’s overall goal was to evaluate whether 3D printed metallic bipolar plates...varying the air flow to evaluate the effect on peak power. These runs are displayed in Figure 2.1.17. The reactants were connected in co-flow with the...way valve allows the operator to either run the gas through a humidifier (PermaPure Model FCl 25-240-7) or a bypass loop. On the humidifier side of

  17. Ethanol production in small- to medium-size facilities

    NASA Astrophysics Data System (ADS)

    Hiler, E. A.; Coble, C. G.; Oneal, H. P.; Sweeten, J. M.; Reidenbach, V. G.; Schelling, G. T.; Lawhon, J. T.; Kay, R. D.; Lepori, W. A.; Aldred, W. H.

    1982-04-01

    In early 1980, system design criteria were developed for a small-scale ethanol production plant. The plant was eventually installed on November 1, 1980. It has a production capacity of 30 liters per hour; this can easily be increased (if desired) to 60 liters per hour with additional fermentation tanks. Sixty-six test runs have been conducted to date in the alcohol production facility. Feedstocks evaluated in these tests include: corn (28 runs); grain sorghum (33 runs); grain sorghum grits (1 run); half corn/half sorghum (1 run); and sugarcane juice (3 runs). In addition, a small bench-scale fermentation and distillation system was used to evaluate sugarcane and sweet sorghum feedstocks prior to their evaluation in the larger unit. In each of these tests, the following items were evaluated: preprocessing requirements; operational problems; conversion efficiency (for example, liters of alcohol produced per kilogram of feedstock); energy balance and efficiency; nutritional recovery from stillage; solids separation by screw press; chemical characterization of stillage, including liquid and solids fractions; wastewater requirements; and air pollution potential.
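The conversion-efficiency and energy-balance metrics named above reduce to simple ratios; a minimal sketch with hypothetical run numbers (none of the figures below are from the study):

```python
def conversion_efficiency(liters_ethanol, kg_feedstock):
    """Liters of ethanol produced per kilogram of feedstock."""
    return liters_ethanol / kg_feedstock

def energy_ratio(energy_out_mj, energy_in_mj):
    """Energy balance: energy content of the ethanol produced divided
    by the process energy input."""
    return energy_out_mj / energy_in_mj

# Hypothetical run: 30 L/h for 8 h from 600 kg of grain sorghum.
eff = conversion_efficiency(30 * 8, 600)   # 0.4 L/kg
```

The same two functions apply to any of the feedstocks listed, which is why the study can compare corn, sorghum and sugarcane runs on a common basis.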

  18. Long-run operation of a reverse electrodialysis system fed with wastewaters.

    PubMed

    Luque Di Salvo, Javier; Cosenza, Alessandro; Tamburini, Alessandro; Micale, Giorgio; Cipollina, Andrea

    2018-07-01

    The performance of a Reverse ElectroDialysis (RED) system fed by unconventional wastewater solutions over long operational periods is analysed for the first time. The experimental campaign was divided into a series of five independent long-runs, each combining real wastewater solutions with artificial solutions for at least 10 days. The time evolution of electrical variables, gross power output and net power output (the latter also accounting for pumping losses) was monitored: power density values obtained during the long-runs are comparable to those found in the literature with artificial feed solutions of similar salinity. The increase in pressure drops and the development of membrane fouling were the main factors degrading system performance. The increase in pressure drops was related to physical obstruction of the feed channels defined by the spacers, while membrane fouling was related to the adsorption of foulants on the membrane surfaces. In order to manage partial channel clogging and fouling, different kinds of easily implemented in situ backwashing (i.e. neutral, acid, alkaline) were adopted, without the need for an abrupt interruption of RED unit operation. The application of periodic ElectroDialysis (ED) pulses was also tested as a fouling prevention strategy. The results collected suggest that RED can be used to produce electric power from low-value wastewaters, but additional studies are still needed to better characterise membrane fouling and further improve system performance with these solutions. Copyright © 2018 Elsevier Ltd. All rights reserved.
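Net power output in such studies is gross electrical output minus the power spent pumping the feed solutions; a one-line sketch, with illustrative numbers that are not taken from the paper:

```python
def net_power_density(gross_w_m2, pumping_loss_w_m2):
    """Net power density of a RED stack: gross electrical output per
    unit membrane area minus pumping losses, both in W/m^2."""
    return gross_w_m2 - pumping_loss_w_m2

# Illustrative values only: 1.5 W/m2 gross, 0.4 W/m2 pumping losses.
net = net_power_density(1.5, 0.4)
```

The abstract's observation that pressure drops grew over the long-runs corresponds to `pumping_loss_w_m2` rising over time, eroding `net` even when gross output is stable.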

  19. InkTag: Secure Applications on an Untrusted Operating System

    PubMed Central

    Hofmann, Owen S.; Kim, Sangman; Dunn, Alan M.; Lee, Michael Z.; Witchel, Emmett

    2014-01-01

    InkTag is a virtualization-based architecture that gives strong safety guarantees to high-assurance processes even in the presence of a malicious operating system. InkTag advances the state of the art in untrusted operating systems in both the design of its hypervisor and in the ability to run useful applications without trusting the operating system. We introduce paraverification, a technique that simplifies the InkTag hypervisor by forcing the untrusted operating system to participate in its own verification. Attribute-based access control allows trusted applications to create decentralized access control policies. InkTag is also the first system of its kind to ensure consistency between secure data and metadata, ensuring recoverability in the face of system crashes. PMID:24429939

  20. InkTag: Secure Applications on an Untrusted Operating System.

    PubMed

    Hofmann, Owen S; Kim, Sangman; Dunn, Alan M; Lee, Michael Z; Witchel, Emmett

    2013-01-01

    InkTag is a virtualization-based architecture that gives strong safety guarantees to high-assurance processes even in the presence of a malicious operating system. InkTag advances the state of the art in untrusted operating systems in both the design of its hypervisor and in the ability to run useful applications without trusting the operating system. We introduce paraverification, a technique that simplifies the InkTag hypervisor by forcing the untrusted operating system to participate in its own verification. Attribute-based access control allows trusted applications to create decentralized access control policies. InkTag is also the first system of its kind to ensure consistency between secure data and metadata, ensuring recoverability in the face of system crashes.

  1. ROBucket: A low cost operant chamber based on the Arduino microcontroller.

    PubMed

    Devarakonda, Kavya; Nguyen, Katrina P; Kravitz, Alexxai V

    2016-06-01

    The operant conditioning chamber is a cornerstone of animal behavioral research. Operant boxes are used to assess learning and motivational behavior in animals, particularly for food and drug reinforcers. However, commercial operant chambers cost several thousands of dollars. We have constructed the Rodent Operant Bucket (ROBucket), an inexpensive and easily assembled open-source operant chamber based on the Arduino microcontroller platform, which can be used to train mice to respond for sucrose solution or other liquid reinforcers. The apparatus contains two nose pokes, a drinking well, and a solenoid-controlled liquid delivery system. ROBucket can run fixed ratio and progressive ratio training schedules, and can be programmed to run more complicated behavioral paradigms. Additional features such as motion sensing and video tracking can be added to the operant chamber through the array of widely available Arduino-compatible sensors. The design files and programming code are open source and available online for others to use.
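The fixed-ratio and progressive-ratio schedules that ROBucket runs can be sketched as plain counters. This is a hypothetical teaching sketch in Python, not the published Arduino firmware; the linear progression rule is an assumption for illustration.

```python
class RatioSchedule:
    """Deliver reinforcement after a required number of nose pokes.
    step=0 gives a fixed-ratio (FR) schedule; step>0 raises the
    requirement after each reward (a linear progressive ratio, PR)."""
    def __init__(self, ratio, step=0):
        self.requirement = ratio
        self.step = step
        self.pokes = 0
        self.rewards = 0

    def poke(self):
        """Register one nose poke; return True if reward is delivered."""
        self.pokes += 1
        if self.pokes >= self.requirement:
            self.pokes = 0
            self.rewards += 1
            self.requirement += self.step
            return True
        return False

fr3 = RatioSchedule(ratio=3)          # FR3: reward every 3rd poke
pr = RatioSchedule(ratio=1, step=2)   # PR: 1, 3, 5, ... pokes per reward
```

On the device, `poke()` would be called from the nose-poke sensor interrupt and a `True` return would fire the solenoid; the counters themselves are hardware-independent.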

  2. Portability studies of modular data base managers. Interim reports. [Running CDC's DATATRAN 2 on IBM 360/370 and IBM's JOSHUA on CDC computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kopp, H.J.; Mortensen, G.A.

    1978-04-01

    Approximately 60% of the full CDC 6600/7600 Datatran 2.0 capability was made operational on IBM 360/370 equipment. Sufficient capability was made operational to demonstrate adequate performance for modular program linking applications. Also demonstrated were the basic capabilities and performance required to support moderate-sized data base applications and moderately active scratch input/output applications. Approximately one to two calendar years are required to develop DATATRAN 2.0 capabilities fully for the entire spectrum of applications proposed. Included in the next stage of conversion should be syntax checking and syntax conversion features that would foster greater FORTRAN compatibility between IBM- and CDC-developed modules. The batch portion of the JOSHUA Modular System, which was developed by Savannah River Laboratory to run on an IBM computer, was examined for the feasibility of conversion to run on a Control Data Corporation (CDC) computer. Portions of the JOSHUA Precompiler were changed so as to be operable on the CDC computer. The Data Manager and Batch Monitor were also examined for conversion feasibility, but no changes were made in them. It appears to be feasible to convert the batch portion of the JOSHUA Modular System to run on a CDC computer with an estimated additional two to three man-years of effort. 9 tables.

  3. Real-time data acquisition of commercial microwave link networks for hydrometeorological applications

    NASA Astrophysics Data System (ADS)

    Chwala, Christian; Keis, Felix; Kunstmann, Harald

    2016-03-01

    The usage of data from commercial microwave link (CML) networks for scientific purposes is becoming increasingly popular, in particular for rain rate estimation. However, data acquisition and availability is still a crucial problem and limits research possibilities. To overcome this issue, we have developed an open-source data acquisition system based on the Simple Network Management Protocol (SNMP). It is able to record transmitted and received signal levels of a large number of CMLs simultaneously with a temporal resolution of up to 1 s. We operate this system at Ericsson Germany, acquiring data from 450 CMLs with minutely real-time transfer to our database. Our data acquisition system is not limited to a particular CML hardware model or manufacturer, though. We demonstrate this by running the same system for CMLs of a different manufacturer, operated by an alpine ski resort in Germany. There, the data acquisition is running simultaneously for four CMLs with a temporal resolution of 1 s. We present an overview of our system, describe the details of the necessary SNMP requests and show results from its operational application.
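The acquisition loop the abstract describes amounts to periodic SNMP GETs of each link's transmitted and received signal levels. A minimal Python sketch follows; the OIDs and values are invented for illustration (real CML hardware exposes vendor-specific OIDs), and a mock stands in for the network request, where a real system would use an SNMP library such as pysnmp.

```python
import time

# Hypothetical OIDs for transmitted/received signal level.
OIDS = {"tx_level": "1.3.6.1.4.1.99999.1",
        "rx_level": "1.3.6.1.4.1.99999.2"}

def snmp_get(host, oid):
    """Mock of an SNMP GET returning a simulated signal level in dBm."""
    return -45.0 if oid == OIDS["rx_level"] else 18.0

def poll_links(hosts, n_polls, interval_s=0.0):
    """Record one (host, tx, rx) tuple per link per polling cycle."""
    records = []
    for _ in range(n_polls):
        for host in hosts:
            tx = snmp_get(host, OIDS["tx_level"])
            rx = snmp_get(host, OIDS["rx_level"])
            records.append((host, tx, rx))
        time.sleep(interval_s)
    return records

data = poll_links(["cml-01", "cml-02"], n_polls=3)
```

With `interval_s=1.0` this loop matches the 1 s resolution quoted in the abstract; scaling to 450 links is then a matter of issuing the GETs concurrently rather than sequentially.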

  4. Real time data acquisition of commercial microwave link networks for hydrometeorological applications

    NASA Astrophysics Data System (ADS)

    Chwala, C.; Keis, F.; Kunstmann, H.

    2015-11-01

    The usage of data from commercial microwave link (CML) networks for scientific purposes is becoming increasingly popular, in particular for rain rate estimation. However, data acquisition and availability is still a crucial problem and limits research possibilities. To overcome this issue, we have developed an open-source data acquisition system based on the Simple Network Management Protocol (SNMP). It is able to record transmitted and received signal levels of a large number of CMLs simultaneously with a temporal resolution of up to one second. We operate this system at Ericsson Germany, acquiring data from 450 CMLs with minutely real-time transfer to our database. Our data acquisition system is not limited to a particular CML hardware model or manufacturer, though. We demonstrate this by running the same system for CMLs of a different manufacturer, operated by an alpine ski resort in Germany. There, the data acquisition is running simultaneously for four CMLs with a temporal resolution of one second. We present an overview of our system, describe the details of the necessary SNMP requests and show results from its operational application.

  5. Recent developments of DMI's operational system: Coupled Ecosystem-Circulation-and SPM model.

    NASA Astrophysics Data System (ADS)

    Murawski, Jens; Tian, Tian; Dobrynin, Mikhail

    2010-05-01

    ECOOP is a pan-European project with 72 partners from 29 countries around the Baltic Sea, the North Sea, the Iberia-Biscay-Ireland region, the Mediterranean Sea and the Black Sea. The project aims at the development and integration of the different coastal and regional observation and forecasting systems. The Danish Meteorological Institute (DMI) coordinates the project and is responsible for the Baltic Sea regional forecasting system. Over the project period, the Baltic Sea system was developed from a purely hydrodynamic model (version V1), running operationally since summer 2009, to a coupled model platform (version V2) including model components for the simulation of suspended particles, data assimilation and ecosystem variables. The ECOOP V2 model is currently being tested and validated, and will soon replace the V1 version. The coupled biogeochemical and circulation model has been running operationally since November 2009. The daily forecasts are presented on DMI's homepage http://ocean.dmi.dk. The presentation includes a short description of the ECOOP forecasting system, discusses the model results and shows the outcome of the model validation.

  6. Improved programs for DNA and protein sequence analysis on the IBM personal computer and other standard computer systems.

    PubMed Central

    Mount, D W; Conrad, B

    1986-01-01

    We have previously described programs for a variety of types of sequence analysis (1-4). These programs have now been integrated into a single package. They are written in the standard C programming language and run on virtually any computer system with a C compiler, such as the IBM/PC and other computers running under the MS/DOS and UNIX operating systems. The programs are widely distributed and may be obtained from the authors as described below. PMID:3753780

  7. ADAMS: AIRLAB data management system user's guide

    NASA Technical Reports Server (NTRS)

    Conrad, C. L.; Ingogly, W. F.; Lauterbach, L. A.

    1986-01-01

    The AIRLAB Data Management System (ADAMS) is an online environment that supports research at NASA's AIRLAB. ADAMS provides an easy to use interactive interface that eases the task of documenting and managing information about experiments and improves communication among project members. Data managed by ADAMS includes information about experiments, data sets produced, software and hardware available in AIRLAB as well as that used in a particular experiment, and an on-line engineer's notebook. The User's Guide provides an overview of the ADAMS system as well as details of the operations available within ADAMS. A tutorial section takes the user step-by-step through a typical ADAMS session. ADAMS runs under the VAX/VMS operating system and uses the ORACLE database management system and DEC/FMS (the Forms Management System). ADAMS can be run from any VAX connected via DECnet to the ORACLE host VAX. The ADAMS system is designed for simplicity, so interactions within the underlying data management system and communications network are hidden from the user.

  8. MSUSTAT.

    ERIC Educational Resources Information Center

    Mauriello, David

    1984-01-01

    Reviews an interactive statistical analysis package (designed to run on 8- and 16-bit machines that utilize CP/M 80 and MS-DOS operating systems), considering its features and uses, documentation, operation, and performance. The package consists of 40 general purpose statistical procedures derived from the classic textbook "Statistical…

  9. A real-time posture monitoring method for rail vehicle bodies based on machine vision

    NASA Astrophysics Data System (ADS)

    Liu, Dongrun; Lu, Zhaijun; Cao, Tianpei; Li, Tian

    2017-06-01

    Monitoring vehicle operation conditions has become increasingly important in modern high-speed railway systems. However, monitoring of the roll angle of vehicle bodies has principally been limited to tilting trains, and few studies have focused on monitoring the running posture of vehicle bodies during operation. We propose a real-time posture monitoring method to fulfil real-time monitoring requirements, taking rail surfaces and centrelines as detection references. In realising the proposed method, we built a mathematical computational model based on space coordinate transformations to calculate the attitude angles of vehicles in operation and the vertical and lateral vibration displacements of single measuring points. Moreover, the reliability of the system was compared against and verified with field results. Results show that the roll angles of car bodies obtained through the system exhibit variation trends similar to those converted from the dynamic deflection of the bogie secondary air springs. The monitoring results of two identical conditions were basically the same, demonstrating repeatability and good monitoring accuracy. Therefore, our monitoring results were reliable in reflecting posture changes in running railway vehicles.
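The roll component of such an attitude calculation reduces to a small trigonometric step once vertical displacements of two laterally separated body points are known. The sketch below illustrates only that step with invented names and numbers; the paper's full model uses complete space coordinate transformations.

```python
import math

def roll_angle_deg(dz_left, dz_right, lateral_spacing):
    """Approximate car-body roll angle (degrees) from the vertical
    displacements of two measuring points separated laterally by
    lateral_spacing (all lengths in the same units)."""
    return math.degrees(math.atan2(dz_left - dz_right, lateral_spacing))

# Illustrative: left point rises 5 mm, right drops 5 mm, 2000 mm apart.
angle = roll_angle_deg(5.0, -5.0, 2000.0)   # ~0.29 degrees of roll
```

In a vision-based system, `dz_left` and `dz_right` would come from tracking image features against the rail-surface reference rather than from physical sensors.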

  10. A Machine Learning Method for Power Prediction on the Mobile Devices.

    PubMed

    Chen, Da-Ren; Chen, You-Shyang; Chen, Lin-Chih; Hsu, Ming-Yang; Chiang, Kai-Feng

    2015-10-01

    Energy profiling and estimation have been popular areas of research in multicore mobile architectures. While short sequences of system calls have been recognized by machine learning as pattern descriptions for anomalous detection, the power consumption of running processes with respect to system-call patterns is not well studied. In this paper, we propose a fuzzy neural network (FNN) for training and analyzing process execution behaviour with respect to series of system calls, parameters and their power consumptions. On the basis of the patterns of a series of system calls, we develop a power estimation daemon (PED) to analyze and predict the energy consumption of the running process. In the initial stage, PED categorizes sequences of system calls as functional groups and predicts their energy consumptions by FNN. In the operational stage, PED is applied to identify the predefined sequences of system calls invoked by running processes and estimates their energy consumption.
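Short system-call sequences used as pattern descriptors are commonly represented as sliding n-grams. The sketch below shows only that feature-extraction step; it does not reproduce the paper's FNN or PED, and the trace and feature shape are assumptions for illustration.

```python
from collections import Counter

def syscall_ngrams(trace, n=3):
    """Count the sliding n-grams of a system-call trace; such counts
    are the kind of pattern descriptor a model can be trained on."""
    return Counter(tuple(trace[i:i + n]) for i in range(len(trace) - n + 1))

trace = ["open", "read", "read", "write", "close"]
features = syscall_ngrams(trace, n=2)
```

A per-group energy model would then map these n-gram counts (one feature vector per functional group of calls) to measured power draw.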

  11. Safety management for polluted confined space with IT system: a running case.

    PubMed

    Hwang, Jing-Jang; Wu, Chien-Hsing; Zhuang, Zheng-Yun; Hsu, Yi-Chang

    2015-01-01

    This study traces a real IT system deployed to enhance occupational safety in a polluted confined space. By incorporating wireless technology, it automatically monitors the status of workers on site, and when anomalous events are detected, managers are notified promptly. The system, with a redefined standard operations process, is running well at one of Formosa Petrochemical Corporation's refineries. Evidence shows that after deployment the system does enhance the safety level, by monitoring the workers in real time and by managing and controlling anomalies well. Therefore, such a technical architecture can be applied to similar scenarios for safety enhancement purposes.

  12. The evolution of the ISOLDE control system

    NASA Astrophysics Data System (ADS)

    Jonsson, O. C.; Catherall, R.; Deloose, I.; Drumm, P.; Evensen, A. H. M.; Gase, K.; Focker, G. J.; Fowler, A.; Kugler, E.; Lettry, J.; Olesen, G.; Ravn, H. L.; Isolde Collaboration

    The ISOLDE on-line mass separator facility has been operating on a Personal Computer-based control system since spring 1992. Front End Computers accessing the hardware are controlled from consoles running Microsoft Windows™ through a Novell NetWare4™ local area network. The control system is transparently integrated in the CERN-wide office network and makes heavy use of the CERN standard office application programs to control and to document the running of the ISOLDE isotope separators. This paper recalls the architecture of the control system, shows its recent developments and gives some examples of its graphical user interface.

  13. The evolution of the ISOLDE control system

    NASA Astrophysics Data System (ADS)

    Jonsson, O. C.; Catherall, R.; Deloose, I.; Evensen, A. H. M.; Gase, K.; Focker, G. J.; Fowler, A.; Kugler, E.; Lettry, J.; Olesen, G.; Ravn, H. L.; Drumm, P.

    1996-04-01

    The ISOLDE on-line mass separator facility has been operating on a Personal Computer-based control system since spring 1992. Front End Computers accessing the hardware are controlled from consoles running Microsoft Windows® through a Novell NetWare4® local area network. The control system is transparently integrated in the CERN-wide office network and makes heavy use of the CERN standard office application programs to control and to document the running of the ISOLDE isotope separators. This paper recalls the architecture of the control system, shows its recent developments and gives some examples of its graphical user interface.

  14. A Multiprocessor Operating System Simulator

    NASA Technical Reports Server (NTRS)

    Johnston, Gary M.; Campbell, Roy H.

    1988-01-01

    This paper describes a multiprocessor operating system simulator that was developed by the authors in the Fall semester of 1987. The simulator was built in response to the need to provide students with an environment in which to build and test operating system concepts as part of the coursework of a third-year undergraduate operating systems course. Written in C++, the simulator uses the co-routine style task package that is distributed with the AT&T C++ Translator to provide a hierarchy of classes that represents a broad range of operating system software and hardware components. The class hierarchy closely follows that of the 'Choices' family of operating systems for loosely- and tightly-coupled multiprocessors. During an operating system course, these classes are refined and specialized by students in homework assignments to facilitate experimentation with different aspects of operating system design and policy decisions. The current implementation runs on the IBM RT PC under 4.3bsd UNIX.
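The co-routine style task package the simulator builds on can be mimicked with Python generators: each task runs until it yields control back to a round-robin scheduler. This is a teaching sketch of the cooperative-multitasking idea only, not the AT&T C++ task package or the 'Choices' class hierarchy.

```python
from collections import deque

def scheduler(tasks):
    """Round-robin over co-routine tasks: each generator runs until it
    yields, then goes to the back of the ready queue; finished tasks
    are dropped."""
    ready = deque(tasks)
    trace = []
    while ready:
        task = ready.popleft()
        try:
            trace.append(next(task))   # resume task until its next yield
            ready.append(task)
        except StopIteration:
            pass                       # task finished; do not requeue
    return trace

def worker(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"            # cooperative context switch

log = scheduler([worker("A", 2), worker("B", 3)])
# log interleaves the two tasks: A:0, B:0, A:1, B:1, B:2
```

In the coursework setting described, students would specialize classes playing the roles of `scheduler` and `worker` to experiment with different scheduling policies.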

  15. The ATLAS Data Acquisition System in LHC Run 2

    NASA Astrophysics Data System (ADS)

    Panduro Vazquez, William; ATLAS Collaboration

    2017-10-01

    The LHC has been providing pp collisions with record luminosity and energy since the start of Run 2 in 2015. The Trigger and Data Acquisition system of the ATLAS experiment has been upgraded to deal with the increased performance required by this new operational mode. The dataflow system and associated network infrastructure have been reshaped in order to benefit from technological progress and to maximize the flexibility and efficiency of the data selection process. The new design is radically different from the previous implementation both in terms of architecture and performance, with the previous two-level structure merged into a single processing farm, performing incremental data collection and analysis. In addition, logical farm slicing, with each slice managed by a dedicated supervisor, has been dropped in favour of global management by a single farm master operating at 100 kHz. This farm master has also been integrated with a new software-based Region of Interest builder, replacing the previous VMEbus-based system. Finally, the Readout system has been completely refitted with new higher performance, lower footprint server machines housing a new custom front-end interface card. Here we will cover the overall design of the system, along with performance results from the start-up phase of LHC Run 2.

  16. An Operational Configuration of the ARPS Data Analysis System to Initialize WRF in the NWS Environmental Modeling System

    NASA Technical Reports Server (NTRS)

    Case, Jonathan; Blottman, Pete; Hoeth, Brian; Oram, Timothy

    2006-01-01

    The Weather Research and Forecasting (WRF) model is the next-generation community mesoscale model designed to enhance collaboration between the research and operational sectors. The NWS as a whole has begun a transition toward WRF as the mesoscale model of choice to use as a tool in making local forecasts. Currently, both the National Weather Service in Melbourne, FL (NWS MLB) and the Spaceflight Meteorology Group (SMG) are running the Advanced Regional Prediction System (ARPS) Data Analysis System (ADAS) every 15 minutes over the Florida peninsula to produce high-resolution diagnostics supporting their daily operations. In addition, the NWS MLB and SMG have used ADAS to provide initial conditions for short-range forecasts from the ARPS numerical weather prediction (NWP) model. Both NWS MLB and SMG have derived great benefit from the maturity of ADAS, and would like to use ADAS for providing initial conditions to WRF. In order to assist in this WRF transition effort, the Applied Meteorology Unit (AMU) was tasked to configure and implement an operational version of WRF that uses output from ADAS for the model initial conditions. Both agencies asked the AMU to develop a framework that allows the ADAS initial conditions to be incorporated into the WRF Environmental Modeling System (EMS) software. Developed by the NWS Science Operations Officer (SOO) Science and Training Resource Center (STRC), the EMS is a complete, full-physics NWP package that incorporates dynamical cores from both the National Center for Atmospheric Research's Advanced Research WRF (ARW) and the National Centers for Environmental Prediction's Non-Hydrostatic Mesoscale Model (NMM) into a single end-to-end forecasting system. 
The EMS performs nearly all pre- and postprocessing and can be run automatically to obtain external grid data for WRF boundary conditions, run the model, and convert the data into a format that can be readily viewed within the Advanced Weather Interactive Processing System. The EMS has also incorporated the WRF Standard Initialization (SI) graphical user interface (GUI), which allows the user to set up the domain, dynamical core, resolution, etc., with ease. In addition to the SI GUI, the EMS contains a number of configuration files with extensive documentation to help the user select the appropriate input parameters for model physics schemes, integration timesteps, etc. Therefore, because of its streamlined capability, it is quite advantageous to configure ADAS to provide initial condition data to the EMS software. One of the biggest potential benefits of configuring ADAS for ingest into the EMS is that the analyses could be used to initialize either the ARW or NMM. Currently, the ARPS/ADAS software has a conversion routine only for the ARW dynamical core. However, since the NMM runs about 2.5 times faster than the ARW, it is quite advantageous to be able to run an ADAS/NMM configuration operationally due to the increased efficiency.

  17. Image Display And Manipulation System (IDAMS), user's guide

    NASA Technical Reports Server (NTRS)

    Cecil, R. W.

    1972-01-01

    A combination operator's guide and user's handbook for the Image Display and Manipulation System (IDAMS) is reported. Information is presented to define how to operate the computer equipment, how to structure a run deck, and how to select parameters necessary for executing a sequence of IDAMS task routines. If more detailed information is needed on any IDAMS program, see the IDAMS program documentation.

  18. Design of an EEG-based brain-computer interface (BCI) from standard components running in real-time under Windows.

    PubMed

    Guger, C; Schlögl, A; Walterspacher, D; Pfurtscheller, G

    1999-01-01

    An EEG-based brain-computer interface (BCI) is a direct connection between the human brain and the computer. Such a communication system is needed by patients with severe motor impairments (e.g. late stage of Amyotrophic Lateral Sclerosis) and has to operate in real-time. This paper describes the selection of the appropriate components to construct such a BCI and focuses also on the selection of a suitable programming language and operating system. The multichannel system runs under Windows 95, equipped with a real-time Kernel expansion to obtain reasonable real-time operations on a standard PC. Matlab controls the data acquisition and the presentation of the experimental paradigm, while Simulink is used to calculate the recursive least square (RLS) algorithm that describes the current state of the EEG in real-time. First results of the new low-cost BCI show that the accuracy of differentiating imagination of left and right hand movement is around 95%.
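
    The abstract states that Simulink computes a recursive least squares (RLS) algorithm describing the current EEG state. As an illustration of the technique (a textbook exponentially weighted RLS update, not the authors' Simulink implementation; all names and the toy data are invented), a minimal sketch:

    ```python
    import numpy as np

    def rls_update(w, P, x, d, lam=0.99):
        """One recursive-least-squares step: update weight vector w and
        inverse correlation matrix P from input x and desired response d."""
        Px = P @ x
        k = Px / (lam + x @ Px)            # gain vector
        e = d - w @ x                      # a-priori estimation error
        w = w + k * e                      # weight update
        P = (P - np.outer(k, Px)) / lam    # inverse-correlation update
        return w, P

    # Identify a known 2-tap filter from noisy samples (illustrative only).
    rng = np.random.default_rng(0)
    true_w = np.array([0.5, -0.3])
    w, P = np.zeros(2), np.eye(2) * 100.0
    for _ in range(500):
        x = rng.standard_normal(2)
        d = true_w @ x + 0.01 * rng.standard_normal()
        w, P = rls_update(w, P, x, d)
    ```

    The forgetting factor `lam` below 1 lets the estimate track a slowly drifting signal, which is why RLS suits a real-time EEG state description.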

  19. Improving Reliability in a Stochastic Communication Network

    DTIC Science & Technology

    1990-12-01

    and GINO. In addition, the following computers were used: a Sun 386i workstation, a Digital Equipment Corporation (DEC) 11/785 miniframe, and a DEC...operating system. The DEC 11/785 miniframe used in the experiment was running Unix Version 4.3 (Berkeley System Domain). Maxflo was run on the DEC 11/785...the file was still called Modifyl.for). 4. The Maxflo program was started on the DEC 11/785 miniframe. 5. At this time the Convert.max file, created

  20. MONO FOR CROSS-PLATFORM CONTROL SYSTEM ENVIRONMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nishimura, Hiroshi; Timossi, Chris

    2006-10-19

    Mono is an independent implementation of the .NET Framework by Novell that runs on multiple operating systems (including Windows, Linux and Macintosh) and allows any .NET compatible application to run unmodified. For instance Mono can run programs with graphical user interfaces (GUI) developed with the C# language on Windows with Visual Studio (a full port of WinForm for Mono is in progress). We present the results of tests we performed to evaluate the portability of our controls system .NET applications from MS Windows to Linux.

  1. Vulnerability Model. A Simulation System for Assessing Damage Resulting from Marine Spills

    DTIC Science & Technology

    1975-06-01

    used and the scenario simulated. The test runs were made on an IBM 360/65 computer. Running times were generally between 15 and 35 CPU seconds...fect further north. A petroleum tank-truck operation was located within 600 feet of the stock pond on which the crude oil had dammed up. At 5 A.M.

  2. The personal receiving document management and the realization of email function in OAS

    NASA Astrophysics Data System (ADS)

    Li, Biqing; Li, Zhao

    2017-05-01

    This software is an independent system developed with the currently popular B/S (browser/server) structure and ASP.NET technology, using the Windows 7 operating system, with Microsoft Visual Studio 2008 as the development platform and Microsoft SQL Server 2005 as the database. It is suitable for small and medium enterprises, contains personal office, scientific research project management, and system management functions, runs independently in the relevant environment, and solves practical needs.

  3. 40 CFR 63.3544 - How do I determine the emission capture system efficiency?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... mass fraction of TVH liquid input from each coating and thinner used in the coating operation during... materials used in the coating operation during the capture efficiency test run, kg. TVHi = Mass fraction of... protocol compares the mass of liquid TVH in materials used in the coating operation to the mass of TVH...

  4. 40 CFR 63.3544 - How do I determine the emission capture system efficiency?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... mass fraction of TVH liquid input from each coating and thinner used in the coating operation during... materials used in the coating operation during the capture efficiency test run, kg. TVHi = Mass fraction of... protocol compares the mass of liquid TVH in materials used in the coating operation to the mass of TVH...

  5. 40 CFR 63.3544 - How do I determine the emission capture system efficiency?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... mass fraction of TVH liquid input from each coating and thinner used in the coating operation during... materials used in the coating operation during the capture efficiency test run, kg. TVHi = Mass fraction of... protocol compares the mass of liquid TVH in materials used in the coating operation to the mass of TVH...

  6. The ATLAS EventIndex: architecture, design choices, deployment and first operation experience

    NASA Astrophysics Data System (ADS)

    Barberis, D.; Cárdenas Zárate, S. E.; Cranshaw, J.; Favareto, A.; Fernández Casaní, Á.; Gallas, E. J.; Glasman, C.; González de la Hoz, S.; Hřivnáč, J.; Malon, D.; Prokoshin, F.; Salt Cairols, J.; Sánchez, J.; Többicke, R.; Yuan, R.

    2015-12-01

    The EventIndex is the complete catalogue of all ATLAS events, keeping the references to all files that contain a given event in any processing stage. It replaces the TAG database, which had been in use during LHC Run 1. For each event it contains its identifiers, the trigger pattern and the GUIDs of the files containing it. Major use cases are event picking, feeding the Event Service used on some production sites, and technical checks of the completion and consistency of processing campaigns. The system design is highly modular so that its components (data collection system, storage system based on Hadoop, query web service and interfaces to other ATLAS systems) could be developed separately and in parallel during LSI. The EventIndex is in operation for the start of LHC Run 2. This paper describes the high-level system architecture, the technical design choices and the deployment process and issues. The performance of the data collection and storage systems, as well as the query services, are also reported.
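
    The central data structure described here, a mapping from an event identifier to the GUIDs of the files holding that event at each processing stage, can be sketched with a toy in-memory stand-in (the real system stores this in Hadoop; the class and sample GUIDs below are invented for illustration):

    ```python
    from collections import defaultdict

    class ToyEventIndex:
        """Minimal in-memory stand-in for an event catalogue: maps an event
        identifier (run number, event number) to file GUIDs containing it."""
        def __init__(self):
            self._index = defaultdict(set)

        def record(self, run, event, guid):
            self._index[(run, event)].add(guid)

        def pick(self, run, event):
            """Event picking: which files hold this event?"""
            return sorted(self._index.get((run, event), set()))

    idx = ToyEventIndex()
    idx.record(202515, 41, "RAW-0001")
    idx.record(202515, 41, "AOD-0007")   # same event, later processing stage
    files = idx.pick(202515, 41)          # → ['AOD-0007', 'RAW-0001']
    ```

    Keeping one entry per processing stage is what makes event picking across campaigns a single lookup rather than a scan of production metadata.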

  7. THE EMISSION PROCESSING SYSTEM FOR THE ETA/CMAQ AIR QUALITY FORECAST SYSTEM

    EPA Science Inventory

    NOAA and EPA have created an Air Quality Forecast (AQF) system. This AQF system links an adaptation of the EPA's Community Multiscale Air Quality Model with the 12 kilometer ETA model running operationally at NOAA's National Centers for Environmental Prediction (NCEP). One of th...

  8. Simulation of a Real-Time Local Data Integration System over East-Central Florida

    NASA Technical Reports Server (NTRS)

    Case, Jonathan

    1999-01-01

    The Applied Meteorology Unit (AMU) simulated a real-time configuration of a Local Data Integration System (LDIS) using data from 15-28 February 1999. The objectives were to assess the utility of a simulated real-time LDIS, evaluate and extrapolate system performance to identify the hardware necessary to run a real-time LDIS, and determine the sensitivities of LDIS. The ultimate goal for running LDIS is to generate analysis products that enhance short-range (less than 6 h) weather forecasts issued in support of the 45th Weather Squadron, Spaceflight Meteorology Group, and Melbourne National Weather Service operational requirements. The simulation used the Advanced Regional Prediction System (ARPS) Data Analysis System (ADAS) software on an IBM RS/6000 workstation with a 67-MHz processor. This configuration ran in real-time, but not sufficiently fast for operational requirements. Thus, the AMU recommends a workstation with a 200-MHz processor and 512 megabytes of memory to run the AMU's configuration of LDIS in real-time. This report presents results from two case studies and several data sensitivity experiments. ADAS demonstrates utility through its ability to depict high-resolution cloud and wind features in a variety of weather situations. The sensitivity experiments illustrate the influence of disparate data on the resulting ADAS analyses.

  9. Verification and Validation of a Navy ESPC Hindcast with Loosely Coupled Data Assimilation

    NASA Astrophysics Data System (ADS)

    Metzger, E. J.; Barton, N. P.; Smedstad, O. M.; Ruston, B. C.; Wallcraft, A. J.; Whitcomb, T. R.; Ridout, J. A.; Franklin, D. S.; Zamudio, L.; Posey, P. G.; Reynolds, C. A.; Phelps, M.

    2016-12-01

    The US Navy is developing an Earth System Prediction Capability (ESPC) to provide global environmental information to meet Navy and Department of Defense (DoD) operations and planning needs from the upper atmosphere to under the sea. It will be a fully coupled global atmosphere/ocean/ice/wave/land prediction system providing daily deterministic forecasts out to 16 days at high horizontal and vertical resolution, and daily probabilistic forecasts out to 45 days at lower resolution. The system will run at the Navy DoD Supercomputing Resource Center with an initial operational capability scheduled for the end of FY18 and the final operational capability scheduled for FY22. The individual model and data assimilation components include: atmosphere - NAVy Global Environmental Model (NAVGEM) and Naval Research Laboratory (NRL) Atmospheric Variational Data Assimilation System - Accelerated Representer (NAVDAS-AR); ocean - HYbrid Coordinate Ocean Model (HYCOM) and Navy Coupled Ocean Data Assimilation (NCODA); ice - Community Ice CodE (CICE) and NCODA; WAVEWATCH III™ and NCODA; and land - NAVGEM Land Surface Model (LSM). Currently, NAVGEM/HYCOM/CICE are three-way coupled and each model component is cycling with its respective assimilation scheme. The assimilation systems do not communicate with each other, but future plans call for these to be coupled as well. NAVGEM runs with a 6-hour update cycle while HYCOM/CICE run with a 24-hour update cycle. The T359L50 NAVGEM/0.08° HYCOM/0.08° CICE system has been integrated in hindcast mode and verification/validation metrics have been computed against unassimilated observations and against stand-alone versions of NAVGEM and HYCOM/CICE. This presentation will focus on typical operational diagnostics for atmosphere, ocean, and ice analyses including 500 hPa atmospheric height anomalies, low-level winds, temperature/salinity ocean depth profiles, ocean acoustical proxies, sea ice edge, and sea ice drift. 
Overall, the global coupled ESPC system is performing with comparable skill to the stand-alone systems at the nowcast time.
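
The cycling scheme described above, NAVGEM on a 6-hour update cycle and HYCOM/CICE on a 24-hour cycle, can be sketched as a simple schedule generator (the cadences come from the abstract; the scheduling function itself is an illustration, not Navy ESPC code):

```python
def assimilation_windows(hours, atm_cycle=6, ocean_cycle=24):
    """List which coupled-system components update at each hour of a
    forecast period, given their data-assimilation cadences."""
    schedule = {}
    for t in range(0, hours + 1, atm_cycle):
        updates = ["NAVGEM"]                 # atmosphere updates every cycle
        if t % ocean_cycle == 0:
            updates += ["HYCOM", "CICE"]     # ocean/ice update once per day
        schedule[t] = updates
    return schedule

sched = assimilation_windows(24)
# hours 0 and 24 update all three components; hours 6, 12, 18 update NAVGEM only
```

The mismatch in cadence is why the abstract notes the assimilation systems cycle independently rather than exchanging increments within a window.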

  10. Measurement and analysis of operating system fault tolerance

    NASA Technical Reports Server (NTRS)

    Lee, I.; Tang, D.; Iyer, R. K.

    1992-01-01

    This paper demonstrates a methodology to model and evaluate the fault tolerance characteristics of operational software. The methodology is illustrated through case studies on three different operating systems: the Tandem GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Measurements are made on these systems for substantial periods to collect software error and recovery data. In addition to investigating basic dependability characteristics such as major software problems and error distributions, we develop two levels of models to describe error and recovery processes inside an operating system and on multiple instances of an operating system running in a distributed environment. Based on the models, reward analysis is conducted to evaluate the loss of service due to software errors and the effect of the fault-tolerance techniques implemented in the systems. Software error correlation in multicomputer systems is also investigated.
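
    The reward analysis mentioned in the abstract can be illustrated with the simplest possible case: a two-state up/recovery model where the system earns full reward while up and a reduced reward during error recovery (a textbook simplification; the paper's models are layered and far richer, and the rates below are invented):

    ```python
    def steady_state_reward(failure_rate, recovery_rate, recovery_reward=0.0):
        """Expected steady-state reward for a two-state up/recovery model:
        the availability-weighted service level."""
        p_up = recovery_rate / (failure_rate + recovery_rate)   # P(system up)
        return p_up * 1.0 + (1.0 - p_up) * recovery_reward

    # E.g. one software failure per 1000 h, 1 h mean recovery,
    # and half service delivered while recovering:
    loss = 1.0 - steady_state_reward(1 / 1000, 1.0, recovery_reward=0.5)
    ```

    The "loss of service due to software errors" the paper evaluates is exactly this kind of quantity, computed from measured error and recovery distributions instead of assumed exponential rates.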

  11. 40 CFR 761.65 - Storage for disposal.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... conditions: (i) The waste is placed in a pile designed and operated to control dispersal of the waste by wind...) A run-on control system designed, constructed, operated, and maintained such that: (1) It prevents... 761.65 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL...

  12. 40 CFR 761.65 - Storage for disposal.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... conditions: (i) The waste is placed in a pile designed and operated to control dispersal of the waste by wind...) A run-on control system designed, constructed, operated, and maintained such that: (1) It prevents... 761.65 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL...

  13. Impact Assessment of the Virginia Railway Express Commuter Rail on Land Use Development Patterns in Northern Virginia

    DOT National Transportation Integrated Search

    1993-12-01

    A new commuter rail system - the Virginia Railway Express (VRE) - began operations in Northern Virginia in mid-1992. The new VRE operated four trains each over two existing rail lines running through metropolitan fringe areas to downtown Washington, ...

  14. [Operation room management in quality control certification of a mainstream hospital].

    PubMed

    Leidinger, W; Meierhofer, J N; Schüpfer, G

    2006-11-01

    We report the results of our study concerning the organisation of operating room (OR) capacity planned 1 year in advance. The use of OR is controlled using 2 global controlling numbers: a) the actual time difference between the expected optimal and previously calculated OR running time and b) the punctuality of starting the first operation in each OR. The focal point of the presented OR management concept is a consensus-oriented decision-making and steering process led by a coordinator who achieves a high degree of acceptance by means of comprehensive transparency. Based on the accepted running time, the optimal productivity of ORs (OP_A(%)) can be calculated. In this way an increase of the overall capacity (actual running time) of ORs from 40% to over 55% was achieved. Nevertheless, enthusiasm and teamwork from all persons involved in the system are vital for success, as well as a completely independent operating theatre manager. Using this concept over 90% of the requirements for the new certification catalogue for hospitals in Germany were achieved.
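
    The two controlling numbers lend themselves to a small calculation. The productivity formula and the five-minute punctuality tolerance below are plausible readings of the abstract, not the authors' exact definitions:

    ```python
    def or_productivity(actual_run_minutes, available_minutes):
        """OR productivity as the used share of the agreed running time,
        in percent (a plausible reading of the abstract's OP_A(%))."""
        return 100.0 * actual_run_minutes / available_minutes

    def first_case_punctuality(actual_starts, scheduled_starts, tolerance_min=5):
        """Share of ORs (in percent) whose first operation of the day
        started within the tolerance of its scheduled time."""
        on_time = sum(1 for a, s in zip(actual_starts, scheduled_starts)
                      if a - s <= tolerance_min)
        return 100.0 * on_time / len(actual_starts)

    util = or_productivity(264, 480)   # 264 of 480 available minutes used → 55%
    punct = first_case_punctuality([485, 492, 510], [480, 480, 480])
    ```

    Tracking only these two aggregates keeps the steering process transparent, which is what the concept credits for its acceptance among staff.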

  15. Reforming results of a novel radial reactor for a solid oxide fuel cell system with anode off-gas recirculation

    NASA Astrophysics Data System (ADS)

    Bosch, Timo; Carré, Maxime; Heinzel, Angelika; Steffen, Michael; Lapicque, François

    2017-12-01

    A novel reactor of a natural gas (NG) fueled, 1 kW net power solid oxide fuel cell (SOFC) system with anode off-gas recirculation (AOGR) is experimentally investigated. The reactor operates as a pre-reformer; it is a radial reactor with centrifugal z-flow, has the shape of a hollow cylinder with a volume of approximately 1 L, and is equipped with two different precious-metal wire-mesh catalyst packages as well as an internal electric heater. Reforming investigations of the reactor are done stand-alone, but under the conditions it would see within the total SOFC system with AOGR. For the tests presented here it is assumed that the SOFC system runs on pure CH4 instead of NG. The manuscript focuses on the various phases of reactor operation during the startup process of the SOFC system. Startup-process reforming experiments cover reactor operation points from an oxygen-to-carbon ratio at the reactor inlet (ϕRI) of 1.2 with air supplied, up to a ϕRI of 2.4 without air supplied. As confirmed by a Monte Carlo simulation, most of the measured outlet gas concentrations are in or close to equilibrium.
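
    The inlet oxygen-to-carbon ratio ϕRI that parameterizes these experiments is an atomic ratio computed from the molar feed flows. The atom counts below are standard stoichiometry; the example feed mix is invented and is not one of the paper's operating points:

    ```python
    def oxygen_to_carbon_ratio(flows):
        """Atomic O/C ratio at the reactor inlet from molar flows
        (mol/s or any consistent unit) of the feed species."""
        o_atoms = {"CH4": 0, "O2": 2, "H2O": 1, "CO2": 2, "CO": 1, "N2": 0}
        c_atoms = {"CH4": 1, "O2": 0, "H2O": 0, "CO2": 1, "CO": 1, "N2": 0}
        o = sum(n * o_atoms[s] for s, n in flows.items())
        c = sum(n * c_atoms[s] for s, n in flows.items())
        return o / c

    # CH4 plus recirculated anode off-gas (steam and CO2), no air supplied:
    phi = oxygen_to_carbon_ratio({"CH4": 1.0, "H2O": 1.6, "CO2": 0.4})
    ```

    With AOGR the steam and CO2 in the recirculated off-gas supply the oxygen, which is how the reactor can reach ϕRI = 2.4 without any air.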

  16. Method for operating a combustor in a fuel cell system

    DOEpatents

    Clingerman, Bruce J.; Mowery, Kenneth D.

    2002-01-01

    In one aspect, the invention provides a method of operating a combustor to heat a fuel processor to a desired temperature in a fuel cell system, wherein the fuel processor generates hydrogen (H.sub.2) from a hydrocarbon for reaction within a fuel cell to generate electricity. More particularly, the invention provides a method and select system design features which cooperate to provide a start up mode of operation and a smooth transition from start-up of the combustor and fuel processor to a running mode.

  17. LABORATORY PROCESS CONTROLLER USING NATURAL LANGUAGE COMMANDS FROM A PERSONAL COMPUTER

    NASA Technical Reports Server (NTRS)

    Will, H.

    1994-01-01

    The complex environment of the typical research laboratory requires flexible process control. This program provides natural language process control from an IBM PC or compatible machine. Sometimes process control schedules require changes frequently, even several times per day. These changes may include adding, deleting, and rearranging steps in a process. This program sets up a process control system that can either run without an operator, or be run by workers with limited programming skills. The software system includes three programs. Two of the programs, written in FORTRAN77, record data and control research processes. The third program, written in Pascal, generates the FORTRAN subroutines used by the other two programs to identify the user commands with the user-written device drivers. The software system also includes an input data set which allows the user to define the user commands which are to be executed by the computer. To set the system up the operator writes device driver routines for all of the controlled devices. Once set up, this system requires only an input file containing natural language command lines which tell the system what to do and when to do it. The operator can make up custom commands for operating and taking data from external research equipment at any time of the day or night without the operator in attendance. This process control system requires a personal computer operating under MS-DOS with suitable hardware interfaces to all controlled devices. The program requires a FORTRAN77 compiler and user-written device drivers. This program was developed in 1989 and has a memory requirement of about 62 Kbytes.
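
    The core mechanism described, user-defined natural-language command lines dispatched to user-written device drivers, can be sketched in a few lines. This is a modern illustration in Python, not the original FORTRAN77/Pascal system, and the command names and drivers are invented:

    ```python
    def make_dispatcher(drivers):
        """Build a dispatcher that maps a natural-language-style command
        line (verb followed by arguments) to a user-written device driver."""
        def run(line):
            verb, *args = line.lower().split()
            if verb not in drivers:
                raise ValueError(f"unknown command: {verb}")
            return drivers[verb](*args)
        return run

    log = []
    drivers = {
        # user-written "device drivers" for hypothetical lab equipment
        "heat":   lambda dev, temp: log.append((dev, float(temp))),
        "record": lambda dev: log.append((dev, "sampled")),
    }
    run = make_dispatcher(drivers)
    run("HEAT furnace 450")
    run("RECORD thermocouple")
    ```

    An unattended schedule then reduces to reading command lines from an input file and dispatching each one at its appointed time, which is how the system can run overnight with no operator present.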

  18. Bibliography On Multiprocessors And Distributed Processing

    NASA Technical Reports Server (NTRS)

    Miya, Eugene N.

    1988-01-01

    The Multiprocessor and Distributed Processing Bibliography package consists of a large machine-readable bibliographic data base which, in addition to usual keyword searches, is used for producing citations, indexes, and cross-references. The data base contains UNIX(R) "refer"-formatted ASCII data and can be implemented on any computer running the UNIX(R) operating system. It is easily convertible to other operating systems and requires approximately one megabyte of secondary storage. The bibliography was compiled in 1985.
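
    The "refer" format mentioned here stores each record as lines beginning with a percent sign and a field code (%A author, %T title, %D date, %K keywords). A minimal parser, with a sample record echoing this bibliography's own metadata (the exact record content is invented):

    ```python
    def parse_refer(record):
        """Parse one troff 'refer'-style bibliographic record into a dict
        mapping field codes (A, T, D, K, ...) to lists of values."""
        fields = {}
        for line in record.strip().splitlines():
            if line.startswith("%") and len(line) > 2:
                code, value = line[1], line[3:].strip()
                fields.setdefault(code, []).append(value)   # %A may repeat
        return fields

    rec = """
    %A E. N. Miya
    %T Multiprocessor/Distributed Processing Bibliography
    %D 1985
    %K multiprocessing, distributed systems
    """
    fields = parse_refer(rec)
    ```

    Because the format is plain line-oriented ASCII, conversion to other systems is mostly a matter of remapping these field codes, which is why the abstract calls the data base "easily convertible".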

  19. Computerized procedures system

    DOEpatents

    Lipner, Melvin H.; Mundy, Roger A.; Franusich, Michael D.

    2010-10-12

    An online data driven computerized procedures system that guides an operator through a complex process facility's operating procedures. The system monitors plant data, processes the data and then, based upon this processing, presents the status of the current procedure step and/or substep to the operator. The system supports multiple users and a single procedure definition supports several interface formats that can be tailored to the individual user. Layered security controls access privileges and revisions are version controlled. The procedures run on a server that is platform independent of the user workstations that the server interfaces with and the user interface supports diverse procedural views.

  20. Preventing Pirates from Boarding Commercial Vessels - A Systems Approach

    DTIC Science & Technology

    2014-09-01

    was developed in MATLAB to run simulations designed to estimate the relative effectiveness of each assessed countermeasure. A cost analysis was...project indicated that the P-Trap countermeasure, designed to entangle the pirate’s propellers with thin lines, is both effective and economically viable...vessels. A model of the operational environment was developed in MATLAB to run simulations designed to estimate the relative effectiveness of each

  1. Resilient Diffusive Clouds

    DTIC Science & Technology

    2017-02-01

    scale blade servers (Dell PowerEdge) [20]. It must be recognized, however, that the findings are distributed over this collection of architectures not...current operating system designs run into millions of lines of code. Moreover, they compound the opportunity for compromise by granting device drivers...properties (e.g. IP & MAC address) so as to invalidate an adversary’s surveillance data. The current running and bootstrapping instances of the micro

  2. EMISSIONS PROCESSING FOR THE ETA/ CMAQ AIR QUALITY FORECAST SYSTEM

    EPA Science Inventory

    NOAA and EPA have created an Air Quality Forecast (AQF) system. This AQF system links an adaptation of the EPA's Community Multiscale Air Quality Model with the 12 kilometer ETA model running operationally at NOAA's National Centers for Environmental Prediction (NCEP). One of the...

  3. Web interfaces to relational databases

    NASA Technical Reports Server (NTRS)

    Carlisle, W. H.

    1996-01-01

    This report describes a project to extend the capabilities of a Virtual Research Center (VRC) for NASA's Advanced Concepts Office. The work was performed as part of NASA's 1995 Summer Faculty Fellowship program and involved the development of a prototype component of the VRC - a database system that provides data creation and access services within a room of the VRC. In support of VRC development, NASA has assembled a laboratory containing the variety of equipment expected to be used by scientists within the VRC. This laboratory consists of the major hardware platforms, SUN, Intel, and Motorola processors, and their most common operating systems: UNIX, Windows NT, Windows for Workgroups, and Macintosh. The SPARC 20 runs SUN Solaris 2.4; an Intel Pentium runs Windows NT and is installed on a different network from the other machines in the laboratory; a Pentium PC runs Windows for Workgroups; two Intel 386 machines run Windows 3.1; and finally, a PowerMacintosh and a Macintosh IIsi run MacOS.

  4. Run-to-Run Optimization Control Within Exact Inverse Framework for Scan Tracking.

    PubMed

    Yeoh, Ivan L; Reinhall, Per G; Berg, Martin C; Chizeck, Howard J; Seibel, Eric J

    2017-09-01

    A run-to-run optimization controller uses a reduced set of measurement parameters, in comparison to more general feedback controllers, to converge to the best control point for a repetitive process. A new run-to-run optimization controller is presented for the scanning fiber device used for image acquisition and display. This controller utilizes very sparse measurements to estimate a system energy measure and updates the input parameterizations iteratively within a feedforward with exact-inversion framework. Analysis, simulation, and experimental investigations on the scanning fiber device demonstrate improved scan accuracy over previous methods and automatic controller adaptation to changing operating temperature. A specific application example and quantitative error analyses are provided of a scanning fiber endoscope that maintains high image quality continuously across a 20 °C temperature rise without interruption of the 56 Hz video.
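
    The defining idea of run-to-run control, using a sparse scalar measurement after each repetitive run to nudge the input toward the best control point, can be sketched with a plain finite-difference scheme. This is a generic illustration, not the paper's feedforward exact-inverse controller, and the toy plant is invented:

    ```python
    def run_to_run(measure, theta, step=0.2, probe=1e-3, runs=50):
        """Generic run-to-run optimization: after each repetitive run,
        probe the scalar system-energy measure around the current input
        parameter and step downhill."""
        for _ in range(runs):
            grad = (measure(theta + probe) - measure(theta - probe)) / (2 * probe)
            theta -= step * grad
        return theta

    # Toy plant: the energy measure is minimized at the (unknown)
    # best control point 3.7, e.g. a drive amplitude for the scan.
    best = run_to_run(lambda t: (t - 3.7) ** 2, theta=0.0)
    ```

    Because only a scalar measure is needed per run, the scheme adapts automatically when the optimum drifts, e.g. with operating temperature, which is the behavior the paper demonstrates on the scanning fiber device.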

  5. Operational Experience with the MICE Spectrometer Solenoid System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feher, Sandor; Bross, Alan; Hanlet, Pierrick

    The Muon Ionization Cooling Experiment located at Rutherford Appleton Laboratory in England utilizes a superconducting solenoid system for the muon cooling channel that also holds particle tracking detectors and muon absorbers inside their bores. The solenoid system installation was completed in summer of 2015 and after commissioning the system it has been running successfully. This paper summarizes the commissioning results and operational experience with the magnets, focusing on the performance of the two Spectrometer Solenoids built by the US.

  6. Operational Experience with the MICE Spectrometer Solenoid System

    DOE PAGES

    Feher, Sandor; Bross, Alan; Hanlet, Pierrick

    2018-01-11

    The Muon Ionization Cooling Experiment located at Rutherford Appleton Laboratory in England utilizes a superconducting solenoid system for the muon cooling channel that also holds particle tracking detectors and muon absorbers inside their bores. The solenoid system installation was completed in summer of 2015 and after commissioning the system it has been running successfully. This paper summarizes the commissioning results and operational experience with the magnets, focusing on the performance of the two Spectrometer Solenoids built by the US.

  7. Implementation experiences of NASTRAN on CDC CYBER 74 SCOPE 3.4 operating system

    NASA Technical Reports Server (NTRS)

    Go, J. C.; Hill, R. G.

    1973-01-01

    The implementation of the NASTRAN system on the CDC CYBER 74 SCOPE 3.4 Operating System is described. The flexibility of the NASTRAN system made it possible to accomplish the change with no major problems. Various sizes of benchmark and test problems, with run times ranging from less than one minute to two hours of CP time, were run on the CDC CYBER SCOPE 3.3, Univac EXEC-8, and CDC CYBER SCOPE 3.4. The NASTRAN installation deck is provided.

  8. More Colleges Eye outside Companies to Run Their Computer Operations.

    ERIC Educational Resources Information Center

    DeLoughry, Thomas J.

    1993-01-01

    Increasingly, budget pressures and rapid technological change are causing colleges to consider "outsourcing" for computer operations management, particularly for administrative purposes. Supporters see the trend as similar to hiring experts for other, ancillary services. Critics fear loss of control of the institution's vital computer systems.…

  9. How do I resolve problems reading the binary data?

    Atmospheric Science Data Center

    2014-12-08

    ... affecting compilation would be differing versions of the operating system and compilers the read software is being run on. Big ... Unix machines are Big Endian architecture while Linux systems are Little Endian architecture. Data generated on a Unix machine are ...
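
    The byte-order mismatch described above is easy to demonstrate: the same four bytes decode to different values depending on the assumed endianness, which is why binary data written on a big-endian Unix host misreads on a little-endian Linux host unless the reader swaps bytes (a generic illustration using Python's standard `struct` module):

    ```python
    import struct

    raw = struct.pack(">f", 1.5)          # a float written big-endian ("Unix")
    wrong = struct.unpack("<f", raw)[0]   # naive little-endian ("Linux") read
    right = struct.unpack(">f", raw)[0]   # read with the correct byte order
    ```

    Declaring the byte order explicitly in the read software, rather than relying on the host's native order, resolves the problem portably.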

  10. Criteria for the Selection and Application of Advanced Traffic Signal Control Systems

    DOT National Transportation Integrated Search

    2012-06-01

    The Oregon Department of Transportation (ODOT) has recently begun changing their standard traffic signal control systems from the 170 controller running the Wapiti W4IKS firmware to 2070 controllers operating the Northwest Signal Supply Corporation...

  11. The Clouds distributed operating system - Functional description, implementation details and related work

    NASA Technical Reports Server (NTRS)

    Dasgupta, Partha; Leblanc, Richard J., Jr.; Appelbe, William F.

    1988-01-01

    Clouds is an operating system in a novel class of distributed operating systems providing the integration, reliability, and structure that makes a distributed system usable. Clouds is designed to run on a set of general purpose computers that are connected via a medium-to-high speed local area network. The system structuring paradigm chosen for the Clouds operating system, after substantial research, is an object/thread model. All instances of services, programs and data in Clouds are encapsulated in objects. The concept of persistent objects does away with the need for file systems, and replaces it with a more powerful concept, namely the object system. The facilities in Clouds include integration of resources through location transparency; support for various types of atomic operations, including conventional transactions; advanced support for achieving fault tolerance; and provisions for dynamic reconfiguration.

  12. A Lunar Surface Operations Simulator

    NASA Technical Reports Server (NTRS)

    Nayar, H.; Balaram, J.; Cameron, J.; Jain, A.; Lim, C.; Mukherjee, R.; Peters, S.; Pomerantz, M.; Reder, L.; Shakkottai, P.; hide

    2008-01-01

    The Lunar Surface Operations Simulator (LSOS) is being developed to support planning and design of space missions to return astronauts to the moon. Vehicles, habitats, dynamic and physical processes and related environment systems are modeled and simulated in LSOS to assist in the visualization and design optimization of systems for lunar surface operations. A parametric analysis tool and a data browser were also implemented to provide an intuitive interface to run multiple simulations and review their results. The simulator and parametric analysis capability are described in this paper.

  13. Successful Validation of Sample Processing and Quantitative Real-Time PCR Capabilities on the International Space Station

    NASA Technical Reports Server (NTRS)

    Parra, Macarena; Jung, Jimmy; Almeida, Eduardo; Boone, Travis; Schonfeld, Julie; Tran, Luan

    2016-01-01

    The WetLab-2 system was developed by NASA Ames Research Center to offer new capabilities to researchers. The system can lyse cells and extract RNA (Ribonucleic Acid) on-orbit from different sample types ranging from microbial cultures to animal tissues. The purified RNA can then either be stabilized for return to Earth or can be used to conduct on-orbit quantitative Reverse Transcriptase PCR (Polymerase Chain Reaction) (qRT-PCR) analysis without the need for sample return. The qRT-PCR results can be downlinked to the ground a few hours after the completion of the run. The validation flight of the WetLab-2 system launched on SpaceX-8 on April 8, 2016. On-orbit operations started on April 15th with system setup and were followed by three quantitative PCR runs using an E. coli genomic DNA template pre-loaded at three different concentrations. These runs were designed to discern if quantitative PCR functions correctly in microgravity and if the data are comparable to those from the ground control runs. The flight data showed no significant differences compared to the ground data, though there was more variability in the values; this was likely due to the numerous small bubbles observed. The capability of the system to process samples and purify RNA was then validated using frozen samples prepared on the ground. The flight data for both E. coli and mouse liver clearly show that RNA was successfully purified by our system. The E. coli qRT-PCR run showed successful singleplex, duplex and triplex capability. Data showed high variability in the resulting Ct (cycle threshold) values, likely due to bubble formation and insufficient mixing during the procedure run. The mouse liver qRT-PCR run had successful singleplex and duplex reactions, and the variability was slightly better as the mixing operation was improved. The ability to purify and stabilize RNA and to conduct qRT-PCR on-orbit is an important step towards utilizing the ISS as a National Laboratory facility. 
The ability to get on-orbit data will provide investigators with the opportunity to adjust experimental parameters in real time without the need for sample return and re-flight. The WetLab-2 Project is supported by the Research Integration Office in the ISS Program.

  14. Stanford Synchrotron Radiation Laboratory. Activity report for 1989

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1996-01-01

    The April, 1990 SPEAR synchrotron radiation run was one of the two or three best in SSRL's history. High currents were accumulated, ramping went easily, lifetimes were long, beam dumps were infrequent and the average current was 42.9 milliamps. In the one month of operation, 63 different experiments involving 208 scientists from 50 institutions received beam. The end-of-run summary forms completed by the experimenters indicated high levels of user satisfaction with the beam quality and with the outstanding support received from the SSRL technical and scientific staffs. These fine experimental conditions result largely from the SPEAR repairs and improvements performed during the past year and described in Section I. Also quite significant was Max Cornacchia's leadership of the SLAC staff. SPEAR's performance this past April stands in marked contrast to that of the January-March, 1989 run, which is also described in Section I. It is, we hope, a harbinger of the operation which will be provided in FY '91, when the SPEAR injector project is completed and SPEAR is fully dedicated to synchrotron radiation research. Over the coming years, SSRL intends to give highest priority to increasing the effectiveness of SPEAR and its various beam lines. The beam line and facility improvements performed during 1989 are described in Section III. SSRL was reorganized in order to concentrate effort on its three highest priorities prior to the March-April run: (1) to have a successful run, (2) to complete and commission the injector, and (3) to prepare to operate, maintain and improve the SPEAR/injector system. In the new organization, all the technical staff is contained in three groups: Accelerator Research and Operations Division, Injector Project and Photon Research and Operations Division, as described in Section IV.
In spite of the limited effectiveness of the January-March, 1989 run, SSRL's users made significant scientific progress, as described in Section V of this report.

  15. Operation Status of the J-PARC Negative Hydrogen Ion Source

    NASA Astrophysics Data System (ADS)

    Oguri, H.; Ikegami, K.; Ohkoshi, K.; Namekawa, Y.; Ueno, A.

    2011-09-01

    A cesium-free negative hydrogen ion source driven with a lanthanum hexaboride (LaB6) filament has been operated without any serious trouble for approximately four years at J-PARC. Although the ion source is capable of producing an H- ion current of more than 30 mA, the current is routinely restricted to approximately 16 mA at present for the stable operation of the RFQ linac, which has had serious discharge problems since September 2008. Beam runs are performed in one-month cycles, each consisting of 4-5 weeks of beam operation and a down period of a few days. In the most recent beam run, approximately 700 h of continuous operation was achieved. In every run, the beam interruption time due to ion source failures is a few hours, which corresponds to an ion source availability of more than 99%. R&D work is being performed in parallel with the operation in order to further increase the beam current. As a result, an H- ion current of 61 mA with a normalized rms emittance of 0.26 π mm·mrad was obtained by adding a cesium seeding system to a J-PARC test ion source which has almost the same structure as the present J-PARC ion source.

  16. Graphics Processing Unit (GPU) implementation of image processing algorithms to improve system performance of the Control, Acquisition, Processing, and Image Display System (CAPIDS) of the Micro-Angiographic Fluoroscope (MAF).

    PubMed

    Vasan, S N Swetadri; Ionita, Ciprian N; Titus, A H; Cartwright, A N; Bednarek, D R; Rudin, S

    2012-02-23

    We present the image processing upgrades implemented on a Graphics Processing Unit (GPU) in the Control, Acquisition, Processing, and Image Display System (CAPIDS) for the custom Micro-Angiographic Fluoroscope (MAF) detector. Most of the image processing currently implemented in the CAPIDS system is pixel independent; that is, the operation on each pixel is the same, and the operation on one pixel does not depend upon the result of the operation on any other, allowing the entire image to be processed in parallel. GPU hardware was developed for this kind of massively parallel processing. Thus, for an algorithm with a high degree of parallelism, a GPU implementation is much faster than a CPU implementation. The image processing algorithm upgrades implemented on the CAPIDS system include flat field correction, temporal filtering, image subtraction, roadmap mask generation, and display windowing and leveling. A comparison between the previous and the upgraded versions of CAPIDS is presented to demonstrate how the improvement is achieved. By performing the image processing on a GPU, significant improvements in timing and frame rate have been achieved, including stable operation of the system at 30 fps during a fluoroscopy run, a DSA run, a roadmap procedure, and automatic image windowing and leveling during each frame.
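As a sketch of why pixel-independent operations parallelize so well, the flat field correction step can be written so that each output pixel depends only on the corresponding pixels of the raw, dark, and flat frames. This NumPy version is illustrative only: the function name, the gain formula, and the synthetic frames are assumptions, not the CAPIDS implementation (which runs the equivalent per-pixel kernel on the GPU).

```python
import numpy as np

def flat_field_correct(raw, dark, flat, eps=1e-6):
    # Pixel-independent: each output pixel depends only on the same
    # pixel of the raw, dark, and flat frames, so every pixel could be
    # computed in parallel (on a GPU, one thread per pixel).
    gain = np.mean(flat - dark) / np.maximum(flat - dark, eps)
    return (raw - dark) * gain

# Synthetic 512x512 frames (hypothetical values, for illustration)
rng = np.random.default_rng(0)
dark = rng.normal(10.0, 1.0, (512, 512))   # detector dark frame
flat = dark + 100.0                        # uniform illumination response
raw = dark + 50.0                          # scene at half intensity
corrected = flat_field_correct(raw, dark, flat)
```

With a perfectly uniform flat response, the correction reduces to dark subtraction, so every corrected pixel lands at the scene intensity of 50.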

  17. 77 FR 51993 - Western Technical College; Notice of Availability of Environmental Assessment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-28

    ... hydroelectric generation at the dam. The dam is operated manually in a run-of-river mode (i.e., an operating...) distribution line; and (5) appurtenant facilities. The project would be operated in a run-of-river mode using... could otherwise enter project waters or adjacent non-project lands; Operating the project in a run-of...

  18. A multiprocessor operating system simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnston, G.M.; Campbell, R.H.

    1988-01-01

    This paper describes a multiprocessor operating system simulator that was developed by the authors in the Fall of 1987. The simulator was built in response to the need to provide students with an environment in which to build and test operating system concepts as part of the coursework of a third-year undergraduate operating systems course. Written in C++, the simulator uses the co-routine style task package that is distributed with the AT&T C++ Translator to provide a hierarchy of classes that represents a broad range of operating system software and hardware components. The class hierarchy closely follows that of the Choices family of operating systems for loosely and tightly coupled multiprocessors. During an operating system course, these classes are refined and specialized by students in homework assignments to facilitate experimentation with different aspects of operating system design and policy decisions. The current implementation runs on the IBM RT PC under 4.3bsd UNIX.

  19. Abundance and fate of antibiotics and hormones in a vegetative treatment system receiving cattle feedlot runoff

    USDA-ARS?s Scientific Manuscript database

    Vegetative treatment systems (VTS) have been developed and built as an alternative to conventional holding pond systems for managing run-off from animal feeding operations. Initially developed to manage runoff nutrients via uptake by grasses, their effectiveness at removing other runoff contaminant...

  20. Development of Advanced Czochralski Growth Process to produce low cost 150 KG silicon ingots from a single crucible for technology readiness

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The modified CG2000 crystal grower construction, installation, and machine check-out was completed. The process development check-out proceeded with several dry runs and one growth run. Several machine calibrations and functional problems were discovered and corrected. Several exhaust gas analysis system alternatives were evaluated and an integrated system approved and ordered. A contract presentation was made at the Project Integration Meeting at JPL, including cost-projections using contract projected throughput and machine parameters. Several growth runs on a development CG200 RC grower show that complete neck, crown, and body automated growth can be achieved with only one operator input. Work continued for melt level, melt temperature, and diameter sensor development.

  1. Mitigating Short-Term Variations of Photovoltaic Generation Using Energy Storage with VOLTTRON

    NASA Astrophysics Data System (ADS)

    Morrissey, Kevin

    A smart-building communications system performs smoothing on photovoltaic (PV) power generation using a battery energy storage system (BESS). The system runs using VOLTTRON(TM), a multi-agent Python-based software platform dedicated to power systems. The VOLTTRON(TM) system designed for this project runs synergistically with the larger University of Washington VOLTTRON(TM) environment, which is designed to operate UW device communications and databases as well as to perform real-time operations for research. One such research algorithm that operates simultaneously with this PV Smoothing System is an energy cost optimization system which optimizes net demand and associated cost throughout a day using the BESS. The PV Smoothing System features an active low-pass filter with an adaptable time constant, as well as adjustable limitations on the output power and accumulated battery energy of the BESS contribution. The system was analyzed using 26 days of PV generation at 1-second resolution. PV smoothing was studied with unconstrained BESS contribution as well as under a broad range of BESS constraints analogous to variable-sized storage. It was determined that a large inverter output power was more important for PV smoothing than a large battery energy capacity. Two methods of selecting the time constant in real time, static and adaptive, are studied for their impact on system performance. It was found that both methods provide a high level of PV smoothing performance, within 8% of the ideal case where the best time constant is known ahead of time. The system was run in real time using VOLTTRON(TM) with BESS limitations of 5 kW/6.5 kWh and an adaptive update period of 7 days. The system behaved as expected given the BESS parameters and time constant selection methods, providing smoothing on the PV generation and updating the time constant periodically using the adaptive time constant selection method.
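The low-pass-filter-plus-battery arrangement described above can be sketched as a first-order exponential filter in which the battery covers the gap between raw and smoothed PV output, subject to a power limit. All parameter values below (the time constant, the power limit, the step-shaped PV trace) are illustrative assumptions, not the thesis configuration, and the accumulated-energy constraint is omitted for brevity.

```python
import numpy as np

def smooth_pv(pv, dt=1.0, tau=300.0, p_max=5.0):
    # First-order low-pass filter on PV power; the battery supplies the
    # difference between smoothed and raw PV, clipped to +/- p_max kW.
    alpha = dt / (tau + dt)
    grid = np.empty_like(pv)
    y = pv[0]
    for i, p in enumerate(pv):
        y += alpha * (p - y)                    # filtered target power
        batt = np.clip(y - p, -p_max, p_max)    # BESS contribution
        grid[i] = p + batt                      # power seen by the grid
    return grid

# A sharp cloud edge: PV jumps from 1 kW to 9 kW at t = 300 s
pv = np.concatenate([np.full(300, 1.0), np.full(300, 9.0)])
grid = smooth_pv(pv)
```

The 8 kW step in raw generation is reduced to a much smaller step on the grid side, with the remainder ramped in gradually as the filter output catches up, which is the smoothing behavior the thesis quantifies.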

  2. Investigation of roughing machining simulation by using visual basic programming in NX CAM system

    NASA Astrophysics Data System (ADS)

    Hafiz Mohamad, Mohamad; Nafis Osman Zahid, Muhammed

    2018-03-01

    This paper outlines a simulation study to investigate the characteristics of roughing machining simulation in 4th axis milling processes by utilizing Visual Basic programming in the NX CAM system. The selection and optimization of cutting orientation in rough milling operations is critical in 4th axis machining. The main purpose of a roughing operation is to approximately shape the machined parts into finished form by removing the bulk of material from workpieces. In this paper, the simulations are executed by manipulating a set of different cutting orientations to estimate the volume removed from the machined parts. The cutting orientation with the highest volume removal is denoted as the optimum value and chosen to execute the roughing operation. In order to run the simulation, customized software was developed to assist the routines. Operation build-up instructions in the NX CAM interface are translated into program code via advanced tools available in Visual Basic Studio. The code is customized and equipped with decision-making tools to run and control the simulations. It permits integration with any independent program files to execute specific operations. This paper discusses the simulation program and identifies optimum cutting orientations for roughing processes. The output of this study will broaden the simulation routines performed in NX CAM systems.
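The selection rule described above (simulate a set of rotary-axis orientations, keep the one with the highest estimated removed volume) can be sketched in a few lines. The angle set and volume figures are hypothetical simulation outputs, and the actual routine drives NX CAM through Visual Basic rather than Python.

```python
def best_orientation(removed_volume):
    # Selection rule from the study: the 4th-axis orientation with the
    # highest estimated removed volume is taken as the optimum.
    return max(removed_volume, key=removed_volume.get)

# Hypothetical simulation results: rotary angle (deg) -> volume (cm^3)
volumes = {0: 120.5, 90: 150.2, 180: 118.7, 270: 149.9}
optimum = best_orientation(volumes)  # -> 90
```

In practice each dictionary entry would come from one simulated roughing pass, so the loop over orientations dominates the run time, not the selection itself.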

  3. Alcator C-Mod Digital Plasma Control System

    NASA Astrophysics Data System (ADS)

    Wolfe, S. M.

    2005-10-01

    A new digital plasma control system (DPCS) has been implemented for Alcator C-Mod. The new system was put into service at the start of the 2005 run campaign and has been in routine operation since. The system consists of two 64-input, 16-output cPCI digitizers attached to a rack-mounted single-CPU Linux server, which performs both the I/O and the computation. During initial operation, the system was set up to directly emulate the original C-Mod "Hybrid" MIMO linear control system. Compatibility with the previous control system allows the existing user interface software and data structures to be used with the new hardware. The control program is written in IDL and runs under standard Linux. Interrupts are disabled during the plasma pulses to achieve real-time operation. A synchronous loop is executed with a nominal cycle rate of 10 kHz. Emulation of the original linear control algorithms requires 50 μsec per iteration, with the time evenly split between I/O and computation, so rates of about 20 kHz are achievable. Reliable vertical position control has been demonstrated with cycle rates as low as 5 kHz. Additional computations, including non-linear algorithms and adaptive response, are implemented as optional procedure calls within the main real-time loop.
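Per cycle, a MIMO linear controller of this kind reduces to one matrix multiply from the digitized inputs to the actuator outputs. The sketch below uses the 64-input/16-output dimensions mentioned above, but the gain matrix and its single feedback term are invented for illustration; this is not the C-Mod control law.

```python
import numpy as np

def control_cycle(gains, sensors):
    # One iteration of a MIMO linear controller: the 16 outputs are a
    # fixed gain matrix applied to the 64 digitized inputs. In the real
    # system this runs in a synchronous loop at ~10 kHz with interrupts
    # disabled; here we just show the per-cycle computation.
    return gains @ sensors

gains = np.zeros((16, 64))
gains[0, 0] = 0.5                 # illustrative single feedback term
sensors = np.ones(64)             # one 64-channel digitizer sample
outputs = control_cycle(gains, sensors)
```

Because the per-cycle work is a fixed dense matrix-vector product, its execution time is constant, which is what makes a deterministic synchronous loop at a fixed cycle rate feasible.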

  4. Lambda: A Mathematica package for operator product expansions in vertex algebras

    NASA Astrophysics Data System (ADS)

    Ekstrand, Joel

    2011-02-01

    We give an introduction to the Mathematica package Lambda, designed for calculating λ-brackets in both vertex algebras and SUSY vertex algebras. This is equivalent to calculating operator product expansions in two-dimensional conformal field theory. The syntax of λ-brackets is reviewed, and some simple examples are shown, both in component notation and in N=1 superfield notation. Program summary: Program title: Lambda. Catalogue identifier: AEHF_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHF_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public License. No. of lines in distributed program, including test data, etc.: 18 087. No. of bytes in distributed program, including test data, etc.: 131 812. Distribution format: tar.gz. Programming language: Mathematica. Computer: See specifications for running Mathematica V7 or above. Operating system: See specifications for running Mathematica V7 or above. RAM: Varies greatly depending on calculation to be performed. Classification: 4.2, 5, 11.1. Nature of problem: Calculate operator product expansions (OPEs) of composite fields in 2d conformal field theory. Solution method: Implementation of the algebraic formulation of OPEs given by vertex algebras, and especially by λ-brackets. Running time: Varies greatly depending on calculation requested. The example notebook provided takes about 3 s to run.

  5. Interactive cutting path analysis programs

    NASA Technical Reports Server (NTRS)

    Weiner, J. M.; Williams, D. S.; Colley, S. R.

    1975-01-01

    The operation of numerically controlled machine tools is interactively simulated. Four programs were developed to graphically display the cutting paths for a Monarch lathe, Cintimatic mill, Strippit sheet metal punch, and the wiring path for a Standard wire wrap machine. These programs are run on an IMLAC PDS-ID graphic display system under the DOS-3 disk operating system. The cutting path analysis programs accept input via both paper tape and disk file.

  6. Control Transfer in Operating System Kernels

    DTIC Science & Technology

    1994-05-13

    microkernel system that runs less code in the kernel address space. To realize the performance benefit of allocating stacks in unmapped kseg0 memory, the...review how I modified the Mach 3.0 kernel to use continuations. Because of Mach's message-passing microkernel structure, interprocess communication was...critical control transfer paths, deeply-nested call chains are undesirable in any case because of the function call overhead. 4.1.3 Microkernel Operating

  7. Device Driver Safety Through a Reference Validation Mechanism

    DTIC Science & Technology

    2008-05-01

    microkernels and other research operating systems [2, 9, 21, 24] run device drivers in user space...device driver architecture in the Nexus trusted operating system [28], which has many similarities to traditional microkernels, including hardware... microkernel operating systems, every flaw in a device driver is a potential security hole given the absence of mechanisms to contain the (mis

  8. Device- and system-independent personal touchless user interface for operating rooms : One personal UI to control all displays in an operating room.

    PubMed

    Ma, Meng; Fallavollita, Pascal; Habert, Séverine; Weidert, Simon; Navab, Nassir

    2016-06-01

    In the modern day operating room, the surgeon performs surgeries with the support of different medical systems that showcase patient information, physiological data, and medical images. It is generally accepted that numerous interactions must be performed by the surgical team to control the corresponding medical system to retrieve the desired information. Joysticks and physical keys are still present in the operating room due to the disadvantages of mice, and surgeons often communicate instructions to the surgical team when requiring information from a specific medical system. In this paper, a novel user interface is developed that allows the surgeon to personally perform touchless interaction with the various medical systems and switch effortlessly among them, all without modifying the systems' software and hardware. To achieve this, a wearable RGB-D sensor is mounted on the surgeon's head for inside-out tracking of his/her finger relative to any of the medical systems' displays. Android devices with a special application are connected to the computers on which the medical systems are running, simulating a normal USB mouse and keyboard. When the surgeon performs interaction using pointing gestures, the desired cursor position in the targeted medical system display, and gestures, are transformed into general events and then sent to the corresponding Android device. Finally, the application running on the Android devices generates the corresponding mouse or keyboard events according to the targeted medical system. To simulate an operating room setting, our unique user interface was tested by seven medical participants who performed several interactions with the visualization of CT, MRI, and fluoroscopy images at varying distances from them. Results from the system usability scale and NASA-TLX workload index indicated a strong acceptance of our proposed user interface.

  9. The DZERO Level 3 Data Acquisition System

    NASA Astrophysics Data System (ADS)

    Angstadt, R.; Brooijmans, G.; Chapin, D.; Clements, M.; Cutts, D.; Haas, A.; Hauser, R.; Johnson, M.; Kulyavtsev, A.; Mattingly, S. E. K.; Mulders, M.; Padley, P.; Petravick, D.; Rechenmacher, R.; Snyder, S.; Watts, G.

    2004-06-01

    The DZERO experiment began Run II data-taking operation at Fermilab in spring 2001. The physics program of the experiment requires the Level 3 data acquisition (DAQ) system to handle average event sizes of 250 kilobytes at a rate of 1 kHz. The system routes and transfers event fragments of approximately 1-20 kilobytes from 63 VME crate sources to any of approximately 100 processing nodes. It is built upon a Cisco 6509 Ethernet switch, standard PCs, and commodity VME single board computers (SBCs). The system has been in full operation since spring 2002.

  10. Real-time operating system for a multi-laser/multi-detector system

    NASA Technical Reports Server (NTRS)

    Coles, G.

    1980-01-01

    The laser-one hazard detector system, used on the Rensselaer Mars rover, is reviewed briefly with respect to the hardware subsystems, the operation, and the results obtained. A multidetector scanning system was designed to improve on the original system. Interactive support software was designed and programmed to implement real time control of the rover or platform with the elevation scanning mast. The formats of both the raw data and the post-run data files were selected. In addition, the interface requirements were selected and some initial hardware-software testing was completed.

  11. Dry-running gas seals save $200,000/yr in retrofit hydrogen recycle compressor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pennacchi, R.P.; Germain, A.

    1987-10-01

    Texaco Chemical Company was using three drums of oil per day in the seal oil system of a hydrogen recycle compressor, resulting in maintenance and operational expenses of more than $160,000 per year. Running 24 hours/day, 365 days/yr, the 26-yr-old compressor is the heart of the benzene manufacturing process unit at the Port Arthur, Texas plant. In the event of an unscheduled shutdown, the important aromatics unit process would halt and cause production losses of thousands of dollars per day. In addition, the close monitoring and minimization of leakage are essential since the gas consists of over 75% hydrogen, with methane, ethane, propane, isobutane, N-butane and pentanes. Texaco Chemical Company decided that retrofit of the hydrogen recycle compressor should be undertaken if the system could be developed to sharply reduce operations and maintenance costs, and increase efficiencies. Texaco engineers selected a dry running-type gas sealing system developed for pipeline compressors in the United States, Canada, and overseas. A tandem-type sealing system was designed to meet specific needs of a hydrogen recycle compressor. The retrofit was scheduled for August 1986 to coincide with the plant's preventative maintenance program. The seal system installation required five days. The retrofit progressed according to schedule, with no problems experienced at the first and several startups since the initial installation. Oil consumption has been eliminated, along with seal support and parasitic energy requirements. With the savings in seal oil, energy, operations and maintenance, payback period for the retrofit sealing system was just over six months. Savings are expected to continue at an annual rate of over $200,000.

  12. Time-Motion Analysis of Four Automated Systems for the Detection of Chlamydia trachomatis and Neisseria gonorrhoeae by Nucleic Acid Amplification Testing.

    PubMed

    Williams, James A; Eddleman, Laura; Pantone, Amy; Martinez, Regina; Young, Stephen; Van Der Pol, Barbara

    2014-08-01

    Next-generation diagnostics for Chlamydia trachomatis and Neisseria gonorrhoeae are available on semi- or fully-automated platforms. These systems require less hands-on time than older platforms and are user friendly. Four automated systems, the ABBOTT m2000 system, Becton Dickinson Viper System with XTR Technology, Gen-Probe Tigris DTS system, and Roche cobas 4800 system, were evaluated for total run time, hands-on time, and walk-away time. All of the systems evaluated in this time-motion study were able to complete a diagnostic test run within an 8-h work shift, instrument setup and operation were straightforward and uncomplicated, and walk-away time ranged from approximately 90 to 270 min in a head-to-head comparison of each system. All of the automated systems provide technical staff with increased time to perform other tasks during the run, offer easy expansion of the diagnostic test menu, and have the ability to increase specimen throughput. © 2013 Society for Laboratory Automation and Screening.

  13. Safety and IVHM

    NASA Technical Reports Server (NTRS)

    Goebel, Kai

    2012-01-01

    When we address safety in a book on the business case for IVHM, the question arises whether safety isn't inherently in conflict with the need of operators to run their systems as efficiently (and as cost effectively) as possible. The answer may be that the system needs to be just as safe as needed, but not significantly more. That begs the next question: How safe is safe enough? Several regulatory bodies provide guidelines for operational safety, but irrespective of that, operators do not want their systems to be known as lacking safety. We illuminate the role of safety within the context of IVHM.

  14. Purple L1 Milestone Review Panel TotalView Debugger Functionality and Performance for ASC Purple

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolfe, M

    2006-12-12

    ASC code teams require a robust software debugging tool to help developers quickly find bugs in their codes and get their codes running. Development debugging commonly runs up to 512 processes. Production jobs run up to full ASC Purple scale, and at times require introspection while running. Developers want a debugger that runs on all their development and production platforms and that works with all compilers and runtimes used with ASC codes. The TotalView Multiprocess Debugger made by Etnus was specified for ASC Purple to address this needed capability. The ASC Purple environment builds on the environment seen by TotalView on ASCI White. The debugger must now operate with the Power5 CPU, Federation switch, AIX 5.3 operating system including large pages, IBM compilers 7 and 9, POE 4.2 parallel environment, and rs6000 SLURM resource manager. Users require robust, basic debugger functionality with acceptable performance at development debugging scale. A TotalView installation must be provided at the beginning of the early user access period that meets these requirements. A functional enhancement, fast conditional data watchpoints, and a scalability enhancement, capability up to 8192 processes, are to be demonstrated.

  15. 40 CFR 761.65 - Storage for disposal.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... storage of non-liquid PCB/ radioactive wastes must be designed to prevent the buildup of liquids if such... conditions: (i) The waste is placed in a pile designed and operated to control dispersal of the waste by wind...) A run-on control system designed, constructed, operated, and maintained such that: (1) It prevents...

  16. Centrifuge. Operational Control Tests for Wastewater Treatment Facilities. Instructor's Manual [and] Student Workbook.

    ERIC Educational Resources Information Center

    Arasmith, E. E.

    Designed for individuals who have completed National Pollutant Discharge Elimination System (NPDES) level 1 laboratory training skills, this module provides waste water treatment plant operators with the basic information needed to: (1) successfully run a centrifuge test; (2) accurately read results obtained in test tubes; and (3) obtain…

  17. Percent CO2. Operational Control Tests for Wastewater Treatment Facilities. Instructor's Manual [and] Student Workbook.

    ERIC Educational Resources Information Center

    Wooley, John F.

    Designed for individuals who have completed National Pollutant Discharge Elimination System (NPDES) level 1 laboratory training skills, this module on digestor gas analysis provides waste water treatment plant operators with the basic skills and information needed to: (1) successfully run the carbon dioxide analysis test; (2) accurately record…

  18. Integration of High-Performance Computing into Cloud Computing Services

    NASA Astrophysics Data System (ADS)

    Vouk, Mladen A.; Sills, Eric; Dreher, Patrick

    High-Performance Computing (HPC) projects span a spectrum of computer hardware implementations ranging from peta-flop supercomputers, high-end tera-flop facilities running a variety of operating systems and applications, to mid-range and smaller computational clusters used for HPC application development, pilot runs and prototype staging clusters. What they all have in common is that they operate as a stand-alone system rather than a scalable and shared user re-configurable resource. The advent of cloud computing has changed the traditional HPC implementation. In this article, we will discuss a very successful production-level architecture and policy framework for supporting HPC services within a more general cloud computing infrastructure. This integrated environment, called Virtual Computing Lab (VCL), has been operating at NC State since fall 2004. Nearly 8,500,000 HPC CPU-Hrs were delivered by this environment to NC State faculty and students during 2009. In addition, we present and discuss operational data that show that integration of HPC and non-HPC (or general VCL) services in a cloud can substantially reduce the cost of delivering cloud services (down to cents per CPU hour).

  19. Full Spectrum Operations: A Running Start

    DTIC Science & Technology

    2009-03-31

    looking like nails. —MAJ Curt Taylor , S3 2-8 IN, Diwaniyah, Iraq, August 2006. To avoid the hammer and nails dynamic that may plague maneuver...Gasification System from Princeton Environmental Group; the AgriPower system, based on the “open” Brayton Cycle technology; and Thermogenics

  20. ERP=Efficiency

    ERIC Educational Resources Information Center

    Violino, Bob

    2008-01-01

    This article discusses the enterprise resource planning (ERP) system. Deploying an ERP system is one of the most extensive--and expensive--IT projects a college or university can undertake. The potential benefits of ERP are significant: a more smoothly running operation with efficiencies in virtually every area of administration, from automated…

  1. Embedded real-time operating system micro kernel design

    NASA Astrophysics Data System (ADS)

    Cheng, Xiao-hui; Li, Ming-qiang; Wang, Xin-zheng

    2005-12-01

    Embedded systems usually require a real-time character. Based on an 8051 microcontroller, an embedded real-time operating system micro kernel is proposed consisting of six parts: a critical section process, task scheduling, interrupt handling, semaphore and message mailbox communication, clock management and memory management. CPU time and other resources are distributed among tasks rationally according to their importance and urgency. The design proposed here provides the position, definition, function and principle of the micro kernel. The kernel runs on the platform of an ATMEL AT89C51 microcontroller. Simulation results prove that the designed micro kernel is stable and reliable and has quick response while operating in an application system.
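Task scheduling by importance and urgency, as described above, can be sketched with a priority queue: the ready task with the lowest priority number (most urgent) is dispatched first, with arrival order breaking ties. The class name, task names, and numeric priority scheme below are illustrative assumptions, not the kernel's actual C interface on the AT89C51.

```python
import heapq

class MicroKernelScheduler:
    # Minimal priority scheduler sketch: ready tasks are dispatched by
    # (priority, arrival order), so important/urgent tasks run first
    # and equal-priority tasks run FIFO.
    def __init__(self):
        self._ready, self._seq = [], 0

    def make_ready(self, priority, task):
        heapq.heappush(self._ready, (priority, self._seq, task))
        self._seq += 1

    def dispatch(self):
        return heapq.heappop(self._ready)[2] if self._ready else None

sched = MicroKernelScheduler()
sched.make_ready(2, "log_write")
sched.make_ready(0, "uart_isr_followup")   # 0 = most urgent
sched.make_ready(1, "mailbox_deliver")
order = [sched.dispatch() for _ in range(3)]
```

On the real 8051 target, a fixed-size ready table would typically replace the heap, since dynamic allocation is usually avoided in such kernels.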

  2. A new bipolar Qtrim power supply system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mi, C.; Bruno, D.; Drozd, J.

    2015-05-03

    This year marks the 15th run of RHIC (Relativistic Heavy Ion Collider) operations. The reliability of superconducting magnet power supplies is one of the essential factors in the entire accelerator complex. Besides maintaining existing power supplies and their associated equipment, newly designed systems are also required based on the physicists' latest requirements. A bipolar power supply was required for this year's main quadrupole trim power supply. This paper explains the design, prototyping, testing, installation and operation of this recently installed power supply system.

  3. Operating System For Numerically Controlled Milling Machine

    NASA Technical Reports Server (NTRS)

    Ray, R. B.

    1992-01-01

    OPMILL program is operating system for Kearney and Trecker milling machine providing fast easy way to program manufacture of machine parts with IBM-compatible personal computer. Gives machinist "equation plotter" feature, which plots equations that define movements and converts equations to milling-machine-controlling program moving cutter along defined path. System includes tool-manager software handling up to 25 tools and automatically adjusts to account for each tool. Developed on IBM PS/2 computer running DOS 3.3 with 1 MB of random-access memory.

  4. Queueing Network Models for Parallel Processing of Task Systems: an Operational Approach

    NASA Technical Reports Server (NTRS)

    Mak, Victor W. K.

    1986-01-01

    Computer performance modeling of possibly complex computations running on highly concurrent systems is considered. Earlier works in this area either dealt with a very simple program structure or resulted in methods with exponential complexity. An efficient procedure is developed to compute the performance measures for series-parallel-reducible task systems using queueing network models. The procedure is based on the concept of hierarchical decomposition and a new operational approach. Numerical results for three test cases are presented and compared to those of simulations.

  5. Initializing numerical weather prediction models with satellite-derived surface soil moisture: Data assimilation experiments with ECMWF's Integrated Forecast System and the TMI soil moisture data set

    NASA Astrophysics Data System (ADS)

    Drusch, M.

    2007-02-01

    Satellite-derived surface soil moisture data sets are readily available and have been used successfully in hydrological applications. In many operational numerical weather prediction systems the initial soil moisture conditions are analyzed from the modeled background and 2 m temperature and relative humidity. This approach has proven effective in improving surface latent and sensible heat fluxes, and consequently the forecasts, over large geographical domains. However, since soil moisture is not always related to screen-level variables, model errors and uncertainties in the forcing data can accumulate in root zone soil moisture. Remotely sensed surface soil moisture is directly linked to the model's uppermost soil layer and therefore is a stronger constraint for the soil moisture analysis. For this study, three data assimilation experiments with the Integrated Forecast System (IFS) of the European Centre for Medium-Range Weather Forecasts (ECMWF) were performed for the 2-month period of June and July 2002: a control run based on the operational soil moisture analysis, an open loop run with freely evolving soil moisture, and an experimental run incorporating TMI (TRMM Microwave Imager) derived soil moisture over the southern United States. In this experimental run the satellite-derived soil moisture product is introduced through a nudging scheme using 6-hourly increments. Apart from the soil moisture analysis, the system setup reflects the operational forecast configuration, including the atmospheric 4D-Var analysis. Soil moisture analyzed in the nudging experiment is the most accurate estimate when compared against in situ observations from the Oklahoma Mesonet. The corresponding forecasts for 2 m temperature and relative humidity are almost as accurate as in the control experiment. Furthermore, it is shown that the soil moisture analysis influences local weather parameters, including the planetary boundary layer height and cloud coverage.

  6. Santa Clara County Planar Solid Oxide Fuel Cell Demonstration Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fred Mitlitsky; Sara Mulhauser; David Chien

    2009-11-14

    The Santa Clara County Planar Solid Oxide Fuel Cell (PSOFC) project goals were to acquire, site, and demonstrate the technical viability of pre-commercial PSOFC technology at the County 911 Communications headquarters, as well as the input fuel flexibility of the PSOFC; operation was demonstrated on both natural gas and denatured ethanol. Additional goals included educating local permit approval authorities and other governmental entities about PSOFC technology, existing fuel cell standards, and specific code requirements. The project demonstrated the Bloom Energy (BE) PSOFC technology in grid-parallel mode, delivering a minimum of 15 kW over 8760 operational hours. The PSOFC system demonstrated greater than 81% electricity availability and 41% electrical efficiency (LHV net AC), providing reliable, stable power to a critical, sensitive 911 communications system that serves the entire Santa Clara County. The project also demonstrated input fuel flexibility: BE developed and demonstrated the capability to run its prototype PSOFC system on ethanol, designing the hardware necessary to deliver ethanol into its existing PSOFC system. Operational parameters were determined for running the system on ethanol, natural gas (NG), and a combination of both, and modeling was performed to determine viable operational regimes and regimes where coking could occur.

  7. NSTX-U Advances in Real-Time C++11 on Linux

    NASA Astrophysics Data System (ADS)

    Erickson, Keith G.

    2015-08-01

    Programming languages like C and Ada combined with proprietary embedded operating systems have dominated the real-time application space for decades. The new C++11 standard includes native, language-level support for concurrency, a required feature for any nontrivial event-oriented real-time software. Threads, locks, and atomics now exist to provide the necessary tools to build the structures that make up the foundation of a complex real-time system. The National Spherical Torus Experiment Upgrade (NSTX-U) at the Princeton Plasma Physics Laboratory (PPPL) is breaking new ground with the language as applied to the needs of fusion devices. A new Digital Coil Protection System (DCPS) will serve as the main protection mechanism for the magnetic coils, and it is written entirely in C++11 running on Concurrent Computer Corporation's real-time operating system, RedHawk Linux. It runs over 600 algorithms in a 5 kHz control loop to determine whether or not to shut down operations before physical damage occurs. To accomplish this, NSTX-U engineers developed software tools that do not currently exist elsewhere, including real-time atomic synchronization, real-time containers, and a real-time logging framework. Together with a recent (and carefully configured) version of the GCC compiler, these tools enable data acquisition, processing, and output using a conventional operating system to meet a hard real-time deadline (that is, missing even one period is a failure) of 200 microseconds.
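
    The concurrency primitives mentioned above can be illustrated with a minimal release/acquire handoff in portable C++11, the building block behind lock-free synchronization. The names and the payload value are illustrative, not taken from the NSTX-U code, and a real deployment would additionally pin threads under a real-time scheduler.

```cpp
#include <atomic>
#include <thread>

int datum = 0;
std::atomic<bool> ready{false};

// Producer: write the payload first, then publish it with release
// semantics so the write cannot be reordered past the flag.
void producer() {
    datum = 42;
    ready.store(true, std::memory_order_release);
}

// Consumer: spin until the flag is set; the acquire load pairs with the
// release store, guaranteeing the payload write is visible.
int consumer() {
    while (!ready.load(std::memory_order_acquire)) {}
    return datum;
}

// Run one handoff across two threads and return what the consumer saw.
int run_handoff() {
    int seen = 0;
    std::thread c([&seen] { seen = consumer(); });
    std::thread p(producer);
    p.join();
    c.join();
    return seen;
}
```

    A spin-wait like this trades CPU for latency, which is the usual choice when a 200-microsecond deadline rules out blocking on a lock.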

  8. A maritime decision support system to assess risk in the presence of environmental uncertainties: the REP10 experiment

    NASA Astrophysics Data System (ADS)

    Grasso, Raffaele; Cococcioni, Marco; Mourre, Baptiste; Chiggiato, Jacopo; Rixen, Michel

    2012-03-01

    The aim of this work is to report on an activity carried out during the 2010 Recognized Environmental Picture experiment, held in the Ligurian Sea during summer 2010. The activity was the first at-sea test of the recently developed decision support system (DSS) for operation planning, which had previously been tested in an artificial experiment. The DSS assesses the impact of both environmental conditions (meteorological and oceanographic) and non-environmental conditions (such as traffic density maps) on people and assets involved in the operation and helps in deciding a course of action that allows safer operation. More precisely, the environmental variables (such as wind speed, current speed and significant wave height) taken as input by the DSS are the ones forecasted by a super-ensemble model, which fuses the forecasts provided by multiple forecasting centres. The uncertainties associated with the DSS's inputs (generally due to disagreement between forecasts) are propagated to the DSS's output by using the unscented transform. In this way, the system is not only able to provide a traffic-light map (run / not run the operation), but also to specify the confidence level associated with each action. This feature was tested on a particular type of operation with underwater gliders: the glider surfacing for data transmission. It is also shown how the availability of a glider path prediction tool provides surfacing options along the predicted path. The applicability to different operations is demonstrated by applying the same system to support diver operations.
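
    The unscented-transform step can be sketched in one dimension: deterministic sigma points are pushed through a nonlinear function and reweighted to recover the output mean and variance. The risk function and the kappa value used below are illustrative stand-ins, not the DSS's operational choices.

```cpp
#include <cmath>
#include <functional>

struct Moments { double mean, var; };

// One-dimensional unscented transform (Julier-style sigma points):
// propagate an input mean/variance through a nonlinear function f.
Moments unscented_1d(double m, double var, double kappa,
                     const std::function<double(double)>& f) {
    const double n = 1.0;                          // input dimension
    double spread = std::sqrt((n + kappa) * var);  // sigma-point offset
    double x[3] = {m, m + spread, m - spread};
    double w[3] = {kappa / (n + kappa),
                   0.5 / (n + kappa), 0.5 / (n + kappa)};
    double y[3], ym = 0.0, yv = 0.0;
    for (int i = 0; i < 3; ++i) { y[i] = f(x[i]); ym += w[i] * y[i]; }
    for (int i = 0; i < 3; ++i) yv += w[i] * (y[i] - ym) * (y[i] - ym);
    return {ym, yv};
}
```

    For a linear function the transform is exact, which gives a convenient sanity check; the payoff is that it also captures the leading nonlinear effects without computing derivatives of f.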

  9. Support for User Interfaces for Distributed Systems

    NASA Technical Reports Server (NTRS)

    Eychaner, Glenn; Niessner, Albert

    2005-01-01

    An extensible Java(TradeMark) software framework supports the construction and operation of graphical user interfaces (GUIs) for distributed computing systems typified by ground control systems that send commands to, and receive telemetric data from, spacecraft. Heretofore, such GUIs have been custom built for each new system at considerable expense. In contrast, the present framework affords generic capabilities that can be shared by different distributed systems. Dynamic class loading, reflection, and other run-time capabilities of the Java language and JavaBeans component architecture enable the creation of a GUI for each new distributed computing system with a minimum of custom effort. By use of this framework, GUI components in control panels and menus can send commands to a particular distributed system with a minimum of system-specific code. The framework receives, decodes, processes, and displays telemetry data; custom telemetry data handling can be added for a particular system. The framework supports saving and later restoration of users' configurations of control panels and telemetry displays with a minimum of effort in writing system-specific code. GUIs constructed within this framework can be deployed in any operating system with a Java run-time environment, without recompilation or code changes.

  10. Alternative IT Sourcing: A Discussion of Privacy, Security, and Risk

    ERIC Educational Resources Information Center

    Petersen, Rodney

    2011-01-01

    The sourcing of IT systems and services takes many shapes in higher education. Campus central IT organizations are increasingly responsible for the administration of enterprise systems and for the consolidation of operations into a single data center. Specialized academic and administrative systems may be run by local IT departments. In addition,…

  11. A perioperative echocardiographic reporting and recording system.

    PubMed

    Pybus, David A

    2004-11-01

    Advances in video capture, compression, and streaming technology, coupled with improvements in central processing unit design and the inclusion of a database engine in the Windows operating system, have simplified the task of implementing a digital echocardiographic recording system. I describe an application that uses these technologies and runs on a notebook computer.

  12. Brahms Mobile Agents: Architecture and Field Tests

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhuis, Maarten; Kaskiris, Charis; vanHoof, Ron

    2002-01-01

    We have developed a model-based, distributed architecture that integrates diverse components in a system designed for lunar and planetary surface operations: an astronaut's space suit, cameras, rover/All-Terrain Vehicle (ATV), robotic assistant, other personnel in a local habitat, and a remote mission support team (with time delay). Software processes, called agents, implemented in the Brahms language, run on multiple, mobile platforms. These mobile agents interpret and transform available data to help people and robotic systems coordinate their actions to make operations more safe and efficient. The Brahms-based mobile agent architecture (MAA) uses a novel combination of agent types so the software agents may understand and facilitate communications between people and between system components. A state-of-the-art spoken dialogue interface is integrated with Brahms models, supporting a speech-driven field observation record and rover command system (e.g., "return here later" and "bring this back to the habitat"). This combination of agents, rover, and model-based spoken dialogue interface constitutes a personal assistant. An important aspect of the methodology involves first simulating the entire system in Brahms, then configuring the agents into a run-time system.

  13. Launching applications on compute and service processors running under different operating systems in scalable network of processor boards with routers

    DOEpatents

    Tomkins, James L [Albuquerque, NM; Camp, William J [Albuquerque, NM

    2009-03-17

    A multiple processor computing apparatus includes a physical interconnect structure that is flexibly configurable to support selective segregation of classified and unclassified users. The physical interconnect structure also permits easy physical scalability of the computing apparatus. The computing apparatus can include an emulator which permits applications from the same job to be launched on processors that use different operating systems.

  14. User's guide for SYSTUM-1 (Version 2.0): A simulator of growth trends in young stands under management in California and Oregon

    Treesearch

    Martin W. Ritchie; Robert F. Powers

    1993-01-01

    SYSTUM-1 is an individual-tree/distance-independent simulator developed for use in young plantations in California and southern Oregon. The program was developed to run under the DOS operating system and requires DOS 3.0 or higher running on an 8086 or higher processor. The simulator is designed to provide a link with existing PC-based simulators (CACTOS and ORGANON)...

  15. The instant sequencing task: Toward constraint-checking a complex spacecraft command sequence interactively

    NASA Technical Reports Server (NTRS)

    Horvath, Joan C.; Alkalaj, Leon J.; Schneider, Karl M.; Amador, Arthur V.; Spitale, Joseph N.

    1993-01-01

    Robotic spacecraft are controlled by sets of commands called 'sequences.' These sequences must be checked against mission constraints. Making our existing constraint checking program faster would enable new capabilities in our uplink process. Therefore, we are rewriting this program to run on a parallel computer. To do so, we had to determine how to run constraint-checking algorithms in parallel and create a new method of specifying spacecraft models and constraints. This new specification gives us a means of representing flight systems and their predicted response to commands which could be used in a variety of applications throughout the command process, particularly during anomaly or high-activity operations. This commonality could reduce operations cost and risk for future complex missions. Lessons learned in applying some parts of this system to the TOPEX/Poseidon mission will be described.

  16. Universal Serial Bus Architecture for Removable Media (USB-ARM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2011-03-09

    USB-ARM creates operating system drivers which sit between removable media and the user and applications. The drivers isolate the media and submit the contents of the media to a virtual machine containing an entire scanning system. This scanning system may include traditional anti-virus, but also allows more detailed analysis of files, including dynamic run-time analysis, helping to prevent "zero-day" threats not already identified in anti-virus signatures. Once cleared, the media is presented to the operating system, at which point it becomes available to users and applications.

  17. Remote media vision-based computer input device

    NASA Astrophysics Data System (ADS)

    Arabnia, Hamid R.; Chen, Ching-Yi

    1991-11-01

    In this paper, we introduce a vision-based computer input device which has been built at the University of Georgia. The user of this system gives commands to the computer without touching any physical device. The system receives input through a CCD camera; it is PC-based and is built on top of the DOS operating system. The major components of the input device are: a monitor, an image capturing board, a CCD camera, and some software (developed by us). These are interfaced with a standard PC running under the DOS operating system.

  18. Computing Operating Characteristics Of Bearing/Shaft Systems

    NASA Technical Reports Server (NTRS)

    Moore, James D.

    1996-01-01

    SHABERTH computer program predicts operating characteristics of bearings in multibearing load-support system. Lubricated and nonlubricated bearings modeled. Calculates loads, torques, temperatures, and fatigue lives of ball and/or roller bearings on single shaft. Provides for analysis of reaction of system to termination of supply of lubricant to bearings and other lubricated mechanical elements. Valuable in design and analysis of shaft/bearing systems. Two versions of SHABERTH available. Cray version (LEW-14860), "Computing Thermal Performances Of Shafts and Bearings". IBM PC version (MFS-28818), written for IBM PC-series and compatible computers running MS-DOS.

  19. Canadian Operational Air Quality Forecasting Systems: Status, Recent Progress, and Challenges

    NASA Astrophysics Data System (ADS)

    Pavlovic, Radenko; Davignon, Didier; Ménard, Sylvain; Munoz-Alpizar, Rodrigo; Landry, Hugo; Beaulieu, Paul-André; Gilbert, Samuel; Moran, Michael; Chen, Jack

    2017-04-01

    ECCC's Canadian Meteorological Centre Operations (CMCO) division runs a number of operational air quality (AQ)-related systems that revolve around the Regional Air Quality Deterministic Prediction System (RAQDPS). The RAQDPS generates 48-hour AQ forecasts and outputs hourly concentration fields of O3, PM2.5, NO2, and other pollutants twice daily on a North American domain with 10-km horizontal grid spacing and 80 vertical levels. A closely related AQ forecast system with near-real-time wildfire emissions, known as FireWork, has been run by CMCO during the Canadian wildfire season (April to October) since 2014. This system became operational in June 2016. The CMCO's operational AQ forecast systems also benefit from several support systems, such as a statistical post-processing model called UMOS-AQ that is applied to enhance forecast reliability at point locations with AQ monitors. The Regional Deterministic Air Quality Analysis (RDAQA) system has also been connected to the RAQDPS since February 2013, and hourly surface objective analyses are now available for O3, PM2.5, NO2, PM10, SO2 and, indirectly, the Canadian Air Quality Health Index. As of June 2015, another version of the RDAQA has been connected to FireWork (RDAQA-FW). For verification purposes, CMCO developed a third support system called Verification for Air QUality Models (VAQUM), which has a geospatial relational database core and which enables continuous monitoring of the AQ forecast systems' performance. Urban environments are particularly subject to air pollution. In order to improve the services offered, ECCC has recently been investing efforts to develop a high-resolution air quality prediction capability for urban areas in Canada. In this presentation, a comprehensive description of the ECCC AQ systems will be provided, along with a discussion of AQ system performance. Recent improvements, current challenges, and future directions of the Canadian operational AQ program will also be discussed.

  20. An Innovative Running Wheel-based Mechanism for Improved Rat Training Performance.

    PubMed

    Chen, Chi-Chun; Yang, Chin-Lung; Chang, Ching-Ping

    2016-09-19

    This study presents an animal mobility system, equipped with a positioning running wheel (PRW), as a way to quantify the efficacy of an exercise activity for reducing the severity of the effects of stroke in rats. This system provides more effective animal exercise training than commercially available systems such as treadmills and motorized running wheels (MRWs). In contrast to an MRW that can only achieve speeds below 20 m/min, rats are permitted to run at a stable speed of 30 m/min on a more spacious and high-density rubber running track supported by a 15 cm wide acrylic wheel with a diameter of 55 cm in this work. Using a predefined adaptive acceleration curve, the system not only reduces operator error but also trains the rats to run persistently until a specified intensity is reached. As a way to evaluate exercise effectiveness, the real-time position of a rat is detected by four pairs of infrared sensors deployed on the running wheel. Once an adaptive acceleration curve is initiated using a microcontroller, the data obtained by the infrared sensors are automatically recorded and analyzed in a computer. For comparison purposes, 3-week training was conducted on rats using a treadmill, an MRW and a PRW. After middle cerebral artery occlusion (MCAo) was surgically induced, modified neurological severity scores (mNSS) and an inclined plane test were used to assess the neurological damage to the rats. The PRW is experimentally validated as the most effective among such animal mobility systems. Furthermore, an exercise effectiveness measure, based on rat position analysis, showed that there is a high negative correlation between effective exercise and infarct volume, and can be employed to quantify rat training in any type of brain-damage-reduction experiment.

  1. The SISMA Project: A pre-operative seismic hazard monitoring system.

    NASA Astrophysics Data System (ADS)

    Chersich, Massimiliano; Amodio, Angelo; Francia, Andrea; Sparpaglione, Claudio

    2009-04-01

    Galileian Plus is currently leading the development, in collaboration with several Italian universities, of the SISMA (Seismic Information System for Monitoring and Alert) Pilot Project, financed by the Italian Space Agency. The system is devoted to the continuous monitoring of seismic risk and is intended to support the Italian Civil Protection decisional process. Completion of the Pilot Project is planned for the beginning of 2010. The main scientific paradigm of SISMA is an innovative deterministic approach integrating geophysical models, geodesy and active tectonics. This paper gives a general overview of the project along with its progress status, with particular focus on the architectural design details and the software implementation choices. SISMA is built on top of a software infrastructure developed by Galileian Plus to integrate the scientific programs devoted to the update of seismic risk maps. The main characteristics of the system may be summarized as follows: automatic download of input data; integration of scientific programs; definition and scheduling of chains of processes; monitoring and control of the system through a graphical user interface (GUI); compatibility of the products with ESRI ArcGIS, by means of post-processing conversion. a) Automatic download of input data. SISMA needs input data such as GNSS observations, an updated seismic catalogue, SAR satellite orbits, etc., that are periodically updated and made available from remote servers through FTP and HTTP. This task is accomplished by a dedicated user-configurable component. b) Integration of scientific programs. SISMA integrates many scientific programs written in different languages (Fortran, C, C++, Perl and Bash) and running on different operating systems. These design requirements led to the development of a distributed system which is platform independent and able to run any terminal-based program following a few simple predefined rules.
c) Definition and scheduling of chains of processes. Processes are bound to each other, in the sense that the output of process "A" should be passed as input to process "B". In this case process "B" must run automatically as soon as the required input is ready. In SISMA this issue is handled with the "data-driven" activation concept, which allows specifying that a process should be started as soon as the needed input datum has been made available in the archive. Moreover, SISMA may run processes on a "time-driven" basis. The infrastructure of SISMA provides a configurable scheduler allowing the user to define the start time and the periodicity of such processes. d) Monitoring and control. The operator of the system needs to monitor and control every process running in the system. The SISMA infrastructure allows the user, through its GUI, to: view log messages of running and old processes; stop running processes; monitor process executions; and monitor resource status (available RAM, network reachability, and available disk space) for every machine in the system. e) Compatibility with ESRI Shapefiles. Nearly all SISMA data carries some geographic information, and it is useful to integrate it in a Geographic Information System (GIS). Processor outputs are georeferenced, but they are generated as ASCII files in a proprietary format and thus cannot be loaded directly into a GIS. The infrastructure provides a simple framework for adding filters that read the data in the proprietary format and convert it to ESRI Shapefile format.
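
    The "data-driven" activation rule in (c) can be sketched as a check of each process's required input datum against the archive: a process becomes runnable the moment its input appears. Process and datum names below are hypothetical, not taken from SISMA.

```cpp
#include <set>
#include <string>
#include <vector>

// Hypothetical process descriptor: a name and the single input datum it
// waits for (real chains may declare several inputs).
struct Process {
    std::string name;
    std::string needs;
};

// Data-driven activation: return the processes whose required input
// datum is already present in the archive.
std::vector<std::string> ready_to_run(const std::vector<Process>& procs,
                                      const std::set<std::string>& archive) {
    std::vector<std::string> runnable;
    for (const Process& p : procs)
        if (archive.count(p.needs))
            runnable.push_back(p.name);
    return runnable;
}
```

    A scheduler built on this check would re-evaluate it whenever a download or an upstream process deposits a new datum, and would combine it with the "time-driven" timer activations described above.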

  2. jade: An End-To-End Data Transfer and Catalog Tool

    NASA Astrophysics Data System (ADS)

    Meade, P.

    2017-10-01

    The IceCube Neutrino Observatory is a cubic kilometer neutrino telescope located at the Geographic South Pole. IceCube collects 1 TB of data every day. An online filtering farm processes this data in real time and selects 10% to be sent via satellite to the main data center at the University of Wisconsin-Madison. IceCube has two year-round on-site operators. New operators are hired every year, due to the hard conditions of wintering at the South Pole. These operators are tasked with the daily operations of running a complex detector in serious isolation conditions. One of the systems they operate is the data archiving and transfer system. Due to these challenging operational conditions, the data archive and transfer system must above all be simple and robust. It must also share the limited resource of satellite bandwidth, and collect and preserve useful metadata. The original data archive and transfer software for IceCube was written in 2005. After running in production for several years, the decision was taken to fully rewrite it, in order to address a number of structural drawbacks. The new data archive and transfer software (JADE2) has been in production for several months providing improved performance and resiliency. One of the main goals for JADE2 is to provide a unified system that handles the IceCube data end-to-end: from collection at the South Pole, all the way to long-term archive and preservation in dedicated repositories at the North. In this contribution, we describe our experiences and lessons learned from developing and operating the data archive and transfer software for a particle physics experiment in extreme operational conditions like IceCube.

  3. CMS Data Processing Workflows during an Extended Cosmic Ray Run

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2009-11-01

    The CMS Collaboration conducted a month-long data taking exercise, the Cosmic Run At Four Tesla, during October-November 2008, with the goal of commissioning the experiment for extended operation. With all installed detector systems participating, CMS recorded 270 million cosmic ray events with the solenoid at a magnetic field strength of 3.8 T. This paper describes the data flow from the detector through the various online and offline computing systems, as well as the workflows used for recording the data, for aligning and calibrating the detector, and for analysis of the data.

  4. The event notification and alarm system for the Open Science Grid operations center

    NASA Astrophysics Data System (ADS)

    Hayashi, S.; Teige, S.; Quick, R.

    2012-12-01

    The Open Science Grid (OSG) Operations Team operates a distributed set of services and tools that enable the utilization of the OSG by several HEP projects. Without these services, users of the OSG would not be able to run jobs, locate resources, obtain information about the status of systems or generally use the OSG. For this reason these services must be highly available. This paper describes the automated monitoring and notification systems used to diagnose and report problems. Described here are the means used by OSG Operations to monitor systems such as physical facilities, network operations, server health, service availability and software error events. Once detected, an error condition generates a message sent to, for example, email, SMS, Twitter, an instant message server, etc. The mechanism being developed to integrate these monitoring systems into a prioritized and configurable alarming system is emphasized.

  5. A web-based remote radiation treatment planning system using the remote desktop function of a computer operating system: a preliminary report.

    PubMed

    Suzuki, Keishiro; Hirasawa, Yukinori; Yaegashi, Yuji; Miyamoto, Hideki; Shirato, Hiroki

    2009-01-01

    We developed a web-based, remote radiation treatment planning system which allowed staff at an affiliated hospital to obtain support from a fully staffed central institution. Network security was based on a firewall and a virtual private network (VPN). Client computers were installed at a cancer centre, at a university hospital and at a staff home. We remotely operated the treatment planning computer using the Remote Desktop function built into the Windows operating system. Except for the initial setup of the VPN router, no special knowledge was needed to operate the remote radiation treatment planning system. There was a time lag that seemed to depend on the volume of data traffic on the Internet, but it did not affect smooth operation. The initial cost and running cost of the system were reasonable.

  6. Operational test of the prototype peewee yarder.

    Treesearch

    Charles N. Mann; Ronald W. Mifflin

    1979-01-01

    An operational test of a small, prototype running skyline yarder was conducted early in 1978. Test results indicate that this yarder concept promises a low cost, high performance system for harvesting small logs where skyline methods are indicated. Timber harvest by thinning took place on 12 uphill and 2 downhill skyline roads, and clearcut harvesting was performed on...

  7. Alternative Fuels Data Center: Installing New E85 Equipment

    Science.gov Websites

    Hiring a Project Contractor: In most cases, a fleet operator hires a project contractor to alter the onsite fueling system. This is often done through a bid process, especially if it is a fueling site operated by a government entity. The contractor is responsible for project oversight

  8. Diagnostic Utility of the Social Skills Improvement System Performance Screening Guide

    ERIC Educational Resources Information Center

    Krach, S. Kathleen; McCreery, Michael P.; Wang, Ye; Mohammadiamin, Houra; Cirks, Christen K.

    2017-01-01

    Researchers investigated the diagnostic utility of the Social Skills Improvement System: Performance Screening Guide (SSIS-PSG). Correlational, regression, receiver operating characteristic (ROC), and conditional probability analyses were run to compare ratings on the SSIS-PSG subscales of Prosocial Behavior, Reading Skills, and Math Skills, to…

  9. UNIX Micros for Students Majoring in Computer Science and Personal Information Retrieval.

    ERIC Educational Resources Information Center

    Fox, Edward A.; Birch, Sandra

    1986-01-01

    Traces the history of Virginia Tech's requirement that incoming freshmen majoring in computer science each acquire a microcomputer running the UNIX operating system; explores rationale for the decision; explains system's key features; and describes program implementation and research and development efforts to provide personal information…

  10. SHIRCO PILOT-SCALE INFRARED INCINERATION SYSTEM AT THE ROSE TOWNSHIP DEMODE ROAD SUPERFUND SITE

    EPA Science Inventory

    Under the Superfund Innovative Technology Evaluation or SITE Program, an evaluation was made of the Shirco Pilot-Scale Infrared Incineration System during 17 separate test runs under varying operating conditions. The tests were conducted at the Demode Road Superfund site in Ros...

  11. Simple Library Bookkeeping.

    ERIC Educational Resources Information Center

    Hoffman, Herbert H.

    A simple and cheap manual double entry continuous transaction posting system with running balances is developed for bookkeeping by small libraries. A very small library may operate without any system of fiscal control but when a library's budget approaches three figures, some kind of bookkeeping must be introduced. To maintain control over his…

  12. Virtualization and cloud computing in dentistry.

    PubMed

    Chow, Frank; Muftu, Ali; Shorter, Richard

    2014-01-01

    The use of virtualization and cloud computing has changed the way we use computers. Virtualization is a method of placing software called a hypervisor on the hardware of a computer or a host operating system. It allows a guest operating system to run on top of the physical computer with a virtual machine (i.e., virtual computer). Virtualization allows multiple virtual computers to run on top of one physical computer and to share its hardware resources, such as printers, scanners, and modems. This increases the efficient use of the computer by decreasing costs (e.g., hardware, electricity, administration, and management) since only one physical computer needs to be purchased and kept running. This virtualization platform is the basis for cloud computing. It has expanded into areas of server and storage virtualization. One of the commonly used dental storage systems is cloud storage. Patient information is encrypted as required by the Health Insurance Portability and Accountability Act (HIPAA) and stored on off-site private cloud services for a monthly service fee. As computer costs continue to increase, so too will the need for more storage and processing power. Virtual and cloud computing will be a method for dentists to minimize costs and maximize computer efficiency in the near future. This article will provide some useful information on current uses of cloud computing.

  13. Interactive Forecasting with the National Weather Service River Forecast System

    NASA Technical Reports Server (NTRS)

    Smith, George F.; Page, Donna

    1993-01-01

    The National Weather Service River Forecast System (NWSRFS) consists of several major hydrometeorologic subcomponents to model the physics of the flow of water through the hydrologic cycle. The entire NWSRFS currently runs in both mainframe and minicomputer environments, using command oriented text input to control the system computations. As computationally powerful and graphically sophisticated scientific workstations became available, the National Weather Service (NWS) recognized that a graphically based, interactive environment would enhance the accuracy and timeliness of NWS river and flood forecasts. Consequently, the operational forecasting portion of the NWSRFS has been ported to run under a UNIX operating system, with X windows as the display environment on a system of networked scientific workstations. In addition, the NWSRFS Interactive Forecast Program was developed to provide a graphical user interface to allow the forecaster to control NWSRFS program flow and to make adjustments to forecasts as necessary. The potential market for water resources forecasting is immense and largely untapped. Any private company able to market the river forecasting technologies currently developed by the NWS Office of Hydrology could provide benefits to many information users and profit from providing these services.

  14. A compact free space quantum key distribution system capable of daylight operation

    NASA Astrophysics Data System (ADS)

    Benton, David M.; Gorman, Phillip M.; Tapster, Paul R.; Taylor, David M.

    2010-06-01

    A free space quantum key distribution system has been demonstrated. Consideration has been given to factors such as field of view and spectral width in order to reduce the deleterious effects of background light. Suitable optical sources such as lasers and RCLEDs have been investigated, as well as optimal wavelength choices, always with a view to building a compact and robust system. The implementation of background reduction measures resulted in a system capable of operating in daylight conditions. An autonomous system was left running and generating shared key material continuously for over 7 days.

  15. UNIX-based operating systems robustness evaluation

    NASA Technical Reports Server (NTRS)

    Chang, Yu-Ming

    1996-01-01

    Robust operating systems are required for reliable computing. Techniques for robustness evaluation of operating systems not only enhance the understanding of the reliability of computer systems, but also provide valuable feedback to system designers. This thesis presents results from robustness evaluation experiments on five UNIX-based operating systems, which include Digital Equipment's OSF/1, Hewlett Packard's HP-UX, Sun Microsystems' Solaris and SunOS, and Silicon Graphics' IRIX. Three sets of experiments were performed. The methodology for evaluation tested (1) the exception handling mechanism, (2) system resource management, and (3) system capacity under high workload stress. An exception generator was used to evaluate the exception handling mechanism of the operating systems. Results included the exit status of the exception generator and the system state. Resource management techniques used by individual operating systems were tested using programs designed to usurp system resources such as physical memory and process slots. Finally, the workload stress testing evaluated the effect of the workload on system performance by running a synthetic workload and recording the response time of local and remote user requests. Moderate to severe performance degradations were observed on the systems under stress.
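    The exception-handling portion of such an evaluation can be illustrated in user space. The sketch below is not the thesis's actual exception generator; it is a hypothetical Python analogue that invokes a few deliberately faulty OS calls and records how each one is reported, mirroring the idea of collecting an exit status per injected fault.

```python
import os

# Deliberately faulty OS-level calls (a hypothetical probe set).
FAULTY_CALLS = {
    "close_bad_fd": lambda: os.close(999999),
    "read_bad_fd": lambda: os.read(999999, 10),
    "chdir_missing": lambda: os.chdir("/nonexistent-robustness-probe"),
}

def probe(calls):
    """Invoke each faulty call and record how the OS interface reports it.

    A robust system should return a clean, well-defined error for every
    probe rather than corrupting state or crashing the caller.
    """
    report = {}
    for name, call in calls.items():
        try:
            call()
            report[name] = "no error"
        except OSError as exc:
            report[name] = "errno %d" % exc.errno
    return report
```

    Each probe here should come back with a well-defined errno; a real robustness harness would run thousands of such cases, including kernel-boundary faults that cannot safely be shown in a sketch.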

  16. First Assessment of Itaipu Dam Ensemble Inflow Forecasting System

    NASA Astrophysics Data System (ADS)

    Mainardi Fan, Fernando; Machado Vieira Lisboa, Auder; Gomes Villa Trinidad, Giovanni; Rógenes Monteiro Pontes, Paulo; Collischonn, Walter; Tucci, Carlos; Costa Buarque, Diogo

    2017-04-01

    Inflow forecasting for hydropower plant (HPP) dams is one of the prominent uses of hydrological forecasts. A very important HPP in terms of energy generation for South America is the Itaipu Dam, located on the Paraná River between Brazil and Paraguay, with a drainage area of 820,000 km2. In this work, we present the development of an ensemble forecasting system for Itaipu, operational since November 2015. The system is based on the MGB-IPH hydrological model, includes hydrodynamic simulations of the main river, and is run every morning forced by seven different rainfall forecasts: (i) CPTEC-ETA 15 km; (ii) CPTEC-BRAMS 5 km; (iii) SIMEPAR WRF Ferrier; (iv) SIMEPAR WRF Lin; (v) SIMEPAR WRF Morrison; (vi) SIMEPAR WRF WDM6; (vii) SIMEPAR MEDIAN. The last one (vii) corresponds to the median of the rainfall forecasts from the SIMEPAR WRF model versions (iii to vi). Besides the developed system, the "traditional" method of generating inflow forecasts for the Itaipu Dam is also run every day; it approximates future inflow from the discharge tendency at upstream telemetric gauges. Nowadays, after all the forecasts are run, the hydrology team of Itaipu develops a consensus forecast, based on all obtained results, which is the one used for the operation of the Itaipu HPP dam. After one year of operation, a first evaluation of the ensemble forecasting system was conducted. Results show that the system performs satisfactorily for rising flows up to a five-day lead time. However, most ensemble members also issued some false alarms, and the system did not outperform the traditional method in all cases, especially during hydrograph recessions. In terms of meteorological forecasts, the use of some members is being discontinued. In terms of the hydrodynamic representation, better river cross-section information could improve forecasts of hydrograph recession curves. These opportunities for improvement are currently being addressed in the system's next update.

  17. Operational Oceanograhy System for Oil Spill Risk Management at Santander Bay (Spain)

    NASA Astrophysics Data System (ADS)

    Castanedo Bárcena, S.; Nuñez, P.; Perez-Diaz, B.; Abascal, A.; Cardenas, M.; Medina, R.

    2016-02-01

    Estuaries and bays are sheltered areas that usually host a wide range of industry and interests (e.g., aquaculture, fishing, recreation, habitat protection). Oil spill risk assessment in these environments is fundamental given the reduced response time associated with this very local scale. This work presents a system comprising two modules: (1) an Operational Oceanography System (OOS) based on nested high-resolution models, which provides short-term (within 48 hours) oil spill trajectory forecasting, and (2) an oil spill risk assessment system (OSRAS) that estimates risk as the combination of hazard and vulnerability. Hazard is defined as the probability that the coast will be polluted by an oil spill and is calculated on the basis of a library of pre-run cases. The OOS comprises: (1) daily boundary conditions (sea level, ocean currents, salinity and temperature) and meteorological forcing, obtained from the European network MYOCEAN and from the Spanish met office, AEMET, respectively; (2) the COAWST modelling system, which is the engine of the OOS (at this stage of the project only ROMS is active); (3) an oil spill transport and fate model, TESEO; and (4) a web service that manages the operational system and allows the user to run hypothetical as well as real oil spill trajectories using the daily forecast of wind and high-resolution ocean variables produced by COAWST. Regarding the OSRAS, the main contributions of this work are: (1) the use of an extensive meteorological and oceanographic database provided by state-of-the-art ocean and atmospheric models, (2) the use of clustering techniques to establish representative met-ocean scenarios (i.e., combinations of sea state, meteorological conditions, tide and river flow), (3) dynamic downscaling of the met-ocean scenarios with the COAWST modelling system, and (4) management of hundreds of runs performed with the state-of-the-art oil spill transport model TESEO.

  18. Low-cost optical data acquisition system for blade vibration measurement

    NASA Technical Reports Server (NTRS)

    Posta, Stephen J.

    1988-01-01

    A low cost optical data acquisition system was designed to measure deflection of vibrating rotor blade tips. The basic principle of the new design is to record raw data, which is a set of blade arrival times, in memory and to perform all processing by software following a run. This approach yields a simple and inexpensive system with the least possible hardware. Functional elements of the system were breadboarded and operated satisfactorily during rotor simulations on the bench, and during a data collection run with a two-bladed rotor in the Lewis Research Center Spin Rig. Software was written to demonstrate the sorting and processing of data stored in the system control computer, after retrieval from the data acquisition system. The demonstration produced an accurate graphical display of deflection versus time.

  19. Build and Execute Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guan, Qiang

    At exascale, the challenge becomes to develop applications that run at scale and use exascale platforms reliably, efficiently, and flexibly. Workflows become much more complex because they must seamlessly integrate simulation and data analytics. They must include down-sampling, post-processing, feature extraction, and visualization. Power and data transfer limitations require these analysis tasks to be run in-situ or in-transit. We expect successful workflows will comprise multiple linked simulations along with tens of analysis routines. Users will have limited development time at scale and, therefore, must have rich tools to develop, debug, test, and deploy applications. At this scale, successful workflows will compose linked computations from an assortment of reliable, well-defined computation elements, ones that can come and go as required, based on the needs of the workflow over time. We propose a novel framework that utilizes both virtual machines (VMs) and software containers to create a workflow system that establishes a uniform build and execution environment (BEE) beyond the capabilities of current systems. In this environment, applications will run reliably and repeatably across heterogeneous hardware and software. Containers, both commercial (Docker and Rocket) and open-source (LXC and LXD), define a runtime that isolates all software dependencies from the machine operating system. Workflows may contain multiple containers that run different operating systems, different software, and even different versions of the same software. We will run containers in open-source virtual machines (KVM) and emulators (QEMU) so that workflows run on any machine entirely in user-space. On this platform of containers and virtual machines, we will deliver workflow software that provides services, including repeatable execution, provenance, checkpointing, and future proofing.
    We will capture provenance about how containers were launched and how they interact to annotate workflows for repeatable and partial re-execution. We will coordinate the physical snapshots of virtual machines with parallel programming constructs, such as barriers, to automate checkpoint and restart. We will also integrate with HPC-specific container runtimes to gain access to accelerators and other specialized hardware to preserve native performance. Containers will link development to continuous integration. When application developers check code in, it will automatically be tested on a suite of different software and hardware architectures.

  20. Optimizing a mobile robot control system using GPU acceleration

    NASA Astrophysics Data System (ADS)

    Tuck, Nat; McGuinness, Michael; Martin, Fred

    2012-01-01

    This paper describes our attempt to optimize a robot control program for the Intelligent Ground Vehicle Competition (IGVC) by running computationally intensive portions of the system on a commodity graphics processing unit (GPU). The IGVC Autonomous Challenge requires a control program that performs a number of different computationally intensive tasks ranging from computer vision to path planning. For the 2011 competition our Robot Operating System (ROS) based control system would not run comfortably on the multicore CPU on our custom robot platform. The process of profiling the ROS control program and selecting appropriate modules for porting to run on a GPU is described. A GPU-targeting compiler, Bacon, is used to speed up development and help optimize the ported modules. The impact of the ported modules on overall performance is discussed. We conclude that GPU optimization can free a significant amount of CPU resources with minimal effort for expensive user-written code, but that replacing heavily-optimized library functions is more difficult, and a much less efficient use of time.

  1. Colt: an experiment in wormhole run-time reconfiguration

    NASA Astrophysics Data System (ADS)

    Bittner, Ray; Athanas, Peter M.; Musgrove, Mark

    1996-10-01

    Wormhole run-time reconfiguration (RTR) is an attempt to create a refined computing paradigm for high performance computational tasks. By combining concepts from field programmable gate array (FPGA) technologies with data flow computing, the Colt/Stallion architecture achieves high utilization of hardware resources, and facilitates rapid run-time reconfiguration. Targeted mainly at DSP-type operations, the Colt integrated circuit -- a prototype wormhole RTR device -- compares favorably to contemporary DSP alternatives in terms of silicon area consumed per unit computation and in computing performance. Although emphasis has been placed on signal processing applications, general purpose computation has not been overlooked. Colt is a prototype that defines an architecture not only at the chip level but also in terms of an overall system design. As this system is realized, the concept of wormhole RTR will be applied to numerical computation and DSP applications including those common to image processing, communications systems, digital filters, acoustic processing, real-time control systems and simulation acceleration.

  2. Development of the integrated control system for the microwave ion source of the PEFP 100-MeV proton accelerator

    NASA Astrophysics Data System (ADS)

    Song, Young-Gi; Seol, Kyung-Tae; Jang, Ji-Ho; Kwon, Hyeok-Jung; Cho, Yong-Sub

    2012-07-01

    The Proton Engineering Frontier Project (PEFP) 20-MeV proton linear accelerator is currently operating at the Korea Atomic Energy Research Institute (KAERI). The ion source of the 100-MeV proton linac needs an operation time of at least 100 hours. To meet this goal, we have developed a microwave ion source that uses no filament. For the ion source, a remote control system has been developed using the Experimental Physics and Industrial Control System (EPICS) software framework. The control system consists of a Versa Module Europa (VME) system and EPICS-based embedded applications running on a VxWorks real-time operating system. The main purpose of the control system is to control and monitor the operational variables of the components remotely and to protect operators from radiation exposure and the components from critical problems during beam extraction. We successfully performed an operation test of the control system to confirm its safety and hardware performance.

  3. Controlling Laboratory Processes From A Personal Computer

    NASA Technical Reports Server (NTRS)

    Will, H.; Mackin, M. A.

    1991-01-01

    Computer program provides natural-language process control from IBM PC or compatible computer. Sets up process-control system that either runs without an operator or is run by workers who have limited programming skills. Includes three smaller programs. Two of them, written in FORTRAN 77, record data and control research processes. Third program, written in Pascal, generates FORTRAN subroutines used by other two programs to identify user commands with device-driving routines written by user. Also includes set of input data allowing user to define user commands to be executed by computer. Requires personal computer operating under MS-DOS with suitable hardware interfaces to all controlled devices. Also requires FORTRAN 77 compiler and device drivers written by user.

  4. Energy Efficiency Model for Induction Furnace

    NASA Astrophysics Data System (ADS)

    Dey, Asit Kr

    2018-01-01

    In this paper, a solar induction furnace unit was designed to provide an alternative to the existing AC-powered heating process through a supervisory control and data acquisition (SCADA) system. This unit can be connected directly to the DC system without any internal conversion inside the device. The performance of the new solution is compared with the existing one in terms of power consumption and losses. This work also investigated energy savings, system improvement, and a process control model for a foundry induction furnace heating framework supplied by PV solar power. The results are analysed over the long run in terms of energy savings and the integrated process system. The SCADA-based solar foundry plant is an extremely multifaceted system that can be run over an almost innumerable range of operating conditions, each characterized by specific energy consumption. Determining ideal operating conditions is a key challenge that requires the latest automation technologies, each contributing not only to the acquisition, processing, storage, retrieval and visualization of data, but also to the implementation of automatic control strategies that can expand the achievement envelope in terms of melting process, safety and energy efficiency.

  5. Mean Line Pump Flow Model in Rocket Engine System Simulation

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.; Lavelle, Thomas M.

    2000-01-01

    A mean line pump flow modeling method has been developed to provide a fast capability for modeling turbopumps of rocket engines. Based on this method, a mean line pump flow code PUMPA has been written that can predict the performance of pumps at off-design operating conditions, given the loss of the diffusion system at the design point. The pump code can model axial flow inducers, mixed-flow and centrifugal pumps. The code can model multistage pumps in series. The code features rapid input setup and computer run time, and is an effective analysis and conceptual design tool. The map generation capability of the code provides the map information needed for interfacing with a rocket engine system modeling code. The off-design and multistage modeling capabilities of the code permit parametric design space exploration of candidate pump configurations and provide pump performance data for engine system evaluation. The PUMPA code has been integrated with the Numerical Propulsion System Simulation (NPSS) code and an expander rocket engine system has been simulated. The mean line pump flow code runs as an integral part of the NPSS rocket engine system simulation and provides key pump performance information directly to the system model at all operating conditions.

  6. LASL benchmark performance 1978. [CDC STAR-100, 6600, 7600, Cyber 73, and CRAY-1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKnight, A.L.

    1979-08-01

    This report presents the results of running several benchmark programs on a CDC STAR-100, a Cray Research CRAY-1, a CDC 6600, a CDC 7600, and a CDC Cyber 73. The benchmark effort included CRAY-1's at several installations running different operating systems and compilers. This benchmark is part of an ongoing program at Los Alamos Scientific Laboratory to collect performance data and monitor the development trend of supercomputers. 3 tables.

  7. Developing a Mobile Application "Educational Process Remote Management System" on the Android Operating System

    ERIC Educational Resources Information Center

    Abildinova, Gulmira M.; Alzhanov, Aitugan K.; Ospanova, Nazira N.; Taybaldieva, Zhymatay; Baigojanova, Dametken S.; Pashovkin, Nikita O.

    2016-01-01

    Nowadays, when there is a need to introduce various innovations into the educational process, most efforts are aimed at simplifying the learning process. To that end, electronic textbooks, testing systems and other software is being developed. Most of them are intended to run on personal computers with limited mobility. Smart education is…

  8. The Biogas/Biofertilizer Business Handbook. Third Edition. Appropriate Technologies for Development. Reprint R-48.

    ERIC Educational Resources Information Center

    Arnott, Michael

    This book describes one approach to building and operating biogas systems. The biogas systems include raw material preparation, digesters, separate gas storage tanks, use of the gas to run engines, and the use of the sludge as fertilizer. Chapters included are: (1) "Introduction"; (2) "Biogas Systems are Small Factories"; (3)…

  9. Analysis and Research on the effect of the Operation of Small Hydropower in the Regional Power Grid

    NASA Astrophysics Data System (ADS)

    Ang, Fu; Guangde, Dong; Xiaojun, Zhu; Ruimiao, Wang; Shengyi, Zhu

    2018-03-01

    The analysis of reactive power balance and voltage in a power network affects both system voltage quality and the economic operation of the grid. In past reactive power balance and voltage analyses, the main concerns were low power and low system voltage. When small hydropower stations operate at low load during the wet season, however, the system can show a reactive power surplus and high voltage; if the ability of small hydropower units to run in phase-advance (leading power factor) mode is taken into account, the high voltage on the high side of key points in the system can be effectively reduced.

  10. Characteristics of process oils from HTI coal/plastics co-liquefaction runs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robbins, G.A.; Brandes, S.D.; Winschel, R.A.

    1995-12-31

    The objective of this project is to provide timely analytical support to DOE's liquefaction development effort. Specific objectives of the work reported here are presented. During a few operating periods of Run POC-2, HTI co-liquefied mixed plastics with coal, and tire rubber with coal. Although steady-state operation was not achieved during these brief test periods, the results indicated that a liquefaction plant could operate with these waste materials as feedstocks. CONSOL analyzed 65 process stream samples from coal-only and coal/waste portions of the run. Some results obtained from characterization of samples from the Run POC-2 coal/plastics operation are presented.

  11. Unsteady, Cooled Turbine Simulation Using a PC-Linux Analysis System

    NASA Technical Reports Server (NTRS)

    List, Michael G.; Turner, Mark G.; Chen, Jen-Ping; Remotigue, Michael G.; Veres, Joseph P.

    2004-01-01

    The first stage of the high-pressure turbine (HPT) of the GE90 engine was simulated with a three-dimensional unsteady Navier-Stokes solver, MSU Turbo, which uses source terms to simulate the cooling flows. In addition to the solver, its pre-processor, GUMBO, and a post-processing and visualization tool, Turbomachinery Visual3 (TV3), were run in a Linux environment to carry out the simulation and analysis. The solver was run both with and without cooling. The introduction of cooling flow on the blade surfaces, case, and hub, and its effects on rotor-vane interaction as well as on the blades themselves, were the principal motivations for this study. The studies of the cooling flow show the large amount of unsteadiness in the turbine and the corresponding hot-streak migration phenomenon. This research on the GE90 turbomachinery has also led to a procedure for running unsteady, cooled turbine analyses on commodity PCs running the Linux operating system.

  12. A Loader for Executing Multi-Binary Applications on the Thinking Machines CM-5: It's Not Just for SPMD Anymore

    NASA Technical Reports Server (NTRS)

    Becker, Jeffrey C.

    1995-01-01

    The Thinking Machines CM-5 platform was designed to run single program, multiple data (SPMD) applications, i.e., to run a single binary across all nodes of a partition, with each node possibly operating on different data. Certain classes of applications, such as multi-disciplinary computational fluid dynamics codes, are facilitated by the ability to have subsets of the partition nodes running different binaries. In order to extend the CM-5 system software to permit such applications, a multi-program loader was developed. This system is based on the dld loader which was originally developed for workstations. This paper provides a high level description of dld, and describes how it was ported to the CM-5 to provide support for multi-binary applications. Finally, it elaborates how the loader has been used to implement the CM-5 version of MPIRUN, a portable facility for running multi-disciplinary/multi-zonal MPI (Message-Passing Interface Standard) codes.

  13. ROSSTEP v1.3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allevato, Adam

    2016-07-21

    ROSSTEP is a system for sequentially running roslaunch, rosnode, and bash scripts automatically, for use in Robot Operating System (ROS) applications. The system consists of YAML files which define actions and conditions. A Python file parses the code and runs actions sequentially using the sys and subprocess Python modules. Between actions, it uses various ROS-based code to check the conditions required to proceed, and only moves on to the next action when all the necessary conditions have been met. Included is rosstep-creator, a Qt application designed to create the YAML files required for ROSSTEP. It has a nearly one-to-one mapping from interface elements to YAML output, and serves as a convenient GUI for working with the ROSSTEP system.
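    The sequential action-running pattern described above can be sketched compactly in Python. This is not ROSSTEP's actual code: the action format and function name below are hypothetical, a plain list of dicts stands in for the YAML files the real system parses, and a generic subprocess call stands in for roslaunch/rosnode invocations.

```python
import subprocess
import time

def run_sequence(actions, poll_interval=0.05, timeout=5.0):
    """Run shell-level actions in order, ROSSTEP-style.

    Each action is a dict (the real system would read these from YAML):
        {"cmd": [...], "condition": zero-arg callable returning bool}
    The optional condition must hold before the action is launched.
    """
    return_codes = []
    for action in actions:
        condition = action.get("condition")
        if condition is not None:
            deadline = time.monotonic() + timeout
            while not condition():          # poll until the gate opens
                if time.monotonic() > deadline:
                    raise TimeoutError("condition never met for %r" % (action["cmd"],))
                time.sleep(poll_interval)
        proc = subprocess.run(action["cmd"], capture_output=True, text=True)
        return_codes.append(proc.returncode)
    return return_codes
```

    An action's optional condition plays the role of ROSSTEP's between-action checks: the runner polls it until it holds (or times out) before launching the next command.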

  14. Design and implementation of fuzzy-PD controller based on relation models: A cross-entropy optimization approach

    NASA Astrophysics Data System (ADS)

    Anisimov, D. N.; Dang, Thai Son; Banerjee, Santo; Mai, The Anh

    2017-07-01

    In this paper, an intelligent system using a fuzzy-PD controller based on relation models is developed for a two-wheeled self-balancing robot. The scaling factors of the fuzzy-PD controller are optimized by the cross-entropy optimization method. A Linear Quadratic Regulator is designed as a comparison with the fuzzy-PD controller in terms of control quality parameters. The controllers are ported to and run on an STM32F4 Discovery Kit under a real-time operating system. The experimental results indicate that the proposed fuzzy-PD controller runs correctly on the embedded system and achieves the desired performance in terms of fast response, good balance, and stability.
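    The cross-entropy tuning step can be illustrated independently of the robot hardware. The sketch below is a simplified, hypothetical stand-in: it tunes plain PD gains (rather than fuzzy scaling factors) for a double-integrator plant, using the standard cross-entropy loop of sampling candidate gains, keeping an elite subset, and refitting the sampling distribution.

```python
import numpy as np

def simulate(gains, dt=0.02, steps=200):
    """Cost of a PD controller driving a double integrator to the origin.
    (A stand-in plant; the paper's plant is a two-wheeled balancing robot.)"""
    kp, kd = gains
    x, v, cost = 1.0, 0.0, 0.0
    for _ in range(steps):
        u = -kp * x - kd * v          # PD control law
        v += u * dt
        x += v * dt
        cost += x * x * dt            # integrated squared error
    return cost

def cross_entropy_pd(n_iter=30, pop=50, elite=10, seed=0):
    """Cross-entropy method: sample gains, keep the elite, refit the Gaussian."""
    rng = np.random.default_rng(seed)
    mean, std = np.array([1.0, 1.0]), np.array([2.0, 2.0])
    for _ in range(n_iter):
        samples = np.abs(rng.normal(mean, std, size=(pop, 2)))  # keep gains positive
        costs = np.array([simulate(s) for s in samples])
        best = samples[np.argsort(costs)[:elite]]               # elite subset
        mean, std = best.mean(axis=0), best.std(axis=0) + 1e-6
    return mean
```

    Each iteration narrows the Gaussian around the best-performing gains; the same loop applies to fuzzy-controller scaling factors once simulate() is replaced by the real closed-loop cost.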

  15. Statistical fingerprinting for malware detection and classification

    DOEpatents

    Prowell, Stacy J.; Rathgeb, Christopher T.

    2015-09-15

    A system detects malware in a computing architecture with an unknown pedigree. The system includes a first computing device having a known pedigree and operating free of malware. The first computing device executes a series of instrumented functions that, when executed, provide a statistical baseline that is representative of the time it takes the software application to run on a computing device having a known pedigree. A second computing device executes a second series of instrumented functions that, when executed, provides an actual time that is representative of the time the known software application runs on the second computing device. The system detects malware when there is a difference in execution times between the first and the second computing devices.
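    The timing-baseline idea in the abstract can be sketched as follows. This is an illustrative Python analogue, not the patented system; the function names and the 50% deviation threshold are arbitrary choices for the sketch.

```python
import statistics
import time

def time_function(fn, runs=20):
    """Median wall-clock time of fn over several runs (median resists outliers)."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

def build_baseline(fn, runs=20):
    # Run on the known-pedigree machine to record the reference timing.
    return time_function(fn, runs)

def deviates_from_baseline(fn, baseline, tolerance=0.5, runs=20):
    """Flag fn when its timing differs from the baseline by more than
    tolerance (a fraction of the baseline); a large difference suggests
    the code path is no longer the one that was fingerprinted."""
    return abs(time_function(fn, runs) - baseline) > tolerance * baseline
```

    A real deployment would fingerprint many instrumented functions and account for hardware differences between the baseline and target machines, which this sketch ignores.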

  16. Direct liquefaction proof-of-concept program. Topical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Comolli, A.G.; Lee, L.K.; Pradhan, V.R.

    This report presents the results of work conducted under the DOE Proof-of-Concept Program in direct coal liquefaction at Hydrocarbon Technologies, Inc. in Lawrenceville, New Jersey, from February 1994 through April 1995. The work includes modifications to HRI's existing 3-ton-per-day Process Development Unit (PDU) and completion of the second PDU run (POC Run 2) under the Program. The 45-day POC Run 2 demonstrated scale-up of the Catalytic Two-Stage Liquefaction (CTSL) Process for a subbituminous Wyoming Black Thunder Mine coal to produce distillate liquid products at a rate of up to 4 barrels per ton of moisture-ash-free coal. The combined processing of organic hydrocarbon wastes, such as waste plastics and used tire rubber, with coal was also successfully demonstrated during the last nine days of operations of Run POC-02. Prior to the first PDU run (POC-01) in this program, a major effort was made to modify the PDU to improve reliability and to provide the flexibility to operate in several alternative modes. The Kerr-McGee ROSE-SR(SM) unit from Wilsonville, Alabama, was redesigned and installed next to the U.S. Filter installation to allow a comparison of the two solids removal systems. The 45-day CTSL Wyoming Black Thunder Mine coal demonstration run achieved several milestones in the effort to further reduce the cost of liquid fuels from coal. The primary objective of PDU Run POC-02 was to scale up the CTSL extinction-recycle process for subbituminous coal to produce a total distillate product using an in-line fixed-bed hydrotreater. Of major concern was whether calcium-carbon deposits would occur in the system, as has happened in other low-rank coal conversion processes. An additional objective of major importance was to study the co-liquefaction of plastics with coal and waste tire rubber with coal.

  17. A Simple, Scalable, Script-based Science Processor

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher

    2004-01-01

    The production of Earth Science data from orbiting spacecraft is an activity that takes place 24 hours a day, 7 days a week. At the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC), this results in as many as 16,000 program executions each day, far too many to be run by human operators. In fact, when the Moderate Resolution Imaging Spectroradiometer (MODIS) was launched aboard the Terra spacecraft in 1999, the automated commercial system for running science processing was able to manage no more than 4,000 executions per day. Consequently, the GES DAAC developed a lightweight system based on the popular Perl scripting language, named the Simple, Scalable, Script-based Science Processor (S4P). S4P automates science processing, allowing operators to focus on the rare problems occurring from anomalies in data or algorithms. S4P has been reused in several systems ranging from routine processing of MODIS data to data mining and is publicly available from NASA.
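
    The station/work-order pattern the abstract describes can be sketched in a few lines: jobs arrive as files in a directory, a handler processes each one, and failures are simply left behind for an operator to inspect. This is a hypothetical miniature in Python, not the real S4P (which is written in Perl); the names `run_station`, `inbox`, and `outbox` are illustrative.

```python
import os
import shutil
import tempfile

def run_station(inbox, outbox, handler):
    """Process every work-order file in `inbox` with `handler`,
    moving successes to `outbox`. Failed orders stay in `inbox`
    so a human operator can examine them later."""
    done = []
    for name in sorted(os.listdir(inbox)):
        src = os.path.join(inbox, name)
        try:
            handler(src)
        except Exception:
            continue  # leave the failed order for the operator
        shutil.move(src, os.path.join(outbox, name))
        done.append(name)
    return done

# usage: one station whose "processing" is just reading the order
root = tempfile.mkdtemp()
inbox, outbox = os.path.join(root, "in"), os.path.join(root, "out")
os.makedirs(inbox)
os.makedirs(outbox)
with open(os.path.join(inbox, "order1"), "w") as f:
    f.write("granule-123")
processed = run_station(inbox, outbox, lambda p: open(p).read())
print(processed)  # -> ['order1']
```

    Chaining several such stations, each feeding the next one's inbox, gives the kind of lightweight automated pipeline the abstract credits with scaling past the 4,000-executions-per-day limit.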

  18. Holistic Context-Sensitivity for Run-Time Optimization of Flexible Manufacturing Systems.

    PubMed

    Scholze, Sebastian; Barata, Jose; Stokic, Dragan

    2017-02-24

    Highly flexible manufacturing systems require continuous run-time (self-)optimization of processes with respect to diverse parameters, e.g., efficiency, availability, and energy consumption. A promising approach to (self-)optimization in manufacturing systems is context sensitivity, based on data streamed from a large number of sensors and other data sources. Cyber-physical systems play an important role as sources of information for achieving context sensitivity: they can be seen as complex intelligent sensors providing the data needed to identify the current context under which the manufacturing system is operating. In this paper, it is demonstrated how context sensitivity can be used to realize a holistic solution for (self-)optimization of discrete flexible manufacturing systems, making use of cyber-physical systems integrated into manufacturing systems and processes. A generic approach to context sensitivity, based on self-learning algorithms, is proposed that is aimed at a variety of manufacturing systems. The new solution encompasses a run-time context extractor and an optimizer; through the self-learning module, both continuously learn and improve their performance. The solution follows Service-Oriented Architecture principles. The generic solution is developed and then applied to two very different manufacturing processes.

  20. Implementing a Parallel Image Edge Detection Algorithm Based on the Otsu-Canny Operator on the Hadoop Platform.

    PubMed

    Cao, Jianfang; Chen, Lichao; Wang, Min; Tian, Yun

    2018-01-01

    The Canny operator is widely used to detect edges in images. However, as the size of the image dataset increases, the edge detection performance of the Canny operator decreases and its runtime becomes excessive. To improve the runtime and edge detection performance of the Canny operator, in this paper we propose a parallel design and implementation of an Otsu-optimized Canny operator using the MapReduce parallel programming model on the Hadoop platform. The Otsu algorithm is used to optimize the Canny operator's dual thresholds and improve edge detection performance, while the MapReduce programming model facilitates parallel processing of the Canny operator to solve the processing-speed and communication-cost problems that occur when the Canny edge detection algorithm is applied to big data. For the experiments, we constructed datasets of different scales from the Pascal VOC2012 image database. The proposed parallel Otsu-Canny edge detection algorithm performs better than traditional edge detection algorithms, reducing the running time by approximately 67.2% on a 5-node Hadoop cluster with a dataset of 60,000 images. Overall, our approach speeds up processing by approximately 3.4 times on large-scale datasets, demonstrating both better edge detection performance and improved time performance.
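
    The Otsu step the abstract relies on is a one-dimensional search over a grayscale histogram for the threshold that maximizes between-class variance. A minimal pure-Python sketch of that criterion (standard Otsu, not the paper's MapReduce implementation; the histogram below is made up for illustration):

```python
def otsu_threshold(hist):
    """Return the threshold maximizing between-class variance for a
    256-bin grayscale histogram (the Otsu criterion used to pick the
    Canny operator's thresholds)."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0            # weight and weighted sum of the dark class
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0:
            continue          # dark class still empty
        w1 = total - w0
        if w1 == 0:
            break             # bright class empty: no split left
        sum0 += t * hist[t]
        m0 = sum0 / w0        # mean of the dark class
        m1 = (sum_all - sum0) / w1  # mean of the bright class
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# bimodal test histogram: a dark mode near 50, a bright mode near 200
hist = [0] * 256
hist[48:53] = [10, 20, 40, 20, 10]
hist[198:203] = [10, 20, 40, 20, 10]
print(otsu_threshold(hist))  # -> 52
```

    In the paper's setting this computation would run per image inside a map task, with the resulting threshold feeding the Canny hysteresis stage.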

  1. Automated JPSS VIIRS GEO code change testing by using Chain Run Scripts

    NASA Astrophysics Data System (ADS)

    Chen, W.; Wang, W.; Zhao, Q.; Das, B.; Mikles, V. J.; Sprietzer, K.; Tsidulko, M.; Zhao, Y.; Dharmawardane, V.; Wolf, W.

    2015-12-01

    The Joint Polar Satellite System (JPSS) is the next-generation polar-orbiting operational environmental satellite system. The first satellite in the JPSS series, J-1, is scheduled to launch in early 2017. J1 will carry similar versions of the instruments on board the Suomi National Polar-orbiting Partnership (S-NPP) satellite, which was launched on October 28, 2011. The Center for Satellite Applications and Research (STAR) Algorithm Integration Team (AIT) uses the Algorithm Development Library (ADL) to run S-NPP and pre-J1 algorithms in a development and test mode. The ADL is an offline test system developed by Raytheon to mimic the operational system while enabling a development environment for plug-and-play algorithms. The Perl Chain Run Scripts were developed by STAR AIT to automate the staging and processing of multiple JPSS Sensor Data Record (SDR) and Environmental Data Record (EDR) products. The JPSS J1 VIIRS Day-Night Band (DNB) has an anomalous non-linear response at high scan angles, based on prelaunch testing. The flight project has proposed multiple mitigation options through onboard aggregation, and Option 21 has been suggested by the VIIRS SDR team as the baseline aggregation mode. VIIRS geolocation (GEO) code analysis shows that the J1 DNB GEO product cannot be generated correctly without a software update. The modified code supports both Op21 and Op21/26 and is backward compatible with S-NPP. The J1 GEO code change version 0 delivery package is under development for the current change request. In this presentation, we discuss how to use the Chain Run Scripts to verify the code change and Lookup Table (LUT) updates in ADL Block 2.
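
    The essential idea of a chain-run script (stage inputs, run each processing step in order, stop and report on the first failure) can be sketched independently of ADL. This is a toy stand-in in Python, not the actual Perl Chain Run Scripts; the step names are hypothetical.

```python
def run_chain(steps, data):
    """Run named processing steps in order, stopping at the first
    failure. Returns the final data and a per-step status log."""
    log = []
    for name, step in steps:
        try:
            data = step(data)
        except Exception as exc:
            log.append((name, "FAIL", str(exc)))
            break                     # later steps depend on this one
        log.append((name, "OK", None))
    return data, log

# usage: a three-step chain that mimics staging SDRs, producing GEO,
# and verifying the output (step bodies are placeholders)
steps = [("stage", lambda d: d + ["sdr"]),
         ("geolocate", lambda d: d + ["geo"]),
         ("verify", lambda d: d)]
result, log = run_chain(steps, ["raw"])
print(result)                      # -> ['raw', 'sdr', 'geo']
print([status for _, status, _ in log])  # -> ['OK', 'OK', 'OK']
```

    The same skeleton supports regression testing a code change: run the chain once with the old executable and once with the new one, then compare the final products.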

  2. Airport Noise Prediction Model -- MOD 7

    DOT National Transportation Integrated Search

    1978-07-01

    The MOD 7 Airport Noise Prediction Model is fully operational. The language used is Fortran, and it has been run on several different computer systems. Its capabilities include prediction of noise levels for single parameter changes, for multiple cha...

  3. Results of the ETV-1 breadboard tests under steady-state and transient conditions. [conducted in the NASA-LeRC Road Load Simulator

    NASA Technical Reports Server (NTRS)

    Sargent, N. B.; Dustin, M. O.

    1981-01-01

    Steady-state tests were run to characterize the system and component efficiencies over the complete speed-torque capabilities of the propulsion system in both motoring and regenerative modes of operation. The steady-state data were obtained using a battery simulator to separate the effects on efficiency caused by changing battery state of charge and component temperature. Transient tests were performed to determine the energy profiles of the propulsion system operating over the SAE J227a driving schedules.

  4. Development of Broadband Telecommunication System for Railways using Laser Technology

    NASA Astrophysics Data System (ADS)

    Nakamura, Kazuki; Nakagawa, Shingo; Matsubara, Hiroshi; Tatsui, Daisuke; Seki, Kiyotaka; Haruyama, Shinichiro; Teraoka, Fumio

    We developed a high-speed telecommunication system applicable to railways, to improve customer service and the efficiency of operators' communication between ground facilities and trains in operation. We built a mobile telecommunication system with a theoretical transfer rate of 1 Gbps by applying laser-beam communication technology. We carried out a field test using trains in active service and obtained a transfer rate of approximately 700 Mbps on the TCP layer between the ground and a train running at approximately 130 km/h.

  5. Ada 9X Project Report: Ada 9X Revision Issues. Release 1

    DTIC Science & Technology

    1990-04-01

    interrupts in Ada. Users are using specialized run-time executives which promote semaphores, monitors, etc., as well as interrupt support, are using... The focus here is on two specific problems: 1. lack of time-out on operations. 2. no efficient way to program a shared-variable monitor for the... operation. [3 - Remote Operations for Real-Time Systems] The real-time implementation standards should define various remote

  6. The Navy's First Seasonal Ice Forecasts using the Navy's Arctic Cap Nowcast/Forecast System

    NASA Astrophysics Data System (ADS)

    Preller, Ruth

    2013-04-01

    As conditions in the Arctic continue to change, the Naval Research Laboratory (NRL) has developed an interest in longer-term seasonal ice extent forecasts. The Arctic Cap Nowcast/Forecast System (ACNFS), developed by the Oceanography Division of NRL, was run in forward model mode, without assimilation, to estimate the minimum sea ice extent for September 2012. The model was initialized with varying assimilative ACNFS analysis fields (June 1, July 1, August 1 and September 1, 2012) and run forward for nine simulations using the archived Navy Operational Global Atmospheric Prediction System (NOGAPS) atmospheric forcing fields from 2003-2011. The mean ice extent in September, averaged across all ensemble members was the projected summer ice extent. These results were submitted to the Study of Environmental Arctic Change (SEARCH) Sea Ice Outlook project (http://www.arcus.org/search/seaiceoutlook). The ACNFS is a ~3.5 km coupled ice-ocean model that produces 5 day forecasts of the Arctic sea ice state in all ice covered areas in the northern hemisphere (poleward of 40° N). The ocean component is the HYbrid Coordinate Ocean Model (HYCOM) and is coupled to the Los Alamos National Laboratory Community Ice CodE (CICE) via the Earth System Modeling Framework (ESMF). The ocean and ice models are run in an assimilative cycle with the Navy's Coupled Ocean Data Assimilation (NCODA) system. Currently the ACNFS is being transitioned to operations at the Naval Oceanographic Office.
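
    The forecast procedure described above reduces, at the end, to averaging the September extent across ensemble members, one per archived year of NOGAPS forcing (2003-2011, nine runs). A trivial sketch of that reduction; the extent values below are made up for illustration and are not ACNFS output:

```python
def ensemble_mean_extent(members):
    """Average the September ice extent over the hindcast-forced
    ensemble members to get the projected summer extent."""
    return sum(members) / len(members)

# one member per archived NOGAPS forcing year (9 runs),
# extents in millions of km^2 (illustrative numbers only)
members = [4.8, 5.1, 4.6, 4.9, 5.0, 4.7, 4.5, 5.2, 4.9]
print(round(ensemble_mean_extent(members), 2))  # -> 4.86
spread = max(members) - min(members)
print(round(spread, 2))  # -> 0.7
```

    The member-to-member spread gives a rough measure of how much the outlook depends on which past year's atmosphere is assumed.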

  7. Decentralized operating procedures for orchestrating data and behavior across distributed military systems and assets

    NASA Astrophysics Data System (ADS)

    Peach, Nicholas

    2011-06-01

    In this paper, we present a method for a highly decentralized yet structured and flexible approach to systems interoperability, orchestrating data and behavior across distributed military systems and assets with security considerations addressed from the beginning. We describe an architecture for a tool-based design of business processes called Decentralized Operating Procedures (DOPs) and the deployment of DOPs onto run-time nodes, supporting the parallel execution of each DOP at multiple implementation nodes (fixed locations, vehicles, sensors, and soldiers) throughout a battlefield to achieve flexible and reliable interoperability. The described method allows the architecture to: a) provide fine-grained control of the collection and delivery of data between systems; b) allow the definition of a DOP at a strategic (or doctrine) level by defining required system behavior through process syntax at an abstract level, agnostic of implementation details; c) deploy a DOP into heterogeneous environments by the nomination of actual system interfaces and roles at a tactical level; d) rapidly deploy new DOPs in support of new tactics and systems; e) support multiple instances of a DOP in support of multiple missions; f) dynamically add or remove run-time nodes from a specific DOP instance as mission requirements change; g) model the passage of, and business reasons for, the transmission of each data message to a specific DOP instance to support accreditation; h) run on low-powered computers with lightweight tactical messaging. This approach is designed to extend the capabilities of existing standards, such as the Generic Vehicle Architecture (GVA).

  8. NSTX-U Advances in Real-Time C++11 on Linux

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, Keith G.

    Programming languages like C and Ada combined with proprietary embedded operating systems have dominated the real-time application space for decades. The new C++11 standard includes native, language-level support for concurrency, a required feature for any nontrivial event-oriented real-time software. Threads, locks, and atomics now exist to provide the necessary tools to build the structures that make up the foundation of a complex real-time system. The National Spherical Torus Experiment Upgrade (NSTX-U) at the Princeton Plasma Physics Laboratory (PPPL) is breaking new ground with the language as applied to the needs of fusion devices. A new Digital Coil Protection System (DCPS) will serve as the main protection mechanism for the magnetic coils, and it is written entirely in C++11 running on Concurrent Computer Corporation's real-time operating system, RedHawk Linux. It runs over 600 algorithms in a 5 kHz control loop that determine whether or not to shut down operations before physical damage occurs. To accomplish this, NSTX-U engineers developed software tools that do not currently exist elsewhere, including real-time atomic synchronization, real-time containers, and a real-time logging framework. Together with a recent (and carefully configured) version of the GCC compiler, these tools enable data acquisition, processing, and output using a conventional operating system to meet a hard real-time deadline of 200 microseconds, where missing even one periodic deadline is a failure.
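
    The "missing one periodic deadline is a failure" rule can be illustrated with a small loop that times each cycle's work against the period. This is only a sketch of the policy in Python (the real DCPS is C++11 on RedHawk Linux); `run_cycles` and the fake clock are hypothetical, and the clock is injectable so the behavior is deterministic.

```python
import time

PERIOD_US = 200  # hard period from the abstract: 200 microseconds

def run_cycles(n, work, period_us=PERIOD_US, clock=time.monotonic):
    """Run `work` once per cycle and report every cycle whose work
    took longer than the period. Under a hard real-time policy, a
    non-empty result means the system has failed."""
    missed = []
    for i in range(n):
        start = clock()
        work(i)
        elapsed_us = (clock() - start) * 1e6
        if elapsed_us > period_us:
            missed.append(i)
    return missed

# deterministic usage with a fake clock: cycles take 150 us, 250 us,
# and 150 us of "work", so only the middle cycle misses its deadline
ticks = iter([0.0, 150e-6, 1e-3, 1e-3 + 250e-6, 2e-3, 2e-3 + 150e-6])
fake_clock = lambda: next(ticks)
print(run_cycles(3, lambda i: None, clock=fake_clock))  # -> [1]
```

    A production system would additionally pin the loop to a CPU, pre-allocate all memory, and avoid any call that can block, which is exactly the class of tooling the abstract describes.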

  10. [Comprehensive system integration and networking in operating rooms].

    PubMed

    Feußner, H; Ostler, D; Kohn, N; Vogel, T; Wilhelm, D; Koller, S; Kranzfelder, M

    2016-12-01

    A comprehensive surveillance and control system integrating all devices and functions is a precondition for realizing the operating room of the future. Multiple proprietary integrated operating room systems with a central user interface are currently available; however, they cover only a relatively small part of all functionalities. Internationally, there are at least three initiatives to promote comprehensive systems integration and networking in the operating room: the Japanese smart cyber operating theater (SCOT), the American medical device plug-and-play interoperability program (MDPnP), and the German secure and dynamic networking in operating room and hospital (OR.NET) project supported by the Federal Ministry of Education and Research. Within the framework of the internationally advanced OR.NET project, prototype solutions were realized that make comprehensive data retrieval feasible in the short to medium term. Active and even autonomous control of medical devices by the surveillance and control system (closed loop) is expected only in the long run, owing to strict regulatory barriers.

  11. Hydrologic Modeling at the National Water Center: Operational Implementation of the WRF-Hydro Model to support National Weather Service Hydrology

    NASA Astrophysics Data System (ADS)

    Cosgrove, B.; Gochis, D.; Clark, E. P.; Cui, Z.; Dugger, A. L.; Fall, G. M.; Feng, X.; Fresch, M. A.; Gourley, J. J.; Khan, S.; Kitzmiller, D.; Lee, H. S.; Liu, Y.; McCreight, J. L.; Newman, A. J.; Oubeidillah, A.; Pan, L.; Pham, C.; Salas, F.; Sampson, K. M.; Smith, M.; Sood, G.; Wood, A.; Yates, D. N.; Yu, W.; Zhang, Y.

    2015-12-01

    The National Weather Service (NWS) National Water Center (NWC) is collaborating with the NWS National Centers for Environmental Prediction (NCEP) and the National Center for Atmospheric Research (NCAR) to implement a first-of-its-kind operational instance of the Weather Research and Forecasting (WRF)-Hydro model over the Continental United States (CONUS) and contributing drainage areas on the NWS Weather and Climate Operational Supercomputing System (WCOSS) supercomputer. The system will provide seamless, high-resolution, continuously cycling forecasts of streamflow and other hydrologic outputs from both deterministic and ensemble runs. WRF-Hydro will form the core of the NWC national water modeling strategy, supporting NWS hydrologic forecast operations along with the emergency response and water management efforts of partner agencies. Input and output from the system will be comprehensively verified via the NWC Water Resource Evaluation Service. Hydrologic events occur on a wide range of temporal scales, from fast-acting flash floods to long-term flow events affecting water supply. To capture this range of events, the initial operational WRF-Hydro configuration will feature 1) hourly analysis runs, 2) short- and medium-range deterministic forecasts out to two-day and ten-day horizons, and 3) long-range ensemble forecasts out to 30 days. All three configurations are underpinned by a 1-km execution of the NoahMP land surface model, with channel routing taking place on 2.67 million NHDPlusV2 catchments covering the CONUS and contributing areas. Additionally, the short- and medium-range forecast runs will feature surface and subsurface routing on a 250-m grid, while the hourly analyses will feature the same 250-m routing in addition to nudging-based assimilation of US Geological Survey (USGS) streamflow observations. A limited number of major reservoirs will be configured within the model to begin to represent the first-order impacts of streamflow regulation.

  12. JaxoDraw: A graphical user interface for drawing Feynman diagrams

    NASA Astrophysics Data System (ADS)

    Binosi, D.; Theußl, L.

    2004-08-01

    JaxoDraw is a Feynman graph plotting tool written in Java. It has a complete graphical user interface that allows all actions to be carried out via mouse click-and-drag operations in a WYSIWYG fashion. Graphs may be exported to PostScript/EPS format and can be saved in XML files to be used in later sessions. One of JaxoDraw's main features is the possibility to create LaTeX code that may be used to generate graphics output, thus combining the powers of LaTeX with those of a modern-day drawing program. With JaxoDraw it becomes possible to draw even complicated Feynman diagrams with just a few mouse clicks, without knowledge of any programming language. Program summary: Title of program: JaxoDraw. Catalogue identifier: ADUA. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUA. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Distribution format: tar gzip file. Operating system: any Java-enabled platform; tested on Linux, Windows ME, XP, Mac OS X. Programming language used: Java. License: GPL. Nature of problem: existing methods for drawing Feynman diagrams usually require some 'hard-coding' in one or another programming or scripting language. It is not very convenient, and often time consuming, to generate relatively simple diagrams. Method of solution: a program is provided that allows for the interactive drawing of Feynman diagrams with a graphical user interface. The program is easy to learn and use, produces high-quality output in several formats, and runs on any operating system where a Java Runtime Environment is available. Number of bytes in distributed program, including test data: 2 117 863. Number of lines in distributed program, including test data: 60 000. Restrictions: certain operations (like internal LaTeX compilation or PostScript preview) require the execution of external commands that might not work on untested operating systems. Typical running time: as an interactive program, the running time depends on the complexity of the diagram to be drawn.

  13. Dual-comb spectroscopy of water vapor with a free-running semiconductor disk laser.

    PubMed

    Link, S M; Maas, D J H C; Waldburger, D; Keller, U

    2017-06-16

    Dual-comb spectroscopy offers the potential for high accuracy combined with fast data acquisition. Applications are often limited, however, by the complexity of optical comb systems. Here we present dual-comb spectroscopy of water vapor using a substantially simplified single-laser system. Very good spectroscopy measurements with fast sampling rates are achieved with a free-running dual-comb mode-locked semiconductor disk laser. The absolute stability of the optical comb modes is characterized both for free-running operation and with simple microwave stabilization. This approach drastically reduces the complexity for dual-comb spectroscopy. Band-gap engineering to tune the center wavelength from the ultraviolet to the mid-infrared could optimize frequency combs for specific gas targets, further enabling dual-comb spectroscopy for a wider range of industrial applications. Copyright © 2017, American Association for the Advancement of Science.

  14. RISA: Remote Interface for Science Analysis

    NASA Astrophysics Data System (ADS)

    Gabriel, C.; Ibarra, A.; de La Calle, I.; Salgado, J.; Osuna, P.; Tapiador, D.

    2008-08-01

    The Scientific Analysis System (SAS) is the package for interactive and pipeline data reduction of all XMM-Newton data. Freely distributed by ESA to run under many different operating systems, the SAS has been used by almost every one of the 1600 refereed scientific publications obtained so far from the mission. We are developing RISA, the Remote Interface for Science Analysis, which makes it possible to run SAS through fully configurable web-service workflows, enabling observers to access and analyse data using all of the existing SAS functionality, without any installation or download of software or data. The workflows run primarily, but not exclusively, on the ESAC Grid, which offers scalable processing resources directly connected to the XMM-Newton Science Archive. A first project-internal version of RISA was issued in May 2007; a public release is expected within this year.

  15. Performance of the LHCb RICH detectors during the LHC Run II

    NASA Astrophysics Data System (ADS)

    Papanestis, A.; D'Ambrosio, C.; LHCb RICH Collaboration

    2017-12-01

    The LHCb RICH system provides hadron identification over a wide momentum range (2-100 GeV/c). This detector system is key to LHCb's precision flavour physics programme, which has unique sensitivity to physics beyond the Standard Model. This paper reports on the performance of the LHCb RICH in Run II, following significant changes in the detector and operating conditions. The changes include the refurbishment of a significant number of photon detectors, assembled using new vacuum technologies, and the removal of the aerogel radiator. The start of Run II of the LHC saw the beam energy increase to 6.5 TeV per beam and a new trigger strategy for LHCb with full online detector calibration. The RICH information has also been made available to all trigger streams in the High Level Trigger for the first time.

  16. The Slow Control System of the Auger Fluorescence Detectors

    NASA Astrophysics Data System (ADS)

    Barenthien, N.; Bethge, C.; Daumiller, K.; Gemmeke, H.; Kampert, K.-H.; Wiebusch, C.

    2003-07-01

    The fluorescence detector (FD) of the Pierre Auger experiment [1] comprises 24 telescopes that will be situated in 4 remote buildings in the Pampa Amarilla. It is planned to run the fluorescence detectors in the absence of operators on site. Therefore, the main task of the Slow Control System (SCS) is to ensure secure remote operation of the FD system. The Slow Control System works autonomously and continuously monitors those parameters which may disturb secure operation. Commands from the data-acquisition system or the remote operator are accepted only if they do not violate safety rules that depend on the actual experimental conditions (e.g., high voltage, wind speed, light, etc.). In case of malfunctions (power failure, communication breakdown, ...), the SCS performs an orderly shutdown and subsequent startup of the fluorescence detector system. The concept and the implementation of the Slow Control System are presented.
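
    The command-acceptance rule described above (accept a remote command only when no safety rule is violated under current conditions) can be sketched as a predicate check. The rules and condition names below are illustrative assumptions, not the SCS's actual rule set:

```python
def accept_command(command, conditions, rules):
    """Accept `command` only if every safety rule holds under the
    current `conditions`. Returns (accepted, violated_rule_names)."""
    violated = [name for name, ok in rules.items()
                if not ok(command, conditions)]
    return (len(violated) == 0), violated

# hypothetical rules in the spirit of the abstract: no high voltage
# in daylight, no shutter opening in strong wind
rules = {
    "no_hv_in_daylight":
        lambda cmd, c: not (cmd == "hv_on" and c["daylight"]),
    "shutters_closed_in_wind":
        lambda cmd, c: not (cmd == "open_shutter" and c["wind_kmh"] > 50),
}

cond = {"daylight": True, "wind_kmh": 60}
print(accept_command("open_shutter", cond, rules))
# -> (False, ['shutters_closed_in_wind'])
print(accept_command("hv_on", {"daylight": False, "wind_kmh": 10}, rules))
# -> (True, [])
```

    Keeping the rules as data rather than code paths is what lets such a system refuse any command, from the DAQ or a remote operator alike, with a uniform explanation of which rule blocked it.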

  17. User's manual for a computer program for simulating intensively managed allowable cut.

    Treesearch

    Robert W. Sassaman; Ed Holt; Karl Bergsvik

    1972-01-01

    Detailed operating instructions are described for SIMAC, a computerized forest simulation model which calculates the allowable cut assuming volume regulation for forests with intensively managed stands. A sample problem illustrates the required inputs and expected output. SIMAC is written in FORTRAN IV and runs on a CDC 6400 computer with a SCOPE 3.3 operating system.

  18. 40 CFR 63.4165 - How do I determine the emission capture system efficiency?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... of appendix M to 40 CFR part 51 to determine the mass fraction of TVH liquid input from each coating... materials used in the coating operation during the capture efficiency test run, kg. TVHi = mass fraction of... compares the mass of liquid TVH in materials used in the coating operation, to the mass of TVH emissions...

  20. System analysis for the Huntsville Operational Support Center distributed computer system

    NASA Technical Reports Server (NTRS)

    Ingels, E. M.

    1983-01-01

    A simulation model was developed and programmed in three languages: BASIC, PASCAL, and SLAM. Two of the programs are included in this report, the BASIC and PASCAL language programs; SLAM is not supported by NASA/MSFC facilities and hence was not included. Statistical comparisons of simulations of the same HOSC system configurations agree well with each other and with the operational statistics obtained from HOSC. Three variations of the most recent HOSC configuration were run, and some conclusions are drawn as to system performance under these variations.

  1. Designing an operator interface? Consider user's 'psychology'

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toffer, D.E.

    The modern operator interface is a channel of communication between operators and the plant that, ideally, provides them with the information necessary to keep the plant running at maximum efficiency. Advances in automation technology have increased information flow from the field to the screen. New and improved Supervisory Control and Data Acquisition (SCADA) packages provide designers with powerful and open design options. All too often, however, systems go to the field designed for the software rather than for the operator. Plant operators' jobs have changed fundamentally, from controlling their plants out in the field to doing so from within control rooms. Control room-based operation does not denote idleness: trained operators should be engaged in examination of plant status and cognitive evaluation of plant efficiencies. Designers, who are often extremely computer literate, frequently do not consider the demographics of field operators. Many field operators have little knowledge of modern computer systems and, as a result, do not take full advantage of the interface's capabilities. Designers often fail to understand the true nature of how operators run their plants. To aid field operators, designers must provide familiar controls and intuitive choices. To achieve success in interface design, it is necessary to understand the ways in which humans think conceptually, and to understand how they process this information physically; the physical and the conceptual are closely related when working with any type of interface. Designers should ask themselves: "What type of information is useful to the field operator?" Let's explore an integration model that contains the following key elements: (1) easily navigated menus; (2) reduced chances for misunderstanding; (3) accurate representations of the plant or operation; (4) consistent and predictable operation; (5) a pleasant and engaging interface that conforms to the operator's expectations. 4 figs.

  2. CDC/1000: a Control Data Corporation remote batch terminal emulator for Hewlett-Packard minicomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berg, D.E.

    1981-02-01

    The Control Data Corporation Type 200 User Terminal utilizes a unique communications protocol to provide users with batch mode remote terminal access to Control Data computers. CDC/1000 is a software subsystem that implements this protocol on Hewlett-Packard minicomputers running the Real Time Executive III, IV, or IVB operating systems. This report provides brief descriptions of the various software modules comprising CDC/1000, and contains detailed instructions for integrating CDC/1000 into the Hewlett Packard operating system and for operating UTERM, the user interface program for CDC/1000. 6 figures.

  3. Managing Information On Technical Requirements

    NASA Technical Reports Server (NTRS)

    Mauldin, Lemuel E., III; Hammond, Dana P.

    1993-01-01

    Technical Requirements Analysis and Control Systems/Initial Operating Capability (TRACS/IOC) computer program provides supplemental software tools for analysis, control, and interchange of project requirements so qualified project members have access to pertinent project information, even if in different locations. Enables users to analyze and control requirements, serves as focal point for project requirements, and integrates system supporting efficient and consistent operations. TRACS/IOC is HyperCard stack for use on Macintosh computers running HyperCard 1.2 or later and Oracle 1.2 or later.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    KIRKBRIDE, R.A.

    The Tank Waste Remediation System Operation and Utilization Plan updates the operating scenario and plans for the delivery of feed to BNFL Inc., retrieval of waste from single-shell tanks, and the overall process flowsheets for Phases I and II of the privatization of the Tank Waste Remediation System. The plans and flowsheets are updated with the most recent tank-by-tank inventory and sludge washing data. Sensitivity cases were run to evaluate the impact or benefits of proposed changes to the BNFL Inc. contract and to evaluate a risk-based SST retrieval strategy.

  5. Open Radio Communications Architecture Core Framework V1.1.0 Volume 1 Software Users Manual

    DTIC Science & Technology

    2005-02-01

    on a PC utilizing the KDE desktop that comes with Red Hat Linux. The default desktop for most Red Hat Linux installations is the GNOME desktop. The...SCA) v2.2. The software was designed for a desktop computer running the Linux operating system (OS). It was developed in C++, uses ACE/TAO for CORBA...middleware, Xerces for the XML parser, and Red Hat Linux for the operating system. The software is referred to as Open Radio Communication

  6. Investigating the Naval Logistics Role in Humanitarian Assistance Activities

    DTIC Science & Technology

    2015-03-01

    transportation means. E. BASE CASE RESULTS The computations were executed on a MacBook Pro, 3 GHz Intel Core i7-4578U processor with 8 GB. The...MacBook Pro was partitioned to also contain a Windows 7, 64-bit operating system. The computations were run in the Windows 7 operating system using the...it impacts the types of metamodels that can be developed as a result of data farming (Lucas et al., 2015). Using a metamodel, one can closely

  7. SLAC modulator system improvements and reliability results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donaldson, A.R.

    1998-06-01

    In 1995, an improvement project was completed on the 244 klystron modulators in the linear accelerator. The modulator system has been previously described. This article offers project details and their resulting effect on modulator and component reliability. Prior to the project, the authors had collected four operating cycles (1991 through 1995) of MTTF data. In this discussion, the '91 data will be excluded since the modulators operated at 60 Hz; the five periods following the '91 run were reviewed because they shared a common repetition rate of 120 Hz.

  8. 78 FR 13086 - Agency Information Collection Activities; Submission for OMB Review; Comment Request; Job Clubs...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-26

    ... ``job clubs'' have evolved into one of several important activities used by the public workforce system... are formally run through the public workforce system--including at Department of Labor funded American... communities; (2) documenting how they differ from and are similar to the job clubs operated by publicly...

  9. An Analysis of Factors That Influence Logistics, Operational Availability, and Flight Hour Supply of the German Attack Helicopter Fleet

    DTIC Science & Technology

    2017-06-01

    maintenance times from the fleet are randomly resampled when running the model to enhance model realism. The use of a simulation model to represent the...helicopter regiment. 2. Attack Helicopter UH TIGER The EC665, or Airbus Helicopter TIGER (Figure 3), is a four-bladed, twin-engine multi-role attack...migrated into the automated management system SAP Standard Product Family (SASPF), and the usage clock starts to run with the amount of the current

  10. Simple Schlieren Light Meter

    NASA Technical Reports Server (NTRS)

    Rhodes, David B.; Franke, John M.; Jones, Stephen B.; Leighty, Bradley D.

    1992-01-01

    Simple light-meter circuit used to position knife edge of schlieren optical system to block exactly half the light. Enables operator to quickly check the position of the knife edge between tunnel runs to ascertain whether or not it is in alignment. Permanent measuring system made part of each schlieren system. If placed in unused area of image plane, or in monitoring beam from mirror knife edge, provides real-time assessment of alignment of schlieren system.

  11. 46 CFR 113.30-25 - Detailed requirements.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... stations must be able to communicate at the same time. (b) The loss of one component of the system must not disable the rest of the system. (c) The system must be able to operate under full load for the same period... must run as close to the fore-and-aft centerline of the vessel as practicable. (l) No cable for voice...

  12. 46 CFR 113.30-25 - Detailed requirements.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... stations must be able to communicate at the same time. (b) The loss of one component of the system must not disable the rest of the system. (c) The system must be able to operate under full load for the same period... must run as close to the fore-and-aft centerline of the vessel as practicable. (l) No cable for voice...

  13. 46 CFR 113.30-25 - Detailed requirements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... stations must be able to communicate at the same time. (b) The loss of one component of the system must not disable the rest of the system. (c) The system must be able to operate under full load for the same period... must run as close to the fore-and-aft centerline of the vessel as practicable. (l) No cable for voice...

  14. 46 CFR 113.30-25 - Detailed requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... stations must be able to communicate at the same time. (b) The loss of one component of the system must not disable the rest of the system. (c) The system must be able to operate under full load for the same period... must run as close to the fore-and-aft centerline of the vessel as practicable. (l) No cable for voice...

  15. 46 CFR 113.30-25 - Detailed requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... stations must be able to communicate at the same time. (b) The loss of one component of the system must not disable the rest of the system. (c) The system must be able to operate under full load for the same period... must run as close to the fore-and-aft centerline of the vessel as practicable. (l) No cable for voice...

  16. User's guide to UGRS: the Ultimate Grading and Remanufacturing System (version 5.0).

    Treesearch

    John Moody; Charles J. Gatchell; Elizabeth S. Walker; Powsiri Klinkhachorn

    1998-01-01

    The Ultimate Grading and Remanufacturing System (UGRS) is the latest generation of advanced computer programs for lumber grading. It is designed to be a training and research tool that allows grading of lumber according to 1998 NHLA rules and remanufacturing for maximum dollar value. A 32-bit application that runs under all Microsoft Windows operating systems, UGRS...

  17. Fermi GBM Observations During the Second Observing Run of LIGO/Virgo

    NASA Astrophysics Data System (ADS)

    Goldstein, Adam; Fermi-GBM

    2018-01-01

    The Fermi Gamma-ray Burst Monitor (GBM) is a prolific detector of gamma-ray bursts (GRBs) and detects more short duration GRBs than any other instrument currently in operation. Short GRBs are thought to be associated with the mergers of binary neutron star systems (or neutron star-black hole systems), and are therefore considered likely counterparts to gravitational-wave detections from LIGO/Virgo. We report on the GBM observations during the second observing run of LIGO/Virgo and detail the physical and astrophysical insights that might be gleaned from a joint detection of a short GRB and a gravitational-wave source.

  18. Cheetah: A Framework for Scalable Hierarchical Collective Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graham, Richard L; Gorentla Venkata, Manjunath; Ladd, Joshua S

    2011-01-01

    Collective communication operations, used by many scientific applications, tend to limit overall parallel application performance and scalability. Computer systems are becoming more heterogeneous with increasing node and core-per-node counts. Also, a growing number of data-access mechanisms, of varying characteristics, are supported within a single computer system. We describe a new hierarchical collective communication framework that takes advantage of hardware-specific data-access mechanisms. It is flexible, with run-time hierarchy specification, and sharing of collective communication primitives between collective algorithms. Data buffers are shared between levels in the hierarchy, reducing collective communication management overhead. We have implemented several versions of the Message Passing Interface (MPI) collective operations, MPI_Barrier() and MPI_Bcast(), and run experiments using up to 49,152 processes on a Cray XT5 and a small InfiniBand-based cluster. At 49,152 processes our barrier implementation outperforms the optimized native implementation by 75%. 32-byte and one-megabyte broadcasts outperform it by 62% and 11%, respectively, with better scalability characteristics. Improvements relative to the default Open MPI implementation are much larger.
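The hierarchy idea described in the abstract can be illustrated with a toy model (a sketch only; Cheetah's actual implementation lives inside the MPI runtime and exploits hardware-specific data-access mechanisms): the root sends once to a "leader" rank on each node, and each leader then fans the value out over shared memory.

```python
# Toy two-level hierarchical broadcast (illustrative, not Cheetah's code).
# Messages on the slow inter-node fabric drop from roughly (P - on-node peers)
# for a flat broadcast to (nodes - 1) for the hierarchical one.

def hierarchical_bcast(value, ranks_per_node, num_nodes):
    """Simulate the message flow; returns (received, inter_node_msgs, intra_node_msgs)."""
    received = {0: value}                    # rank 0 is the global root
    inter = intra = 0
    # Level 1: root -> node leaders (first rank on each node) over the network
    for node in range(num_nodes):
        leader = node * ranks_per_node
        if leader != 0:
            received[leader] = value
            inter += 1
    # Level 2: each leader -> its local ranks over shared memory
    for node in range(num_nodes):
        leader = node * ranks_per_node
        for r in range(leader + 1, leader + ranks_per_node):
            received[r] = received[leader]
            intra += 1
    return received, inter, intra

recv, inter, intra = hierarchical_bcast(42, ranks_per_node=8, num_nodes=4)
print(inter, intra)   # → 3 28
```

With 4 nodes of 8 ranks, only 3 messages cross the inter-node network (the remaining 28 transfers stay on-node), versus 24 cross-node sends for a flat broadcast from rank 0.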

  19. Completion of the LANSCE Proton Storage Ring Control System Upgrade -- A Successful Integration of EPICS Into a Running Control System

    NASA Astrophysics Data System (ADS)

    Schaller, S. C.; Bjorklund, E. A.; Carr, G. P.; Faucett, J. A.; Oothoudt, M. A.

    1997-05-01

    The Los Alamos Neutron Scattering Center (LANSCE) Proton Storage Ring (PSR) control system upgrade was completed in 1996. In previous work, much of a PDP-11-based control system was replaced with Experimental Physics and Industrial Control System (EPICS) controls. Several parts of the old control system which used a VAX for operator displays and direct access to a CAMAC serial highway still remained. The old system was preserved as a "fallback" if the new EPICS-based system had problems. The control system upgrade completion included conversion of several application programs to EPICS-based operator interfaces, moving some data acquisition hardware to EPICS Input-Output Controllers (IOCs), and the implementation of new gateway software to complete the overall control system interoperability. Many operator interface (OPI) screens, written by LANSCE operators, have been incorporated in the new system. The old PSR control system hardware was removed. The robustness and reliability of the new controls obviated the need for a fallback capability.

  20. Wire Rope Failure on the Guppy Winch

    NASA Technical Reports Server (NTRS)

    Figert, John

    2016-01-01

    On January 6, 2016 at El Paso, the Guppy winch motor was changed. After completion of the operational checks, the load bar was being reinstalled on the cargo pallet when the motor control FORWARD relay failed in the energized position. The pallet was pinned at all locations (each pin has a load capacity of 16,000 lbs.) while the winch was running. The wire rope snapped before aircraft power could be removed. After disassembly, the fractured wire rope was shipped to ES4 lab for further characterization of the wire rope portion of the failure. The system was being operated without a clear understanding of the system capability and function. The proximate cause was the failure of the K48 -Forward Winch Control Relay in the energized position, which allowed the motor to continuously run without command from the hand controller, and operation of the winch system with both controllers connected to the system. This prevented the emergency stop feature on the hand controller from functioning as designed. An electrical checkout engineering work instruction was completed and identified the failed relay and confirmed the emergency stop only paused the system when the STOP button on both connected hand controllers were depressed simultaneously. The winch system incorporates a torque limiting clutch. It is suspected that the clutch did not slip and the motor did not stall or overload the current limiter. Aircraft Engineering is looking at how to change the procedures to provide a checkout of the clutch and set to a slip torque limit appropriate to support operations.

  1. 77 FR 37316 - Drawbridge Operation Regulation; Trent River, New Bern, NC

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-21

    ... navigation position for 3 hours to accommodate the annual Neuse River Bridge Run. DATES: This deviation is...) 366-9826. SUPPLEMENTARY INFORMATION: The Event Director for the Neuse River Bridge Run, with approval... temporary deviation from the current operating schedule to accommodate the Neuse River Bridge Run. The...

  2. Operating a petabyte class archive at ESO

    NASA Astrophysics Data System (ADS)

    Suchar, Dieter; Lockhart, John S.; Burrows, Andrew

    2008-07-01

    The challenges of setting up and operating a Petabyte Class Archive will be described in terms of computer systems within a complex Data Centre environment. The computer systems, including the ESO Primary and Secondary Archive and the associated computational environments such as relational databases will be explained. This encompasses the entire system project cycle, including the technical specifications, procurement process, equipment installation and all further operational phases. The ESO Data Centre construction and the complexity of managing the environment will be presented. Many factors had to be considered during the construction phase, such as power consumption, targeted cooling and the accumulated load on the building structure to enable the smooth running of a Petabyte class Archive.

  3. Wearable computer technology for dismounted applications

    NASA Astrophysics Data System (ADS)

    Daniels, Reginald

    2010-04-01

    Small computing devices which rival the compact size of traditional personal digital assistants (PDA) have recently established a market niche. These computing devices are small enough to be considered unobtrusive for humans to wear. The computing devices are also powerful enough to run full multi-tasking general purpose operating systems. This paper will explore the wearable computer information system for dismounted applications recently fielded for ground-based US Air Force use. The environments that the information systems are used in will be reviewed, as well as a description of the net-centric, ground-based warrior. The paper will conclude with a discussion regarding the importance of intuitive, usable, and unobtrusive operator interfaces for dismounted operators.

  4. Tcl as a Software Environment for a TCS

    NASA Astrophysics Data System (ADS)

    Terrett, David L.

    2002-12-01

    This paper describes how the Tcl scripting language and C API have been used as the software environment for a telescope pointing kernel so that new pointing algorithms and software architectures can be developed and tested without needing a real-time operating system or real-time software environment. It has enabled development to continue outside the framework of a specific telescope project while continuing to build a system that is sufficiently complete to be capable of controlling real hardware, but expending minimum effort on replacing the services that would normally be provided by a real-time software environment. Tcl is used as a scripting language for configuring the system at startup and then as the command interface for controlling the running system; the Tcl C language API is used to provide a system-independent interface to file and socket I/O and other operating system services. The pointing algorithms themselves are implemented as a set of C++ objects calling C library functions that implement the algorithms described in [2]. Although originally designed as a test and development environment, the system, running as a soft real-time process on Linux, has been used to test the SOAR mount control system and will be used as the pointing kernel of the SOAR telescope control system.

  5. ChronQC: a quality control monitoring system for clinical next generation sequencing.

    PubMed

    Tawari, Nilesh R; Seow, Justine Jia Wen; Perumal, Dharuman; Ow, Jack L; Ang, Shimin; Devasia, Arun George; Ng, Pauline C

    2018-05-15

    ChronQC is a quality control (QC) tracking system for clinical implementation of next-generation sequencing (NGS). ChronQC generates time series plots for various QC metrics to allow comparison of current runs to historical runs. ChronQC has multiple features for tracking QC data, including Westgard rules for clinical validity, laboratory-defined thresholds and historical observations within a specified time period. Users can record their notes and corrective actions directly onto the plots for long-term recordkeeping. ChronQC facilitates regular monitoring of clinical NGS to enable adherence to high quality clinical standards. Availability and implementation: ChronQC is freely available on GitHub (https://github.com/nilesh-tawari/ChronQC), Docker (https://hub.docker.com/r/nileshtawari/chronqc/) and the Python Package Index. ChronQC is implemented in Python and runs on all common operating systems (Windows, Linux and Mac OS X). Contact: tawari.nilesh@gmail.com or pauline.c.ng@gmail.com. Supplementary information: Supplementary data are available at Bioinformatics online.
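As a sketch of how a Westgard-rule check against historical QC data works (illustrative only; the function below is not ChronQC's actual API, and it applies the rules only to the new points):

```python
# Hedged sketch: flag new QC runs against the lab's historical baseline using
# two common Westgard rules. 1-3s: one point beyond mean +/- 3 SD.
# 2-2s: two consecutive points beyond the same mean +/- 2 SD limit.
from statistics import mean, stdev

def westgard_flags(history, new_values):
    """Return a list of (index, rule) violations among new_values."""
    m, s = mean(history), stdev(history)
    z = [(v - m) / s for v in new_values]
    flags = []
    for i, zi in enumerate(z):
        if abs(zi) > 3:
            flags.append((i, "1-3s"))
        elif i > 0 and z[i - 1] > 2 and zi > 2:
            flags.append((i, "2-2s"))
        elif i > 0 and z[i - 1] < -2 and zi < -2:
            flags.append((i, "2-2s"))
    return flags

history = [100, 102, 98, 101, 99, 100, 103, 97, 100, 101]  # e.g. mean coverage per run
print(westgard_flags(history, [107]))        # → [(0, '1-3s')]
print(westgard_flags(history, [104, 105]))   # → [(1, '2-2s')]
```

In a real deployment the thresholds would be laboratory-defined and the flags would be drawn onto the time series plots rather than printed.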

  6. Oxygen production on Mars and the Moon

    NASA Technical Reports Server (NTRS)

    Sridhar, K. R.; Vaniman, B.; Miller, S.

    1992-01-01

    Significant progress was made in the area of in-situ oxygen production in the last year. In order to reduce sealing problems due to thermal expansion mismatch in the disk configuration, several all-Zirconia cells were constructed and are being tested. Two of these cells were run successfully for extended periods of time. One was run for over 200 hours and the other for over 800 hours. These extended runs, along with gas sample analysis, showed that the oxygen being produced is definitely from CO2 and not from air leaks or from the disk material. A new tube system is being constructed that is more rugged, portable, durable, and energy efficient. The important operating parameters of this system will be better controlled compared to previous systems. An electrochemical compressor will also be constructed with a similar configuration. The electrochemical compressor will use less energy since the feed stock is already heated in the separation unit. In addition, it does not have moving parts.

  7. A temporal-spatial postprocessing model for probabilistic run-off forecast. With a case study from Ulla-Førre with five catchments and ten lead times

    NASA Astrophysics Data System (ADS)

    Engeland, K.; Steinsland, I.

    2012-04-01

    This work is driven by the needs of next-generation short-term optimization methodology for hydropower production. Stochastic optimization is about to be introduced, i.e., optimizing when available resources (water) and utility (prices) are uncertain. In this paper we focus on the available resources, i.e., water, where uncertainty mainly comes from uncertainty in future runoff. When optimizing a water system, all catchments and several lead times have to be considered simultaneously. Depending on the system of hydropower reservoirs, it might be a set of headwater catchments, a system of upstream/downstream reservoirs where water used from one catchment/dam arrives in a lower catchment perhaps days later, or a combination of both. The aim of this paper is therefore to construct a simultaneous probabilistic forecast for several catchments and lead times, i.e., to provide a predictive distribution for the forecasts. Stochastic optimization methods need samples/ensembles of run-off forecasts as input, so it should also be possible to sample from our probabilistic forecast. A post-processing approach is taken, using an error model based on a Box-Cox power transform and a temporal-spatial copula model. It accounts for both between-catchment and between-lead-time dependencies. In operational use it is straightforward to sample run-off ensembles from this model, and the samples inherit the catchment and lead-time dependencies. The methodology is tested and demonstrated in the Ulla-Førre river system, where simultaneous probabilistic forecasts for five catchments and ten lead times are constructed. The methodology has enough flexibility to model operationally important features in this case study, such as heteroscedasticity, lead-time-varying temporal dependency and lead-time-varying inter-catchment dependency.
    Our model is evaluated using CRPS for the marginal predictive distributions and the energy score for the joint predictive distribution. It is tested against a deterministic run-off forecast, a climatology forecast and a persistence forecast, and is found to be the best probabilistic forecast for lead times greater than two. From an operational point of view the results are interesting, as the between-catchment dependency gets stronger with longer lead times.
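Under strong simplifying assumptions, the core of such a post-processing scheme can be sketched in a few lines: a Box-Cox transform to a near-Gaussian space, a correlated multivariate normal in that space for the joint error model (a Gaussian copula with Gaussian margins), and a back-transform of the samples. All numbers below (the Box-Cox lambda, the covariance) are invented for illustration and are not the paper's fitted values.

```python
# Minimal sketch of a Box-Cox + Gaussian-dependence post-processor for
# sampling run-off ensembles with between-catchment correlation.
import numpy as np

def boxcox(x, lam):
    return (x ** lam - 1.0) / lam if lam != 0 else np.log(x)

def inv_boxcox(y, lam):
    return (lam * y + 1.0) ** (1.0 / lam) if lam != 0 else np.exp(y)

def sample_runoff_ensemble(point_forecast, cov, lam=0.3, n=500, seed=1):
    """point_forecast: length-d array (catchments x lead times, flattened).
    cov: d x d error covariance in Box-Cox space. Returns an (n, d) ensemble."""
    rng = np.random.default_rng(seed)
    mu = boxcox(np.asarray(point_forecast, float), lam)
    z = rng.multivariate_normal(mu, cov, size=n)   # dependence lives here
    return inv_boxcox(z, lam)                      # back to run-off units

fc = [12.0, 15.0, 9.0]                    # three catchments, one lead time (made up)
cov = 0.02 * (0.5 * np.eye(3) + 0.5)      # positive inter-catchment correlation
ens = sample_runoff_ensemble(fc, cov)
```

The sampled ensemble keeps the point forecast as its median in each margin while the covariance carries the between-catchment dependency that the stochastic optimizer needs.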

  8. Data Reprocessing on Worldwide Distributed Systems

    NASA Astrophysics Data System (ADS)

    Wicke, Daniel

    The DØ experiment faces many challenges in terms of enabling access to large datasets for physicists on four continents. The strategy for solving these problems on worldwide distributed computing clusters is presented. Since the beginning of Run II of the Tevatron (March 2001), all Monte-Carlo simulations for the experiment have been produced at remote systems. For data analysis, a system of regional analysis centers (RACs) was established which supply the associated institutes with the data. This structure, which is similar to the tiered structure foreseen for the LHC, was used in Fall 2003 to reprocess all DØ data with a much improved version of the reconstruction software. This makes DØ the first running experiment that has implemented and operated all important computing tasks of a high energy physics experiment on systems distributed worldwide.

  9. Operational effectiveness of a Multiple Aquila Control System (MACS)

    NASA Technical Reports Server (NTRS)

    Brown, R. W.; Flynn, J. D.; Frey, M. R.

    1983-01-01

    The operational effectiveness of a multiple aquila control system (MACS) was examined under a variety of remotely piloted vehicle (RPV) mission configurations. The set of assumptions and inputs used to form the rules under which a computerized simulation of MACS was run is given. The characteristics that are to govern MACS operations include: the battlefield environment that generates the requests for RPV missions, operating time-lines of the RPV-peculiar equipment, maintenance requirements, and vulnerability to enemy fire. The number of RPV missions and the number of operation days are discussed. Command, control, and communication data rates are estimated by determining how many messages are passed and what information is necessary in them to support ground coordination between MACS sections.

  10. TSTA Piping and Flame Arrestor Operating Experience Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cadwallader, Lee C.; Willms, R. Scott

    The Tritium Systems Test Assembly (TSTA) was a facility dedicated to tritium handling technology and experiment research at the Los Alamos National Laboratory. The facility operated from 1984 to 2001, running a prototype fusion fuel processing loop with ~100 grams of tritium as well as small experiments. There have been several operating experience reports written on this facility’s operation and maintenance experience. This paper describes analysis of two additional components from TSTA: small diameter gas piping that handled small amounts of tritium in a nitrogen carrier gas, and the flame arrestor used in this piping system. The operating experiences and the component failure rates for these components are discussed in this paper. Comparison data from other applications are also presented.

  11. The X-ray system of crystallographic programs for any computer having a PIDGIN FORTRAN compiler

    NASA Technical Reports Server (NTRS)

    Stewart, J. M.; Kruger, G. J.; Ammon, H. L.; Dickinson, C.; Hall, S. R.

    1972-01-01

    A manual is presented for the use of a library of crystallographic programs. This library, called the X-ray system, is designed to carry out the calculations required to solve the structure of crystals by diffraction techniques. It has been implemented at the University of Maryland on the Univac 1108. It has, however, been developed and run on a variety of machines under various operating systems. It is considered to be an essentially machine independent library of applications programs. The report includes definition of crystallographic computing terms, program descriptions, with some text to show their application to specific crystal problems, detailed card input descriptions, mass storage file structure and some example run streams.

  12. PIV/HPIV Film Analysis Software Package

    NASA Technical Reports Server (NTRS)

    Blackshire, James L.

    1997-01-01

    A PIV/HPIV film analysis software system was developed that calculates the 2-dimensional spatial autocorrelations of subregions of Particle Image Velocimetry (PIV) or Holographic Particle Image Velocimetry (HPIV) film recordings. The software controls three hardware subsystems including (1) a Kodak Megaplus 1.4 camera and EPIX 4MEG framegrabber subsystem, (2) an IEEE/Unidex 11 precision motion control subsystem, and (3) an Alacron I860 array processor subsystem. The software runs on an IBM PC/AT host computer running either the Microsoft Windows 3.1 or Windows 95 operating system. It is capable of processing five PIV or HPIV displacement vectors per second, and is completely automated with the exception of user input to a configuration file prior to analysis execution for update of various system parameters.
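The central computation, the 2-D spatial autocorrelation of a subregion, can be sketched with NumPy via the FFT (Wiener-Khinchin relation); this is a simplified stand-in for the array-processor implementation described above, with a synthetic double-exposure image instead of digitized film.

```python
# 2-D spatial autocorrelation of a doubly exposed PIV subregion. Particle
# pairs separated by the flow displacement produce side peaks in the ACF
# at plus/minus the displacement vector.
import numpy as np

def autocorrelate_2d(subregion):
    f = np.fft.fft2(subregion - subregion.mean())
    acf = np.fft.ifft2(f * np.conj(f)).real   # circular autocorrelation
    return np.fft.fftshift(acf)               # move zero lag to the center

# Synthetic double exposure: 40 particles plus copies shifted by (3, 5) pixels
rng = np.random.default_rng(0)
img = np.zeros((64, 64))
ys = rng.integers(0, 56, size=40)
xs = rng.integers(0, 56, size=40)
img[ys, xs] = 1.0
img[ys + 3, xs + 5] += 1.0                    # second exposure, displaced

acf = autocorrelate_2d(img)
cy = cx = 32
acf[cy - 1:cy + 2, cx - 1:cx + 2] = -np.inf   # suppress the zero-lag peak
py, px = np.unravel_index(int(np.argmax(acf)), acf.shape)
print(abs(py - cy), abs(px - cx))             # → 3 5
```

The displacement is recovered only up to sign, the classic directional ambiguity of autocorrelation (as opposed to cross-correlation) PIV analysis.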

  13. Space station operating system study

    NASA Technical Reports Server (NTRS)

    Horn, Albert E.; Harwell, Morris C.

    1988-01-01

    The current phase of the Space Station Operating System study is based on the analysis, evaluation, and comparison of the operating systems implemented on the computer systems and workstations in the software development laboratory. Primary emphasis has been placed on the DEC MicroVMS operating system as implemented on the MicroVax II computer, with comparative analysis of the SUN UNIX system on the SUN 3/260 workstation computer, and to a limited extent, the IBM PC/AT microcomputer running PC-DOS. Some benchmark development and testing was also done for the Motorola MC68010 (VM03 system) before the system was taken from the laboratory. These systems were studied with the objective of determining their capability to support Space Station software development requirements, specifically for multi-tasking and real-time applications. The methodology utilized consisted of development, execution, and analysis of benchmark programs and test software, and the experimentation and analysis of specific features of the system or compilers in the study.

  14. Starbase Data Tables: An ASCII Relational Database for Unix

    NASA Astrophysics Data System (ADS)

    Roll, John

    2011-11-01

    Database management is an increasingly important part of astronomical data analysis. Astronomers need easy and convenient ways of storing, editing, filtering, and retrieving data about data. Commercial databases do not provide good solutions for many of the everyday and informal types of database access astronomers need. The Starbase database system with simple data file formatting rules and command line data operators has been created to answer this need. The system includes a complete set of relational and set operators, fast search/index and sorting operators, and many formatting and I/O operators. Special features are included to enhance the usefulness of the database when manipulating astronomical data. The software runs under UNIX, MSDOS and IRAF.
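The flavor of such a system, tables as plain ASCII with relational operators as small composable filters, can be sketched as follows (a toy illustration only; Starbase's real operators are Unix command-line programs, and the column names and values here are invented):

```python
# Toy ASCII relational tables: tab-separated text with a header row, plus
# selection and natural-join operators that pass tables through as lists
# of dicts, mimicking the pipe-together style of command-line operators.
def read_table(text):
    rows = [line.split("\t") for line in text.strip().splitlines()]
    header, data = rows[0], rows[1:]
    return [dict(zip(header, r)) for r in data]

def select(table, pred):                 # relational selection
    return [r for r in table if pred(r)]

def join(left, right, key):              # natural join on one column
    index = {r[key]: r for r in right}
    return [{**l, **index[l[key]]} for l in left if l[key] in index]

stars = "name\tra\nVega\t279.2\nSirius\t101.3"
mags  = "name\tvmag\nVega\t0.03\nSirius\t-1.46"

bright = select(join(read_table(stars), read_table(mags), "name"),
                lambda r: float(r["vmag"]) < 0.0)
print([r["name"] for r in bright])       # → ['Sirius']
```

Because the tables are plain text, they remain editable in any editor and composable with ordinary Unix tools, which is the design point the abstract emphasizes.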

  15. 40 CFR 63.3544 - How do I determine the emission capture system efficiency?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... the coating operation during the capture efficiency test run, kg. TVHi = Mass fraction of TVH in... the mass of liquid TVH in materials used in the coating operation to the mass of TVH emissions not... 40 CFR part 51. (2) Use Method 204A or 204F of appendix M to 40 CFR part 51 to determine the mass...

  16. The CHAT System: An OS/360 MVT Time-Sharing Subsystem for Displays and Teletype. Technical Progress Report.

    ERIC Educational Resources Information Center

    Schultz, Gary D.

    The design and operation of a time-sharing monitor are described. It runs under OS/360 MVT that supports multiple application program interaction with operators of CRT (cathode ray tube) display stations and of a teletype. Key design features discussed include: 1) an interface allowing application programs to be coded in either PL/I or assembler…

  17. runDM: Running couplings of Dark Matter to the Standard Model

    NASA Astrophysics Data System (ADS)

    D'Eramo, Francesco; Kavanagh, Bradley J.; Panci, Paolo

    2018-02-01

    runDM calculates the running of the couplings of Dark Matter (DM) to the Standard Model (SM) in simplified models with vector mediators. By specifying the mass of the mediator and the couplings of the mediator to SM fields at high energy, the code can calculate the couplings at low energy, taking into account the mixing of all dimension-6 operators. runDM can also extract the operator coefficients relevant for direct detection, namely low energy couplings to up, down and strange quarks and to protons and neutrons.
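The mechanism can be illustrated with a toy two-operator example (a sketch only: the anomalous-dimension matrix below is invented, and this is not runDM's API or the real Standard Model result). At leading log, coefficients evolve as dc/d ln(mu) = gamma^T c, so running from the mediator scale down to low energy mixes the operators via a matrix exponential.

```python
# Toy leading-log RG evolution of effective-operator coefficients:
# c(mu_low) = exp(gamma^T * ln(mu_low / mu_high)) @ c(mu_high).
import numpy as np

def run_couplings(c_high, gamma, mu_high, mu_low):
    """Evolve coefficients from mu_high down to mu_low via eigendecomposition."""
    t = np.log(mu_low / mu_high)
    vals, vecs = np.linalg.eig(gamma.T * t)
    exp_m = (vecs * np.exp(vals)) @ np.linalg.inv(vecs)   # matrix exponential
    return (exp_m @ c_high).real

gamma = np.array([[0.0, 0.02],
                  [0.02, 0.0]])    # made-up anomalous-dimension (mixing) matrix
c_high = np.array([1.0, 0.0])      # only one operator switched on at the high scale
c_low = run_couplings(c_high, gamma, mu_high=1000.0, mu_low=2.0)
```

Even though the second operator vanishes at the high scale, running generates a nonzero low-energy coefficient for it; accounting for exactly this kind of mixing across the dimension-6 operator basis is what runDM automates for direct-detection couplings.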

  18. The CMS Tier0 goes cloud and grid for LHC Run 2

    DOE PAGES

    Hufnagel, Dirk

    2015-12-23

    In 2015, CMS will embark on a new era of collecting LHC collisions at unprecedented rates and complexity. This will put a tremendous stress on our computing systems. Prompt Processing of the raw data by the Tier-0 infrastructure will no longer be constrained to CERN alone due to the significantly increased resource requirements. In LHC Run 2, we will need to operate it as a distributed system utilizing both the CERN Cloud-based Agile Infrastructure and a significant fraction of the CMS Tier-1 Grid resources. In another big change for LHC Run 2, we will process all data using the multi-threaded framework to deal with the increased event complexity and to ensure efficient use of the resources. Furthermore, this contribution will cover the evolution of the Tier-0 infrastructure and present scale testing results and experiences from the first data taking in 2015.

  19. The CMS Tier0 goes Cloud and Grid for LHC Run 2

    NASA Astrophysics Data System (ADS)

    Hufnagel, Dirk

    2015-12-01

    In 2015, CMS will embark on a new era of collecting LHC collisions at unprecedented rates and complexity. This will put a tremendous stress on our computing systems. Prompt Processing of the raw data by the Tier-0 infrastructure will no longer be constrained to CERN alone due to the significantly increased resource requirements. In LHC Run 2, we will need to operate it as a distributed system utilizing both the CERN Cloud-based Agile Infrastructure and a significant fraction of the CMS Tier-1 Grid resources. In another big change for LHC Run 2, we will process all data using the multi-threaded framework to deal with the increased event complexity and to ensure efficient use of the resources. This contribution will cover the evolution of the Tier-0 infrastructure and present scale testing results and experiences from the first data taking in 2015.

  20. Telerobot local-remote control architecture for space flight program applications

    NASA Technical Reports Server (NTRS)

    Zimmerman, Wayne; Backes, Paul; Steele, Robert; Long, Mark; Bon, Bruce; Beahan, John

    1993-01-01

    The JPL Supervisory Telerobotics (STELER) Laboratory has developed and demonstrated a unique local-remote robot control architecture which enables management of intermittent communication bus latencies and delays such as those expected for ground-remote operation of Space Station robotic systems via the Tracking and Data Relay Satellite System (TDRSS) communication platform. The current work at JPL in this area has focused on enhancing the technologies and transferring the control architecture to hardware and software environments which are more compatible with projected ground and space operational environments. At the local site, the operator updates the remote worksite model using stereo video and a model overlay/fitting algorithm which outputs the location and orientation of the object in free space. That information is relayed to the robot User Macro Interface (UMI) to enable programming of the robot control macros. This capability runs on a single Silicon Graphics Inc. machine. The operator can employ either manual teleoperation, shared control, or supervised autonomous control to manipulate the intended object. The remote site controller, called the Modular Telerobot Task Execution System (MOTES), runs in a multi-processor VME environment and performs the task sequencing, task execution, trajectory generation, closed loop force/torque control, task parameter monitoring, and reflex action. This paper describes the new STELER architecture implementation, and also documents the results of the recent autonomous docking task execution using the local site and MOTES.

  1. FROST - FREEDOM OPERATIONS SIMULATION TEST VERSION 1.0

    NASA Technical Reports Server (NTRS)

    Deshpande, G. K.

    1994-01-01

    The Space Station Freedom Information System processes and transmits data between the space station and the station controllers and payload operators on the ground. Components of the system include flight hardware, communications satellites, software and ground facilities. FROST simulates operation of the SSF Information System, tracking every data packet from generation to destination for both uplinks and downlinks. This program collects various statistics concerning the SSF Information System operation and provides reports of these at user-specified intervals. Additionally, FROST has graphical display capability to enhance interpretation of these statistics. FROST models each of the components of the SSF Information System as an object by which packets are generated, received, processed, transmitted, and/or dumped. The user must provide the information system design with specified parameters and inter-connections among objects. To aid this process, FROST supplies an example SSF Information System for simulation, but this example must be copied before it is changed and used for further simulation. Once specified, system architecture and parameters are put into the input file, named the Test Configuration Definition (TCD) file. Alternative system designs can then be simulated simply by editing the TCD file. Within this file the user can define new objects, alter object parameters, redefine paths, redefine generation rates and windows, and redefine object interconnections. At present, FROST does not model every feature of the SSF Information System, but it is capable of simulating many of the system's important functions. To generate data messages, which can come from any object, FROST defines "windows" to specify when, what kind, and how much of that data is generated. All messages are classified by priority as either (1) emergency, (2) quick look, (3) telemetry, or (4) payload data. These messages are processed by all objects according to priority.
That is, all priority 1 (emergency) messages are processed and transmitted before priority 2 messages, and so forth. FROST also allows for specification of "pipeline" or "direct" links. Pipeline links are used to broadcast at constant intervals, while direct links transmit messages only when packets are ready for transmission. FROST allows the user substantial flexibility to customize output for a simulation. Output consists of tables and graphs, as specified in the TCD file, to be generated at the specified interval. These tables may be generated at short intervals during the run to produce snapshots as simulation proceeds, or generated after the run to give a summary of the entire run. FROST is written in SIMSCRIPT II.5 (developed by CACI) for DEC VAX series computers running VMS. FROST was developed on a VAX 8700 and is intended to be run on large VAXes with at least 32Mb of memory. The main memory requirement for FROST is dependent on the number of processors used in the simulation and the event time. The standard distribution medium for this package is a 9-track 1600 BPI DEC VAX BACKUP Format Magnetic Tape. An executable is included on the tape in addition to the source code. FROST was developed in 1990 and is a copyrighted work with all copyright vested in NASA. DEC, VAX and VMS are registered trademarks of Digital Equipment Corporation. IBM PC is a trademark of International Business Machines. SIMSCRIPT II.5 is a trademark of CACI.
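    The strict priority ordering described above (all priority-1 messages before any priority-2 message, and so on) is the classic priority-queue pattern. A minimal sketch, not taken from FROST's SIMSCRIPT source; the tie-breaking counter is an assumption that keeps equal-priority messages in arrival order:

```python
import heapq
import itertools

# Priority-ordered message processing in the spirit of FROST's four classes:
# 1 = emergency, 2 = quick look, 3 = telemetry, 4 = payload data.
# A counter breaks ties so messages of equal priority keep arrival order.
_counter = itertools.count()

def send(queue, priority, payload):
    heapq.heappush(queue, (priority, next(_counter), payload))

def process_all(queue):
    """Drain the queue, always handling the highest-priority message first."""
    processed = []
    while queue:
        priority, _, payload = heapq.heappop(queue)
        processed.append((priority, payload))
    return processed

q = []
send(q, 3, "telemetry frame")
send(q, 1, "smoke detector alarm")
send(q, 4, "payload data block")
send(q, 1, "pressure drop alarm")
order = process_all(q)  # both priority-1 messages come out first, in arrival order
```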

  2. Solving Equations of Multibody Dynamics

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan; Lim, Christopher

    2007-01-01

    Darts++ is a computer program for solving the equations of motion of a multibody system or of a multibody model of a dynamic system. It is intended especially for use in dynamical simulations performed in designing and analyzing, and developing software for the control of, complex mechanical systems. Darts++ is based on the Spatial-Operator- Algebra formulation for multibody dynamics. This software reads a description of a multibody system from a model data file, then constructs and implements an efficient algorithm that solves the dynamical equations of the system. The efficiency and, hence, the computational speed is sufficient to make Darts++ suitable for use in realtime closed-loop simulations. Darts++ features an object-oriented software architecture that enables reconfiguration of system topology at run time; in contrast, in related prior software, system topology is fixed during initialization. Darts++ provides an interface to scripting languages, including Tcl and Python, that enable the user to configure and interact with simulation objects at run time.
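    The run-time topology reconfiguration that distinguishes Darts++ from its predecessors can be sketched in a few lines. This is not the Darts++ API; the class and method names are illustrative, and the degree-of-freedom count stands in for the dynamical quantities the real algorithm recomputes when the tree changes:

```python
class Body:
    """A rigid body in a tree-topology multibody model. Not the Darts++ API;
    a minimal sketch of the idea that topology can change at run time."""
    def __init__(self, name, dof=6):
        self.name, self.dof = name, dof
        self.children = []

    def attach(self, child):           # reconfigure topology at run time
        self.children.append(child)

    def detach(self, child):
        self.children.remove(child)

    def total_dof(self):               # recurse over the *current* topology
        return self.dof + sum(c.total_dof() for c in self.children)

base = Body("base", dof=0)             # fixed base
arm = Body("arm", dof=1)               # one revolute joint
tool = Body("tool", dof=1)
base.attach(arm)
arm.attach(tool)
dof_before = base.total_dof()          # 2 joint DOF in the chain
arm.detach(tool)                       # drop the tool mid-simulation
dof_after = base.total_dof()           # 1 joint DOF remains
```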

  3. The Met Office Coupled Atmosphere/Land/Ocean/Sea-Ice Data Assimilation System

    NASA Astrophysics Data System (ADS)

    Lea, Daniel; Mirouze, Isabelle; Martin, Matthew; Hines, Adrian; Guiavarch, Catherine; Shelly, Ann

    2014-05-01

    The Met Office has developed a weakly-coupled data assimilation (DA) system using the global coupled model HADGEM3 (Hadley Centre Global Environment Model, version 3). This model combines the atmospheric model UM (Unified Model) at 60 km horizontal resolution on 85 vertical levels, the ocean model NEMO (Nucleus for European Modeling of the Ocean) at 25 km (at the equator) horizontal resolution on 75 vertical levels, and the sea-ice model CICE at the same resolution as NEMO. The atmosphere and the ocean/sea-ice fields are coupled every 1-hour using the OASIS coupler. The coupled model is corrected using two separate 6-hour window data assimilation systems: a 4D-Var for the atmosphere with associated soil moisture content nudging and snow analysis schemes on the one hand, and a 3D-Var FGAT for the ocean and sea-ice on the other hand. The background information in the DA systems comes from a previous 6-hour forecast of the coupled model. To show the impact of coupled DA, one-month experiments have been carried out, including 1) a full atmosphere/land/ocean/sea-ice coupled DA run, 2) an atmosphere-only run forced by OSTIA SSTs and sea-ice with atmosphere and land DA, and 3) an ocean-only run forced by atmospheric fields from run 2 with ocean and sea-ice DA. In addition, 5-day forecast runs, started twice a day, have been produced from initial conditions generated by either run 1 or a combination of runs 2 and 3. The different results have been compared to each other and, whenever possible, to other references such as the Met Office atmosphere and ocean operational analyses or the OSTIA data. These all show the coupled DA system functioning well. Evidence of imbalances and initialisation shocks has also been looked for.

  4. Introducing the productive operating theatre programme in urology theatre suites.

    PubMed

    Ahmed, Kamran; Khan, Nuzhath; Anderson, Deirdre; Watkiss, Jonathan; Challacombe, Ben; Khan, Mohammed Shamim; Dasgupta, Prokar; Cahill, Declan

    2013-01-01

    The Productive Operating Theatre (TPOT) is a theatre improvement programme designed by the UK National Health Service. The aim of this study was to evaluate the implementation of TPOT in urology operating theatres and identify obstacles to running an ideal operating list. TPOT was introduced in two urology operating theatres in September 2010. A multidisciplinary team identified and audited obstacles to the running of an ideal operating list. A brief/debrief system was introduced and patient satisfaction was recorded via a structured questionnaire. The primary outcome measure was the effect of TPOT on start and overrun times. Start times: a 39-41% increase in operating lists starting on time from September 2010 to June 2011, involving 1,365 cases. Overrun times: declined by 832 min between March 2010 and March 2011. The cost of monthly overrun decreased from September 2010 to June 2011 by GBP 510-3,030. Patient experience: a high degree of satisfaction regarding level of care (77%), staff hygiene (71%) and information provided (72%), while negative comments regarding staff shortages and environment/facilities were recorded. TPOT has helped identify key obstacles and shown improvements in efficiency measures such as start/overrun times. Copyright © 2013 S. Karger AG, Basel.

  5. A Framework for Enterprise Operating Systems Based on Zachman Framework

    NASA Astrophysics Data System (ADS)

    Ostadzadeh, S. Shervin; Rahmani, Amir Masoud

    Nowadays, the Operating System (OS) isn't only the software that runs your computer. In the typical information-driven organization, the operating system is part of a much larger platform for applications and data that extends across the LAN, WAN and Internet. An OS cannot be an island unto itself; it must work with the rest of the enterprise. Enterprise-wide applications require an Enterprise Operating System (EOS). The adoption of enterprise operating systems has created a strong impetus for organizations to organize their information activities in a comprehensive way. In this respect, Enterprise Architecture (EA) has proven to be the leading option for development and maintenance of enterprise operating systems. EA clearly provides a thorough outline of the whole information system comprising an enterprise. To establish such an outline, a logical framework needs to be laid upon the entire information system. Zachman Framework (ZF) has been widely accepted as a standard scheme for identifying and organizing descriptive representations that have prominent roles in enterprise-wide system development. In this paper, we propose a framework based on ZF for enterprise operating systems. The presented framework helps developers to design and justify completely integrated business, IT systems, and operating systems, which results in improved project success rates.

  6. A Software Architecture for Adaptive Modular Sensing Systems

    PubMed Central

    Lyle, Andrew C.; Naish, Michael D.

    2010-01-01

    By combining a number of simple transducer modules, an arbitrarily complex sensing system may be produced to accommodate a wide range of applications. This work outlines a novel software architecture and knowledge representation scheme that has been developed to support this type of flexible and reconfigurable modular sensing system. Template algorithms are used to embed intelligence within each module. As modules are added or removed, the composite sensor is able to automatically determine its overall geometry and assume an appropriate collective identity. A virtual machine-based middleware layer runs on top of a real-time operating system with a pre-emptive kernel, enabling platform-independent template algorithms to be written once and run on any module, irrespective of its underlying hardware architecture. Applications that may benefit from easily reconfigurable modular sensing systems include flexible inspection, mobile robotics, surveillance, and space exploration. PMID:22163614
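    The "write once, run on any module" property of the template algorithms above rests on a common module interface that hides each transducer's hardware details. A minimal sketch of that pattern; the class and method names are illustrative, not taken from the paper's middleware:

```python
# Sketch of a platform-independent "template algorithm": the algorithm is
# written once against an abstract module interface, and each hardware
# variant supplies only its own raw-reading and scaling details.

class ModuleInterface:
    def read_raw(self):
        raise NotImplementedError
    def scale(self, raw):
        raise NotImplementedError

class ThermistorModule(ModuleInterface):
    def __init__(self, raw):
        self._raw = raw
    def read_raw(self):
        return self._raw             # stand-in for an ADC read
    def scale(self, raw):
        return raw * 0.1             # counts -> degrees C

class InfraredModule(ModuleInterface):
    def __init__(self, raw):
        self._raw = raw
    def read_raw(self):
        return self._raw
    def scale(self, raw):
        return raw * 0.25 - 10.0     # different calibration, same interface

def template_read(module):
    """Runs unchanged on any module, irrespective of its hardware."""
    return module.scale(module.read_raw())

readings = [template_read(m) for m in (ThermistorModule(215), InfraredModule(120))]
```

    In the paper this indirection is provided by a virtual machine layer on a real-time OS rather than by Python classes, but the dispatch idea is the same.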

  7. A software architecture for adaptive modular sensing systems.

    PubMed

    Lyle, Andrew C; Naish, Michael D

    2010-01-01

    By combining a number of simple transducer modules, an arbitrarily complex sensing system may be produced to accommodate a wide range of applications. This work outlines a novel software architecture and knowledge representation scheme that has been developed to support this type of flexible and reconfigurable modular sensing system. Template algorithms are used to embed intelligence within each module. As modules are added or removed, the composite sensor is able to automatically determine its overall geometry and assume an appropriate collective identity. A virtual machine-based middleware layer runs on top of a real-time operating system with a pre-emptive kernel, enabling platform-independent template algorithms to be written once and run on any module, irrespective of its underlying hardware architecture. Applications that may benefit from easily reconfigurable modular sensing systems include flexible inspection, mobile robotics, surveillance, and space exploration.

  8. Missed deadline notification in best-effort schedulers

    NASA Astrophysics Data System (ADS)

    Banachowski, Scott A.; Wu, Joel; Brandt, Scott A.

    2003-12-01

    It is common to run multimedia and other periodic, soft real-time applications on general-purpose computer systems. These systems use best-effort scheduling algorithms that cannot guarantee applications will receive responsive scheduling to meet deadline or timing requirements. We present a simple mechanism called Missed Deadline Notification (MDN) that allows applications to notify the system when they do not receive their desired level of responsiveness. Consisting of a single system call with no arguments, this simple interface allows the operating system to provide better support for soft real-time applications without any a priori information about their timing or resource needs. We implemented MDN in three different schedulers: Linux, BEST, and BeRate. We describe these implementations and their performance when running real-time applications and discuss policies to prevent applications from abusing MDN to gain extra resources.
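    The MDN idea can be sketched as a discrete-time simulation: a periodic task that overruns its deadline issues a no-argument notification, and the scheduler reacts without any prior timing information about the task. The function and class names below are illustrative assumptions, and the priority boost is just one possible policy, not the specific response implemented in Linux, BEST, or BeRate:

```python
class Scheduler:
    def __init__(self):
        self.priority = {}

    def notify_missed_deadline(self, task):
        # Nothing beyond the calling task's identity is passed; the
        # scheduler needs no a priori timing or resource information.
        self.priority[task] = self.priority.get(task, 0) + 1

def run_periodic(scheduler, task, period, runtimes):
    """Simulate one periodic task: each period it needs some runtime; if
    the work exceeds the period, the deadline was missed and MDN fires."""
    misses = 0
    for work in runtimes:
        if work > period:
            scheduler.notify_missed_deadline(task)
            misses += 1
    return misses

sched = Scheduler()
misses = run_periodic(sched, "decoder", period=10, runtimes=[8, 9, 12, 7, 15])
boost = sched.priority["decoder"]   # raised once per missed deadline
```

    The abuse-prevention policies mentioned in the abstract would cap or decay such boosts so a task cannot gain resources by notifying spuriously.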

  9. Using the Model Coupling Toolkit to couple earth system models

    USGS Publications Warehouse

    Warner, J.C.; Perlin, N.; Skyllingstad, E.D.

    2008-01-01

    Continued advances in computational resources are providing the opportunity to operate more sophisticated numerical models. Additionally, there is an increasing demand for multidisciplinary studies that include interactions between different physical processes. Therefore there is a strong desire to develop coupled modeling systems that utilize existing models and allow efficient data exchange and model control. The basic system would entail model "1" running on "M" processors and model "2" running on "N" processors, with efficient exchange of model fields at predetermined synchronization intervals. Here we demonstrate two coupled systems: the coupling of the ocean circulation model Regional Ocean Modeling System (ROMS) to the surface wave model Simulating WAves Nearshore (SWAN), and the coupling of ROMS to the atmospheric model Coupled Ocean Atmosphere Prediction System (COAMPS). Both coupled systems use the Model Coupling Toolkit (MCT) as a mechanism for operation control and inter-model distributed memory transfer of model variables. In this paper we describe requirements and other options for model coupling, explain the MCT library, ROMS, SWAN and COAMPS models, methods for grid decomposition and sparse matrix interpolation, and provide an example from each coupled system. Methods presented in this paper are clearly applicable for coupling of other types of models. ?? 2008 Elsevier Ltd. All rights reserved.
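    The coupling pattern described above (model "1" and model "2" stepping independently, exchanging fields at predetermined synchronization intervals) reduces to a simple loop. A toy serial sketch, with illustrative names; the real MCT performs distributed-memory transfers between processor pools rather than in-process method calls:

```python
class Ocean:
    def __init__(self):
        self.sst = 15.0
    def step(self):
        self.sst += 0.1          # stand-in for one ocean time step

class Atmosphere:
    def __init__(self):
        self.surface_forcing = None
    def step(self):
        pass                     # internal dynamics omitted
    def receive_sst(self, sst):
        self.surface_forcing = sst

def run_coupled(ocean, atmos, nsteps, sync_every):
    for step in range(1, nsteps + 1):
        ocean.step()
        atmos.step()
        if step % sync_every == 0:     # predetermined synchronization interval
            atmos.receive_sst(ocean.sst)

ocean, atmos = Ocean(), Atmosphere()
run_coupled(ocean, atmos, nsteps=10, sync_every=5)
```

    Grid decomposition and sparse-matrix interpolation enter when the two models live on different grids and processor counts, which is precisely what MCT's transfer machinery handles.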

  10. A keyboard control method for loop measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Z.W.

    1994-12-31

    This paper describes a keyboard control mode based on the DEC VAX computer. A method was developed to detect VAX keyboard codes while a program is running. During loop measurement or multitask operation, a keyboard code can be distinguished to stop the current operation or to transfer to another operation while preserving previous information. Using this mode, the author successfully applied one-key control of loop measurements to test the Dual Input Memory module, which is used in a rearranged Energy Trigger system for LEP 8-bunch operation.

  11. 76 FR 63858 - Drawbridge Operation Regulation; Trent River, New Bern, NC

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-14

    ... River Bridge Runs. This deviation allows the bridge to remain in the closed position to ensure safe..., Docket Operations, telephone 202-366-9826. SUPPLEMENTARY INFORMATION: The Neuse River Bridge Run... River, mile 0.0, at New Bern, NC. The route of the three Neuse River Bridge Run races cross the bridge...

  12. Altitude Wind Tunnel Operating at Night

    NASA Image and Video Library

    1945-04-21

    The Altitude Wind Tunnel (AWT) during one of its overnight runs at the National Advisory Committee for Aeronautics (NACA) Aircraft Engine Research Laboratory in Cleveland, Ohio. The AWT was run during night hours so that its massive power loads were handled when regional electric demands were lowest. At the time the AWT was among the most complex wind tunnels ever designed. In order to simulate conditions at high altitudes, NACA engineers designed innovative new systems that required tremendous amounts of electricity. The NACA had an agreement with the local electric company that it would run its larger facilities overnight when local demand was at its lowest. In return the utility discounted its rates for the NACA during those hours. The AWT could produce wind speeds up to 500 miles per hour through its 20-foot-diameter test section at the standard operating altitude of 30,000 feet. The airflow was created by a large fan that was driven by an 18,000-horsepower General Electric induction motor. The altitude simulation was accomplished by large exhauster and refrigeration systems. The cold temperatures were created by 14 Carrier compressors and the thin atmosphere by four 1750-horsepower exhausters. The first and second shifts usually set up and broke down the test articles, while the third shift ran the actual tests. Engineers would often have to work all day, then operate the tunnel overnight, and analyze the data the next day. The night crew usually briefed the dayshift on the tests during morning staff meetings.

  13. 14 CFR Appendix K to Part 25 - Extended Operations (ETOPS)

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... that is time-limited. K25.1.4Propulsion systems. (a) Fuel system design. Fuel necessary to complete an... does not apply to airplanes with a required flight engineer. (b) APU design. If an APU is needed to..., whichever is lower, and run for the remainder of any flight . (c) Engine oil tank design. The engine oil...

  14. 14 CFR Appendix K to Part 25 - Extended Operations (ETOPS)

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... that is time-limited. K25.1.4Propulsion systems. (a) Fuel system design. Fuel necessary to complete an... does not apply to airplanes with a required flight engineer. (b) APU design. If an APU is needed to..., whichever is lower, and run for the remainder of any flight . (c) Engine oil tank design. The engine oil...

  15. 14 CFR Appendix K to Part 25 - Extended Operations (ETOPS)

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... that is time-limited. K25.1.4Propulsion systems. (a) Fuel system design. Fuel necessary to complete an... does not apply to airplanes with a required flight engineer. (b) APU design. If an APU is needed to..., whichever is lower, and run for the remainder of any flight . (c) Engine oil tank design. The engine oil...

  16. 14 CFR Appendix K to Part 25 - Extended Operations (ETOPS)

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... that is time-limited. K25.1.4Propulsion systems. (a) Fuel system design. Fuel necessary to complete an... does not apply to airplanes with a required flight engineer. (b) APU design. If an APU is needed to..., whichever is lower, and run for the remainder of any flight . (c) Engine oil tank design. The engine oil...

  17. A Mobile Computing Solution for Collecting Functional Analysis Data on a Pocket PC

    ERIC Educational Resources Information Center

    Jackson, James; Dixon, Mark R.

    2007-01-01

    The present paper provides a task analysis for creating a computerized data system using a Pocket PC and Microsoft Visual Basic. With Visual Basic software and any handheld device running the Windows Mobile operating system, this task analysis will allow behavior analysts to program and customize their own functional analysis data-collection…

  18. Batteries for autonomous renewable energy systems

    NASA Astrophysics Data System (ADS)

    Sheridan, Norman R.

    Now that the Coconut Island plant has been running successfully for three years, it is appropriate to review the design decisions that were made with regard to the battery and to consider how these might be changed for future systems. The following aspects are discussed: type, package, energy storage, voltage, parallel operation, installation, charging, watering, life and quality assurance.

  19. A Simulation Program for Dynamic Infrared (IR) Spectra

    ERIC Educational Resources Information Center

    Zoerb, Matthew C.; Harris, Charles B.

    2013-01-01

    A free program for the simulation of dynamic infrared (IR) spectra is presented. The program simulates the spectrum of two exchanging IR peaks based on simple input parameters. Larger systems can be simulated with minor modifications. The program is available as an executable program for PCs or can be run in MATLAB on any operating system. Source…
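    The core of such a simulation is the two-site exchange lineshape, which can be written down directly from a Bloch-McConnell-type treatment. The sketch below is built from that standard formula, not taken from the paper's program; peak positions, linewidth and rate values are illustrative. As the exchange rate k grows, the two peaks broaden and coalesce toward their mean frequency:

```python
def lineshape(w, wa, wb, k, r=1.0, pa=0.5, pb=0.5):
    """Intensity at frequency w for two exchanging peaks at wa and wb,
    linewidth parameter r, symmetric exchange rate k, populations pa, pb.
    I(w) = Re[(1,1) . M^-1 . p] with
    M = [[1j*(wa-w)+r+k, -k], [-k, 1j*(wb-w)+r+k]]."""
    a = 1j * (wa - w) + r + k
    d = 1j * (wb - w) + r + k
    b = c = -k
    det = a * d - b * c
    x = (d * pa - b * pb) / det      # first component of M^-1 p
    y = (-c * pa + a * pb) / det     # second component of M^-1 p
    return (x + y).real

grid = [i * 0.1 - 20.0 for i in range(401)]   # frequency offsets -20 .. 20
slow = max(grid, key=lambda w: lineshape(w, -10.0, 10.0, k=0.01))
fast = max(grid, key=lambda w: lineshape(w, -10.0, 10.0, k=1000.0))
# slow exchange: the maximum sits near one of the two site frequencies
# fast exchange: a single coalesced peak near the mean frequency (0)
```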

  20. Implementing a Parallel Image Edge Detection Algorithm Based on the Otsu-Canny Operator on the Hadoop Platform

    PubMed Central

    Wang, Min; Tian, Yun

    2018-01-01

    The Canny operator is widely used to detect edges in images. However, as the size of the image dataset increases, the edge detection performance of the Canny operator decreases and its runtime becomes excessive. To improve the runtime and edge detection performance of the Canny operator, in this paper, we propose a parallel design and implementation for an Otsu-optimized Canny operator using a MapReduce parallel programming model that runs on the Hadoop platform. The Otsu algorithm is used to optimize the Canny operator's dual threshold and improve the edge detection performance, while the MapReduce parallel programming model facilitates parallel processing for the Canny operator to solve the processing speed and communication cost problems that occur when the Canny edge detection algorithm is applied to big data. For the experiments, we constructed datasets of different scales from the Pascal VOC2012 image database. The proposed parallel Otsu-Canny edge detection algorithm performs better than other traditional edge detection algorithms. The parallel approach reduced the running time by approximately 67.2% on a Hadoop cluster architecture consisting of 5 nodes with a dataset of 60,000 images. Overall, our approach speeds up processing by approximately 3.4 times on large-scale datasets, demonstrating the clear advantage of the method. The proposed algorithm thus demonstrates both better edge detection performance and improved time performance. PMID:29861711
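    Otsu's method, the thresholding step being optimized above, picks the gray level that maximizes between-class variance over the image histogram. A pure-Python sketch of that step alone (the MapReduce distribution across Hadoop nodes is not shown, and the sample pixel values are illustrative):

```python
def otsu_threshold(pixels, levels=256):
    """Return the gray level maximizing between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(levels))
    sum_bg = 0.0
    w_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]                      # background pixel count
        if w_bg == 0:
            continue
        w_fg = total - w_bg                  # foreground pixel count
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# A strongly bimodal "image": the threshold lands between the two modes.
pixels = [20] * 400 + [25] * 100 + [200] * 300 + [210] * 200
t = otsu_threshold(pixels)
```

    In the Otsu-Canny combination, this automatically chosen level replaces the manually tuned dual thresholds of the classical Canny hysteresis step.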

  1. Space shuttle main engine definition (phase B). Volume 2: Avionics. [for space shuttle

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The advent of the space shuttle engine, with its requirements for high specific impulse, long life, and low cost, has dictated a combustion cycle and a closed-loop control system to allow the engine components to run close to operating limits. These performance requirements, combined with the necessity for low operational costs, have placed new demands on rocket engine control, system checkout, and diagnosis technology. Based on considerations of precision, environment, and compatibility with vehicle interface commands, an electronic control makes available many functions that logically provide the information required for engine system checkout and diagnosis.

  2. Processable Data Making in the Remote Server Sent by Android Phone as a GIS Data Collecting Tool

    NASA Astrophysics Data System (ADS)

    Karaagac, Abdullah; Bostancı, Bulent

    2016-04-01

    Mobile technologies are improving and getting cheaper every day. Not only have smart phones themselves improved greatly, but new types of mobile applications and sensors ship with them. Map and navigation applications are among the most popular of these. Most of them use location services including GNSS, Wi-Fi, cellular data and beacon services. Although the coordinate precision is not very high, it is adequate for many applications. Android is a mobile operating system based on the Linux kernel. It is compatible with various mobile devices such as smart phones, tablets, smart TVs, and wearable technologies. Android offers broad capability for application development through open-source libraries and device sensors such as the gyroscope and GNSS. Android Studio is the most popular integrated development environment (IDE) for Android devices, developed mainly by Google. It was announced on May 16, 2013 at the Google I/O conference. Android Studio is built upon the Gradle build architecture and is written in Java. SQLite is a relational database management system in widespread use on mobile devices. It is implemented as a C programming library and is mostly used by embedding it into an application. It supports many operating systems, including Android. Remote servers come in several forms, from highly complex to simple. For this project we use an open-source quad-core single-board computer, the Raspberry Pi 2. This device includes a 900 MHz ARMv7-compatible quad-core CPU, a VideoCore IV GPU and 1 GB of RAM. Although the Raspberry Pi 2's main operating system is Raspbian, we use Debian; both are Linux-based operating systems. The Raspberry Pi supports many programming languages, though some are optimized for the device: Python, Java, C, C++, Ruby, Perl and Squeak Smalltalk.
In this paper, a mobile application is developed to send coordinate and string data to a SQL database embedded in a remote server. The application runs on a mobile phone running the Android operating system. It obtains location information from GNSS and cellular data; the user enters the remaining information manually. Clicking a button sends this information to the remote server, which runs SQLite. All of this information can then be converted to other representations; for example, coordinates could be converted from WGS 84 to ITRF.
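    The server-side half of this pipeline (the Raspberry Pi storing incoming coordinate and string data in SQLite) can be sketched with Python's standard sqlite3 module. The table and column names are illustrative assumptions, not those used in the paper, and the network-receiving layer is omitted:

```python
import sqlite3

def open_db(path=":memory:"):
    """Open (or create) the SQLite database holding collected GIS points."""
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS points (
               id INTEGER PRIMARY KEY AUTOINCREMENT,
               lat REAL NOT NULL,
               lon REAL NOT NULL,
               note TEXT)"""
    )
    return conn

def store_point(conn, lat, lon, note=""):
    """Insert one coordinate record as sent by the Android client."""
    conn.execute("INSERT INTO points (lat, lon, note) VALUES (?, ?, ?)",
                 (lat, lon, note))
    conn.commit()

conn = open_db()
store_point(conn, 38.7312, 35.4787, "test point")   # WGS 84 degrees
rows = conn.execute("SELECT lat, lon, note FROM points").fetchall()
```

    Conversion to another frame such as ITRF would then be a post-processing step over the stored rows.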

  3. HIPAA-compliant automatic monitoring system for RIS-integrated PACS operation

    NASA Astrophysics Data System (ADS)

    Jin, Jin; Zhang, Jianguo; Chen, Xiaomeng; Sun, Jianyong; Yang, Yuanyuan; Liang, Chenwen; Feng, Jie; Sheng, Liwei; Huang, H. K.

    2006-03-01

    As a governmental regulation, the Health Insurance Portability and Accountability Act (HIPAA) was issued to protect the privacy of health information that identifies individuals who are living or deceased. HIPAA requires security services supporting implementation features: Access control; Audit controls; Authorization control; Data authentication; and Entity authentication. Among the controls proposed in the HIPAA Security Standards, the focus here is on audit trails. Audit trails can be used for surveillance purposes, to detect when interesting events might be happening that warrant further investigation. Or they can be used forensically, after the detection of a security breach, to determine what went wrong and who or what was at fault. In order to provide security control services and to achieve high and continuous availability, we design the HIPAA-Compliant Automatic Monitoring System for RIS-Integrated PACS operation. The system consists of two parts: monitoring agents running in each PACS component computer and a Monitor Server running in a remote computer. Monitoring agents are deployed on all computer nodes in the RIS-Integrated PACS system to collect the audit trail messages defined by Supplement 95 of the DICOM standard: Audit Trail Messages. Then the Monitor Server gathers all audit messages and processes them to provide security information at three levels: system resources, PACS/RIS applications, and users/patients data accessing. The RIS-Integrated PACS managers can then monitor and control the entire RIS-Integrated PACS operation through a web service provided by the Monitor Server. This paper presents the design of a HIPAA-compliant automatic monitoring system for RIS-Integrated PACS Operation, and gives preliminary results obtained by running this monitoring system on a clinical RIS-integrated PACS.
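    The Monitor Server's three-level reporting can be sketched as a simple classification pass over collected audit events. The event fields and level names below are illustrative assumptions, not the message schema of DICOM Supplement 95:

```python
# Sketch of the monitor's three-level view of collected audit-trail events:
# system resources, PACS/RIS applications, and user/patient data access.

LEVELS = ("system", "application", "data_access")

def classify(event):
    """Map a raw audit event to one of the three report levels."""
    kind = event.get("kind")
    if kind in ("cpu", "disk", "network"):
        return "system"
    if kind in ("app_start", "app_error", "dicom_store"):
        return "application"
    if kind in ("patient_query", "image_view"):
        return "data_access"
    return "application"      # default bucket for unrecognized events

def summarize(events):
    counts = {level: 0 for level in LEVELS}
    for e in events:
        counts[classify(e)] += 1
    return counts

audit_log = [
    {"kind": "disk", "host": "pacs1"},
    {"kind": "patient_query", "user": "dr_smith"},
    {"kind": "image_view", "user": "dr_smith"},
    {"kind": "app_error", "host": "ris2"},
]
report = summarize(audit_log)
```

    A real deployment would additionally timestamp, authenticate, and persist each event so the trail remains usable forensically after a breach.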

  4. The Impact of Conflicting Spatial Representations in Airborne Unmanned Aerial System Sensor Control

    DTIC Science & Technology

    2016-02-01

    The impact of conflicting spatial representations in airborne unmanned aerial system sensor control. Joseph W Geeseman, James E Patrey, Caroline Davy, Katherine Peditto, & Christine Zernickow ... unmanned aerial system (UAS) simulation while riding in the fuselage of an airborne Lockheed P-3 Orion. The P-3 flew a flight profile of intermittent ascending

  5. Validation of Mission Plans Through Simulation

    NASA Astrophysics Data System (ADS)

    St-Pierre, J.; Melanson, P.; Brunet, C.; Crabtree, D.

    2002-01-01

    The purpose of a spacecraft mission planning system is to automatically generate safe and optimized mission plans for a single spacecraft, or for several functioning in unison. The system verifies user input syntax, conformance to commanding constraints, absence of duty-cycle violations, timing conflicts, state conflicts, etc. Present-day constraint-based systems with state-based predictive models use verification rules derived from expert knowledge. A familiar solution found in Mission Operations Centers is to complement the planning system with a high-fidelity spacecraft simulator. Often a dedicated workstation, the simulator is frequently used for operator training and procedure validation, and may be interfaced to actual control stations with command and telemetry links. While there are distinct advantages to having a planning system offer realistic operator training using the actual flight control console, physical verification of data transfer across layers, and procedure validation, experience has revealed some drawbacks and inefficiencies in ground segment operations. With these considerations, two simulation-based mission plan validation projects are under way at the Canadian Space Agency (CSA): RVMP and ViSION. The tools proposed in these projects will automatically run scenarios and provide execution reports to operations planning personnel, prior to actual command upload. This can provide an important safeguard against system or human errors that can only be detected with high-fidelity, interdependent spacecraft models running concurrently. The core element common to these projects is a spacecraft simulator, built with off-the-shelf components such as CAE's Real-Time Object-Based Simulation Environment (ROSE) technology, MathWorks' MATLAB/Simulink, and Analytical Graphics' Satellite Tool Kit (STK).
To complement these tools, additional components were developed, such as an emulated Spacecraft Test and Operations Language (STOL) interpreter and CCSDS TM/TC encoders and decoders. This paper discusses the use of simulation in the context of space mission planning, describes the projects under way and proposes additional venues of investigation and development.

  6. Analysis of the economics of photovoltaic-diesel-battery energy systems for remote applications

    NASA Technical Reports Server (NTRS)

    Brainard, W. A.

    1983-01-01

    Computer simulations were conducted to analyze the performance and operating cost of a photovoltaic energy source combined with a diesel generator system and battery storage. The simulations were based on the load demand profiles used for the design of an all-photovoltaic energy system installed in the remote Papago Indian Village of Schuchuli, Arizona. Twenty-year simulations were run using solar insolation data from Phoenix SOLMET tapes. Total energy produced, energy consumed, and operation and maintenance costs were calculated. The life-cycle and levelized energy costs were determined for a variety of system configurations (i.e., varying amounts of photovoltaic array and battery storage).
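
    The levelized-cost idea in the abstract can be sketched as a present-value calculation: discount all costs and all delivered energy over the 20-year life and take their ratio. The cost figures and discount rate below are hypothetical inputs, not the study's assumptions.

```python
def levelized_energy_cost(capital_cost, annual_om_cost, annual_energy_kwh,
                          years=20, discount_rate=0.08):
    """Levelized cost: present value of all costs divided by present
    value of all energy delivered over the system life (simplified;
    the study's exact economic assumptions are not reproduced here)."""
    pv_costs = capital_cost + sum(
        annual_om_cost / (1.0 + discount_rate) ** t for t in range(1, years + 1))
    pv_energy = sum(
        annual_energy_kwh / (1.0 + discount_rate) ** t for t in range(1, years + 1))
    return pv_costs / pv_energy   # $/kWh

# Hypothetical PV-diesel-battery configuration:
cost_per_kwh = levelized_energy_cost(100000.0, 2000.0, 50000.0)
```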

  7. Application of inexpensive, low-cost, low-bandwidth silhouette profiling UGS systems to current remote sensing operations

    NASA Astrophysics Data System (ADS)

    Haskovic, Emir Y.; Walsh, Sterling; Cloud, Glenn; Winkelman, Rick; Jia, Yingqing; Vishnyakov, Sergey; Jin, Feng

    2013-05-01

    UGS with low cost, power, and bandwidth requirements can be used to fill the growing need for surveillance in remote environments. In particular, linear and 2D thermal sensor systems can run for up to months at a time, and their deployment can be scaled to suit the size of the mission. Thermal silhouette profilers like Brimrose's SPOT system reduce power and bandwidth requirements by performing elementary classification and transmitting only binary data using optimized compression methods. These systems satisfy the demands of an increasing number of surveillance operations where reduced bandwidth and power consumption are mission critical.

  8. Pushing HTCondor and glideinWMS to 200K+ Jobs in a Global Pool for CMS before Run 2

    NASA Astrophysics Data System (ADS)

    Balcas, J.; Belforte, S.; Bockelman, B.; Gutsche, O.; Khan, F.; Larson, K.; Letts, J.; Mascheroni, M.; Mason, D.; McCrea, A.; Saiz-Santos, M.; Sfiligoi, I.

    2015-12-01

    The CMS experiment at the LHC relies on HTCondor and glideinWMS as its primary batch and pilot-based Grid provisioning systems. So far we have been running several independent resource pools, but we are working on unifying them all to reduce the operational load and more effectively share resources between the various activities in CMS. The major challenge of this unification is scale. The combined pool size is expected to reach 200K job slots, which is significantly bigger than any other multi-user HTCondor-based system currently in production. To get there, we have studied scaling limitations in our existing pools, the biggest of which tops out at about 70K slots, providing valuable feedback to the development communities, who have responded by delivering improvements that have helped us reach higher and higher scales with more stability. We have also worked on improving the organization and support model for this critical service during Run 2 of the LHC. This contribution presents the results of the scale testing and experiences from the first months of running the Global Pool.

  9. Front Range commuter bus study. Phase 2 : final report

    DOT National Transportation Integrated Search

    2003-10-01

    The goal of Front Range Commuter Bus service would be to provide a commuter bus service that would operate seamlessly with local transit systems and would be run through a partnership with each of the cities, CDOT, RTD and participating private provi...

  10. Seamless transitions from early prototypes to mature operational software - A technology that enables the process for planning and scheduling applications

    NASA Technical Reports Server (NTRS)

    Hornstein, Rhoda S.; Wunderlich, Dana A.; Willoughby, John K.

    1992-01-01

    New and innovative software technology is presented that provides a cost effective bridge for smoothly transitioning prototype software, in the field of planning and scheduling, into an operational environment. Specifically, this technology mixes the flexibility and human design efficiency of dynamic data typing with the rigor and run-time efficiencies of static data typing. This new technology provides a very valuable tool for conducting the extensive, up-front system prototyping that leads to specifying the correct system and producing a reliable, efficient version that will be operationally effective and will be accepted by the intended users.

  11. Effect of cycle run time of backwash and relaxation on membrane fouling removal in submerged membrane bioreactor treating sewage at higher flux.

    PubMed

    Tabraiz, Shamas; Haydar, Sajjad; Sallis, Paul; Nasreen, Sadia; Mahmood, Qaisar; Awais, Muhammad; Acharya, Kishor

    2017-08-01

    Intermittent backwashing and relaxation are mandatory for the effective operation of a membrane bioreactor (MBR). The objective of the current study was to evaluate the effects of run-relaxation and run-backwash cycle times on fouling rates, and to compare the effects of backwashing and relaxation on membrane fouling behavior in a high-rate submerged MBR. The study was carried out on a laboratory-scale MBR treating sewage at high flux (30 L/m2·h). The MBR was operated under three relaxation scenarios, keeping the ratio of run time to relaxation time constant, and likewise under three backwashing scenarios, keeping the ratio of run time to backwashing time constant. The results revealed that providing relaxation or backwashing at short intervals prolonged MBR operation by reducing fouling rates. Cake and pore fouling rates in the backwashing scenarios were far lower than in the relaxation scenarios, showing backwashing to be the better option. The operation time of the backwashing scenario (lowest cycle time) was 64.6% and 21.1% longer than that of the continuous scenario and the relaxation scenario (lowest cycle time), respectively. Increasing the cycle time increased removal efficiencies only insignificantly, in both the relaxation and backwashing scenarios.

  12. SPI/U3.2. Security Profile Inspector for UNIX Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartoletti, A.

    1994-08-01

    SPI/U3.2 consists of five tools used to assess and report the security posture of computers running the UNIX operating system. The tools are: Access Control Test: a rule-based system which identifies sequential dependencies in UNIX access controls. Binary Authentication Tool: evaluates the release status of system binaries by comparing a crypto-checksum to provided table entries. Change Detection Tool: maintains and applies a snapshot of critical system files and attributes for purposes of change detection. Configuration Query Language: accepts CQL-based scripts (provided) to evaluate queries over the status of system files, configuration of services, and many other elements of UNIX system security. Password Security Inspector: tests for weak or aged passwords. The tools are packaged with a forms-based user interface providing on-line context-sensitive help, job scheduling, parameter management, and output report management utilities. Tools may be run independently of the UI.
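
    The Change Detection Tool's approach, per the description above, amounts to snapshotting crypto-checksums of critical files and diffing snapshots later. A minimal sketch of that idea; SHA-256 stands in for whatever checksum the SPI tools actually used.

```python
import hashlib

def snapshot(paths):
    """Record a crypto-checksum for each critical file (the idea behind
    SPI's Change Detection Tool; details here are illustrative)."""
    table = {}
    for path in paths:
        with open(path, "rb") as f:
            table[path] = hashlib.sha256(f.read()).hexdigest()
    return table

def detect_changes(old_snapshot, new_snapshot):
    """Return the files whose contents changed (or vanished) since the
    old snapshot was taken."""
    return [p for p in old_snapshot if new_snapshot.get(p) != old_snapshot[p]]
```

A nightly job would store the snapshot somewhere tamper-resistant and report any non-empty diff.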

  13. Modeling a maintenance simulation of the geosynchronous platform

    NASA Technical Reports Server (NTRS)

    Kleiner, A. F., Jr.

    1980-01-01

    A modeling technique used to conduct a simulation study comparing various maintenance routines for a space platform is discussed. A system model is described and illustrated, the basic concepts of a simulation pass are detailed, and sections on failures and maintenance are included. The operation of the system across time is best modeled by a discrete-event approach with two basic events: failure and maintenance of the system. Each overall simulation run consists of introducing a particular model of the physical system, together with a maintenance policy, demand function, and mission lifetime. The system is then run through many passes, each pass corresponding to one mission, and the model is re-initialized before each pass. Statistics are compiled at the end of each pass, and after the last pass a report is printed. Items of interest typically include the time to first maintenance, total number of maintenance trips for each pass, average capability of the system, etc.
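
    The two-event discrete scheme described above can be sketched as follows. The failure distribution, repair time, and mission length are illustrative assumptions, not values from the study.

```python
import random

def run_pass(mission_length, mean_time_to_failure, repair_time):
    """One simulation pass: alternate failure and maintenance events
    until the mission lifetime is reached (simplified two-event model)."""
    t, trips, first_maintenance = 0.0, 0, None
    while True:
        t += random.expovariate(1.0 / mean_time_to_failure)  # next failure
        if t >= mission_length:
            break
        trips += 1                       # dispatch a maintenance trip
        if first_maintenance is None:
            first_maintenance = t        # statistic: time to first maintenance
        t += repair_time                 # system is down during maintenance
    return trips, first_maintenance

# Re-initialize and run many passes, compiling statistics at the end.
random.seed(1)
results = [run_pass(8760.0, 2000.0, 100.0) for _ in range(1000)]
avg_trips = sum(r[0] for r in results) / len(results)
```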

  14. ARCAS (ACACIA Regional Climate-data Access System) -- a Web Access System for Climate Model Data Access, Visualization and Comparison

    NASA Astrophysics Data System (ADS)

    Hakkarinen, C.; Brown, D.; Callahan, J.; hankin, S.; de Koningh, M.; Middleton-Link, D.; Wigley, T.

    2001-05-01

    A Web-based access system to climate model output data sets for intercomparison and analysis has been produced, using the NOAA-PMEL-developed Live Access Server software as the host server and Ferret as the data serving and visualization engine. Called ARCAS ("ACACIA Regional Climate-data Access System"), and publicly accessible at http://dataserver.ucar.edu/arcas, the site currently serves climate model outputs from runs of the NCAR Climate System Model for the 21st century, for Business as Usual and Stabilization of Greenhouse Gas Emission scenarios. Users can select, download, and graphically display single variables or comparisons of two variables from either or both of the CSM model runs, averaged at monthly, seasonal, or annual time resolutions. The time length of the averaging period, and the geographical domain for download and display, are fully selectable by the user. A variety of arithmetic operations on the data variables can be computed "on-the-fly", as defined by the user. Expansions of the user-selectable options for defining analysis options, and for accessing other DODS-compatible ("Distributed Oceanographic Data System-compatible") data sets residing at locations other than the NCAR hardware server on which ARCAS operates, are planned for this year. These expansions are designed to allow users quick and easy web-based access to the largest possible selection of climate model output data sets available throughout the world.

  15. PanDA Pilot Submission using Condor-G: Experience and Improvements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao X.; Hover John; Wlodek Tomasz

    2011-01-01

    PanDA (Production and Distributed Analysis) is the workload management system of the ATLAS experiment, used to run managed production and user analysis jobs on the grid. As a late-binding, pilot-based system, the maintenance of a smooth and steady stream of pilot jobs to all grid sites is critical for PanDA operation. The ATLAS Computing Facility (ACF) at BNL, as the ATLAS Tier1 center in the US, operates the pilot submission systems for the US. This is done using the PanDA 'AutoPilot' scheduler component which submits pilot jobs via Condor-G, a grid job scheduling system developed at the University of Wisconsin-Madison. In this paper, we discuss the operation and performance of the Condor-G pilot submission at BNL, with emphasis on the challenges and issues encountered in the real grid production environment. With the close collaboration of the Condor and PanDA teams, the scalability and stability of the overall system has been greatly improved over the last year. We review improvements made to Condor-G resulting from this collaboration, including isolation of site-based issues by running a separate Gridmanager for each remote site, introduction of the 'Nonessential' job attribute to allow Condor to optimize its behavior for the specific character of pilot jobs, better understanding and handling of the Gridmonitor process, as well as better scheduling in the PanDA pilot scheduler component. We will also cover the monitoring of the health of the system.

  16. On the assimilation of satellite derived soil moisture in numerical weather prediction models

    NASA Astrophysics Data System (ADS)

    Drusch, M.

    2006-12-01

    Satellite derived surface soil moisture data sets are readily available and have been used successfully in hydrological applications. In many operational numerical weather prediction systems the initial soil moisture conditions are analysed from the modelled background and 2 m temperature and relative humidity. This approach has proven efficient at improving surface latent and sensible heat fluxes and, consequently, the forecast over large geographical domains. However, since soil moisture is not always related to screen-level variables, model errors and uncertainties in the forcing data can accumulate in root zone soil moisture. Remotely sensed surface soil moisture is directly linked to the model's uppermost soil layer and therefore is a stronger constraint for the soil moisture analysis. Three data assimilation experiments with the Integrated Forecast System (IFS) of the European Centre for Medium-Range Weather Forecasts (ECMWF) have been performed for the two-month period of June and July 2002: a control run based on the operational soil moisture analysis, an open loop run with freely evolving soil moisture, and an experimental run incorporating bias-corrected TMI (TRMM Microwave Imager) derived soil moisture over the southern United States through a nudging scheme using 6-hourly departures. Apart from the soil moisture analysis, the system setup reflects the operational forecast configuration including the atmospheric 4D-Var analysis. Soil moisture analysed in the nudging experiment is the most accurate estimate when compared against in-situ observations from the Oklahoma Mesonet. The corresponding forecast for 2 m temperature and relative humidity is almost as accurate as in the control experiment. Furthermore, it is shown that the soil moisture analysis influences local weather parameters including the planetary boundary layer height and cloud coverage.
The transferability of the results to other satellite derived soil moisture data sets will be discussed.
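
    The nudging scheme described above pulls the model's uppermost-layer soil moisture toward the bias-corrected satellite estimate using observation-minus-model departures. A minimal sketch, with an illustrative relaxation coefficient and a simple mean-offset bias correction standing in for the operational procedure:

```python
def nudge(model_sm, obs_sm, gain=0.5):
    """Relax the uppermost-layer soil moisture toward the satellite
    estimate using the 6-hourly observation-minus-model departure.
    The gain is illustrative, not the coefficient used in the IFS."""
    return model_sm + gain * (obs_sm - model_sm)

def bias_correct(obs_series, model_series):
    """Shift the satellite series by the mean departure so it matches the
    model climatology (a crude stand-in for the bias correction applied
    to the TMI data)."""
    offset = sum(m - o for m, o in zip(model_series, obs_series)) / len(obs_series)
    return [o + offset for o in obs_series]
```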

  17. Users Manual for the Geospatial Stream Flow Model (GeoSFM)

    USGS Publications Warehouse

    Artan, Guleid A.; Asante, Kwabena; Smith, Jodie; Pervez, Md Shahriar; Entenmann, Debbie; Verdin, James P.; Rowland, James

    2008-01-01

    The monitoring of wide-area hydrologic events requires the manipulation of large amounts of geospatial and time series data into concise information products that characterize the location and magnitude of the event. To perform these manipulations, scientists at the U.S. Geological Survey Center for Earth Resources Observation and Science (EROS), with the cooperation of the U.S. Agency for International Development, Office of Foreign Disaster Assistance (USAID/OFDA), have implemented a hydrologic modeling system. The system includes a data assimilation component to generate data for a Geospatial Stream Flow Model (GeoSFM) that can be run operationally to identify and map wide-area streamflow anomalies. GeoSFM integrates a geographical information system (GIS) for geospatial preprocessing and postprocessing tasks and hydrologic modeling routines implemented as dynamically linked libraries (DLLs) for time series manipulations. Model results include maps depicting the status of streamflow and soil water conditions. This Users Manual provides step-by-step instructions for running the model and for downloading and processing the input data required for initial model parameterization and daily operation.

  18. MagAO: Status and on-sky performance of the Magellan adaptive optics system

    NASA Astrophysics Data System (ADS)

    Morzinski, Katie M.; Close, Laird M.; Males, Jared R.; Kopon, Derek; Hinz, Phil M.; Esposito, Simone; Riccardi, Armando; Puglisi, Alfio; Pinna, Enrico; Briguglio, Runa; Xompero, Marco; Quirós-Pacheco, Fernando; Bailey, Vanessa; Follette, Katherine B.; Rodigas, T. J.; Wu, Ya-Lin; Arcidiacono, Carmelo; Argomedo, Javier; Busoni, Lorenzo; Hare, Tyson; Uomoto, Alan; Weinberger, Alycia

    2014-07-01

    MagAO is the new adaptive optics system with visible-light and infrared science cameras, located on the 6.5-m Magellan "Clay" telescope at Las Campanas Observatory, Chile. The instrument locks on natural guide stars (NGS) from 0th to 16th R-band magnitude, measures turbulence with a modulating pyramid wavefront sensor binnable from 28×28 to 7×7 subapertures, and uses a 585-actuator adaptive secondary mirror (ASM) to provide corrected wavefronts to the two science cameras. MagAO is a mutated clone of the similar AO systems at the Large Binocular Telescope (LBT) at Mt. Graham, Arizona. The high-level AO loop controls up to 378 modes and operates at frame rates up to 1000 Hz. The instrument has two science cameras: VisAO operating from 0.5-1 μm and Clio2 operating from 1-5 μm. MagAO was installed in 2012 and successfully completed two commissioning runs in 2012-2013. In April 2014 we had our first science run that was open to the general Magellan community. Observers from Arizona, Carnegie, Australia, Harvard, MIT, Michigan, and Chile took observations in collaboration with the MagAO instrument team. Here we describe the MagAO instrument, describe our on-sky performance, and report our status as of summer 2014.

  19. 76 FR 28505 - Okanogan Public Utility District No. 1 of Okanogan County, WA; Notice of Availability of Draft...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-17

    ....5 miles of new and upgraded access roads. The Enloe Project would operate automatically in a run-of... run-of-river and implementing agency-recommended ramping rates downstream of the project during... effects on geology and soils and water quality. Run-of-river operation would minimize effects on aquatic...

  20. Impact of Lake Okeechobee Sea Surface Temperatures on Numerical Predictions of Summertime Convective Systems over South Florida

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; Splitt, Michael E.; Fuell, Kevin K.; Santos, Pablo; Lazarus, Steven M.; Jedlovec, Gary J.

    2009-01-01

    The NASA Short-term Prediction Research and Transition (SPoRT) Center, the Florida Institute of Technology, and the NOAA/NWS Weather Forecast Office at Miami, FL (MFL) are collaborating on a project to investigate the impact of using high-resolution, 2-km Moderate Resolution Imaging Spectroradiometer (MODIS) sea surface temperature (SST) composites within the Weather Research and Forecasting (WRF) prediction system. The NWS MFL is currently running WRF in real-time to support daily forecast operations, using the National Centers for Environmental Prediction Nonhydrostatic Mesoscale Model dynamical core within the NWS Science and Training Resource Center's Environmental Modeling System (EMS) software. Twenty-seven hour forecasts are run daily, initialized at 0300, 0900, 1500, and 2100 UTC on a domain with 4-km grid spacing covering the southern half of Florida and adjacent waters of the Gulf of Mexico and Atlantic Ocean. The SSTs are initialized with the NCEP Real-Time Global (RTG) analyses at 1/12deg resolution. The project objective is to determine whether more accurate specification of the lower-boundary forcing over water using the MODIS SST composites within the 4-km WRF runs will result in improved surface fluxes and hence more accurate evolution of coastal mesoscale circulations and the associated sensible weather elements. SPoRT conducted parallel WRF EMS runs from February to August 2007, identical to the operational runs at NWS MFL except for the use of MODIS SST composites in place of the RTG product as the initial and boundary conditions over water. During the course of this evaluation, an intriguing case was examined from 6 May 2007, in which lake breezes and convection around Lake Okeechobee evolved quite differently when using the high-resolution SPoRT MODIS SST composites versus the lower-resolution RTG SSTs.
This paper will analyze the differences in the 6 May simulations, as well as examine other cases from the summer 2007 in which the WRF-simulated Lake Okeechobee breezes evolved differently due to the SST initialization. The effects on wind fields and precipitation systems will be emphasized, including validation against surface mesonet observations and Stage IV precipitation grids.

  1. HYPERDIRE-HYPERgeometric functions DIfferential REduction: Mathematica-based packages for the differential reduction of generalized hypergeometric functions: Lauricella function FC of three variables

    NASA Astrophysics Data System (ADS)

    Bytev, Vladimir V.; Kniehl, Bernd A.

    2016-09-01

    We present a further extension of the HYPERDIRE project, which is devoted to the creation of a set of Mathematica-based program packages for manipulations with Horn-type hypergeometric functions on the basis of differential equations. Specifically, we present the implementation of the differential reduction for the Lauricella function FC of three variables.
    Catalogue identifier: AEPP_v4_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEPP_v4_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU General Public License, version 3
    No. of lines in distributed program, including test data, etc.: 243461
    No. of bytes in distributed program, including test data, etc.: 61610782
    Distribution format: tar.gz
    Programming language: Mathematica
    Computer: All computers running Mathematica
    Operating system: Operating systems running Mathematica
    Classification: 4.4
    Does the new version supersede the previous version?: No, it significantly extends the previous version.
    Nature of problem: Reduction of the hypergeometric function FC of three variables to a set of basis functions.
    Solution method: Differential reduction
    Reasons for new version: The extension package allows the user to handle the Lauricella function FC of three variables.
    Summary of revisions: The previous version remains unchanged.
    Running time: Depends on the complexity of the problem.

  2. Preliminary Optimization for Spring-Run Chinook Salmon Environmental Flows in Lassen Foothill Watersheds

    NASA Astrophysics Data System (ADS)

    Ta, J.; Kelsey, R.; Howard, J.; Hall, M.; Lund, J. R.; Viers, J. H.

    2014-12-01

    Stream flow controls the physical and ecological processes in rivers that support freshwater ecosystems and the biodiversity vital for services that humans depend on. This master variable has been impaired by human activities such as dam operations, water diversions, and flood control infrastructure. Increasing water scarcity due to rising water demands and droughts has further stressed these systems, creating a need for better ways to identify and allocate environmental flows. In this study, a linear optimization model was developed for environmental flows in river systems that have minimal or no regulation from dam operations but still exhibit altered flow regimes due to surface water diversions and groundwater abstraction. Flow regime requirements for the life history of California Central Valley spring-run Chinook salmon (Oncorhynchus tshawytscha) were used as a test case to examine how alterations to the timing and magnitude of water diversions meet environmental flow objectives while minimizing impact to local water supply. The model was then applied to Mill Creek, a tributary of the Sacramento River in northern California, whose altered flow regime currently impacts adult spring-run Chinook spawning and migration. The resulting optimized water diversion schedule can be used to inform water management decisions that aim to maximize benefit for the environment while meeting local water demands.
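
    The core trade-off in the optimization, diverting only water above the environmental-flow requirement while tracking unmet demand, can be sketched with a simple greedy stand-in for the study's linear program. The monthly values, units, and structure below are hypothetical.

```python
def schedule_diversions(natural_flow, min_env_flow, monthly_demand):
    """Greedy stand-in for the study's linear program: each month,
    divert only the water above the environmental-flow requirement,
    and report total unmet demand. Illustrative only; the actual
    model is a linear optimization over the full flow regime."""
    schedule, unmet = [], 0.0
    for flow, env in zip(natural_flow, min_env_flow):
        available = max(0.0, flow - env)        # surplus above env. flow
        take = min(available, monthly_demand)   # divert up to demand
        schedule.append(take)
        unmet += monthly_demand - take          # shortfall this month
    return schedule, unmet
```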

  3. Hybrid cryptosystem for image file using elgamal and double playfair cipher algorithm

    NASA Astrophysics Data System (ADS)

    Hardi, S. M.; Tarigan, J. T.; Safrina, N.

    2018-03-01

    In this paper, we present an implementation of image file encryption using hybrid cryptography. We chose the ElGamal algorithm to perform asymmetric encryption and Double Playfair for the symmetric encryption. Our objective is to show that these algorithms are capable of encrypting an image file with an acceptable running time and encrypted file size while maintaining the level of security. The application was built using the C# programming language and runs as a stand-alone desktop application under the Windows operating system. Our tests show that the system is capable of encrypting an image with a resolution of 500×500 to a size of 976 kilobytes with an acceptable running time.
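
    The asymmetric half of the hybrid scheme can be illustrated with a toy ElGamal round trip over a small prime. A real implementation would use a large prime and apply this blockwise to the image bytes (the Double Playfair symmetric layer is not shown); the parameters below are for demonstration only.

```python
import random

def elgamal_keygen(p, g):
    """Toy ElGamal key generation over a small prime; real use requires
    a large prime, not this demonstration size."""
    x = random.randrange(2, p - 1)          # private key
    return x, pow(g, x, p)                  # (private x, public y = g^x)

def elgamal_encrypt(m, p, g, y):
    """Encrypt one message block m (0 < m < p)."""
    k = random.randrange(2, p - 1)          # per-message ephemeral key
    return pow(g, k, p), (m * pow(y, k, p)) % p

def elgamal_decrypt(c1, c2, x, p):
    """Recover m by dividing out the shared secret g^(xk)."""
    s = pow(c1, x, p)                       # shared secret
    return (c2 * pow(s, p - 2, p)) % p      # modular inverse via Fermat

p, g = 467, 2                               # toy parameters
x, y = elgamal_keygen(p, g)
c1, c2 = elgamal_encrypt(123, p, g, y)
assert elgamal_decrypt(c1, c2, x, p) == 123
```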

  4. Runway Incursion Prevention System Simulation Evaluation

    NASA Technical Reports Server (NTRS)

    Jones, Denise R.

    2002-01-01

    A Runway Incursion Prevention System (RIPS) was evaluated in a full mission simulation study at the NASA Langley Research Center in March 2002. RIPS integrates airborne and ground-based technologies to provide (1) enhanced surface situational awareness to avoid blunders and (2) alerts of runway conflicts in order to prevent runway incidents while also improving operational capability. A series of test runs was conducted in a high-fidelity simulator. The purpose of the study was to evaluate the RIPS airborne incursion detection algorithms and associated alerting and airport surface display concepts. Eight commercial airline crews participated as test subjects, completing 467 test runs. This paper gives an overview of the RIPS, the simulation study, and the test results.

  5. Run-Curve Design for Energy Saving Operation in a Modern DC-Electrification

    NASA Astrophysics Data System (ADS)

    Koseki, Takafumi; Noda, Takashi

    Mechanical brakes are often used by electric trains. These brakes have problems with response speed, friction-coefficient variability, maintenance cost, and so on. As a result, methods for actively using regenerative brakes are required. In this paper, we propose pure electric braking, in which ordinary service braking at high speed is performed by regenerative braking alone, without any mechanical brakes. Benefits of our proposal include a DC-electrification system with regenerative substations that can return power to the commercial power system and a train that can use the full regenerative braking force. We furthermore evaluate the effects of the proposed method on running time and on the energy saved by the regenerative substations.

  6. Experimental control of sea lampreys with electricity on the south shore of Lake Superior, 1953-60

    USGS Publications Warehouse

    McLain, Alberton L.; Smith, Bernard R.; Moore, Harry H.

    1965-01-01

    Electric devices of the type and design used are capable of blocking entire runs of adult sea lampreys. An accurate appraisal of the effectiveness of the barrier system is impossible, however. Most of the barriers were not operated long enough to reduce the contribution of parasites from the streams. Furthermore, a complete system of efficient electric barriers was never realized. The greatest weakness of this method of control lies in maintenance of the units in continuous, uninterrupted operation through consecutive migratory seasons.

  7. Coupled lagged ensemble weather- and river runoff prediction in complex Alpine terrain

    NASA Astrophysics Data System (ADS)

    Smiatek, Gerhard; Kunstmann, Harald; Werhahn, Johannes

    2013-04-01

    It is still a challenge to predict the fast-reacting streamflow response to precipitation in Alpine terrain. Civil protection measures require flood prediction with 24-48 h lead time. This holds particularly true for the Ammer River region, which was affected by century floods in 1999, 2003, and 2005. Since 2005, a coupled NWP/hydrology model system has been operated to simulate and predict Ammer River discharges. The Ammer River catchment is located in the Bavarian Ammergau Alps and the alpine forelands, Germany. With elevations reaching 2185 m and annual mean precipitation between 1100 and 2000 mm, it represents a very demanding test ground for a river runoff prediction system. The one-way coupled system utilizes a lagged ensemble prediction system (EPS) that combines recent and previous NWP forecasts. The major components of the system are the MM5 NWP model, run at 3.5 km resolution and initialized twice a day; the hydrology model WaSiM-ETH, run at 100 m resolution; and the Perl Object Environment (POE) implementing the networking and system operation. Results obtained in the years 2005-2012 reveal that the river runoff simulations already show high correlation with observed runoff (NSC between 0.53 and 0.95) in retrospective runs with monitored meteorology data, but suffer from errors in the quantitative precipitation forecast (QPF) from the employed numerical weather prediction model. We evaluate the NWP model accuracy, especially the precipitation intensity, frequency, and location, and focus on the performance gain from bias adjustment procedures. We show how the enhanced QPF data help to reduce the uncertainty in the discharge prediction. In addition to the HND (Hochwassernachrichtendienst, Bayern) observations, TERENO Long-term Observatory hydrometeorological observation data are available since 2011.
They are used to evaluate the NWP performance and to set up a bias correction procedure based on ensemble postprocessing applying Bayesian model averaging (BMA). We first briefly present the technical setup of the operational coupled lagged NWP/hydrology model system and then focus on the evaluation of the NWP model, the BMA-enhanced QPF, and its application within the Ammer simulation system in the period 2011-2012.
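
    The Nash-Sutcliffe coefficient (NSC) quoted above is a standard skill score for runoff simulations: 1 is a perfect fit, 0 means the simulation is no better than the observed mean. A minimal implementation:

```python
def nash_sutcliffe(simulated, observed):
    """Nash-Sutcliffe coefficient: 1 - (sum of squared simulation
    errors) / (variance of observations about their mean)."""
    mean_obs = sum(observed) / len(observed)
    num = sum((s - o) ** 2 for s, o in zip(simulated, observed))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den
```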

  8. Virtualization of Legacy Instrumentation Control Computers for Improved Reliability, Operational Life, and Management.

    PubMed

    Katz, Jonathan E

    2017-01-01

    Laboratories tend to be amenable environments for long-term reliable operation of scientific measurement equipment. Indeed, it is not uncommon to find equipment 5, 10, or even 20+ years old still being routinely used in labs. Unfortunately, the Achilles heel for many of these devices is the control/data acquisition computer. Often these computers run older operating systems (e.g., Windows XP) and, while they might only use standard network, USB or serial ports, they require proprietary software to be installed. Even if the original installation disks can be found, reinstallation is a burdensome process fraught with "gotchas" that can derail it: lost license keys, incompatible hardware, forgotten configuration settings, etc. If you have legacy instrumentation running, the computer is the ticking time bomb waiting to put a halt to your operation. In this chapter, I describe how to virtualize your currently running control computer. This virtualized computer "image" is easy to maintain, easy to back up and easy to redeploy. I have used this multiple times in my own lab to greatly improve the robustness of my legacy devices. After completing the steps in this chapter, you will have your original control computer as well as a virtual instance of that computer with all the software installed, ready to control your hardware should your original computer ever be decommissioned.

  9. Using mean duration and variation of procedure times to plan a list of surgical operations to fit into the scheduled list time.

    PubMed

    Pandit, Jaideep J; Tavare, Aniket

    2011-07-01

    It is important that a surgical list is planned to utilise as much of the scheduled time as possible while not over-running, because over-runs can lead to cancellation of operations. We wished to assess whether, theoretically, the known duration of individual operations could be used quantitatively to predict the likely duration of the operating list. In a university hospital setting, we first assessed the extent to which the current ad-hoc method of operating list planning was able to match the scheduled operating list times for 153 consecutive historical lists. Using receiver operating characteristic (ROC) curve analysis, we assessed the ability of an alternative method to predict operating list duration for the same operating lists. This method uses a simple formula: the sum of individual operation times and a pooled standard deviation of these times. We used the operating list duration estimated from this formula to generate a probability that the operating list would finish within its scheduled time. Finally, we applied the simple formula prospectively to 150 operating lists, 'shadowing' the current ad-hoc method, to confirm the predictive ability of the formula. The ad-hoc method was very poor at planning: 50% of historical operating lists were under-booked and 37% over-booked. In contrast, the simple formula predicted the correct outcome (under-run or over-run) for 76% of these operating lists. The calculated probability that a planned series of operations will over-run or under-run was found useful in developing an algorithm to adjust the planned cases optimally. In the prospective series, 65% of operating lists were over-booked and 10% were under-booked. The formula predicted the correct outcome for 84% of operating lists. A simple quantitative method of estimating operating list duration for a series of operations leads to an algorithm (readily created on an Excel spreadsheet, http://links.lww.com/EJA/A19) that can potentially improve operating list planning.
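    The formula itself is not reproduced in the abstract; a plausible sketch, assuming operation durations are independent and roughly normal with a common pooled standard deviation (the function and parameter names are illustrative, not from the paper):

```python
import math

def prob_list_fits(mean_durations, pooled_sd_per_op, scheduled_minutes):
    """Probability that a list of operations finishes within the scheduled
    time, assuming independent, approximately normal operation durations
    sharing one pooled per-operation standard deviation."""
    total_mean = sum(mean_durations)
    # Variances of independent operations add, so the SD scales with sqrt(n)
    total_sd = pooled_sd_per_op * math.sqrt(len(mean_durations))
    z = (scheduled_minutes - total_mean) / total_sd
    # Standard normal CDF expressed via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

A planner could then book or drop cases until this probability crosses a chosen threshold, which is the kind of adjustment algorithm the abstract describes.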

  10. Why not make a PC cluster of your own? 5. AppleSeed: A Parallel Macintosh Cluster for Scientific Computing

    NASA Astrophysics Data System (ADS)

    Decyk, Viktor K.; Dauger, Dean E.

    We have constructed a parallel cluster consisting of Apple Macintosh G4 computers running both Classic Mac OS and the Unix-based Mac OS X, and have achieved very good performance on numerically intensive, parallel plasma particle-in-cell simulations. Unlike other Unix-based clusters, no special expertise in operating systems is required to build and run the cluster. This enables us to move parallel computing from the realm of experts to the mainstream of computing.

  11. Design and development of a mobile system for supporting emergency triage.

    PubMed

    Michalowski, W; Slowinski, R; Wilk, S; Farion, K J; Pike, J; Rubin, S

    2005-01-01

    Our objective was to design and develop a mobile clinical decision support system for emergency triage of different acute pain presentations. The system should interact with existing hospital information systems, run on mobile computing devices (handheld computers) and be suitable for operation in weak-connectivity conditions (with unstable connections between mobile clients and a server). The MET (Mobile Emergency Triage) system was designed following an extended client-server architecture. The client component, responsible for triage decision support, is built as a knowledge-based system, with domain ontology separated from generic problem solving methods and used for the automatic creation of a user interface. The MET system is well suited for operation in the Emergency Department of a hospital. The system's external interactions are managed by the server, while the MET clients, running on handheld computers, are used by clinicians for collecting clinical data and supporting triage at the bedside. The functionality of the MET client is distributed into specialized modules, responsible for triaging specific types of acute pain presentations. The modules are stored on the server, and on request they can be transferred and executed on the mobile clients. The modular design provides for easy extension of the system's functionality. A clinical trial of the MET system validated the appropriateness of the system's design, and proved the usefulness and acceptance of the system in clinical practice. The MET system captures the necessary hospital data, allows for entry of patient information, and provides triage support. By operating on handheld computers, it fits into the regular emergency department workflow without introducing any hindrances or disruptions. It supports triage anytime and anywhere, directly at the point of care, and also can be used as an electronic patient chart, facilitating structured data collection.

  12. LV software support for supersonic flow analysis

    NASA Technical Reports Server (NTRS)

    Bell, William A.

    1991-01-01

    During 1991, the software developed allowed an operator to configure and checkout the TSI, Inc. laser velocimeter (LV) system prior to a run. This setup procedure established the operating conditions for the TSI MI-990 multichannel interface and the RMR-1989 rotating machinery resolver. In addition to initializing the instruments, the software package provides a means of specifying LV calibration constants, controlling the sampling process, and identifying the test parameters.

  13. The study of operating an air conditioning system using Maisotsenko-Cycle

    NASA Astrophysics Data System (ADS)

    Khan, Mohammad S.; Tahan, Sami; Toufic El-Achkar, Mohamad; Abou Jamus, Saleh

    2018-03-01

    The project aims to design and build an air conditioning system that runs on the Maisotsenko cycle. The system is required to condition and cool down ambient air for a small residential space while reducing the use of electricity and eliminating the use of commercial refrigerants. This project can operate at its optimum performance in remote areas like oil diggers and other projects that run in the desert or any site that would not have a very high relative humidity level. The Maisotsenko cycle is known as the thermodynamic concept that captures energy from the air by using the psychrometric renewable energy available in the latent heat of water evaporating into air. The heat and mass exchanger design was based on choosing a material that would be water-resistant and breathable, which was found to be layers of cardboard placed on top of each other, thus creating channels for air to pass through. This design eliminates any high-power electrical equipment such as compressors, condensers and evaporators that would be used in an AC system, with the exception of a 600 W blower and a 10 W fan, thus making it a more environmentally friendly project. Moreover, the project is limited by the ambient temperature and humidity, as the model operates at an optimum when the relative humidity is lower.
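    The dependence on low relative humidity follows from the physics: an indirect evaporative (M-cycle) cooler can approach the ambient dew point, which drops sharply as RH falls. A quick illustration using the Magnus dew-point approximation (the constants are one common parameterization, not values from the paper):

```python
import math

def dew_point_c(temp_c, rh_percent):
    """Magnus approximation of the dew point in degrees Celsius.

    Reasonable for roughly 0-60 C; a, b are one common constant set.
    """
    a, b = 17.27, 237.7
    gamma = math.log(rh_percent / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)
```

At 35 C and 20% RH the dew point is below 10 C, so an M-cycle cooler has a large temperature drop available; at 80% RH the achievable drop nearly vanishes, which is why the abstract restricts the system to dry climates.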

  14. Supersonic Wind Tunnel Capabilities Expanded Into Subsonic Region

    NASA Technical Reports Server (NTRS)

    Roeder, James W., Jr.

    1997-01-01

    The operating envelope of the Abe Silverstein 10- by 10-Foot Supersonic Wind Tunnel (10x10 SWT) at the NASA Lewis Research Center was recently expanded to include operation at subsonic test section speeds. This new capability generates test section air speeds ranging from Mach 0.05 to 0.35 (32 to 240 kn). Most of the expansion in air speed range was obtained by running the tunnel's main compressor at much lower speeds than ever before. The compressor drive system, consisting of four large electric motors, was run with only one or two motors energized to obtain the lower compressor speed range. This new capability makes the 10x10 SWT more versatile and gives U.S. researchers an enhanced ability to perform subsonic propulsion and aerodynamic testing.
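    As a rough cross-check of the quoted range, Mach number converts to knots via the speed of sound; assuming the ISA sea-level value (about 661.5 kn), Mach 0.05-0.35 gives roughly 33-232 kn, close to the quoted 32-240 kn (the difference reflects the tunnel's actual test-section conditions, which are not standard sea level):

```python
# Sanity-check conversion from Mach number to knots, assuming the ISA
# sea-level speed of sound (340.3 m/s = 661.5 kn). Illustrative only:
# the 10x10 SWT's own operating conditions shift these values slightly.
A_SOUND_KN = 661.5  # speed of sound in knots, ISA sea level

def mach_to_knots(mach):
    return mach * A_SOUND_KN

print(mach_to_knots(0.05), mach_to_knots(0.35))
```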

  15. GridPP - Preparing for LHC Run 2 and the Wider Context

    NASA Astrophysics Data System (ADS)

    Coles, Jeremy

    2015-12-01

    This paper elaborates upon the operational status and directions within the UK Computing for Particle Physics (GridPP) project as it approaches LHC Run 2. It details the pressures that have been gradually reshaping the deployed hardware and middleware environments at GridPP sites - from the increasing adoption of larger multicore nodes to the move towards alternative batch systems and cloud alternatives - as well as changes being driven by funding considerations. The paper highlights work being done with non-LHC communities and describes some of the early outcomes of adopting a generic DIRAC based job submission and management framework. The paper presents results from an analysis of how GridPP effort is distributed across various deployment and operations tasks and how this may be used to target further improvements in efficiency.

  16. Real-Time, General-Purpose, High-Speed Signal Processing Systems for Underwater Research. Proceedings of a Working Level Conference held at Supreme Allied Commander, Atlantic Anti-Submarine Warfare Research Center (SACLANTCEN) on 18-21 September 1979. Part 2. Sessions IV to VI.

    DTIC Science & Technology

    1979-12-01

    ACTIVATED, SYSTEM OPERATION AND TESTING. MASCOT PROVIDES: 1. SYSTEM BUILD SOFTWARE COMPILE-TIME CHECKS; 2. RUN-TIME SUPERVISOR KERNEL; 3. MONITOR AND...AD-A081 851 SACLANT ASW RESEARCH CENTRE, LA SPEZIA (ITALY). REAL-TIME, GENERAL-PURPOSE, HIGH-SPEED SIGNAL PROCESSING SYSTEMS--ETC (U), DEC 79...Table of Contents (Cont'd): Signal processing language and operating system, by S. Weinstein; A modular signal

  17. Computer-Aided System Engineering and Analysis (CASE/A) Programmer's Manual, Version 5.0

    NASA Technical Reports Server (NTRS)

    Knox, J. C.

    1996-01-01

    The Computer Aided System Engineering and Analysis (CASE/A) Version 5.0 Programmer's Manual provides the programmer and user with information regarding the internal structure of the CASE/A 5.0 software system. CASE/A 5.0 is a trade study tool that provides modeling/simulation capabilities for analyzing environmental control and life support systems and active thermal control systems. CASE/A has been successfully used in studies such as the evaluation of carbon dioxide removal in the space station. CASE/A modeling provides a graphical and command-driven interface for the user. This interface allows the user to construct a model by placing equipment components in a graphical layout of the system hardware, then connect the components via flow streams and define their operating parameters. Once the equipment is placed, the simulation time and other control parameters can be set to run the simulation based on the model constructed. After completion of the simulation, graphical plots or text files can be obtained for evaluation of the simulation results over time. Additionally, users have the capability to control the simulation and extract information at various times in the simulation (e.g., control equipment operating parameters over the simulation time or extract plot data) by using "User Operations (OPS) Code." This OPS code is written in FORTRAN with a canned set of utility subroutines for performing common tasks. CASE/A version 5.0 software runs under the VAX VMS(Trademark) environment. It utilizes the Tektronix 4014(Trademark) graphics display system and the VT100(Trademark) text manipulation/display system.

  18. Development of operating mode distributions for different types of roadways under different congestion levels for vehicle emission assessment using MOVES.

    PubMed

    Qi, Yi; Padiath, Ameena; Zhao, Qun; Yu, Lei

    2016-10-01

    The Motor Vehicle Emission Simulator (MOVES) quantifies emissions as a function of vehicle modal activities. Hence, the vehicle operating mode distribution is the most vital input for running MOVES at the project level. The preparation of operating mode distributions requires significant effort in data collection and processing. This study develops operating mode distributions for both freeway and arterial facilities under different traffic conditions. For this purpose, we (1) collected and processed geographic information system (GIS) data, (2) developed a model of CO2 emissions and congestion from observations, and (3) implemented the model to evaluate potential emission changes from a hypothetical roadway accident scenario. This study presents a framework by which practitioners can assess emission levels in the development of different strategies for traffic management and congestion mitigation. This paper prepared the primary input, that is, the operating mode ID distribution, required for running MOVES and developed models for estimating emissions for different types of roadways under different congestion levels. The results of this study will provide transportation planners and environmental analysts with methods for quantitatively assessing the air quality impacts of different transportation operation and demand management strategies.
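    MOVES defines its operating modes from speed and vehicle-specific power (VSP) bins with official opMode IDs; the sketch below uses simplified, hypothetical bins purely to illustrate how second-by-second activity data become a mode distribution:

```python
from collections import Counter

def operating_mode_distribution(speeds_mph, accels_mph_per_s):
    """Fraction of driving time spent in each (illustrative) operating mode.

    The bin definitions here are hypothetical stand-ins, not the official
    MOVES opMode IDs, which are derived from speed and VSP ranges.
    """
    def mode(v, a):
        if v < 1.0:
            return "idle"
        if a < -1.0:
            return "braking"
        if v < 25.0:
            return "low_speed_cruise"
        return "cruise"

    counts = Counter(mode(v, a) for v, a in zip(speeds_mph, accels_mph_per_s))
    total = sum(counts.values())
    return {m: c / total for m, c in counts.items()}
```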

  19. Muons in the CMS High Level Trigger System

    NASA Astrophysics Data System (ADS)

    Verwilligen, Piet; CMS Collaboration

    2016-04-01

    The trigger systems of LHC detectors play a fundamental role in defining the physics capabilities of the experiments. A reduction of several orders of magnitude in the rate of collected events, with respect to the proton-proton bunch crossing rate generated by the LHC, is mandatory to cope with the limits imposed by the readout and storage system. An accurate and efficient online selection mechanism is thus required to fulfill this task while keeping the acceptance of physics signals maximal. The CMS experiment operates a two-level trigger system. First, a Level-1 Trigger (L1T) system, implemented using custom-designed electronics, reduces the event rate to a limit compatible with the CMS Data Acquisition (DAQ) capabilities. A High Level Trigger (HLT) system follows, aimed at further reducing the rate of collected events finally stored for analysis purposes. The latter consists of a streamlined version of the CMS offline reconstruction software and operates on a computer farm. It runs algorithms optimized to make a trade-off between computational complexity, rate reduction and high selection efficiency. With the computing power available in 2012, the maximum reconstruction time at HLT was about 200 ms per event, at the nominal L1T rate of 100 kHz. An efficient selection of muons at HLT, as well as an accurate measurement of their properties, such as transverse momentum and isolation, is fundamental for the CMS physics programme. The performance of the muon HLT for single and double muon triggers achieved in Run I will be presented. Results from new developments, aimed at improving the performance of the algorithms for the harsher scenarios of collisions per event (pile-up) and luminosity expected for Run II, will also be discussed.
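    The quoted figures imply the scale of the processing farm: at a 100 kHz Level-1 output rate and roughly 200 ms of CPU per event, about 20,000 events are being reconstructed concurrently. Illustrative arithmetic only; the abstract does not state the actual farm composition:

```python
# Back-of-envelope HLT sizing from the numbers in the abstract.
l1_rate_hz = 100_000      # nominal Level-1 accept rate
time_per_event_s = 0.200  # maximum HLT reconstruction time per event (2012)

# Arrival rate times per-event CPU time gives the number of events that
# must be in flight simultaneously (roughly, the cores needed).
concurrent_events = l1_rate_hz * time_per_event_s
print(concurrent_events)  # -> 20000.0
```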

  20. An evaluation of the real-time tropical cyclone forecast skill of the Navy Operational Global Atmospheric Prediction System in the western North Pacific

    NASA Technical Reports Server (NTRS)

    Fiorino, Michael; Goerss, James S.; Jensen, Jack J.; Harrison, Edward J., Jr.

    1993-01-01

    The paper evaluates the meteorological quality and operational utility of the Navy Operational Global Atmospheric Prediction System (NOGAPS) in forecasting tropical cyclones. It is shown that the model can provide useful predictions of motion and formation on a real-time basis in the western North Pacific. The meteorological characteristics of the NOGAPS tropical cyclone predictions are evaluated by examining the formation of low-level cyclone systems in the tropics and vortex structure in the NOGAPS analysis and verifying 72-h forecasts. The adjusted NOGAPS track forecasts showed skill comparable to the baseline aid and the dynamical model. NOGAPS successfully predicted unusual equatorward turns for several straight-running cyclones.

  1. NATO In Africa: Ready for Action?

    DTIC Science & Technology

    2007-04-01

    options for NATO planners who might be called upon to prepare NATO forces for the gamut of operations on the continent of Africa...in a number of military operations running the gamut from peacekeeping/presence operations to combat operations and stability/reconstruction efforts...which run the gamut from peacekeeping/humanitarian intervention to peacemaking operations. Some have criticized the EU for establishing its

  2. A graphics subsystem retrofit design for the bladed-disk data acquisition system. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Carney, R. R.

    1983-01-01

    A graphics subsystem retrofit design for the turbojet blade vibration data acquisition system is presented. The graphics subsystem will operate in two modes, permitting the system operator to view blade vibrations on an oscilloscope type of display. The first mode is a real-time mode that displays only gross blade characteristics, such as maximum deflections and standing waves. This mode is used to aid the operator in determining when to collect detailed blade vibration data. The second mode of operation is a post-processing mode that will animate the actual blade vibrations using the detailed data collected on an earlier data collection run. The operator can vary the rate of playback to view differing characteristics of blade vibrations. The heart of the graphics subsystem is a modified version of AMD's "Super Sixteen" computer, called the graphics preprocessor computer (GPC). This computer is based on AMD's 2900 series of bit-slice components.

  3. Bringing simulation to engineers in the field: a Web 2.0 approach.

    PubMed

    Haines, Robert; Khan, Kashif; Brooke, John

    2009-07-13

    Field engineers working on water distribution systems have to implement day-to-day operational decisions. Since pipe networks are highly interconnected, the effects of such decisions are correlated with hydraulic and water quality conditions elsewhere in the network. This makes the provision of predictive decision support tools (DSTs) for field engineers critical to optimizing the engineering work on the network. We describe how we created DSTs to run on lightweight mobile devices by using the Web 2.0 technique known as Software as a Service. We designed our system following the architectural style of representational state transfer. The system not only displays static geographical information system data for pipe networks, but also dynamic information and prediction of network state, by invoking and displaying the results of simulations running on more powerful remote resources.

  4. UNIVERS Product. Phase 1.

    DTIC Science & Technology

    1987-04-27

    foundation for MCAD, ECAD, and CIM applications. The existing product runs under 4.2 BSD UNIX on SUN-3 workstations, and will soon be available...on Digital Equipment's VMS operating system. Potential UNIVERS applications include Government-sponsored ECAD design applications (for example, the

  5. 49 CFR 383.113 - Required skills.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... inspected to ensure a safe operating condition of each part, including: (i) Engine compartment; (ii) Cab/engine start; (iii) Steering; (iv) Suspension; (v) Brakes; (vi) Wheels; (vii) Side of vehicle; (viii... they will activate in emergency situations; (iv) With the engine running, make sure that the system...

  6. 49 CFR 383.113 - Required skills.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... inspected to ensure a safe operating condition of each part, including: (i) Engine compartment; (ii) Cab/engine start; (iii) Steering; (iv) Suspension; (v) Brakes; (vi) Wheels; (vii) Side of vehicle; (viii... they will activate in emergency situations; (iv) With the engine running, make sure that the system...

  7. 49 CFR 383.113 - Required skills.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... inspected to ensure a safe operating condition of each part, including: (i) Engine compartment; (ii) Cab/engine start; (iii) Steering; (iv) Suspension; (v) Brakes; (vi) Wheels; (vii) Side of vehicle; (viii... they will activate in emergency situations; (iv) With the engine running, make sure that the system...

  8. 49 CFR 383.113 - Required skills.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... inspected to ensure a safe operating condition of each part, including: (i) Engine compartment; (ii) Cab/engine start; (iii) Steering; (iv) Suspension; (v) Brakes; (vi) Wheels; (vii) Side of vehicle; (viii... they will activate in emergency situations; (iv) With the engine running, make sure that the system...

  9. Programs To Optimize Spacecraft And Aircraft Trajectories

    NASA Technical Reports Server (NTRS)

    Brauer, G. L.; Petersen, F. M.; Cornick, D.E.; Stevenson, R.; Olson, D. W.

    1994-01-01

    POST/6D POST is set of two computer programs providing ability to target and optimize trajectories of powered or unpowered spacecraft or aircraft operating at or near rotating planet. POST treats point-mass, three-degree-of-freedom case. 6D POST treats more-general rigid-body, six-degree-of-freedom (with point masses) case. Used to solve variety of performance, guidance, and flight-control problems for atmospheric and orbital vehicles. Applications include computation of performance or capability of vehicle in ascent, or orbit, and during entry into atmosphere, simulation and analysis of guidance and flight-control systems, dispersion-type analyses and analyses of loads, general-purpose six-degree-of-freedom simulation of controlled and uncontrolled vehicles, and validation of performance in six degrees of freedom. Written in FORTRAN 77 and C language. Two machine versions available: one for SUN-series computers running SunOS(TM) (LAR-14871) and one for Silicon Graphics IRIS computers running IRIX(TM) operating system (LAR-14869).

  10. Operational wave now- and forecast in the German Bight as a basis for the assessment of wave-induced hydrodynamic loads on coastal dikes

    NASA Astrophysics Data System (ADS)

    Dreier, Norman; Fröhle, Peter

    2017-12-01

    The knowledge of the wave-induced hydrodynamic loads on coastal dikes, including their temporal and spatial resolution on the dike in combination with actual water levels, is of crucial importance for any risk-based early warning system. As a basis for the assessment of the wave-induced hydrodynamic loads, an operational wave now- and forecast system is set up that consists of i) available field measurements from the federal and local authorities and ii) data from numerical simulation of waves in the German Bight using the SWAN wave model. In this study, results of the hindcast of deep water wave conditions during the winter storm on 5-6 December 2013 (German name 'Xaver') are shown and compared with available measurements. Moreover, field measurements of wave run-up from the local authorities at a sea dike on the German North Sea island of Pellworm are presented and compared against calculated wave run-up using the EurOtop (2016) approach.

  11. Minimal algorithm for running an internal combustion engine

    NASA Astrophysics Data System (ADS)

    Stoica, V.; Borborean, A.; Ciocan, A.; Manciu, C.

    2018-01-01

    Internal combustion engine control is a well-known topic within the automotive industry and is widely deployed. However, in research laboratories and universities the use of a commercial control system is not the best solution, because its operating algorithms and calibrations are predetermined (accessible only by the manufacturer) and do not allow major intervention from outside. Laboratory solutions on the market are very expensive. Consequently, in this paper we present the minimal algorithm required to start up and run an internal combustion engine. The presented solution can be adapted to run on high-performance microcontrollers currently available on the market at an affordable price. The presented algorithm was implemented in LabVIEW and runs on a CompactRIO hardware platform.
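    The paper's algorithm is not reproduced in the abstract; a common minimal ingredient of such an algorithm is speed-density fueling, sketched below with purely illustrative constants (displacement, volumetric efficiency, target AFR and injector flow rate are assumptions, not values from the paper):

```python
def injector_pulse_ms(map_kpa, intake_temp_k,
                      cylinder_l=0.5, ve=0.85,
                      afr=14.7, injector_g_per_s=2.5):
    """Hypothetical speed-density fueling sketch: estimate the air mass
    drawn per intake stroke from the ideal gas law, then the injector
    open time needed to hit the target air-fuel ratio."""
    R = 287.05  # J/(kg*K), specific gas constant of dry air
    # Air mass per cylinder filling event (kg): p*V/(R*T), scaled by VE
    air_kg = (map_kpa * 1000.0) * (cylinder_l / 1000.0) * ve / (R * intake_temp_k)
    fuel_g = air_kg * 1000.0 / afr
    return fuel_g / injector_g_per_s * 1000.0  # injector open time, ms
```

A minimal run-capable controller then only has to read crank position, fire the spark at a fixed advance, and open the injector for this pulse width once per cycle.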

  12. The procedure execution manager and its application to Advanced Photon Source operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borland, M.

    1997-06-01

    The Procedure Execution Manager (PEM) combines a complete scripting environment for coding accelerator operation procedures with a manager application for executing and monitoring the procedures. PEM is based on Tcl/Tk, a supporting widget library, and the dp-tcl extension for distributed processing. The scripting environment provides support for distributed, parallel execution of procedures along with join and abort operations. Nesting of procedures is supported, permitting the same code to run as a top-level procedure under operator control or as a subroutine under control of another procedure. The manager application allows an operator to execute one or more procedures in automatic, semi-automatic, or manual modes. It also provides a standard way for operators to interact with procedures. A number of successful applications of PEM to accelerator operations have been made to date. These include start-up, shutdown, and other control of the positron accumulator ring (PAR), low-energy transport (LET) lines, and the booster rf systems. The PAR/LET procedures make nested use of PEM's ability to run parallel procedures. There are also a number of procedures to guide and assist tune-up operations, to make accelerator physics measurements, and to diagnose equipment. Because of the success of the existing procedures, expanded use of PEM is planned.
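    PEM itself is built on Tcl/Tk with the dp-tcl extension; the following Python sketch (not PEM's actual API) illustrates the parallel-execution-with-join-and-abort pattern the abstract describes, using a cooperative abort flag:

```python
import threading

def run_parallel(procedures, timeout=None):
    """Sketch of PEM-style parallel procedure execution with join/abort.

    Each procedure is a callable taking an abort Event that it should
    poll cooperatively; a procedure that overruns the join timeout
    triggers an abort signal to all the others.
    """
    abort = threading.Event()
    threads = [threading.Thread(target=p, args=(abort,)) for p in procedures]
    for t in threads:
        t.start()
    for t in threads:
        t.join(timeout)
        if t.is_alive():   # this procedure overran its budget
            abort.set()    # tell every procedure to wind down
    return not abort.is_set()
```

Because each "procedure" is just a callable, the same body can run as a top-level entry or be invoked from inside another procedure, mirroring the nesting PEM supports.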

  13. Simulation Study of Evacuation Control Center Operations Analysis

    DTIC Science & Technology

    2011-06-01

    Table-of-contents excerpts: 4.3 Baseline Manning (Runs 1, 2, & 3); 4.3.1 Baseline Statistics Interpretation; Appendix B, Key Statistic Matrix: Runs 1-12; Appendix C, Blue Dart; paired t-test results, Run 5 vs. Run 6 (ECC completion time); Key Statistics, Run 3 vs. Run 9.

  14. Automated processing of fluorescence in-situ hybridization slides for HER2 testing in breast and gastro-esophageal carcinomas.

    PubMed

    Tafe, Laura J; Allen, Samantha F; Steinmetz, Heather B; Dokus, Betty A; Cook, Leanne J; Marotti, Jonathan D; Tsongalis, Gregory J

    2014-08-01

    HER2 fluorescence in-situ hybridization (FISH) is used in breast and gastro-esophageal carcinoma for determining HER2 gene amplification and patients' eligibility for HER2-targeted therapeutics. Traditional manual processing of the FISH slides is labor intensive because of multiple steps that require hands-on manipulation of the slides and specifically timed intervals between steps. This highly manual processing also introduces inter-run and inter-operator variability that may affect the quality of the FISH result. Therefore, we sought to incorporate an automated processing instrument into our FISH workflow. Twenty-six cases including breast (20) and gastro-esophageal (6) cancer, comprising 23 biopsies and three excision specimens, were tested for HER2 FISH (PathVysion, Abbott) using the Thermobrite Elite (TBE) system (Leica). Up to 12 slides can be run simultaneously. All cases were previously tested by the PathVysion HER2 FISH assay with manual preparation. Twenty cells were counted by two observers for each case; five cases were tested on three separate runs by different operators to evaluate the precision and inter-operator variability. There was 100% concordance in the scoring between the manual and TBE methods as well as among the five cases that were tested on three runs. Only one case failed, due to poor probe hybridization. In total, seven cases were positive for HER2 amplification (HER2:CEP17 ratio >2.2) and the remaining 19 were negative (HER2:CEP17 ratio <1.8) utilizing the 2007 ASCO/CAP scoring criteria. Due to the automated denaturation and hybridization, each run reduced labor by 3.5 h, which could then be dedicated to other lab functions. The TBE is a walk-away pre- and post-hybridization system that automates FISH slide processing, improves workflow and consistency, and saves approximately 3.5 h of technologist time. The instrument has a small footprint, thus occupying minimal counter space. TBE-processed slides performed exceptionally well in comparison to the manual technique, with no disagreement in HER2 amplification status.

  15. Time and Space Partitioning the EagleEye Reference Mission

    NASA Astrophysics Data System (ADS)

    Bos, Victor; Mendham, Peter; Kauppinen, Panu; Holsti, Niklas; Crespo, Alfons; Masmano, Miguel; de la Puente, Juan A.; Zamorano, Juan

    2013-08-01

    We discuss experiences gained by porting a Software Validation Facility (SVF) and a satellite Central Software (CSW) to a platform with support for Time and Space Partitioning (TSP). The SVF and CSW are part of the EagleEye Reference mission of the European Space Agency (ESA). As a reference mission, EagleEye is a perfect candidate to evaluate practical aspects of developing satellite CSW for and on TSP platforms. The specific TSP platform we used consists of a simulated LEON3 CPU controlled by the XtratuM separation micro-kernel. On top of this, we run five separate partitions. Each partition runs its own real-time operating system or Ada run-time kernel, which in turn are running the application software of the CSW. We describe issues related to partitioning; inter-partition communication; scheduling; I/O; and fault-detection, isolation, and recovery (FDIR).
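    TSP kernels such as XtratuM schedule partitions in fixed windows within a repeating major frame. A minimal sketch of building such a static cyclic schedule (partition names and window durations below are illustrative, not from the EagleEye configuration):

```python
def build_cyclic_schedule(partitions, major_frame_ms):
    """Lay out fixed execution windows for each partition within one
    repeating major frame, in the spirit of a TSP kernel's static
    schedule. `partitions` is a list of (name, duration_ms) pairs.
    Returns (name, start_ms, end_ms) tuples."""
    total = sum(duration for _, duration in partitions)
    assert total <= major_frame_ms, "windows exceed the major frame"
    schedule, t = [], 0
    for name, duration in partitions:
        schedule.append((name, t, t + duration))
        t += duration
    return schedule
```

The kernel then replays this table every major frame, which is what gives each partition (and its own RTOS or Ada run-time) temporal isolation from the others.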

  16. Performance Comparison of EPICS IOC and MARTe in a Hard Real-Time Control Application

    NASA Astrophysics Data System (ADS)

    Barbalace, Antonio; Manduchi, Gabriele; Neto, A.; De Tommasi, G.; Sartori, F.; Valcarcel, D. F.

    2011-12-01

    EPICS is used worldwide, mostly for controlling accelerators and large experimental physics facilities. Although EPICS is well suited to the design and development of automation systems, which are typically VME- or PLC-based, and to soft real-time systems, it may present several drawbacks when used to develop hard real-time systems and applications, especially when general-purpose operating systems such as plain Linux are chosen. This is in particular true in fusion research devices, which typically employ several hard real-time systems, such as the magnetic control systems, that may require strict determinism and high performance in terms of jitter and latency. Serious deterioration of important plasma parameters may happen otherwise, possibly leading to an abrupt termination of the plasma discharge. The MARTe framework has been recently developed to fulfill the demanding requirements of such real-time systems that are to run on general-purpose operating systems, possibly integrated with the low-latency real-time preemption patches. MARTe has been adopted to develop a number of real-time systems in different tokamaks. In this paper, we first summarize differences and similarities between EPICS IOC and MARTe. Then we report on a set of performance measurements executed on an x86 64-bit multicore machine running Linux, with an IO control algorithm implemented in an EPICS IOC and in MARTe.
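    Jitter and latency comparisons of the kind reported here boil down to summary statistics over many measured cycle times. A minimal sketch of such a summary (these metric definitions are common choices, not necessarily the paper's):

```python
def latency_stats(samples_us):
    """Summarize measured control-cycle latencies (microseconds):
    mean, worst case, and jitter taken as the maximum deviation
    from the mean."""
    mean = sum(samples_us) / len(samples_us)
    worst = max(samples_us)
    jitter = max(abs(s - mean) for s in samples_us)
    return {"mean": mean, "worst": worst, "jitter": jitter}
```

For a hard real-time comparison it is the worst case and jitter, not the mean, that decide whether a framework meets its deadline budget.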

  17. High Speed, High Temperature, Fault Tolerant Operation of a Combination Magnetic-Hydrostatic Bearing Rotor Support System for Turbomachinery

    NASA Technical Reports Server (NTRS)

    Jansen, Mark; Montague, Gerald; Provenza, Andrew; Palazzolo, Alan

    2004-01-01

Closed loop operation of a single, high-temperature magnetic radial bearing to 30,000 RPM (2.25 million DN) and 540 °C (1000 °F) is discussed. High-temperature, fault-tolerant operation of the three-axis system is also examined. A novel hydrostatic backup bearing system was employed to attain high-speed, high-temperature, lubrication-free support of the entire rotor system. The hydrostatic bearings were made of a high-lubricity material and acted as journal-type backup bearings. New high-temperature displacement sensors were successfully employed to monitor shaft position throughout the entire temperature range and are described in this paper. Control of the system was accomplished through a stand-alone, high-speed computer controller, which ran both the fault-tolerant PID and active vibration control algorithms.

  18. Developing Control System of Electrical Devices with Operational Expense Prediction

    NASA Astrophysics Data System (ADS)

    Sendari, Siti; Wahyu Herwanto, Heru; Rahmawati, Yuni; Mukti Putranto, Dendi; Fitri, Shofiana

    2017-04-01

The purpose of this research is to develop a system that can monitor and record the electricity usage of home electrical devices. The system can control electrical devices remotely and predict the operational expense. It was developed using micro-controllers and WiFi modules connected to a PC server; communication between modules is arranged by the server via WiFi. Besides reading the electricity usage of home electrical devices, the distinctive feature of the proposed system is the ability of the micro-controllers to send electricity data to the server, which records the usage of each device. The system was tested with the black-box method to verify its functionality; it ran correctly with 0% error.

  19. Total hydrocarbon content (THC) testing in liquid oxygen (LOX) systems

    NASA Astrophysics Data System (ADS)

    Meneghelli, B. J.; Obregon, R. E.; Ross, H. R.; Hebert, B. J.; Sass, J. P.; Dirschka, G. E.

    2015-12-01

The measured Total Hydrocarbon Content (THC) levels in liquid oxygen (LOX) systems at Stennis Space Center (SSC) have shown wide variations, including: 1) differences between vendor-supplied THC values and those obtained using standard SSC analysis procedures; and 2) increasing THC values over time in both storage and run vessels at an active SSC test stand. A detailed analysis of LOX sampling techniques, analytical instrumentation, and sampling procedures will be presented. Additional data obtained on LOX system operations and LOX delivery trailer THC values during the past 12-24 months will also be discussed. Field test results showing THC levels and the distribution of the THCs in the test stand run tank, modified for THC analysis via dip tubes, will be presented.

  20. Crawler Solids Unknown Analysis

    NASA Technical Reports Server (NTRS)

    Frandsen, Athela

    2016-01-01

Crawler Transporter (CT) #2 has been undergoing refurbishment to carry the Space Launch System (SLS). After returning to normal operation, multiple filters of the gear box lubrication system failed/clogged and went on bypass during a test run to the launch pad. Analysis of the filters was done in large part with polarized light microscopy (PLM) to identify the filter contaminants and their source of origin.

  1. Obfuscated authentication systems, devices, and methods

    DOEpatents

    Armstrong, Robert C; Hutchinson, Robert L

    2013-10-22

    Embodiments of the present invention are directed toward authentication systems, devices, and methods. Obfuscated executable instructions may encode an authentication procedure and protect an authentication key. The obfuscated executable instructions may require communication with a remote certifying authority for operation. In this manner, security may be controlled by the certifying authority without regard to the security of the electronic device running the obfuscated executable instructions.

  2. Superfund Technology Evaluation Report: SITE Program Demonstration Test Shirco Pilot-Scale Infrared Incineration System at the Rose Township Demode Road Superfund Site Volume I

    EPA Science Inventory

    The Shirco Pilot-Scale Infrared Incineration System was evaluated during a series of seventeen test runs under varied operating conditions at the Demode Road Superfund Site located in Rose Township, Michigan. The tests sought to demonstrate the effectiveness of the unit and the t...

  3. Prognostics and Health Monitoring: Application to Electric Vehicles

    NASA Technical Reports Server (NTRS)

    Kulkarni, Chetan S.

    2017-01-01

As more and more autonomous electric vehicles progressively enter daily operation, a critical challenge lies in accurately predicting the remaining useful life of their systems and subsystems, specifically the electrical powertrain. For electric aircraft, computing remaining flying time is safety-critical, since an aircraft that runs out of power (battery charge) while in the air will eventually lose control, leading to catastrophe. To tackle the prediction problem, it is essential to have awareness of the current state and health of the system, especially since condition-based predictions are necessary. To predict the future state of the system, knowledge of the current and future operations of the vehicle is also required. Our research approach is to develop a system-level health monitoring safety indicator for the pilot or autopilot of electric vehicles, which runs estimation and prediction algorithms to estimate the remaining useful life of the vehicle, e.g., to determine the state of charge of the batteries. Given models of the current and future system behavior, a general model-based prognostics approach can be employed as a solution to the prediction problem and, further, for decision making.

  4. Effect of sucrose availability on wheel-running as an operant and as a reinforcing consequence on a multiple schedule: Additive effects of extrinsic and automatic reinforcement.

    PubMed

    Belke, Terry W; Pierce, W David

    2015-07-01

As a follow-up to Belke and Pierce's (2014) study, we assessed the effects of repeated presentation and removal of sucrose solution on the behavior of rats responding on a two-component multiple schedule. Rats completed 15 wheel turns (FR 15) for either 15% or 0% sucrose solution in the manipulated component and lever pressed 10 times on average (VR 10) for an opportunity to complete 15 wheel turns (FR 15) in the other component. In contrast to our earlier study, the components advanced based on time (every 8 min) rather than completed responses. Results showed that in the manipulated component wheel-running rates were higher, and the latency to initiate running longer, when sucrose was present (15%) compared to absent (0% or water); the number of obtained outcomes (sucrose/water), however, did not differ with the presentation and withdrawal of sucrose. For the wheel-running-as-reinforcement component, rates of wheel turns, overall lever-pressing rates, and obtained wheel-running reinforcements were higher, and postreinforcement pauses shorter, when sucrose was present (15%) than absent (0%) in the manipulated component. Overall, our findings suggest that wheel-running rate, regardless of its function (operant or reinforcement), is maintained by automatically generated consequences (automatic reinforcement) and is increased as an operant by adding experimentally arranged sucrose reinforcement (extrinsic reinforcement). This additive effect on operant wheel-running generalizes through induction or arousal to the wheel-running-as-reinforcement component, increasing the rate of responding for opportunities to run and the rate of wheel-running per opportunity. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartoletti, T.

SPI/U3.1 consists of five tools used to assess and report the security posture of computers running the UNIX operating system. The tools are: Access Control Test: a rule-based system which identifies sequential dependencies in UNIX access controls. Binary Inspector Tool: evaluates the release status of system binaries by comparing a crypto-checksum to provided table entries. Change Detection Tool: maintains and applies a snapshot of critical system files and attributes for purposes of change detection. Configuration Query Language: accepts CQL-based scripts (provided) to evaluate queries over the status of system files, the configuration of services and many other elements of UNIX system security. Password Security Inspector: tests for weak or aged passwords. The tools are packaged with a forms-based user interface providing on-line context-sensitive help, job scheduling, parameter management and output report management utilities. Tools may be run independently of the UI.
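The checksum-snapshot idea behind the Change Detection Tool can be sketched briefly. This is a minimal illustration of the general technique, not SPI's actual implementation; the file list, hash algorithm, and snapshot format are assumptions.

```python
# Sketch of checksum-based change detection: take a snapshot of file
# checksums, then compare a later snapshot against it. Illustrative only;
# SPI's tool also tracks file attributes, which are omitted here.
import hashlib


def snapshot(paths):
    """Record a crypto-checksum for each file in `paths`."""
    table = {}
    for p in paths:
        with open(p, "rb") as f:
            table[p] = hashlib.sha256(f.read()).hexdigest()
    return table


def detect_changes(old, new):
    """Compare two snapshots; report modified, added, and removed files."""
    modified = [p for p in old if p in new and old[p] != new[p]]
    added = [p for p in new if p not in old]
    removed = [p for p in old if p not in new]
    return modified, added, removed
```

A periodic job would persist the snapshot table and diff it against a fresh one, flagging any critical system file whose checksum no longer matches.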

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartoletti, Tony

SPI/U3.2 consists of five tools used to assess and report the security posture of computers running the UNIX operating system. The tools are: Access Control Test: a rule-based system which identifies sequential dependencies in UNIX access controls. Binary Authentication Tool: evaluates the release status of system binaries by comparing a crypto-checksum to provided table entries. Change Detection Tool: maintains and applies a snapshot of critical system files and attributes for purposes of change detection. Configuration Query Language: accepts CQL-based scripts (provided) to evaluate queries over the status of system files, the configuration of services and many other elements of UNIX system security. Password Security Inspector: tests for weak or aged passwords. The tools are packaged with a forms-based user interface providing on-line context-sensitive help, job scheduling, parameter management and output report management utilities. Tools may be run independently of the UI.

  7. The Web Based Monitoring Project at the CMS Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopez-Perez, Juan Antonio; Badgett, William; Behrens, Ulf

The Compact Muon Solenoid is a large and complex general-purpose experiment at the CERN Large Hadron Collider (LHC), built and maintained by many collaborators from around the world. Efficient operation of the detector requires widespread and timely access to a broad range of monitoring and status information. To that end, the Web Based Monitoring (WBM) system was developed to present data to users located anywhere from many underlying heterogeneous sources, from real-time messaging systems to relational databases. This system provides the power to combine and correlate data in both graphical and tabular formats of interest to the experimenters, including data such as beam conditions, luminosity, trigger rates, detector conditions, and many others, allowing for flexibility on the user's side. This paper describes the WBM system architecture and how the system has been used from the beginning of data taking until now (Run 1 and Run 2).

  8. The web based monitoring project at the CMS experiment

    NASA Astrophysics Data System (ADS)

    Lopez-Perez, Juan Antonio; Badgett, William; Behrens, Ulf; Chakaberia, Irakli; Jo, Youngkwon; Maeshima, Kaori; Maruyama, Sho; Patrick, James; Rapsevicius, Valdas; Soha, Aron; Stankevicius, Mantas; Sulmanas, Balys; Toda, Sachiko; Wan, Zongru

    2017-10-01

The Compact Muon Solenoid is a large and complex general-purpose experiment at the CERN Large Hadron Collider (LHC), built and maintained by many collaborators from around the world. Efficient operation of the detector requires widespread and timely access to a broad range of monitoring and status information. To that end the Web Based Monitoring (WBM) system was developed to present data to users located anywhere from many underlying heterogeneous sources, from real-time messaging systems to relational databases. This system provides the power to combine and correlate data in both graphical and tabular formats of interest to the experimenters, including data such as beam conditions, luminosity, trigger rates, detector conditions, and many others, allowing for flexibility on the user's side. This paper describes the WBM system architecture and how the system has been used from the beginning of data taking until now (Run 1 and Run 2).

  9. A cyclic ground test of an ion auxiliary propulsion system: Description and operational considerations

    NASA Technical Reports Server (NTRS)

    Ling, Jerri S.; Kramer, Edward H.

    1988-01-01

    The Ion Auxiliary Propulsion System (IAPS) experiment is designed for launch on an Air Force Space Test Program satellite (NASA-TM-78859; AIAA Paper No. 78-647). The primary objective of the experiment is to flight qualify the 8 cm mercury ion thruster system for stationkeeping applications. Secondary objectives are measuring the interactions between operating ion thruster systems and host spacecraft, and confirming the design performance of the thruster systems. Two complete 8 cm mercury ion thruster subsystems will be flown. One of these will be operated for 2557 on and off cycles and 7057 hours at full thrust. Tests are currently under way in support of the IAPS flight experiment. In this test an IAPS thruster is being operated through a series of startup/run/shut-down cycles which simulate thruster operation during the planned flight experiment. A test facility description and operational considerations of this testing using an engineering model 8 cm thruster (S/N 905) is the subject of this paper. Final results will be published at a later date when the ground test has been concluded.

  10. Process Control Migration of 50 LPH Helium Liquefier

    NASA Astrophysics Data System (ADS)

    Panda, U.; Mandal, A.; Das, A.; Behera, M.; Pal, Sandip

    2017-02-01

Two helium liquefier/refrigerators are operational at VECC, one of which is dedicated to the Superconducting Cyclotron. The first helium liquefier, of 50 LPH capacity from Air Liquide, has completed fifteen years of operation without any major trouble. This liquefier is controlled by a Eurotherm PC3000 PLC, which has been obsolete for about seven years. Though the PLC system can still be run with existing spares, there is a constant risk of interrupted operation due to spare unavailability. To eliminate this risk, an equivalent PLC control system based on the Siemens S7-300 was planned. For smooth migration, the entire program was rewritten keeping the same field input and output interfaces, nomenclature and graphset. The new program is a mix of S7-300 Graph, STL and LAD languages. One-to-one verification of the entire process graph was done manually, and the complete program was run in simulation mode. A Matlab mathematical model was also used for plant control simulations, and EPICS-based SCADA was used for process monitoring. The entire hardware and software is now ready for direct replacement with minimal set-up time.

  11. Development of the GEM-MACH-FireWork System: An Air Quality Model with On-line Wildfire Emissions within the Canadian Operational Air Quality Forecast System

    NASA Astrophysics Data System (ADS)

    Pavlovic, Radenko; Chen, Jack; Beaulieu, Paul-Andre; Anselmp, David; Gravel, Sylvie; Moran, Mike; Menard, Sylvain; Davignon, Didier

    2014-05-01

    A wildfire emissions processing system has been developed to incorporate near-real-time emissions from wildfires and large prescribed burns into Environment Canada's real-time GEM-MACH air quality (AQ) forecast system. Since the GEM-MACH forecast domain covers Canada and most of the U.S.A., including Alaska, fire location information is needed for both of these large countries. During AQ model runs, emissions from individual fire sources are injected into elevated model layers based on plume-rise calculations and then transport and chemistry calculations are performed. This "on the fly" approach to the insertion of the fire emissions provides flexibility and efficiency since on-line meteorology is used and computational overhead in emissions pre-processing is reduced. GEM-MACH-FireWork, an experimental wildfire version of GEM-MACH, was run in real-time mode for the summers of 2012 and 2013 in parallel with the normal operational version. 48-hour forecasts were generated every 12 hours (at 00 and 12 UTC). Noticeable improvements in the AQ forecasts for PM2.5 were seen in numerous regions where fire activity was high. Case studies evaluating model performance for specific regions and computed objective scores will be included in this presentation. Using the lessons learned from the last two summers, Environment Canada will continue to work towards the goal of incorporating near-real-time intermittent wildfire emissions into the operational air quality forecast system.

  12. The contaminant analysis automation robot implementation for the automated laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younkin, J.R.; Igou, R.E.; Urenda, T.D.

    1995-12-31

The Contaminant Analysis Automation (CAA) project defines the automated laboratory as a series of standard laboratory modules (SLM) serviced by a robotic standard support module (SSM). These SLMs are designed for plug-and-play integration into automated systems that perform standard analysis methods (SAM). While the SLMs are autonomous in executing their particular chemical processing tasks, the SAM concept relies on a high-level task sequence controller (TSC) to coordinate the robotic delivery of the materials required for SLM operations, initiate an SLM operation with the chemical-method-dependent operating parameters, and coordinate the robotic removal of materials from the SLM and ready them for transport operations. The Supervisor and Subsystems (GENISAS) software governs the commands and events exchanged with the SLMs and robot, and the Intelligent System Operating Environment (ISOE) enables the inter-process communications used by GENISAS. CAA selected the Hewlett-Packard Optimized Robot for Chemical Analysis (ORCA) and its associated Windows-based Methods Development Software (MDS) as the robot SSM. The MDS software is used to teach the robot each SLM position and the required material port motions. To allow the TSC to command these SLM motions, a hardware and software implementation was required that allowed message passing between different operating systems. This implementation involved a VMEbus rack with a Force CPU-30 computer running VxWorks, a real-time multitasking operating system, and a RadiSys PC-compatible VME computer running MDS. A GENISAS server on the Force computer accepts a transport command from the TSC (a GENISAS supervisor) over Ethernet and notifies software on the RadiSys PC of the pending command through VMEbus shared memory. The command is then delivered to the MDS robot control software using a Windows Dynamic Data Exchange conversation.

  13. Compressed Air System Optimization: Case Study Food Industry in Indonesia

    NASA Astrophysics Data System (ADS)

    Widayati, Endang; Nuzahar, Hasril

    2016-01-01

Compressors and compressed air systems are among the most important utilities in industry: approximately 10% of the cost of electricity in industry is used to produce compressed air, so the potential for energy savings in compressors and compressed air systems is substantial. This study was conducted in the Indonesian food industry. Compressed air system optimization is a technique for determining the optimal operating conditions of compressors and compressed air systems; it includes evaluating energy needs, adjusting supply, eliminating or reconfiguring inefficient use and operation, changing or supplementing equipment, and improving operating efficiency. The technique yields significant energy and cost savings. The potential savings found in this study through measurement and optimization were: lowering the system pressure from 7.5 barg to 6.8 barg, reducing energy consumption and running costs by approximately 4.2%; switching off compressors GA110 and GA75, for annual savings of USD 52,947 ≈ 455,714 kWh; running GA75 at light load or unloaded, for annual savings of USD 31,841 ≈ 270,685 kWh; and installing new 2x132 kW compressors and a 1x132 kW VSD compressor, for annual savings of USD 108,325 ≈ 928,500 kWh. Further work should address the technical aspects of the energy saving potential (an investment-grade audit) and a cost-benefit analysis. This study is an example of best-practice solutions for saving energy and improving energy performance in compressors and compressed air systems.
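The quoted ~4.2% saving from the pressure reduction can be checked against the common rule of thumb that each 1 bar drop in discharge pressure cuts compressor energy by roughly 6%. The 6%-per-bar figure is an assumption for illustration, not a number taken from the study.

```python
# Back-of-envelope check of the pressure-reduction saving, assuming an
# energy saving of ~6% per 1 bar of discharge-pressure reduction (a
# widely used rule of thumb, not a figure from this study).
def pressure_saving_pct(p_old_barg, p_new_barg, pct_per_bar=6.0):
    """Estimated percent energy saving from reducing discharge pressure."""
    return (p_old_barg - p_new_barg) * pct_per_bar


# 7.5 barg -> 6.8 barg: 0.7 bar * 6 %/bar, i.e. about 4.2 percent,
# consistent with the study's measured figure.
saving = pressure_saving_pct(7.5, 6.8)
```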

  14. An operational global ocean forecast system and its applications

    NASA Astrophysics Data System (ADS)

    Mehra, A.; Tolman, H. L.; Rivin, I.; Rajan, B.; Spindler, T.; Garraffo, Z. D.; Kim, H.

    2012-12-01

    A global Real-Time Ocean Forecast System (RTOFS) was implemented in operations at NCEP/NWS/NOAA on 10/25/2011. This system is based on an eddy resolving 1/12 degree global HYCOM (HYbrid Coordinates Ocean Model) and is part of a larger national backbone capability of ocean modeling at NWS in strong partnership with US Navy. The forecast system is run once a day and produces a 6 day long forecast using the daily initialization fields produced at NAVOCEANO using NCODA (Navy Coupled Ocean Data Assimilation), a 3D multi-variate data assimilation methodology. As configured within RTOFS, HYCOM has a horizontal equatorial resolution of 0.08 degrees or ~9 km. The HYCOM grid is on a Mercator projection from 78.64 S to 47 N and north of this it employs an Arctic dipole patch where the poles are shifted over land to avoid a singularity at the North Pole. This gives a mid-latitude (polar) horizontal resolution of approximately 7 km (3.5 km). The coastline is fixed at 10 m isobath with open Bering Straits. This version employs 32 hybrid vertical coordinate surfaces with potential density referenced to 2000 m. Vertical coordinates can be isopycnals, often best for resolving deep water masses, levels of equal pressure (fixed depths), best for the well mixed unstratified upper ocean and sigma-levels (terrain-following), often the best choice in shallow water. The dynamic ocean model is coupled to a thermodynamic energy loan ice model and uses a non-slab mixed layer formulation. The forecast system is forced with 3-hourly momentum, radiation and precipitation fluxes from the operational Global Forecast System (GFS) fields. Results include global sea surface height and three dimensional fields of temperature, salinity, density and velocity fields used for validation and evaluation against available observations. 
Several downstream applications of this forecast system are also discussed, including search and rescue operations at the US Coast Guard; navigation safety information provided by OPC using real-time Global RTOFS surface-current guidance; operational guidance on radionuclide dispersion near Fukushima using 3D tracers; and boundary conditions for various operational coastal ocean forecast systems (COFS) run by NOS.
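The quoted grid spacings (~9 km at the equator, ~7 km at mid-latitudes for a 0.08 degree Mercator mesh) follow from the cosine-of-latitude shrinkage of a degree of longitude. A rough check, assuming 111.32 km per degree at the equator and an illustrative mid-latitude of 38°:

```python
# Rough check of the RTOFS grid spacings: on a Mercator grid, the physical
# spacing of a fixed angular mesh shrinks as cos(latitude). The km-per-degree
# constant and sample latitude are illustrative assumptions.
import math

KM_PER_DEG = 111.32  # approximate length of one degree of longitude at the equator


def mercator_spacing_km(deg_resolution, lat_deg):
    """Physical grid spacing (km) of a fixed angular mesh at a given latitude."""
    return deg_resolution * KM_PER_DEG * math.cos(math.radians(lat_deg))


equator = mercator_spacing_km(0.08, 0.0)    # ~9 km, matching the abstract
mid_lat = mercator_spacing_km(0.08, 38.0)   # ~7 km at mid-latitudes
```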

  15. Assessment of Global Forecast Ocean Assimilation Model (FOAM) using new satellite SST data

    NASA Astrophysics Data System (ADS)

    Ascione Kenov, Isabella; Sykes, Peter; Fiedler, Emma; McConnell, Niall; Ryan, Andrew; Maksymczuk, Jan

    2016-04-01

There is an increased demand for accurate ocean weather information for applications in the fields of marine safety and navigation, water quality, offshore commercial operations, and the monitoring of oil spills and pollutants, among others. The Met Office, UK, provides ocean forecasts to customers from governmental, commercial and ecological sectors using the Global Forecast Ocean Assimilation Model (FOAM), an operational modelling system which covers the global ocean and runs daily, using the NEMO (Nucleus for European Modelling of the Ocean) ocean model with a horizontal resolution of 1/4° and 75 vertical levels. The system assimilates salinity and temperature profiles, sea surface temperature (SST), sea surface height (SSH), and sea ice concentration observations on a daily basis. In this study, the FOAM system is updated to assimilate Advanced Microwave Scanning Radiometer 2 (AMSR2) and Spinning Enhanced Visible and Infrared Imager (SEVIRI) SST data. Model results from one-month trials are assessed against observations using verification tools which provide a quantitative description of model performance and error, based on statistical metrics including mean error, root mean square error (RMSE), correlation coefficient, and Taylor diagrams. A series of hindcast experiments is used to run the FOAM system with AMSR2 and SEVIRI SST data, using a control run for comparison. Results show that all trials perform well over the global ocean and that the largest SST mean errors were found in the Southern Hemisphere. The geographic distribution of the model error for SST and temperature profiles is discussed using statistical metrics evaluated over sub-regions of the global ocean.
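The verification statistics named above (mean error, RMSE, correlation coefficient) are standard model-vs-observation comparisons. A minimal sketch, with illustrative arrays rather than FOAM output:

```python
# Minimal implementations of the verification metrics mentioned in the
# abstract: mean error (bias), root mean square error, and Pearson
# correlation between model values and co-located observations.
import math


def mean_error(model, obs):
    """Mean model-minus-observation difference (bias)."""
    return sum(m - o for m, o in zip(model, obs)) / len(model)


def rmse(model, obs):
    """Root mean square error of model against observations."""
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(model))


def correlation(model, obs):
    """Pearson correlation coefficient between model and observations."""
    n = len(model)
    mm = sum(model) / n
    mo = sum(obs) / n
    cov = sum((m - mm) * (o - mo) for m, o in zip(model, obs))
    sm = math.sqrt(sum((m - mm) ** 2 for m in model))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    return cov / (sm * so)
```

Evaluated over sub-regions of the global ocean, these three numbers (plus a Taylor diagram, which combines them graphically) summarize where and by how much the hindcast trials differ from the control run.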

  16. Design and validation of a portable, inexpensive and multi-beam timing light system using the Nintendo Wii hand controllers.

    PubMed

    Clark, Ross A; Paterson, Kade; Ritchie, Callan; Blundell, Simon; Bryant, Adam L

    2011-03-01

Commercial timing light systems (CTLS) provide precise measurement of athletes' running velocity; however, they are often expensive and difficult to transport. In this study an inexpensive, wireless and portable timing light system was created using the infrared camera in Nintendo Wii hand controllers (NWHC). System creation with gold-standard validation. A Windows-based software program using NWHC to replicate a dual-beam timing gate was created. Firstly, data collected during 2 m walking and running trials were validated against a 3D kinematic system. Secondly, data recorded during 5 m running trials at various intensities from standing or flying starts were compared to a single-beam CTLS and the independent and average scores of three handheld stopwatch (HS) operators. Intraclass correlation coefficients (ICC) and Bland-Altman plots were used to assess validity. Absolute error quartiles and the percentage of trials within absolute error threshold ranges were used to determine accuracy. The NWHC system was valid when compared against the 3D kinematic system (ICC=0.99, median absolute error (MAR)=2.95%). For the flying 5 m trials the NWHC system possessed excellent validity and precision (ICC=0.97, MAR<3%) when compared with the CTLS. In contrast, the NWHC system and the HS values during standing-start trials possessed only modest validity (ICC<0.75) and accuracy (MAR>8%). A NWHC timing light system is inexpensive, portable and valid for assessing running velocity. Errors in the 5 m standing-start trials may have been due to erroneous event detection by either the commercial or NWHC-based timing light systems. Copyright © 2010 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
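The Bland-Altman analysis mentioned above summarizes agreement between two measurement methods as a mean difference and 95% limits of agreement. A minimal sketch of the computation, with illustrative data rather than the study's measurements:

```python
# Bland-Altman agreement summary for paired measurements from two methods
# (e.g. NWHC times vs a reference system). Returns the mean difference
# (bias) and the 95% limits of agreement (mean +/- 1.96 * SD of differences).
import math


def bland_altman(method_a, method_b):
    """Mean difference and 95% limits of agreement for paired measurements."""
    diffs = [x - y for x, y in zip(method_a, method_b)]
    n = len(diffs)
    mean_diff = sum(diffs) / n
    sd = math.sqrt(sum((d - mean_diff) ** 2 for d in diffs) / (n - 1))
    return mean_diff, mean_diff - 1.96 * sd, mean_diff + 1.96 * sd
```

Plotting each pair's difference against its average, with these three horizontal lines, gives the Bland-Altman plot used to judge whether the cheaper system agrees with the gold standard across the measurement range.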

  17. 30 CFR 7.103 - Safety system control test.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... the temperature sensor in the exhaust gas stream which will automatically activate the safety shutdown... control that might interfere with the evaluation of the operation of the exhaust gas temperature sensor... allowable low water level. Run the engine until the exhaust gas temperature sensor activates the safety...

  18. 30 CFR 7.103 - Safety system control test.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the temperature sensor in the exhaust gas stream which will automatically activate the safety shutdown... control that might interfere with the evaluation of the operation of the exhaust gas temperature sensor... allowable low water level. Run the engine until the exhaust gas temperature sensor activates the safety...

  19. Flexible server-side processing of climate archives

    NASA Astrophysics Data System (ADS)

    Juckes, Martin; Stephens, Ag; Damasio da Costa, Eduardo

    2014-05-01

    The flexibility and interoperability of OGC Web Processing Services are combined with an extensive range of data processing operations supported by the Climate Data Operators (CDO) library to facilitate processing of the CMIP5 climate data archive. The challenges posed by this peta-scale archive allow us to test and develop systems which will help us to deal with approaching exa-scale challenges. The CEDA WPS package allows users to manipulate data in the archive and export the results without first downloading the data -- in some cases this can drastically reduce the data volumes which need to be transferred and greatly reduce the time needed for the scientists to get their results. Reductions in data transfer are achieved at the expense of an additional computational load imposed on the archive (or near-archive) infrastructure. This is managed with a load balancing system. Short jobs may be run in near real-time, longer jobs will be queued. When jobs are queued the user is provided with a web dashboard displaying job status. A clean split between the data manipulation software and the request management software is achieved by exploiting the extensive CDO library. This library has a long history of development to support the needs of the climate science community. Use of the library ensures that operations run on data by the system can be reproduced by users using the same operators installed on their own computers. Examples using the system deployed for the CMIP5 archive will be shown and issues which need to be addressed as archive volumes expand into the exa-scale will be discussed.
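The short-vs-long dispatch policy described above (short jobs run in near real-time, longer jobs queued behind the load balancer) can be sketched simply. The threshold, job representation, and function names are illustrative assumptions, not the CEDA WPS implementation:

```python
# Sketch of the job-dispatch policy described in the abstract: requests
# whose estimated cost falls under a threshold run immediately; the rest
# go on a batch queue whose status the user tracks via the web dashboard.
from collections import deque

QUICK_THRESHOLD_S = 30  # assumed cutoff for "near real-time" jobs


def dispatch(jobs):
    """Split (name, estimated_seconds) jobs into run-now and queued sets."""
    run_now, queue = [], deque()
    for name, est_seconds in jobs:
        if est_seconds <= QUICK_THRESHOLD_S:
            run_now.append(name)
        else:
            queue.append(name)
    return run_now, queue


now, queued = dispatch([("small_subset", 5), ("archive_timmean", 3600)])
```

Because the processing itself is delegated to CDO operators, a queued job's result is reproducible by running the same operators locally, as the abstract notes.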

  20. Flexible server-side processing of climate archives

    NASA Astrophysics Data System (ADS)

    Juckes, M. N.; Stephens, A.; da Costa, E. D.

    2013-12-01

    The flexibility and interoperability of OGC Web Processing Services are combined with an extensive range of data processing operations supported by the Climate Data Operators (CDO) library to facilitate processing of the CMIP5 climate data archive. The challenges posed by this peta-scale archive allow us to test and develop systems which will help us to deal with approaching exa-scale challenges. The CEDA WPS package allows users to manipulate data in the archive and export the results without first downloading the data -- in some cases this can drastically reduce the data volumes which need to be transferred and greatly reduce the time needed for the scientists to get their results. Reductions in data transfer are achieved at the expense of an additional computational load imposed on the archive (or near-archive) infrastructure. This is managed with a load balancing system. Short jobs may be run in near real-time, longer jobs will be queued. When jobs are queued the user is provided with a web dashboard displaying job status. A clean split between the data manipulation software and the request management software is achieved by exploiting the extensive CDO library. This library has a long history of development to support the needs of the climate science community. Use of the library ensures that operations run on data by the system can be reproduced by users using the same operators installed on their own computers. Examples using the system deployed for the CMIP5 archive will be shown and issues which need to be addressed as archive volumes expand into the exa-scale will be discussed.

  1. Operational environmental assessment "Prestige" (a recent application of the MOCASSIM system).

    NASA Astrophysics Data System (ADS)

    Vitorino, J.; Rusu, E.; Almeida, S.; Monteiro, M.; Lermusiaux, P.; Haley, P.; Leslie, W.; Miller, P.; Coelho, E.; Signell, R.

    2003-04-01

    The sinking of the tanker "Prestige" on 19 November 2002, off the northwestern coasts of Spain and Portugal, led to a major environmental disaster. In this contribution we present several aspects of the operational environmental assessment of "Prestige" conducted by Instituto Hidrografico (IH) in close collaboration with Instituto de Meteorologia (IM), Harvard University, the Plymouth Marine Laboratory (PML) and the SACLANT Centre. The operational system MOCASSIM, presently under development at IH, was used to provide forecasts of the evolution of oceanographic conditions off the NW Iberian coast. The system integrates a primitive-equation model with data assimilation (the Harvard Ocean Prediction System - HOPS) and two wave models (SWAN and WW3). The numerical domains used in both the HOPS and SWAN models covered the area between 40°N and 46°N and from 7°W to 15°W, and included the sinking area as well as the coastal regions most directly exposed to the oil spill. The models were run with atmospheric forcing conditions provided by the limited-area model ALADIN, run operationally at IM, complemented with NOGAPS wind fields from the NATO METOC site at Rota. The HOPS simulations included assimilation of several data sets available for the region, including CTD casts from the northern Spanish shelf and slope (made available by the University of the Balearic Islands) and SST data processed at the Remote Sensing Group of PML. Results from both models were used in oil spill models and allowed an estimation of the impacts on the coastal areas.

  2. Recent results of PADReS, the Photon Analysis Delivery and REduction System, from the FERMI FEL commissioning and user operations.

    PubMed

    Zangrando, Marco; Cocco, Daniele; Fava, Claudio; Gerusina, Simone; Gobessi, Riccardo; Mahne, Nicola; Mazzucco, Eric; Raimondi, Lorenzo; Rumiz, Luca; Svetina, Cristian

    2015-05-01

    The Photon Analysis Delivery and REduction System of FERMI (PADReS) has been routinely used during the machine commissioning and operations of FERMI since 2011. It has also served the needs of several user runs at the facility from late 2012. The system is endowed with online and shot-to-shot diagnostics giving information about intensity, spatial-angular distribution, spectral content, as well as other diagnostics to determine coherence, pulse length etc. Moreover, PADReS is capable of manipulating the beam in terms of intensity and optical parameters. Regarding the optics, besides a standard refocusing system based on an ellipsoidal mirror, the Kirkpatrick-Baez active optics systems are key elements and have been used intensively to meet users' requirements. A general description of the system is given, together with some selected results from the commissioning/operations/user beam time.

  3. [Construction and operation status of management system of laboratories of schistosomiasis control institutions in Hubei Province].

    PubMed

    Zhao-Hui, Zheng; Jun, Qin; Li, Chen; Hong, Zhu; Li, Tang; Zu-Wu, Tu; Ming-Xing, Zeng; Qian, Sun; Shun-Xiang, Cai

    2016-10-09

    To analyze the construction and operation status of the management systems of laboratories of schistosomiasis control institutions in Hubei Province, so as to provide a reference for the standardized detection and management of schistosomiasis laboratories. According to the laboratory standards for schistosomiasis at the provincial, municipal and county levels, the management system construction and operation status of 60 schistosomiasis control institutions was assessed by the acceptance examination method from 2013 to 2015. A management system had been established in all the laboratories of the schistosomiasis control institutions and was officially running. There were 588 non-conformities, for an overall non-conformity rate of 19.60%. The non-conformity rate for laboratory quality control was 38.10% (224 cases), and that for instrument and equipment requirements was 23.81% (140 cases). The management system has played an important role in the standardized management of schistosomiasis laboratories.

  4. Operating system for a real-time multiprocessor propulsion system simulator. User's manual

    NASA Technical Reports Server (NTRS)

    Cole, G. L.

    1985-01-01

    The NASA Lewis Research Center is developing and evaluating experimental hardware and software systems to help meet future needs for real-time, high-fidelity simulations of air-breathing propulsion systems. Specifically, the real-time multiprocessor simulator project focuses on the use of multiple microprocessors to achieve the required computing speed and accuracy at relatively low cost. Operating systems for such hardware configurations are generally not available. A real-time multiprocessor operating system (RTMPOS) that supports a variety of multiprocessor configurations was developed at Lewis. With some modification, RTMPOS can also support various microprocessors. RTMPOS, by means of menus and prompts, provides the user with a versatile, user-friendly environment for interactively loading, running, and obtaining results from a multiprocessor-based simulator. The menu functions are described and an example simulation session is included to demonstrate the steps required to go from the simulation loading phase to the execution phase.

  5. Muon Physics at Run-I and its upgrade plan

    NASA Astrophysics Data System (ADS)

    Benekos, Nektarios Chr.

    2015-05-01

    The Large Hadron Collider (LHC) and its multi-purpose detector, ATLAS, have been operated successfully at record centre-of-mass energies of 7 and 8 TeV. After this successful LHC Run-1, plans are actively advancing for a series of upgrades, culminating roughly 10 years from now in the High Luminosity LHC (HL-LHC) project, delivering on the order of five times the LHC nominal instantaneous luminosity along with luminosity levelling. The final goal is to extend the data set from the few hundred fb-1 expected for LHC running to 3000 fb-1 by around 2030. To cope with the corresponding rate increase, the ATLAS detector needs to be upgraded. The upgrade will proceed in two steps: Phase I in the LHC shutdown of 2018/19 and Phase II in 2023-25. The largest of the ATLAS Phase-I upgrades concerns the replacement of the first muon station of the high-rapidity region, the so-called New Small Wheel. This configuration copes with the highest rates expected in Phase II and considerably enhances the performance of the forward muon system by adding triggering functionality to the first muon station. This article presents the main muon physics results from LHC Run-1, based on a total luminosity of 30 fb-1. Prospects for the ongoing and future data taking are also presented. We conclude with an update on the status of the project and the steps towards a complete operational system, ready to be installed in ATLAS in 2018/19.

  6. An overview of Booster and AGS polarized proton operation during Run 15

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeno, K.

    2015-10-20

    This note is an overview of the Booster and AGS for the 2015 Polarized Proton RHIC run from an operations perspective. There are some notable differences between this and previous runs. In particular, the polarized source intensity was expected to be, and was, higher this year than in previous RHIC runs. The hope was to make use of this higher input intensity by allowing the beam to be scraped down more in the Booster to provide a brighter and smaller beam for the AGS and RHIC. The RHIC intensity requirements were also higher this run than in previous runs, which caused additional challenges because the AGS polarization and emittance are normally intensity dependent.

  7. Fuzzy Neuron: Method and Hardware Realization

    NASA Technical Reports Server (NTRS)

    Krasowski, Michael J.; Prokop, Norman F.

    2014-01-01

    This innovation represents a method by which single-to-multi-input, single-to-many-output system transfer functions can be estimated from input/output data sets. This innovation can be run in the background while a system is operating under other means (e.g., through human operator effort), or may be utilized offline using data sets created from observations of the estimated system. It utilizes a set of fuzzy membership functions spanning the input space for each input variable. Linear combiners associated with combinations of input membership functions are used to create the output(s) of the estimator. Coefficients are adjusted online through the use of learning algorithms.
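
    The estimator described (fuzzy membership functions spanning the input space, feeding linear combiners whose coefficients are adjusted online) can be sketched roughly as follows for the single-input, single-output case. The triangular membership shape, the normalized-LMS update rule, and all parameter values are assumptions for the sketch, not taken from the innovation itself.

```python
import numpy as np

def tri_memberships(x, centers, width):
    """Triangular fuzzy membership degrees of scalar x for each center."""
    return np.clip(1.0 - np.abs(x - centers) / width, 0.0, None)

class FuzzyEstimator:
    """Single-input/single-output sketch: output = memberships . weights,
    with the weights adapted online by a normalized-LMS rule (an assumed
    stand-in for the learning algorithms mentioned in the abstract)."""
    def __init__(self, centers, width, lr=0.5):
        self.centers = np.asarray(centers, float)
        self.width = width
        self.w = np.zeros(len(self.centers))
        self.lr = lr

    def predict(self, x):
        return tri_memberships(x, self.centers, self.width) @ self.w

    def update(self, x, y):
        m = tri_memberships(x, self.centers, self.width)
        err = y - m @ self.w
        self.w += self.lr * err * m / (m @ m + 1e-9)  # normalized LMS step
        return err

# Learn y = 2x online from input/output pairs, as in background operation:
est = FuzzyEstimator([0.0, 0.25, 0.5, 0.75, 1.0], width=0.25)
for _ in range(50):
    for x in (0.0, 0.25, 0.5, 0.75, 1.0):
        est.update(x, 2.0 * x)
```

    Because the membership functions overlap and span the input space, the combiner interpolates smoothly between training points, which is what lets the scheme approximate a transfer function from observed data alone.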

  8. The arrangement of deformation monitoring project and analysis of monitoring data of a hydropower engineering safety monitoring system

    NASA Astrophysics Data System (ADS)

    Wang, Wanshun; Chen, Zhuo; Li, Xiuwen

    2018-03-01

    Safety monitoring is very important in the operation and management of water resources and hydropower projects. It is an important means of understanding a dam's operating status, ensuring dam safety, safeguarding people's lives and property, and making full use of engineering benefits. This paper introduces the arrangement of an engineering safety monitoring system based on the example of a water resource control project. The monitoring results of each monitoring project are analyzed intensively to show the operating status of the monitoring system and to provide a useful reference for similar projects.

  9. AGATE: Adversarial Game Analysis for Tactical Evaluation

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance L.

    2013-01-01

    AGATE generates a set of ranked strategies that enables an autonomous vehicle to track/trail another vehicle that is trying to break the contact using evasive tactics. The software is efficient (it can be run on a laptop), scales well with environmental complexity, and is suitable for use onboard an autonomous vehicle. The software will run in near-real-time (2 Hz) on most commercial laptops. Existing software is usually run offline in a planning mode, and is not used to control an unmanned vehicle actively. JPL has developed a system for AGATE that uses adversarial game theory (AGT) methods (in particular, leader-follower and pursuit-evasion) to enable an autonomous vehicle (AV) to maintain tracking/trailing operations on a target that is employing evasive tactics. The AV trailing, tracking, and reacquisition operations are characterized by imperfect information, and are an example of a non-zero-sum game (a positive payoff for the AV is not necessarily an equal loss for the target being tracked and, potentially, additional adversarial boats). Previously, JPL successfully applied the Nash equilibrium method for onboard control of an autonomous ground vehicle (AGV) travelling over hazardous terrain.

  10. Configuring the HYSPLIT Model for National Weather Service Forecast Office and Spaceflight Meteorology Group Applications

    NASA Technical Reports Server (NTRS)

    Dreher, Joseph G.

    2009-01-01

    For expedience in delivering dispersion guidance in the diversity of operational situations, National Weather Service Melbourne (MLB) and the Spaceflight Meteorology Group (SMG) are becoming increasingly reliant on the PC-based version of the HYSPLIT model run through a graphical user interface (GUI). While the GUI offers unique advantages when compared to traditional methods, it is difficult for forecasters to run and manage in an operational environment. To alleviate the difficulty in providing scheduled real-time trajectory and concentration guidance, the Applied Meteorology Unit (AMU) configured a Linux version of the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model that ingests National Centers for Environmental Prediction (NCEP) guidance, such as the North American Mesoscale (NAM) and the Rapid Update Cycle (RUC) models. The AMU configured the HYSPLIT system to automatically download the NCEP model products, convert the meteorological grids into HYSPLIT binary format, run the model from several pre-selected latitude/longitude sites, and post-process the data to create output graphics. In addition, the AMU configured several software programs to convert local Weather Research and Forecasting (WRF) model output into HYSPLIT format.
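
    The automated sequence described (download NCEP guidance, convert it to HYSPLIT's binary format, run the model for pre-selected sites, post-process graphics) can be outlined as a simple ordered pipeline plan. Every executable name, the cycle-stamp format, and the site coordinates below are hypothetical placeholders, not actual HYSPLIT or NCEP tools.

```python
# Pipeline plan for a scheduled HYSPLIT-style workflow. All command names
# and arguments are illustrative placeholders.

def build_pipeline(cycle, sites):
    """Return the ordered commands for one forecast cycle:
    fetch guidance, convert it, run each site, post-process."""
    steps = [
        f"download_nam {cycle}",        # fetch NCEP NAM guidance
        f"nam_to_arl {cycle} nam.arl",  # convert grids to HYSPLIT binary
    ]
    for lat, lon in sites:              # pre-selected lat/lon run sites
        steps.append(f"run_trajectory nam.arl {lat} {lon}")
    steps.append("plot_results")        # generate output graphics
    return steps

plan = build_pipeline("2009061512", [(28.5, -80.6), (34.7, -86.6)])
```

    In an operational setting such a plan would be executed by a scheduler (e.g. cron), which is what removes the manual GUI workload from the forecasters.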

  11. Test of Hydrogen-Oxygen PEM Fuel Cell Stack at NASA Glenn Research Center

    NASA Technical Reports Server (NTRS)

    Bents, David J.; Scullin, Vincent J.; Chang, Bei-Jiann; Johnson, Donald W.; Garcia, Christopher P.; Jakupca, Ian J.

    2003-01-01

    This paper describes performance characterization tests of a 64-cell hydrogen-oxygen PEM fuel cell stack at NASA Glenn Research Center in February 2003. The tests were part of NASA's ongoing effort to develop a regenerative fuel cell for aerospace energy storage applications. The purpose of the tests was to verify the capability of this stack to operate within a regenerative fuel cell, and to compare performance with earlier test results recorded by the stack developer. Test results obtained include polarization performance of the stack at 50 and 100 psig system pressure, and a steady-state endurance run at 100 psig. A maximum power output of 4.8 kWe was observed during polarization runs, and the stack sustained a steady power output of 4.0 kWe during the endurance run. The performance data obtained from these tests compare reasonably closely to the stack developer's results, although some additional spread between the best and worst performing cell voltages was observed. Throughout the tests, the stack demonstrated the consistent performance and repeatable behavior required for regenerative fuel cell operation.

  12. TWOS - TIME WARP OPERATING SYSTEM, VERSION 2.5.1

    NASA Technical Reports Server (NTRS)

    Bellenot, S. F.

    1994-01-01

    The Time Warp Operating System (TWOS) is a special-purpose operating system designed to support parallel discrete-event simulation. TWOS is a complete implementation of the Time Warp mechanism, a distributed protocol for virtual time synchronization based on process rollback and message annihilation. Version 2.5.1 supports simulations and other computations using both virtual time and dynamic load balancing; it does not support general time-sharing or multi-process jobs using conventional message synchronization and communication. The program utilizes the underlying operating system's resources. TWOS runs a single simulation at a time, executing it concurrently on as many processors of a distributed system as are allocated. The simulation needs only to be decomposed into objects (logical processes) that interact through time-stamped messages. TWOS provides transparent synchronization. The user does not have to add any more special logic to aid in synchronization, nor give any synchronization advice, nor even understand much about how the Time Warp mechanism works. The Time Warp Simulator (TWSIM) subdirectory contains a sequential simulation engine that is interface compatible with TWOS. This means that an application designer and programmer who wish to use TWOS can prototype code on TWSIM on a single processor and/or workstation before having to deal with the complexity of working on a distributed system. TWSIM also provides statistics about the application which may be helpful for determining the correctness of an application and for achieving good performance on TWOS. Version 2.5.1 has an updated interface that is not compatible with 2.0. The program's user manual assists the simulation programmer in the design, coding, and implementation of discrete-event simulations running on TWOS. The manual also includes a practical user's guide to the TWOS application benchmark, Colliding Pucks. TWOS supports simulations written in the C programming language. 
It is designed to run on the Sun3/Sun4 series computers and the BBN "Butterfly" GP-1000 computer. The standard distribution medium for this package is a .25 inch tape cartridge in TAR format. TWOS was developed in 1989 and updated in 1991. This program is a copyrighted work with all copyright vested in NASA. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc.
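
    The core Time Warp idea the description relies on — optimistic execution with state saving, then rollback when a message arrives in the virtual past — can be sketched minimally as follows. This is an illustration of the mechanism, not TWOS code; re-execution of rolled-back messages and anti-message annihilation are omitted for brevity.

```python
import bisect

# Minimal Time Warp sketch: a logical process whose state is the running
# sum of message deltas, with checkpointing and rollback on stragglers.

class TimeWarpLP:
    def __init__(self):
        self.lvt = 0                # local virtual time
        self.state = 0
        self.saved = [(0, 0)]       # (timestamp, state) checkpoints

    def handle(self, ts, delta):
        if ts < self.lvt:           # straggler: message in the virtual past
            self.rollback(ts)
        self.state += delta
        self.lvt = ts
        self.saved.append((ts, self.state))

    def rollback(self, ts):
        # restore the latest checkpoint strictly before the straggler
        i = bisect.bisect_left(self.saved, (ts,)) - 1
        self.lvt, self.state = self.saved[i]
        self.saved = self.saved[:i + 1]
```

    The transparency claim in the abstract corresponds to the fact that the application code (here, the `delta` accumulation) contains no synchronization logic at all; all of it lives in `handle` and `rollback`.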

  13. Linux Kernel Co-Scheduling and Bulk Synchronous Parallelism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Terry R

    2012-01-01

    This paper describes a kernel scheduling algorithm that is based on coscheduling principles and that is intended for parallel applications running on 1000 cores or more. Experimental results for a Linux implementation on a Cray XT5 machine are presented. The results indicate that Linux is a suitable operating system for this new scheduling scheme, and that this design provides a dramatic improvement in scaling performance for synchronizing collective operations at scale.

  14. OpCost: an open-source system for estimating costs of stand-level forest operations

    Treesearch

    Conor K. Bell; Robert F. Keefe; Jeremy S. Fried

    2017-01-01

    This report describes and documents the OpCost forest operations cost model, a key component of the BioSum analysis framework. OpCost is available in two editions: as a callable module for use with BioSum, and in a stand-alone edition that can be run directly from R. OpCost model logic and assumptions for this open-source tool are explained, references to the...

  15. Bulk Shielding Facility quarterly report, April, May and June 1984

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corbett, B.L.; Lance, E.D.

    1984-12-01

    The BSR operated at an average power level of 1310 kW for 3.8% of the time during April, May, and June. Water-quality control in both the reactor primary and secondary cooling systems was satisfactory. The PCA was used in training startups and was operated on five occasions for the NBS and HEDL recheck of a previous experiment run on the LWR pressure vessel surveillance dosimetry improvement program.

  16. Ada 9X Project Report, A Study of Implementation-Dependent Pragmas and Attributes in Ada

    DTIC Science & Technology

    1989-11-01

    here communications with the vendor were often required to firmly establish the behavior of some implementation-dependent features CMU-SEI-SR-89-19 3 2.2...compilers), by potential market penetration (percent coverage of all surveyed implementations), and by cross-compiler influence (percentage of cross...operations in the context of a tightly integrated development environment, specific underlying operating system services (beneath the Ada run-time kernel

  17. Collaborative Autonomous Unmanned Aerial - Ground Vehicle Systems for Field Operations

    DTIC Science & Technology

    2007-08-31

    very limited payload capabilities of small UVs, sacrificing minimal computational power and run time, adhering at the same time to the low cost...configuration has been chosen because of its high computational capabilities, low power consumption, multiple I/O ports, size, low heat emission and cost. This...due to their high power to weight ratio, small packaging, and wide operating temperatures. Power distribution is controlled by the 120 Watt ATX power

  18. The Chimera II Real-Time Operating System for advanced sensor-based control applications

    NASA Technical Reports Server (NTRS)

    Stewart, David B.; Schmitz, Donald E.; Khosla, Pradeep K.

    1992-01-01

    Attention is given to the Chimera II Real-Time Operating System, which has been developed for advanced sensor-based control applications. Chimera II provides a high-performance real-time kernel and a variety of IPC features. The hardware platform required to run Chimera II consists of commercially available hardware, and allows custom hardware to be easily integrated. The design allows it to be used with almost any type of VMEbus-based processors and devices. It allows radically differing hardware to be programmed using a common system, thus providing a first and necessary step towards the standardization of reconfigurable systems, which results in a reduction of development time and cost.

  19. Decision Support Systems for Launch and Range Operations Using Jess

    NASA Technical Reports Server (NTRS)

    Thirumalainambi, Rajkumar

    2007-01-01

    The virtual test bed for launch and range operations developed at NASA Ames Research Center consists of various independent expert systems advising on weather effects, toxic gas dispersions and human health risk assessment during space-flight operations. An individual dedicated server supports each expert system, and the master system gathers information from the dedicated servers to support the launch decision-making process. Since the test bed is web-based, reducing network traffic and optimizing the knowledge base are critical to its success in real-time or near-real-time operations. Jess, a fast rule engine and powerful scripting environment developed at Sandia National Laboratory, has been adopted to build the expert systems, providing robustness and scalability. Jess also supports XML representation of the knowledge base with forward- and backward-chaining inference mechanisms. Facts added to working memory during run-time operations facilitate analyses of multiple scenarios. The knowledge base can be distributed, with one inference engine performing the inference process. This paper discusses details of the knowledge base and inference engine using Jess for a launch and range virtual test bed.
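
    The rule-engine behavior described — rules firing as facts accumulate in working memory — can be illustrated with a toy forward-chaining loop. Jess itself uses the Rete algorithm and a CLIPS-like rule syntax; the naive loop and the fact names below are illustrative assumptions only.

```python
# Toy forward-chaining sketch of the rule-engine idea. Rules are
# (premise-set, conclusion) pairs; facts are plain strings.

def forward_chain(facts, rules):
    """Fire rules until no new facts can be derived (naive fixpoint)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)   # rule fires, new fact enters memory
                changed = True
    return facts

# Hypothetical launch-decision rules (not from the actual test bed):
rules = [({"wind_ok", "toxics_ok"}, "weather_go"),
         ({"weather_go", "range_clear"}, "launch_go")]
```

    Chaining is visible here: adding the three base facts derives "weather_go" first and then "launch_go", mirroring how scenario facts added at run time propagate through the knowledge base.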

  20. Distributed analysis in ATLAS

    NASA Astrophysics Data System (ADS)

    Dewhurst, A.; Legger, F.

    2015-12-01

    The ATLAS experiment accumulated more than 140 PB of data during the first run of the Large Hadron Collider (LHC) at CERN. The analysis of such an amount of data is a challenging task for the distributed physics community. The Distributed Analysis (DA) system of the ATLAS experiment is an established and stable component of the ATLAS distributed computing operations. About half a million user jobs are running daily on DA resources, submitted by more than 1500 ATLAS physicists. The reliability of the DA system during the first run of the LHC and the following shutdown period has been high thanks to the continuous automatic validation of the distributed analysis sites and the user support provided by a dedicated team of expert shifters. During the LHC shutdown, the ATLAS computing model has undergone several changes to improve the analysis workflows, including the re-design of the production system, a new analysis data format and event model, and the development of common reduction and analysis frameworks. We report on the impact such changes have on the DA infrastructure, describe the new DA components, and include recent performance measurements.

  1. A Survey of Research in Supervisory Control and Data Acquisition (SCADA)

    DTIC Science & Technology

    2014-09-01

    distance learning.2 The data acquired may be operationally oriented and used to better run the system, or it could be strategic in nature and used to...Technically the SCADA system is composed of the information technology (IT) that provides the human-machine interface (HMI) and stores and analyzes the data...systems work by learning what normal or benign traffic is and reporting on any abnormal traffic. These systems have the potential to detect zero-day

  2. Effects of intrinsic aerobic capacity and ovariectomy on voluntary wheel running and nucleus accumbens dopamine receptor gene expression

    PubMed Central

    Park, Young-Min; Kanaley, Jill A.; Padilla, Jaume; Zidon, Terese; Welly, Rebecca J.; Will, Matthew J.; Britton, Steven L.; Koch, Lauren G.; Ruegsegger, Gregory N.; Booth, Frank W.; Thyfault, John P.; Vieira-Potter, Victoria J.

    2016-01-01

    Rats selectively bred for high (HCR) and low (LCR) aerobic capacity show a stark divergence in wheel running behavior, which may be associated with the dopamine (DA) system in the brain. HCR possess greater motivation for voluntary running along with greater brain DA activity compared to LCR. We recently demonstrated that HCR are not immune to ovariectomy (OVX)-associated reductions in spontaneous cage (i.e., locomotor) activity. Whether HCR and LCR rats differ in their OVX-mediated voluntary wheel running response is unknown. PURPOSE To determine whether HCR are protected from the OVX-associated reduction in voluntary wheel running. METHODS Forty female HCR and LCR rats (age ~27 weeks) underwent either sham (SHM) or OVX operations and were given access to a running wheel for 11 weeks. Weekly wheel running distance was monitored throughout the intervention. The nucleus accumbens (NAc) was assessed for mRNA expression of DA receptors at sacrifice. RESULTS Compared to LCR, HCR ran a greater distance and had a greater ratio of excitatory/inhibitory DA mRNA expression (both line main effects, P<0.05). Wheel running distance was significantly, positively correlated with the ratio of excitatory/inhibitory DA mRNA expression across animals. In both lines, OVX reduced wheel running (P<0.05). Unexpectedly, although HCR started with significantly greater voluntary wheel running, they had a greater OVX-induced reduction in wheel running than LCR, such that no differences were found 11 weeks after OVX between HCR-OVX and LCR-OVX (interaction, P<0.05). This significant reduction in wheel running in HCR was associated with an OVX-mediated reduction in the ratio of excitatory/inhibitory DA mRNA expression. CONCLUSION The DA system in the NAc region may play a significant role in the motivation to run in female rats. Compared to LCR, HCR rats run significantly more, which is associated with a greater ratio of excitatory/inhibitory DA mRNA expression. However, despite greater inherent motivation to run and an associated brain DA mRNA expression profile, HCR rats are not protected against the OVX-induced reduction in wheel running. The impairment in wheel running in HCR rats may be partially explained by their reduced ratio of excitatory/inhibitory DA receptor mRNA expression. PMID:27297873

  3. A CAMAC-VME-Macintosh data acquisition system for nuclear experiments

    NASA Astrophysics Data System (ADS)

    Anzalone, A.; Giustolisi, F.

    1989-10-01

    A multiprocessor system for data acquisition and analysis in low-energy nuclear physics has been realized. The system is built around CAMAC, the VMEbus, and the Macintosh PC. Multiprocessor software has been developed, using RTF, MACsys, and CERN cross-software. The execution of several programs that run on several VME CPUs and on an external PC is coordinated by a mailbox protocol. No operating system is used on the VME CPUs. The hardware, software, and system performance are described.
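
    The mailbox coordination described — several CPUs with no shared operating system exchanging commands through mailboxes — can be sketched with a queue-based mailbox and two threads standing in for the VME CPU and the external PC. The structure below is an illustration of the idea, not the protocol from the paper.

```python
import queue
import threading

# Mailbox sketch: each "CPU" owns a mailbox (a thread-safe queue) and
# peers post commands into it. Names are illustrative only.

class Mailbox:
    def __init__(self):
        self.q = queue.Queue()

    def post(self, msg):
        self.q.put(msg)

    def wait(self):
        return self.q.get()   # block until a command arrives

def acquisition_cpu(mbox, results):
    """Worker loop standing in for a VME acquisition CPU."""
    while True:
        cmd = mbox.wait()
        if cmd == "stop":
            break
        results.append(f"ran {cmd}")

mbox, results = Mailbox(), []
t = threading.Thread(target=acquisition_cpu, args=(mbox, results))
t.start()
mbox.post("read_camac")   # the "PC" posts a command to the acquisition CPU
mbox.post("stop")
t.join()
```

    The point of the pattern is that each side only needs to agree on the mailbox location and message format, which is why it works even without an operating system on the VME CPUs.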

  4. 40 CFR 63.4371 - What definitions apply to this subpart?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... wiper blades. Thus, it includes any cleaning material used in the web coating and printing subcategory... process operation run at atmospheric pressure would be a different operating scenario from the same dyeing process operation run under pressure. Organic HAP content means the mass of organic HAP per mass of solids...

  5. 40 CFR 63.4371 - What definitions apply to this subpart?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... wiper blades. Thus, it includes any cleaning material used in the web coating and printing subcategory... process operation run at atmospheric pressure would be a different operating scenario from the same dyeing process operation run under pressure. Organic HAP content means the mass of organic HAP per mass of solids...

  6. 40 CFR 63.4371 - What definitions apply to this subpart?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... wiper blades. Thus, it includes any cleaning material used in the web coating and printing subcategory... process operation run at atmospheric pressure would be a different operating scenario from the same dyeing process operation run under pressure. Organic HAP content means the mass of organic HAP per mass of solids...

  7. 40 CFR 63.4371 - What definitions apply to this subpart?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... wiper blades. Thus, it includes any cleaning material used in the web coating and printing subcategory... process operation run at atmospheric pressure would be a different operating scenario from the same dyeing process operation run under pressure. Organic HAP content means the mass of organic HAP per mass of solids...

  8. 40 CFR 63.4371 - What definitions apply to this subpart?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... wiper blades. Thus, it includes any cleaning material used in the web coating and printing subcategory... process operation run at atmospheric pressure would be a different operating scenario from the same dyeing process operation run under pressure. Organic HAP content means the mass of organic HAP per mass of solids...

  9. Intelligent operations of the data acquisition system of the ATLAS experiment at LHC

    NASA Astrophysics Data System (ADS)

    Anders, G.; Avolio, G.; Lehmann Miotto, G.; Magnoni, L.

    2015-05-01

    The ATLAS experiment at the Large Hadron Collider at CERN relies on a complex and highly distributed Trigger and Data Acquisition (TDAQ) system to gather and select particle collision data obtained at unprecedented energy and rates. The Run Control (RC) system is the component steering the data acquisition by starting and stopping processes and by carrying all data-taking elements through well-defined states in a coherent way. Taking into account all the lessons learnt during LHC's Run 1, the RC has been completely re-designed and re-implemented during the LHC Long Shutdown 1 (LS1) phase. As a result of the new design, the RC is assisted by the Central Hint and Information Processor (CHIP) service, which can truly be considered its “brain”. CHIP is an intelligent system able to supervise the ATLAS data taking, take operational decisions and handle abnormal conditions. In this paper, the design, implementation and performance of the RC/CHIP system are described. A particular emphasis is put on the way the RC and CHIP cooperate and on the huge benefits brought by the Complex Event Processing engine. Additionally, some error recovery scenarios are analysed for which the intervention of human experts is now rendered unnecessary.

  10. Recent advances in the multimodel hydrologic ensemble forecasting using the HydroProg system in the Nysa Klodzka river basin (southwestern Poland)

    NASA Astrophysics Data System (ADS)

    Niedzielski, Tomasz; Mizinski, Bartlomiej; Swierczynska-Chlasciak, Malgorzata

    2017-04-01

    The HydroProg system, a real-time multimodel hydrologic ensemble system developed at the University of Wroclaw (Poland) under research grant no. 2011/01/D/ST10/04171, financed by the National Science Centre of Poland, was experimentally launched in 2013 in the Nysa Klodzka river basin (southwestern Poland). Since that time the system has been working operationally to provide water level predictions in real time. At present, depending on the hydrologic gauge, up to eight hydrologic models are run. They are data-based and physically-based solutions, the majority of them data-based. The paper aims to report on the performance of the implementation of the HydroProg system for the basin in question. We focus on several high-flow episodes and discuss the skill of the individual models in forecasting them. In addition, we present the performance of the multimodel ensemble solution. We also introduce a new prognosis, determined in the following way: for a given lead time, we select the most skillful prediction (from the set of all individual models running at a given gauge and their multimodel ensemble) using performance statistics computed operationally in real time as a function of lead time.
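
    The selection rule introduced at the end of the abstract — for each lead time, pick the most skillful forecast source from the individual models and the multimodel ensemble, based on operationally updated error statistics — can be sketched as follows. The source names and RMSE values are illustrative assumptions.

```python
# Per-lead-time model selection sketch. Skill is summarized here as RMSE
# (lower is better); the numbers are invented for illustration.

def best_source(rmse_by_source, lead):
    """rmse_by_source: {name: {lead_time: rmse}}; return the source
    with the smallest RMSE at the given lead time."""
    return min(rmse_by_source, key=lambda s: rmse_by_source[s][lead])

rmse = {"model_A":  {1: 2.0, 6: 9.0},
        "model_B":  {1: 3.5, 6: 7.5},
        "ensemble": {1: 2.5, 6: 6.0}}
```

    With these invented statistics, a short lead time would be served by an individual model while a longer one would fall to the ensemble, illustrating why the selection is made per lead time rather than once per gauge.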

  11. An improved cellular automata model for train operation simulation with dynamic acceleration

    NASA Astrophysics Data System (ADS)

    Li, Wen-Jun; Nie, Lei

    2018-03-01

    Urban rail transit plays an important role in urban public transport owing to its speed, large transport capacity, high safety and reliability, and low pollution. This study proposes an improved cellular automaton (CA) model that considers the dynamic characteristics of train acceleration to analyze energy consumption and train running time. Constructing an effective model for calculating energy consumption is the basis for studying and analyzing energy-saving measures in urban rail transit operation.
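
    As a rough illustration of the idea (the parameters and the acceleration table are assumptions for this sketch, not the paper's model), a CA update with speed-dependent acceleration might look like:

```python
# Minimal cellular-automaton train update with speed-dependent (dynamic)
# acceleration: the acceleration applied each step is looked up from the
# current speed, mimicking traction that weakens as speed grows.

def step(position, speed, v_max, limit_ahead, accel_table):
    """One CA time step: accelerate by a speed-dependent amount, then clip
    to the train's maximum speed and the speed limit of the track ahead."""
    a = accel_table[min(speed, len(accel_table) - 1)]  # dynamic acceleration
    speed = min(speed + a, v_max, limit_ahead)
    return position + speed, speed

# acceleration shrinks as speed grows
accel_table = [2, 2, 1, 1, 0]
pos, v = 0, 0
for _ in range(5):
    pos, v = step(pos, v, v_max=4, limit_ahead=10, accel_table=accel_table)
# after 5 steps: pos == 17, v == 4
```

    Energy consumption per step could then be accumulated from the applied acceleration and speed, which is what makes such a model useful for evaluating energy-saving driving strategies.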

  12. Space Shuttle Orbiter auxiliary power unit

    NASA Technical Reports Server (NTRS)

    Mckenna, R.; Wicklund, L.; Baughman, J.; Weary, D.

    1982-01-01

    The Space Shuttle Orbiter auxiliary power units (APUs) provide hydraulic power for the Orbiter vehicle control surfaces (rudder/speed brake, body flap, and elevon actuation systems), main engine gimbaling during ascent, landing gear deployment and steering and braking during landing. Operation occurs during launch/ascent, in-space exercise, reentry/descent, and landing/rollout. Operational effectiveness of the APU is predicated on reliable, failure-free operation during each flight, mission life (reusability) and serviceability between flights (turnaround). Along with the accumulating flight data base, the status and results of efforts to achieve these long-run objectives are presented.

  13. Platform-Independence and Scheduling In a Multi-Threaded Real-Time Simulation

    NASA Technical Reports Server (NTRS)

    Sugden, Paul P.; Rau, Melissa A.; Kenney, P. Sean

    2001-01-01

    Aviation research often relies on real-time, pilot-in-the-loop flight simulation as a means to develop new flight software, flight hardware, or pilot procedures. Often these simulations become so complex that a single processor is incapable of performing the necessary computations within a fixed time-step. Threads are an elegant means to distribute the computational workload when running on a symmetric multi-processor machine. However, programming with threads often requires operating-system-specific calls that reduce code portability and maintainability. While a multi-threaded simulation allows a significant increase in simulation complexity, it also increases the workload of a simulation operator by requiring that the operator determine which models run on which thread. To address these concerns, an object-oriented design was implemented in the NASA Langley Standard Real-Time Simulation in C++ (LaSRS++) application framework. The design provides a portable and maintainable means to use threads and also provides a mechanism to automatically load balance the simulation models.
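
    A minimal sketch of automatic load balancing across threads, assuming a greedy longest-processing-time heuristic and hypothetical model names (the actual LaSRS++ API is not shown here):

```python
import threading

def balance(models, n_threads):
    """Greedy assignment of (name, cost) models to threads, costliest first,
    always into the least-loaded thread, so per-thread load is roughly even."""
    bins = [{"load": 0.0, "models": []} for _ in range(n_threads)]
    for name, cost in sorted(models, key=lambda m: -m[1]):
        target = min(bins, key=lambda b: b["load"])
        target["models"].append(name)
        target["load"] += cost
    return bins

def run_frame(bins, run_model):
    """Execute one fixed time step: each thread runs its assigned models."""
    threads = [threading.Thread(target=lambda b=b: [run_model(m) for m in b["models"]])
               for b in bins]
    for t in threads: t.start()
    for t in threads: t.join()

# hypothetical simulation models with relative costs
models = [("aero", 3.0), ("engine", 2.0), ("gear", 1.0), ("avionics", 2.0)]
bins = balance(models, 2)  # both threads end up with load 4.0
```

    Wrapping the thread creation behind a single interface, as above, is what keeps the operating-system-specific calls out of the simulation models themselves.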

  14. Ripples on Cratered Terrain North of Hesperia Planum

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This is a Mars Orbiter Camera view of the cratered uplands located between the Amenthes Fossae and Hesperia Planum. This ancient, cratered surface sports a covering of windblown dunes and ripples oriented in somewhat different directions. The dunes are bigger and their crests generally run east-west across the image. The ripples are smaller and their crests run in a more north-south direction. The pattern they create together makes some of the dunes almost appear as if they are giant millipedes! This picture covers an area only 3 kilometers (1.9 miles) wide. Illumination is from the top.

    Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.

  15. Remote Operations of Laser Guide Star Systems: Gemini Observatory.

    NASA Astrophysics Data System (ADS)

    Oram, Richard J.; Fesquet, Vincent; Wyman, Robert; D'Orgeville, Celine

    2011-03-01

    The Gemini North telescope, equipped with a 14W laser, has been providing Laser Guide Star Adaptive Optics (LGS AO) regular science queue observations for worldwide astronomers since February 2007. The new 55W laser system for MCAO was installed on the Gemini South telescope in May 2010. In this paper, we comment on how Gemini Observatory developed regular remote operation of the Laser Guide Star Facility and high-power solid-state laser as routine normal operations. Fully remote operation of the LGSF from the Hilo Base Facility (HBF) was initially trialed, then optimized, and became the standard operating procedure (SOP) for LGS operation in December 2008. From an engineering perspective, remote operation demands stable, well-characterized, and baselined equipment sets. In the effort to produce consistent, stable and controlled laser parameters (power, wavelength and beam quality), we completed a failure mode and effects analysis of the laser system and subsystems that initiated a campaign of hardware upgrades and procedural improvements to the routine maintenance operations. Finally, we provide an overview of normal operation procedures during LGS runs and present a snapshot of data accumulated over several years that describes the overall LGS AO observing efficiency at the Gemini North telescope.

  16. A Rapid Method for Optimizing Running Temperature of Electrophoresis through Repetitive On-Chip CE Operations

    PubMed Central

    Kaneda, Shohei; Ono, Koichi; Fukuba, Tatsuhiro; Nojima, Takahiko; Yamamoto, Takatoki; Fujii, Teruo

    2011-01-01

    In this paper, a rapid and simple method to determine the optimal temperature conditions for denaturant electrophoresis using a temperature-controlled on-chip capillary electrophoresis (CE) device is presented. Since on-chip CE operations including sample loading, injection and separation are carried out just by switching the electric field, we can repeat consecutive run-to-run CE operations on a single on-chip CE device by programming the voltage sequences. By utilizing the high-speed separation and the repeatability of the on-chip CE, a series of electrophoretic operations with different running temperatures can be implemented. Using separations of reaction products of single-stranded DNA (ssDNA) with a peptide nucleic acid (PNA) oligomer, the effectiveness of the presented method to determine the optimal temperature conditions required to discriminate a single-base substitution (SBS) between two different ssDNAs is demonstrated. It is shown that a single run for one temperature condition can be executed within 4 min, and the optimal temperature to discriminate the SBS could be successfully found using the present method. PMID:21845077
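
    The repetitive run-to-run procedure can be sketched as a simple control loop; `run_ce_at` and `resolution_of` are hypothetical stand-ins for the device control (the programmed voltage sequence) and the electropherogram analysis.

```python
def optimize_temperature(temps, run_ce_at, resolution_of):
    """Execute one short CE run (load -> inject -> separate) per candidate
    running temperature and return the temperature with the best separation."""
    results = {}
    for t in temps:
        electropherogram = run_ce_at(t)   # one ~4 min voltage sequence
        results[t] = resolution_of(electropherogram)
    return max(results, key=results.get), results

# toy stand-ins: pretend separation resolution peaks at 55 degrees C
best, scores = optimize_temperature(
    [45, 50, 55, 60],
    run_ce_at=lambda t: t,
    resolution_of=lambda x: -(x - 55) ** 2)
assert best == 55
```

    Because each run takes only minutes on the same chip, a full temperature sweep of this kind remains far faster than re-preparing a conventional gel per condition.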

  17. Doppler Lidar System Design via Interdisciplinary Design Concept at NASA Langley Research Center - Part II

    NASA Technical Reports Server (NTRS)

    Crasner, Aaron I.; Scola, Salvatore; Beyon, Jeffrey Y.; Petway, Larry B.

    2014-01-01

    Optimized designs of the Navigation Doppler Lidar (NDL) instrument for Autonomous Landing Hazard Avoidance Technology (ALHAT) were accomplished via Interdisciplinary Design Concept (IDEC) at NASA Langley Research Center during the summer of 2013. Three branches in the Engineering Directorate and three students were involved in this joint task through the NASA Langley Aerospace Research Summer Scholars (LARSS) Program. The Laser Remote Sensing Branch (LRSB), Mechanical Systems Branch (MSB), and Structural and Thermal Systems Branch (STSB) were engaged to achieve optimal designs through iterative and interactive collaborative design processes. A preliminary design iteration reduced the power consumption, mass, and footprint by removing redundant components and replacing inefficient components with more efficient ones. A second design iteration reduced volume and mass by replacing bulky, over-specified components with smaller components custom-designed for the power system. Thermal modeling software was used to run steady-state thermal analyses, which served both to validate the designs and to recommend further changes. Analyses were run on each redesign, as well as the original system. Thermal Desktop was used to run trade studies to account for uncertainty and assumptions about fan performance and boundary conditions. The studies suggested that, even if the assumptions were significantly wrong, the redesigned systems would remain within operating temperature limits.

  18. Applying the System Component and Operationally Relevant Evaluation (SCORE) Framework to Evaluate Advanced Military Technologies

    DTIC Science & Technology

    2010-03-01

    ...and characterize the actions taken by the soldier (e.g., running, walking, climbing stairs); real-time image capture and exchange; multimedia information sharing among soldiers in the field; two-way speech translation systems; and autonomous robotic platforms. It has been the foundation for 10 technology evaluations

  19. Formal Specification and Verification of Concurrent Programs

    DTIC Science & Technology

    1993-02-01

    This book describes operating systems via the construction of MINIX, a UNIX look-alike that runs on IBM-PC compatibles. The book contains a complete MINIX manual and a complete listing of its C code.

  20. Technology Demonstration Summary Shirco Electric Infrared Incineration At The Peak Oil Superfund Site

    EPA Science Inventory

    Under the auspices of the Superfund Innovative Technology Evaluation or SITE Program, a critical assessment is made of the performance of the transportable Shirco Infrared Thermal Destruction System during three separate test runs at an operating feed rate of 100 tons per day. Th...

  1. 77 FR 65801 - Airworthiness Directives; BAE SYSTEMS (OPERATIONS) LIMITED Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-31

    ... hydraulic fluid or hydraulic vapor from entering the passenger compartment, possibly resulting in injury to... pipe failures were caused by a combination of seam welded pipes, bends in the pipe runs with small bend... compartment, possibly resulting in injury to the occupants. For the reasons described above, this [European...

  2. Autoshaping and Automaintenance: A Neural-Network Approach

    ERIC Educational Resources Information Center

    Burgos, Jose E.

    2007-01-01

    This article presents an interpretation of autoshaping, and positive and negative automaintenance, based on a neural-network model. The model makes no distinction between operant and respondent learning mechanisms, and takes into account knowledge of hippocampal and dopaminergic systems. Four simulations were run, each one using an "A-B-A" design…

  3. Jet Engines as High-Capacity Vacuum Pumps

    NASA Technical Reports Server (NTRS)

    Wojciechowski, C. J.

    1983-01-01

    Large diffuser operating envelope and long run times are possible. The jet-engine-driven ejector/diffuser system combines two turbojet engines and a variable-area-ratio ejector in two stages. Applications include such industrial processes as handling corrosive fumes, evaporation of milk and fruit juices, petroleum distillation, and dehydration of blood plasma and penicillin.

  4. The Need for Vendor Source Code at NAS. Revised

    NASA Technical Reports Server (NTRS)

    Carter, Russell; Acheson, Steve; Blaylock, Bruce; Brock, David; Cardo, Nick; Ciotti, Bob; Poston, Alan; Wong, Parkson; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    The Numerical Aerodynamic Simulation (NAS) Facility has a long-standing practice of maintaining buildable source code for installed hardware. There are two reasons for this: NAS's designated pathfinding role, and the need to maintain a smoothly running operational capacity given the widely diversified nature of the vendor installations. NAS needs to maintain support capabilities when vendors are not able to; to diagnose and remedy hardware or software problems where applicable; and to support ongoing system software development activities whether or not the relevant vendors feel support is justified. This note provides an informal history of these activities at NAS, and brings together the general principles that drive the requirement that systems integrated into the NAS environment run binaries built from source code, onsite.

  5. Development of the CELSS Emulator at NASA JSC

    NASA Technical Reports Server (NTRS)

    Cullingford, Hatice S.

    1989-01-01

    The Controlled Ecological Life Support System (CELSS) Emulator is under development at the NASA Johnson Space Center (JSC) to investigate computer simulations of integrated CELSS operations involving humans, plants, and process machinery. This paper describes Version 1.0 of the CELSS Emulator, which was initiated in 1988 on the JSC Multi Purpose Applications Console Test Bed as the simulation framework. The run module of the simulation system now contains a CELSS model called BLSS. The CELSS Emulator makes it possible to generate model data sets, store libraries of results for further analysis, and also display plots of model variables as a function of time. The progress of the project is presented with sample test runs and simulation display pages.

  6. Development of the CELSS emulator at NASA. Johnson Space Center

    NASA Technical Reports Server (NTRS)

    Cullingford, Hatice S.

    1990-01-01

    The Closed Ecological Life Support System (CELSS) Emulator is under development. It will be used to investigate computer simulations of integrated CELSS operations involving humans, plants, and process machinery. Described here is Version 1.0 of the CELSS Emulator, which was initiated in 1988 on the Johnson Space Center (JSC) Multi Purpose Applications Console Test Bed as the simulation framework. The run module of the simulation system now contains a CELSS model called BLSS. The CELSS Emulator enables us to generate model data sets, store libraries of results for further analysis, and also display plots of model variables as a function of time. The progress of the project is presented with sample test runs and simulation display pages.

  7. A performance comparison of the Cray-2 and the Cray X-MP

    NASA Technical Reports Server (NTRS)

    Schmickley, Ronald; Bailey, David H.

    1986-01-01

    A suite of thirteen large Fortran benchmark codes was run on Cray-2 and Cray X-MP supercomputers. These codes were a mix of compute-intensive scientific application programs (mostly Computational Fluid Dynamics) and some special vectorized computation exercise programs. For the general class of programs tested on the Cray-2, most of which were not specially tuned for speed, the floating point operation rates varied under a variety of system load configurations from 40 percent up to 125 percent of X-MP performance rates. It is concluded that the Cray-2, in the original system configuration studied (without memory pseudo-banking), will run untuned Fortran code, on average, at about 70 percent of X-MP speeds.

  8. Altitude Scaling of Thermal Ice Protection Systems in Running Wet Operation

    NASA Technical Reports Server (NTRS)

    Orchard, D. M.; Addy, H. E.; Wright, W. B.; Tsao, J.

    2017-01-01

    A study into the effects of altitude on an aircraft thermal Ice Protection System (IPS) performance has been conducted by the National Research Council Canada (NRC) in collaboration with the NASA Glenn Icing Branch. The study included tests of an airfoil model, with a heated-air IPS, installed in the NRCs Altitude Icing Wind Tunnel (AIWT) at altitude and ground level conditions.

  9. ART/Ada design project, phase 1: Project plan

    NASA Technical Reports Server (NTRS)

    Allen, Bradley P.

    1988-01-01

    The plan and schedule for Phase 1 of the Ada-based ESBT Design Research Project are described. The main platform for the project is a DEC Ada compiler on VAX minicomputers and VAXstations running the Virtual Memory System (VMS) operating system. The Ada effort and lines of code are given in tabular form. A chart of the entire project life cycle is given.

  10. A database for coconut crop improvement.

    PubMed

    Rajagopal, Velamoor; Manimekalai, Ramaswamy; Devakumar, Krishnamurthy; Rajesh; Karun, Anitha; Niral, Vittal; Gopal, Murali; Aziz, Shamina; Gunasekaran, Marimuthu; Kumar, Mundappurathe Ramesh; Chandrasekar, Arumugam

    2005-12-08

    Coconut crop improvement requires a number of biotechnology and bioinformatics tools. A database containing information on CG (coconut germplasm), CCI (coconut cultivar identification), CD (coconut disease), MIFSPC (microbial information systems in plantation crops) and VO (vegetable oils) is described. The database was developed using MySQL and PostgreSQL running on the Linux operating system. The database interface is developed in PHP, HTML and Java. http://www.bioinfcpcri.org.

  11. FOS: A Factored Operating Systems for High Assurance and Scalability on Multicores

    DTIC Science & Technology

    2012-08-01

    ...computing. It builds on previous work in distributed and microkernel OSes by factoring services out of the kernel, and then further distributing each service among a set of cooperating servers. We term such a service a fleet. Figure 2 shows the high-level architecture of fos. A small microkernel runs on every core

  12. Effects of running time of a cattle-cooling system on core body temperature of cows on dairy farms in an arid environment.

    PubMed

    Ortiz, X A; Smith, J F; Bradford, B J; Harner, J P; Oddy, A

    2010-10-01

    Two experiments were conducted on a commercial dairy farm to describe the effects of a reduction in Korral Kool (KK; Korral Kool Inc., Mesa, AZ) system operating time on core body temperature (CBT) of primiparous and multiparous cows. In the first experiment, KK systems were operated for 18, 21, or 24 h/d while CBT of 63 multiparous Holstein dairy cows was monitored. All treatments started at 0600 h, and KK systems were turned off at 0000 h and 0300 h for the 18-h and 21-h treatments, respectively. Animals were housed in 9 pens and assigned randomly to treatment sequences in a 3 × 3 Latin square design. In the second experiment, 21 multiparous and 21 primiparous cows were housed in 6 pens and assigned randomly to treatment sequences (KK operated for 21 or 24 h/d) in a switchback design. All treatments started at 0600 h, and KK systems were turned off at 0300 h for the 21-h treatments. In experiment 1, cows in the 24-h treatment had a lower mean CBT than cows in the 18- and 21-h treatments (38.97, 39.08, and 39.03±0.04°C, respectively). The significant treatment by time interaction showed that the greatest treatment effects occurred at 0600 h; treatment means at this time were 39.43, 39.37, and 38.88±0.18°C for 18-, 21-, and 24-h treatments, respectively. These results demonstrate that a reduction in KK system running time of ≥3 h/d will increase CBT. In experiment 2, a significant parity by treatment interaction was found. Multiparous cows on the 24-h treatment had lower mean CBT than cows on the 21-h treatment (39.23 and 39.45±0.17°C, respectively), but treatment had no effect on mean CBT of primiparous cows (39.50 and 39.63±0.20°C for 21- and 24-h treatments, respectively). A significant treatment by time interaction was observed, with the greatest treatment effects occurring at 0500 h; treatment means at this time were 39.57, 39.23, 39.89, and 39.04±0.24°C for 21-h primiparous, 24-h primiparous, 21-h multiparous, and 24-h multiparous cows, respectively. 
These results demonstrate that multiparous and primiparous cows respond differently when KK system running time decreases from 24 to 21 h. We conclude that in desert climates, the KK system should be operated continuously to decrease heat stress of multiparous dairy cows, but that operating time could be reduced from 24 to 21 h for primiparous cows. Reducing system operation time should be done carefully, however, because CBT was elevated in all treatments. Copyright © 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  13. Case Study: Mobile Photovoltaic System at Bechler Meadows Ranger Station, Yellowstone National Park (Brochure)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

    The mobile PV/generator hybrid system deployed at Bechler Meadows provides a number of advantages. It reduces on-site air emissions from the generator. Batteries allow the generator to operate only at its rated power, reducing run-time and fuel consumption. Energy provided by the solar array reduces fuel consumption and run-time of the generator. The generator is off for most hours providing peace and quiet at the site. Maintenance trips from Mammoth Hot Springs to the remote site are reduced. The frequency of intrusive fuel deliveries to the pristine site is reduced. And the system gives rangers a chance to interpret Green Park values to the visiting public. As an added bonus, the system provides all these benefits at a lower cost than the basecase of using only a propane-fueled generator, reducing life cycle cost by about 26%.

  14. Case Study: Mobile Photovoltaic System at Bechler Meadows Ranger Station, Yellowstone National Park

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andy Walker

    The mobile PV/generator hybrid system deployed at Bechler Meadows provides a number of advantages. It reduces on-site air emissions from the generator. Batteries allow the generator to operate only at its rated power, reducing run-time and fuel consumption. Energy provided by the solar array reduces fuel consumption and run-time of the generator. The generator is off for most hours providing peace and quiet at the site. Maintenance trips from Mammoth Hot Springs to the remote site are reduced. The frequency of intrusive fuel deliveries to the pristine site is reduced. And the system gives rangers a chance to interpret Green Park values to the visiting public. As an added bonus, the system provides all these benefits at a lower cost than the basecase of using only a propane-fueled generator, reducing life cycle cost by about 26%.

  15. Adaptive DFT-based Interferometer Fringe Tracking

    NASA Technical Reports Server (NTRS)

    Wilson, Edward; Pedretti, Ettore; Bregman, Jesse; Mah, Robert W.; Traub, Wesley A.

    2004-01-01

    An automatic interferometer fringe tracking system has been developed, implemented, and tested at the Infrared Optical Telescope Array (IOTA) observatory at Mt. Hopkins, Arizona. The system can minimize the optical path differences (OPDs) for all three baselines of the Michelson stellar interferometer at IOTA. Based on sliding window discrete Fourier transform (DFT) calculations that were optimized for computational efficiency and robustness to atmospheric disturbances, the algorithm has also been tested extensively on off-line data. Implemented in ANSI C on the 266 MHz PowerPC processor running the VxWorks real-time operating system, the algorithm runs in approximately 2.0 milliseconds per scan (including all three interferograms), using the science camera and piezo scanners to measure and correct the OPDs. The adaptive DFT-based tracking algorithm should be applicable to other systems where there is a need to detect or track a signal with an approximately constant-frequency carrier pulse.
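
    The core of a sliding-window DFT fringe tracker can be sketched in a few lines (a toy illustration, not the IOTA flight code, which was written in ANSI C): the phase of the DFT bin at the known fringe frequency tracks the optical path difference.

```python
import cmath, math

def fringe_phase(samples, window, k):
    """Phase of DFT bin k computed over the last `window` samples of a scan."""
    seg = samples[-window:]
    n = len(seg)
    bin_k = sum(seg[i] * cmath.exp(-2j * math.pi * k * i / n) for i in range(n))
    return cmath.phase(bin_k)

# synthetic interferogram: a carrier at 4 cycles/scan with a small phase
# offset standing in for an optical path difference
true_phase = 0.5
scan = [math.cos(2 * math.pi * 4 * i / 64 + true_phase) for i in range(64)]
est = fringe_phase(scan, window=64, k=4)  # est is approximately 0.5
```

    In a real tracker this estimate would be fed back to the piezo scanners each scan to null the measured phase, with the window length chosen to trade responsiveness against robustness to atmospheric noise.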

  16. Certification Strategies using Run-Time Safety Assurance for Part 23 Autopilot Systems

    NASA Technical Reports Server (NTRS)

    Hook, Loyd R.; Clark, Matthew; Sizoo, David; Skoog, Mark A.; Brady, James

    2016-01-01

    Part 23 aircraft operation, and in particular general aviation, is relatively unsafe when compared to other common forms of vehicle travel. Technologies currently exist that could improve safety statistics for these aircraft; however, the high burden and cost of performing the requisite safety-critical certification processes for these systems limit their proliferation. For this reason, many entities, including the Federal Aviation Administration, NASA, and the US Air Force, are considering new certification options for technologies that will improve aircraft safety. Of particular interest are low-cost autopilot systems for general aviation aircraft, as these systems have the potential to positively and significantly affect safety statistics. This paper proposes new systems and techniques, leveraging run-time verification, for the assurance of general aviation autopilot systems; these would supplement the current certification process and provide a viable path to near-term, low-cost implementation. In addition, preliminary experimentation and the building of an assurance case for a system based on these principles are discussed.
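
    A run-time assurance monitor of the kind proposed can be sketched as follows; the envelope rule and fallback behavior below are invented for illustration and are not from the paper.

```python
# Run-time assurance sketch: every control cycle, a simple, trusted monitor
# checks the advanced autopilot's command against a verified safety envelope
# and reverts to a trusted fallback controller on violation.

def monitor(state, command, fallback, envelope):
    """Return (command_to_actuate, source). The untrusted autopilot output is
    used only while it stays inside the verified envelope."""
    if envelope(state, command):
        return command, "autopilot"
    return fallback(state), "fallback"

# hypothetical envelope: allow less commanded bank angle at low altitude
def bank_envelope(state, command):
    max_bank = 30.0 if state["alt_ft"] > 1000 else 10.0
    return abs(command) <= max_bank

level_off = lambda state: 0.0  # trusted, simple fallback: wings level
cmd, source = monitor({"alt_ft": 500}, 25.0, level_off, bank_envelope)
# at 500 ft a 25-degree bank command is rejected: cmd == 0.0, source == "fallback"
```

    The certification argument then rests on the small monitor and fallback rather than on the full complexity of the advanced autopilot.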

  17. The ATLAS Production System Evolution: New Data Processing and Analysis Paradigm for the LHC Run2 and High-Luminosity

    NASA Astrophysics Data System (ADS)

    Barreiro, F. H.; Borodin, M.; De, K.; Golubkov, D.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Padolski, S.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    The second generation of the ATLAS Production System, called ProdSys2, is a distributed workload manager that runs hundreds of thousands of jobs daily, from dozens of different ATLAS-specific workflows, across more than a hundred heterogeneous sites. It achieves high utilization by combining dynamic job definition based on many criteria, such as input and output size, memory requirements and CPU consumption, with manageable scheduling policies, and by supporting different kinds of computational resources, such as GRID, clouds, supercomputers and volunteer computers. The system dynamically assigns a group of jobs (a task) to a group of geographically distributed computing resources. Dynamic assignment and resource utilization is one of the major features of the system; it did not exist in the earliest versions of the production system, where the Grid resources topology was predefined using national and/or geographical patterns. The Production System has a sophisticated job fault-recovery mechanism, which allows multi-Terabyte tasks to run efficiently without human intervention. We have implemented a “train” model and open-ended production, which allow tasks to be submitted automatically as soon as a new set of data is available, and allow physics-group data processing and analysis to be chained with central production by the experiment. We present an overview of the ATLAS Production System and the features and architecture of its major components: task definition, web user interface and monitoring. We describe the important design decisions and lessons learned from operational experience during the first year of LHC Run2. We also report the performance of the designed system and how various workflows, such as data (re)processing, Monte-Carlo and physics-group production, and user analysis, are scheduled and executed within one production system on heterogeneous computing resources.
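
    The job fault-recovery idea can be illustrated with a toy retry policy (an assumption for illustration; ProdSys2's actual mechanism is considerably more elaborate).

```python
# Toy fault-recovery sketch: failed jobs are retried with a doubled memory
# request, and only jobs that exhaust their retries surface to operators.

def run_task(jobs, execute, max_attempts=3):
    """execute(job, attempt) -> True on success.
    Returns the names of jobs that still need human intervention."""
    unrecovered = []
    for job in jobs:
        ok = False
        for attempt in range(1, max_attempts + 1):
            if execute(job, attempt):
                ok = True
                break
            job["memory_mb"] *= 2   # retry with a doubled memory request
        if not ok:
            unrecovered.append(job["name"])
    return unrecovered

# hypothetical job that succeeds once its memory request reaches 4000 MB
jobs = [{"name": "reco", "memory_mb": 2000}]
needs_expert = run_task(jobs, lambda job, attempt: job["memory_mb"] >= 4000)
# reco fails once, is retried with doubled memory, and succeeds
```

    Escalating resource requests on retry is one way a workload manager can absorb transient failures without human intervention.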

  18. Develop and test fuel cell powered on-site integrated total energy systems. Phase 3: Full-scale power plant development

    NASA Technical Reports Server (NTRS)

    Kaufman, A.; Pudick, S.; Wang, C. L.; Werth, J.; Whelan, J. A.

    1984-01-01

    Two 25-cell, 13 inch x 23 inch (4 kW) stacks were started up to evaluate the reliability of component and stack technology developed through the end of 1983. Both stacks started up well and are running satisfactorily on hydrogen-air after 1900 hours and 800 hours, respectively. A synthetic-reformate mixing station is nearing completion, and both stacks will be operated on reformate fuel. A stack-protection control system was placed in operation for Stack No. 2, and a similar set-up is in preparation for Stack No. 1. This system serves to change operating conditions or shut the stack down to avoid deleterious effects from nonstack-related upsets. The capability will greatly improve chances of obtaining meaningful long-term test data.

  19. Nowcasting system MeteoExpert at Irkutsk airport

    NASA Astrophysics Data System (ADS)

    Bazlova, Tatiana; Bocharnikov, Nikolai; Solonin, Alexander

    2016-04-01

    Airport operations are significantly impacted by the low visibility associated with fog. Generation of accurate and timely nowcast products is the basis of an automated early-warning system providing information about significant weather conditions to decision-makers. The nowcasting system MeteoExpert has been developed to provide aviation forecasters with 0-6 hour nowcasts of weather conditions, including fog and low visibility. The system has been in operation at Irkutsk airport since August 2014. The aim is to increase the accuracy of fog forecasts, contributing to improved airport safety, efficiency and capacity. Designed for operational use, the numerical model of the atmospheric boundary layer runs with a 10-minute update cycle. An important component of the system is the use of the AWOS at the airdrome and three additional automatic weather stations at fogging sites in the vicinity of the airdrome. Nowcasts are visualized on the forecaster's workstation and a dedicated website, and have been verified against actual observations.

  20. Computing shifts to monitor ATLAS distributed computing infrastructure and operations

    NASA Astrophysics Data System (ADS)

    Adam, C.; Barberis, D.; Crépé-Renaudin, S.; De, K.; Fassi, F.; Stradling, A.; Svatos, M.; Vartapetian, A.; Wolters, H.

    2017-10-01

    The ATLAS Distributed Computing (ADC) group established a new Computing Run Coordinator (CRC) shift at the start of LHC Run 2 in 2015. The main goal was to rely on a person with a good overview of the ADC activities to ease the ADC experts’ workload. The CRC shifter keeps track of ADC tasks related to their fields of expertise and responsibility. At the same time, the shifter maintains a global view of the day-to-day operations of the ADC system. During Run 1, this task was accomplished by a person of the expert team called the ADC Manager on Duty (AMOD), a position that was removed during the shutdown period due to the reduced number and availability of ADC experts foreseen for Run 2. The CRC position was proposed to cover some of the AMOD's former functions, while allowing more people involved in computing to participate. In this way, CRC shifters help with the training of future ADC experts. The CRC shifters coordinate daily ADC shift operations, including tracking open issues, reporting, and representing ADC in relevant meetings. The CRC also facilitates communication between the ADC experts team and the other ADC shifters. These include the Distributed Analysis Support Team (DAST), which is the first point of contact for addressing all distributed analysis questions, and the ATLAS Distributed Computing Shifters (ADCoS), who check and report problems in central services, sites, Tier-0 export, data transfers and production tasks. Finally, the CRC looks at the level of ADC activities on a weekly or monthly timescale to ensure that ADC resources are used efficiently.

Top