Sample records for IDL virtual machine

  1. MISR Center Block Time Tool

    Atmospheric Science Data Center

    2013-04-01

      MISR Center Block Time Tool The misr_time tool calculates the block center times for MISR Level 1B2 files. This is ... version of the IDL package or by using the IDL Virtual Machine application. The IDL Virtual Machine is bundled with IDL and is ...

  2. Introducing PLIA: Planetary Laboratory for Image Analysis

    NASA Astrophysics Data System (ADS)

    Peralta, J.; Hueso, R.; Barrado, N.; Sánchez-Lavega, A.

    2005-08-01

    We present a graphical software tool developed under IDL to navigate, process and analyze planetary images. The software has a complete Graphical User Interface and is cross-platform. It can also run under the IDL Virtual Machine without the need to own an IDL license. The set of tools included allows image navigation (orientation, centring and automatic limb determination), dynamical and photometric atmospheric measurements (winds and cloud albedos), cylindrical and polar projections, as well as image treatment with several procedures. Being written in IDL, it is modular and easy to modify and extend with new capabilities. We show several examples of the software's capabilities with Galileo-Venus observations: image navigation, photometric corrections, wind profiles obtained by cloud tracking, cylindrical projections and cloud photometric measurements. Acknowledgements: This work has been funded by Spanish MCYT PNAYA2003-03216, fondos FEDER and Grupos UPV 15946/2004. R. Hueso acknowledges a post-doc fellowship from Gobierno Vasco.
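
    To make the cloud-tracking idea concrete, here is a minimal Python sketch of wind measurement by template matching, the kind of dynamical measurement PLIA automates; the function, its arguments, and the plate scale are illustrative assumptions, not PLIA's actual API.

```python
import numpy as np

def track_cloud(img1, img2, box, search=10, km_per_px=25.0, dt_s=3600.0):
    """Estimate a wind vector by template matching: the patch box =
    (y, x, h, w) from img1 is searched for in img2 within +/- search
    pixels, maximizing normalized cross-correlation."""
    y, x, h, w = box
    tpl = img1[y:y + h, x:x + w].astype(float)
    tpl = (tpl - tpl.mean()) / (tpl.std() + 1e-12)
    best = (-np.inf, 0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            if y + dy < 0 or x + dx < 0:
                continue                 # window off the top/left edge
            win = img2[y + dy:y + dy + h, x + dx:x + dx + w].astype(float)
            if win.shape != tpl.shape:
                continue                 # window off the bottom/right edge
            win = (win - win.mean()) / (win.std() + 1e-12)
            score = float((tpl * win).mean())
            if score > best[0]:
                best = (score, dy, dx)
    score, dy, dx = best
    u = dx * km_per_px * 1000.0 / dt_s   # zonal wind, m/s
    v = dy * km_per_px * 1000.0 / dt_s   # meridional wind, m/s
    return u, v, score
```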

  3. Enhanced networked server management with random remote backups

    NASA Astrophysics Data System (ADS)

    Kim, Song-Kyoo

    2003-08-01

    In this paper, the model is focused on available server management in network environments. The (remote) backup servers are connected by VPN (Virtual Private Network) and replace broken main servers immediately. A virtual private network (VPN) uses a public network infrastructure to connect long-distance servers within a single network. The servers can be represented as "machines", and the system then deals with unreliable main machines and random auxiliary spare (remote backup) machines. When the system performs mandatory routine maintenance, the auxiliary machines are used as backups during idle periods. Unlike other existing models, the availability of the auxiliary machines changes with each activation in this enhanced model. Analytically tractable results are obtained using several mathematical techniques, and the results are demonstrated in the framework of optimized networked server allocation problems.

  4. GRIDVIEW: Recent Improvements in Research and Education Software for Exploring Mars Topography

    NASA Technical Reports Server (NTRS)

    Roark, J. H.; Masuoka, C. M.; Frey, H. V.

    2004-01-01

    GRIDVIEW is being developed by the Geodynamics Branch at NASA's Goddard Space Flight Center and can be downloaded from http://geodynamics.gsfc.nasa.gov/gridview/. The program is very mature and has been used successfully for more than four years, but it is still under development as we add new features for data analysis and visualization. The software can run on any computer supported by the IDL Virtual Machine application supplied by RSI, which is currently available for recent versions of MS Windows, Mac OS X, Red Hat Linux and UNIX. The minimum system memory requirement is 32 MB; however, loading large data sets may require more RAM to function adequately.

  5. Distributed Object Technology with CORBA and Java: Key Concepts and Implications.

    DTIC Science & Technology

    1997-06-01

    ...retrieval. This power is not derived from the language per se, but from the architecture-neutral approach used by Java. The Java Virtual Machine...pattern that is focused on performance considerations, the PCo architecture also uses CORBA interface definition language (IDL) to model the

  6. xdamp Version 6: an IDL-based data and image manipulation program.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballard, William Parker

    2012-04-01

    The original DAMP (DAta Manipulation Program) was written by Mark Hedemann of Sandia National Laboratories and used the CA-DISSPLA™ (available from Computer Associates International, Inc., Garden City, NY) graphics package as its engine. It was used to plot, modify, and otherwise manipulate the one-dimensional data waveforms (data vs. time) from a wide variety of accelerators. With the waning of CA-DISSPLA and the increasing popularity of Unix®-based workstations, a replacement was needed. This package uses the IDL® software, available from Research Systems Incorporated, a Xerox company, in Boulder, Colorado, as the engine, and creates a set of widgets to manipulate the data in a manner similar to the original DAMP and earlier versions of xdamp. IDL is currently supported on a wide variety of Unix platforms such as IBM® workstations, Hewlett Packard workstations, SUN® workstations, Microsoft® Windows™ computers, Macintosh® computers and Digital Equipment Corporation VMS® and Alpha® systems. Thus, xdamp is portable across many platforms. We have verified operation, albeit with some minor IDL bugs, on personal computers using Windows 7 and Windows Vista; Unix platforms; and Macintosh computers. Version 6 is an update that uses the IDL Virtual Machine to resolve the need for licensing IDL.

  7. Dynamically allocated virtual clustering management system

    NASA Astrophysics Data System (ADS)

    Marcus, Kelvin; Cannata, Jess

    2013-05-01

    The U.S. Army Research Laboratory (ARL) has built a "Wireless Emulation Lab" to support research in wireless mobile networks. In our current experimentation environment, researchers need the capability to run clusters of heterogeneous nodes to model emulated wireless tactical networks, where each node may contain a different operating system, application set, and physical hardware. To complicate matters, most experiments require the researcher to have root privileges. Our previous solution of using a single shared cluster of statically deployed virtual machines did not sufficiently separate each user's experiment due to undesirable network crosstalk, so only one experiment could be run at a time. In addition, the cluster did not make efficient use of our servers and physical networks. To address these concerns, we created the Dynamically Allocated Virtual Clustering management system (DAVC). This system leverages existing open-source software to create private clusters of nodes that are either virtual or physical machines. These clusters can be utilized for software development, experimentation, and integration with existing hardware and software. The system uses the Grid Engine job scheduler to allocate virtual machines to idle systems and networks efficiently, deploys stateless nodes via network booting, and uses 802.1Q Virtual LANs (VLANs) to prevent experiment crosstalk and to allow complex private networks, eliminating the need to map each virtual machine to a specific switch port. The system monitors the health of the clusters and the underlying physical servers, and it maintains cluster usage statistics for historical trends. Users can start private clusters of heterogeneous nodes with root privileges for the duration of an experiment, and they control when to shut down their clusters.
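
    As an illustration of the allocation problem DAVC solves, the following Python sketch places a private cluster onto idle hosts first-fit and tags it with a dedicated VLAN; the Host class, rollback policy, and numbers are hypothetical simplifications, not DAVC's implementation.

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    free_cores: int

def allocate_cluster(hosts, n_vms, vm_cores, vlan_pool):
    """First-fit placement of an n_vms-node private cluster onto idle
    hosts; the whole cluster gets one 802.1Q VLAN tag so its traffic
    cannot cross-talk with other experiments. Rolls back on failure."""
    if not vlan_pool:
        raise RuntimeError("no free VLAN: cannot isolate a new cluster")
    placed = []
    for _ in range(n_vms):
        host = next((h for h in hosts if h.free_cores >= vm_cores), None)
        if host is None:
            for h in placed:                # roll back the partial cluster
                h.free_cores += vm_cores
            raise RuntimeError("insufficient idle capacity for cluster")
        host.free_cores -= vm_cores
        placed.append(host)
    vlan = vlan_pool.pop()
    return vlan, [h.name for h in placed]

hosts = [Host("node01", 16), Host("node02", 8)]
print(allocate_cluster(hosts, n_vms=3, vm_cores=4, vlan_pool=[101, 102]))
```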

  8. A comparative analysis of dynamic grids vs. virtual grids using the A3pviGrid framework.

    PubMed

    Shankaranarayanan, Avinas; Amaldas, Christine

    2010-11-01

    With the proliferation of quad/multi-core microprocessors in mainstream platforms such as desktops and workstations, a large number of unused CPU cycles can be utilized for running virtual machines (VMs) as dynamic nodes in distributed environments. Grid services and their service-oriented business broker, now termed cloud computing, can deploy image-based virtualization platforms enabling agent-based resource management and dynamic fault management. In this paper we present an efficient way of utilizing heterogeneous virtual machines on idle desktops as an environment for consumption of high-performance grid services. Spurious and exponential increases in the size of datasets are constant concerns in the medical and pharmaceutical industries due to the continual discovery and publication of large sequence databases. Traditional algorithms are not designed to handle large data sizes under sudden and dynamic changes in the execution environment, as previously discussed. This research was undertaken to compare our previous results with runs of the same test dataset on a virtual Grid platform using virtual machines (virtualization). The implemented architecture, A3pviGrid, utilizes game-theoretic optimization and agent-based team formation (coalition) algorithms to improve scalability with respect to team formation. Due to the dynamic nature of distributed systems (as discussed in our previous work), all interactions were made local within a team, transparently. This paper is a proof of concept comparing an experimental mini-Grid test-bed with the platform running on local virtual machines on a local test cluster. This was done to give every agent its own execution platform, enabling anonymity and better control of the dynamic environmental parameters. We also analyze the performance and scalability of BLAST in a multiple virtual node setup and present our findings. This paper is an extension of our previous research on improving the BLAST application framework using dynamic Grids on virtualization platforms such as VirtualBox.

  9. 'tomo_display' and 'vol_tools': IDL VM Packages for Tomography Data Reconstruction, Processing, and Visualization

    NASA Astrophysics Data System (ADS)

    Rivers, M. L.; Gualda, G. A.

    2009-05-01

    One of the challenges in tomography is the availability of suitable software for image processing and analysis in 3D. We present here 'tomo_display' and 'vol_tools', two packages created in IDL that enable reconstruction, processing, and visualization of tomographic data. They complement in many ways the capabilities offered by Blob3D (Ketcham 2005 - Geosphere, 1: 32-41, DOI: 10.1130/GES00001.1) and, in combination, allow users without programming knowledge to perform all steps necessary to obtain qualitative and quantitative information from tomographic data. The package 'tomo_display' was created and is maintained by Mark Rivers. It allows the user to: (1) preprocess and reconstruct parallel-beam tomographic data, including removal of anomalous pixels, ring artifact reduction, and automated determination of the rotation center, and (2) visualize both raw and reconstructed data, either as individual frames or as a series of sequential frames. The package 'vol_tools' consists of a series of small programs created and maintained by Guilherme Gualda to perform specific tasks not included in other packages. Existing modules include simple tools for cropping volumes, generating histograms of intensity, sample volume measurement (useful for porous samples like pumice), and computation of volume differences (for differential absorption tomography). The module 'vol_animate' can be used to generate 3D animations using rendered isosurfaces around objects. Both packages use the same NetCDF-format '.volume' files created using code written by Mark Rivers. Currently, only 16-bit integer volumes are created and read by the packages, but floating point and 8-bit data can easily be stored in the NetCDF format as well. A simple GUI to convert sequences of TIFFs into '.volume' files is available within 'vol_tools'. Both 'tomo_display' and 'vol_tools' include options to (1) generate onscreen output that allows for dynamic visualization in 3D, (2) save sequences of TIFFs to disk, and (3) generate MPEG movies for inclusion in presentations, publications, websites, etc. Both are freely available as run-time ('.sav') versions that can be run using the free IDL Virtual Machine™, available from ITT Visual Information Solutions: http://www.ittvis.com/ProductServices/IDL/VirtualMachine.aspx The run-time versions of 'tomo_display' and 'vol_tools' can be downloaded from: http://cars.uchicago.edu/software/idl/tomography.html http://sites.google.com/site/voltools/
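
    For readers who want to inspect '.volume' files outside IDL, a minimal Python sketch using netCDF4 is shown below; the variable name "VOLUME" is an assumption to be replaced after listing ds.variables, since the abstract does not specify the file's internal layout.

```python
import numpy as np
from netCDF4 import Dataset  # pip install netCDF4

def read_volume(path, var_name="VOLUME"):
    """Load a 16-bit tomography volume from a NetCDF '.volume' file.
    var_name is an assumption: print(ds.variables) once to find the
    actual name used by the tomo_display/vol_tools writers."""
    with Dataset(path) as ds:
        return np.asarray(ds.variables[var_name][:], dtype=np.int16)

def intensity_histogram(vol, bins=256):
    """The kind of intensity histogram 'vol_tools' produces."""
    return np.histogram(vol, bins=bins)
```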

  10. Efficiently Scheduling Multi-core Guest Virtual Machines on Multi-core Hosts in Network Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B; Perumalla, Kalyan S

    2011-01-01

    Virtual machine (VM)-based simulation is a method used by network simulators to incorporate realistic application behaviors by executing actual VMs as high-fidelity surrogates for simulated end-hosts. A critical requirement in such a method is the simulation time-ordered scheduling and execution of the VMs. Prior approaches such as time dilation are less efficient due to the high degree of multiplexing possible when multiple multi-core VMs are simulated on multi-core host systems. We present a new simulation time-ordered scheduler to efficiently schedule multi-core VMs on multi-core real hosts, with a virtual clock realized on each virtual core. The distinguishing features of our approach are: (1) customizable granularity of the VM scheduling time unit on the simulation time axis, (2) the ability of VMs to take arbitrary leaps in virtual time, maximizing the utilization of host (real) cores when guest virtual cores idle, and (3) empirically determinable optimality in the tradeoff between total execution (real) time and time-ordering accuracy. Experiments show that it is possible to get nearly perfect time-ordered execution, with a slight cost in total run time, relative to optimized non-simulation VM schedulers. Interestingly, with our time-ordered scheduler, it is also possible to reduce the time-ordering error from over 50% under a non-simulation scheduler to less than 1%, with almost the same run-time efficiency as the highly efficient non-simulation VM schedulers.
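
    A minimal sketch of the core idea, simulation-time-ordered dispatch with idle-time leaps, is given below in Python; the VCore fields and the quantum are illustrative assumptions, not the paper's implementation.

```python
import heapq
from dataclasses import dataclass

@dataclass
class VCore:
    vclock: float             # virtual time of this guest core, microseconds
    work_left: float          # busy virtual time still to execute
    idle_until: float = 0.0   # next wakeup if the guest core is idle

def run_time_ordered(vcores, quantum=100.0):
    """Always dispatch the virtual core with the smallest virtual clock,
    so no core's virtual time runs ahead of another by more than one
    quantum (the customizable time-ordering granularity)."""
    heap = [(vc.vclock, i) for i, vc in enumerate(vcores)]
    heapq.heapify(heap)
    while heap:
        _, i = heapq.heappop(heap)
        vc = vcores[i]
        if vc.work_left <= 0:
            continue                        # this guest core is finished
        if vc.idle_until > vc.vclock:       # guest core idles: leap ahead
            vc.vclock = vc.idle_until       # in virtual time, freeing the
        else:                               # real core for busy guests
            step = min(quantum, vc.work_left)
            vc.work_left -= step
            vc.vclock += step
        heapq.heappush(heap, (vc.vclock, i))

cores = [VCore(0.0, 500.0), VCore(0.0, 250.0, idle_until=300.0)]
run_time_ordered(cores)
print([c.vclock for c in cores])            # -> [500.0, 550.0]
```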

  11. A study experiment of auto idle application in the excavator engine performance

    NASA Astrophysics Data System (ADS)

    Purwanto, Wawan; Maksum, Hasan; Putra, Dwi Sudarno; Azmi, Meri; Wahyudi, Retno

    2016-03-01

    The purpose of this study was to analyze the effect of applying auto idle on excavator engine performance, such as machine utilization and fuel consumption in the excavator. The steps involve modifying systems JA 44 and 67 in the Vehicle Electronic Control Unit (V-ECU). The modifications change the engine speed pattern: if the excavator attachment is not operated, the engine speed returns automatically to idle. The experimental results show that auto idle reduces fuel consumption in the excavator engine.

  12. A study experiment of auto idle application in the excavator engine performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Purwanto, Wawan, E-mail: wawan5527@gmail.com; Maksum, Hasan; Putra, Dwi Sudarno, E-mail: dwisudarnoputra@ft.unp.ac.id

    2016-03-29

    The purpose of this study was to analyze the effect of applying auto idle on excavator engine performance, such as machine utilization and fuel consumption in the excavator. The steps involve modifying systems JA 44 and 67 in the Vehicle Electronic Control Unit (V-ECU). The modifications change the engine speed pattern: if the excavator attachment is not operated, the engine speed returns automatically to idle. The experimental results show that auto idle reduces fuel consumption in the excavator engine.

  13. A Solution Method of Job-shop Scheduling Problems by the Idle Time Shortening Type Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Ida, Kenichi; Osawa, Akira

    In this paper, we propose a new idle-time shortening method for job-shop scheduling problems (JSPs) and embed it in a genetic algorithm (GA). The goal in a JSP is to find a schedule with the minimum makespan, and we posit that reducing machine idle time is an effective way to improve the makespan. The left shift is a well-known algorithm for shortening idle time, but it cannot always move an operation into an idle interval, so some idle time is not shortened by the left shift. We propose two algorithms that shorten such idle time, and we combine these algorithms with the reversal of a schedule. We apply the GA with these algorithms to benchmark problems and show its effectiveness.
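
    The left shift the authors improve upon can be stated compactly: scan a machine's busy intervals and place the operation in the first idle gap that fits. The Python sketch below is a generic illustration of that rule, not the authors' code.

```python
def left_shift(machine_ops, ready, dur):
    """Earliest-gap insertion (the classic 'left shift'): place an
    operation of length dur, ready at time ready, into the first idle
    interval on the machine that can hold it; fall back to the end.
    machine_ops is a sorted list of (start, end) busy intervals."""
    t = ready
    for start, end in machine_ops:
        if t + dur <= start:       # fits in the idle gap before this op
            break
        t = max(t, end)            # otherwise try after this op
    machine_ops.append((t, t + dur))
    machine_ops.sort()
    return t

ops = [(0, 3), (7, 9)]                   # busy intervals; idle gap is [3, 7)
print(left_shift(ops, ready=2, dur=3))   # -> 3 (slotted into the gap)
```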

  14. Transparent process migration: Design alternatives and the Sprite implementation

    NASA Technical Reports Server (NTRS)

    Douglis, Fred; Ousterhout, John

    1991-01-01

    The Sprite operating system allows executing processes to be moved between hosts at any time. We use this process migration mechanism to offload work onto idle machines, and also to evict migrated processes when idle workstations are reclaimed by their owners. Sprite's migration mechanism provides a high degree of transparency both for migrated processes and for users. Idle machines are identified, and eviction is invoked, automatically by daemon processes. On Sprite it takes up to a few hundred milliseconds on SPARCstation 1 workstations to perform a remote exec, while evictions typically occur in a few seconds. The pmake program uses remote invocation to invoke tasks concurrently. Compilations commonly obtain speedup factors in the range of three to six; they are limited primarily by contention for centralized resources such as file servers. CPU-bound tasks such as simulations can make more effective use of idle hosts, obtaining as much as eight-fold speedup over a period of hours. Process migration has been in regular service for over two years.

  15. Free Factories: Unified Infrastructure for Data Intensive Web Services

    PubMed Central

    Zaranek, Alexander Wait; Clegg, Tom; Vandewege, Ward; Church, George M.

    2010-01-01

    We introduce the Free Factory, a platform for deploying data-intensive web services using small clusters of commodity hardware and free software. Independently administered virtual machines called Freegols give application developers the flexibility of a general purpose web server, along with access to distributed batch processing, cache and storage services. Each cluster exploits idle RAM and disk space for cache, and reserves disks in each node for high bandwidth storage. The batch processing service uses a variation of the MapReduce model. Virtualization allows every CPU in the cluster to participate in batch jobs. Each 48-node cluster can achieve 4-8 gigabytes per second of disk I/O. Our intent is to use multiple clusters to process hundreds of simultaneous requests on multi-hundred terabyte data sets. Currently, our applications achieve 1 gigabyte per second of I/O with 123 disks by scheduling batch jobs on two clusters, one of which is located in a remote data center. PMID:20514356

  16. A new Nawaz-Enscore-Ham-based heuristic for permutation flow-shop problems with bicriteria of makespan and machine idle time

    NASA Astrophysics Data System (ADS)

    Liu, Weibo; Jin, Yan; Price, Mark

    2016-10-01

    A new heuristic based on the Nawaz-Enscore-Ham algorithm is proposed in this article for solving a permutation flow-shop scheduling problem. A new priority rule is proposed that accounts for the average, mean absolute deviation, skewness and kurtosis, in order to fully describe the distribution of processing times. A new tie-breaking rule is also introduced for achieving effective job insertion with the objective of minimizing both makespan and machine idle time. Statistical tests illustrate the better solution quality of the proposed algorithm compared to existing benchmark heuristics.
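
    For orientation, a plain NEH baseline (the algorithm the new priority and tie-breaking rules plug into) can be sketched in a few lines of Python; the priority below is the classic total-processing-time rule, not the paper's distribution-aware one.

```python
import numpy as np

def makespan(seq, p):
    """Permutation flow-shop completion times via the rolling recurrence
    C[j][m] = max(C[j-1][m], C[j][m-1]) + p[job][m]."""
    n_m = p.shape[1]
    c = np.zeros(n_m)
    for job in seq:
        for m in range(n_m):
            c[m] = max(c[m], c[m - 1] if m else 0) + p[job, m]
    return c[-1]

def neh(p):
    """NEH: order jobs by decreasing total processing time, then insert
    each job at the position in the partial sequence that minimizes the
    partial makespan (first-found position on ties)."""
    seq = []
    for job in np.argsort(-p.sum(axis=1)):
        best = min((makespan(seq[:i] + [job] + seq[i:], p), i)
                   for i in range(len(seq) + 1))
        seq.insert(best[1], job)
    return seq, makespan(seq, p)

p = np.array([[3, 4, 2], [5, 1, 3], [2, 6, 4]])  # 3 jobs x 3 machines
print(neh(p))
```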

  17. Hyperspectral Soil Mapper (HYSOMA) software interface: Review and future plans

    NASA Astrophysics Data System (ADS)

    Chabrillat, Sabine; Guillaso, Stephane; Eisele, Andreas; Rogass, Christian

    2014-05-01

    With the upcoming launch of the next generation of hyperspectral satellites that will routinely deliver high spectral resolution images for the entire globe (e.g. EnMAP, HISUI, HyspIRI, HypXIM, PRISMA), an increasing demand for the availability/accessibility of hyperspectral soil products is coming from the geoscience community. Indeed, many robust methods for the prediction of soil properties based on imaging spectroscopy already exist and have been successfully used for a wide range of soil mapping airborne applications. Nevertheless, these methods require expert know-how and fine-tuning, which makes them used sparingly. More developments are needed toward easy-to-access soil toolboxes as a major step toward the operational use of hyperspectral soil products for Earth's surface processes monitoring and modelling, to allow non-experienced users to obtain new information based on non-expensive software packages where repeatability of the results is an important prerequisite. In this frame, based on the EU-FP7 EUFAR (European Facility for Airborne Research) project and EnMAP satellite science program, higher performing soil algorithms were developed at the GFZ German Research Center for Geosciences as demonstrators for end-to-end processing chains with harmonized quality measures. The algorithms were built-in into the HYSOMA (Hyperspectral SOil MApper) software interface, providing an experimental platform for soil mapping applications of hyperspectral imagery that gives the choice of multiple algorithms for each soil parameter. The software interface focuses on fully automatic generation of semi-quantitative soil maps such as soil moisture, soil organic matter, iron oxide, clay content, and carbonate content. Additionally, a field calibration option calculates fully quantitative soil maps provided ground truth soil data are available. Implemented soil algorithms have been tested and validated using extensive in-situ ground truth data sets. The source of the HYSOMA code was developed as standalone IDL software to allow easy implementation in the hyperspectral and non-hyperspectral communities. Indeed, within the hyperspectral community, IDL language is very widely used, and for non-expert users that do not have an ENVI license, such software can be executed as a binary version using the free IDL virtual machine under various operating systems. Based on the growing interest of users in the software interface, the experimental software was adapted for public release version in 2012, and since then ~80 users of hyperspectral soil products downloaded the soil algorithms at www.gfz-potsdam.de/hysoma. The software interface was distributed for free as IDL plug-ins under the IDL-virtual machine. Up-to-now distribution of HYSOMA was based on a close source license model, for non-commercial and educational purposes. Currently, the HYSOMA is being under further development in the context of the EnMAP satellite mission, for extension and implementation in the EnMAP Box as EnSoMAP (EnMAP SOil MAPper). The EnMAP Box is a freely available, platform-independent software distributed under an open source license. In the presentation we will focus on an update of the HYSOMA software interface status and upcoming implementation in the EnMAP Box. Scientific software validation, associated publication record and users responses as well as software management and transition to open source will be discussed.

  18. Taboo search algorithm for item assignment in synchronized zone automated order picking system

    NASA Astrophysics Data System (ADS)

    Wu, Yingying; Wu, Yaohua

    2014-07-01

    The idle time, which is part of the order fulfillment time, is determined by the number of items in each zone; therefore the item assignment method affects picking efficiency. Previous studies focus only on balancing the number of item types between zones, not the number of items and the idle time in each zone. In this paper, an idle factor is proposed to measure the idle time exactly. The idle factor is proven to follow the same trend as the idle time, so the objective of this problem can be simplified from minimizing idle time to minimizing the idle factor. Based on this, a model of the item assignment problem in a synchronized-zone automated order picking system is built. The model is a relaxation of the parallel machine scheduling problem, which has been proven to be NP-complete. To solve the model, a taboo search algorithm is proposed. The main idea of the algorithm is to minimize the greatest idle factor among zones with a 2-exchange algorithm. Finally, a simulation using data collected from a tobacco distribution center is conducted to evaluate the performance of the algorithm. The results verify the model and show that the algorithm reliably reduces idle time, by 45.63% on average. This research proposes an approach to measuring idle time in synchronized-zone automated order picking systems. The approach can improve picking efficiency significantly and can serve as a theoretical basis when optimizing synchronized automated order picking systems.
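
    The 2-exchange neighborhood at the heart of the taboo search can be illustrated with a small Python sketch that swaps items between the worst zone and the others whenever the maximum idle factor drops; the idle-factor function here is a toy stand-in for the paper's exact measure, and the tabu list is omitted.

```python
def two_exchange(zones, f, sweeps=50):
    """Greedy 2-exchange: swap one item between the zone with the
    greatest idle factor f and another zone whenever the swap lowers
    the pair's maximum idle factor; stop at a local optimum."""
    for _ in range(sweeps):
        worst = max(zones, key=f)
        base = f(worst)
        best = None
        for other in zones:
            if other is worst:
                continue
            for i in range(len(worst)):
                for j in range(len(other)):
                    worst[i], other[j] = other[j], worst[i]   # try swap
                    new = max(f(worst), f(other))
                    if new < base and (best is None or new < best[0]):
                        best = (new, other, i, j)
                    worst[i], other[j] = other[j], worst[i]   # undo
        if best is None:
            return zones                                      # local optimum
        _, other, i, j = best
        worst[i], other[j] = other[j], worst[i]               # commit swap
    return zones

zones = [[5, 1, 1], [4, 4, 6]]              # item volumes per zone
f = lambda zone: abs(sum(zone) - 10.5)      # toy imbalance measure
print(two_exchange(zones, f))               # loads converge to 11 / 10
```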

  19. Hybrid Power Management for Office Equipment

    NASA Astrophysics Data System (ADS)

    Gingade, Ganesh P.

    Office machines (such as printers, scanners, fax machines, and copiers) can consume significant amounts of power, yet few studies have been devoted to power management of office equipment. Most office machines have sleep modes to save power, and their power management is usually timeout-based: a machine sleeps after being idle long enough. Setting the timeout duration can be difficult: if it is too long, the machine wastes power during idleness; if it is too short, the machine sleeps too soon and too often, and the wakeup delay can significantly degrade productivity. Thus, power management is a tradeoff between saving energy and keeping response time short. Many power management policies have been published, and one policy may outperform another in some scenarios; there is no definite conclusion that one policy is always better. This thesis describes two methods for office equipment power management. The first method adaptively reduces power subject to a constraint on the wakeup delay. The second method is a hybrid with multiple candidate policies that selects the most appropriate power management policy. Using six months of request traces from 18 different offices, we demonstrate that the hybrid policy outperforms the individual policies. We also discover that power management based on business hours does not produce consistent energy savings.
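
    The timeout tradeoff is easy to quantify on a trace. The Python sketch below replays request arrival times under a single timeout policy and totals energy and wakeup delay, the two sides of the tradeoff described above; all power levels, the service time, and the wakeup penalty are invented placeholders, not measurements from the thesis.

```python
def simulate_timeout(arrivals, timeout, service=1.0,
                     p_busy=40.0, p_idle=15.0, p_sleep=1.0, wakeup=10.0):
    """Replay a trace of request arrival times (seconds) under a timeout
    policy: the machine sleeps after `timeout` seconds of idleness and
    pays `wakeup` seconds of delay on the next request. Returns total
    energy (joules) and summed wakeup delay (seconds)."""
    energy = delay = 0.0
    t = 0.0                        # time the machine becomes free
    for a in arrivals:
        gap = max(0.0, a - t)
        if gap > timeout:          # slept: idle for `timeout`, then sleep
            energy += timeout * p_idle + (gap - timeout) * p_sleep
            delay += wakeup
            start = a + wakeup
        else:                      # stayed idle for the whole gap
            energy += gap * p_idle
            start = max(a, t)
        energy += service * p_busy
        t = start + service
    return energy, delay

print(simulate_timeout([0, 5, 120, 125], timeout=30))
```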

  20. Extending the Virtual Solar Observatory (VSO) to Incorporate Data Analysis Capabilities (III)

    NASA Astrophysics Data System (ADS)

    Csillaghy, A.; Etesi, L.; Dennis, B.; Zarro, D.; Schwartz, R.; Tolbert, K.

    2008-12-01

    We will present a progress report on our activities to extend the data analysis capabilities of the VSO. Our efforts to date have focused on three areas: 1. Extending the data retrieval capabilities by developing a centralized data processing server. The server is built with Java, IDL (Interactive Data Language), and the SSW (Solar SoftWare) package with all SSW-related instrument libraries and required calibration data. When a user requests VSO data that require preprocessing, the data are transparently sent to the server, processed, and returned to the user's IDL session for viewing and analysis. Any Java or IDL client can connect to the server. An IDL prototype for preparing and calibrating SOHO/EIT data will be demonstrated. 2. Improving the solar data search in SHOW SYNOP, a graphical user tool connected to the VSO in IDL. We introduce the Java-IDL interface that allows a flexible, dynamic, and extendable way of searching the VSO, where all communication with the VSO is managed dynamically by standard Java tools. 3. Improving the image overlay capability to support coregistration of solar disk observations obtained from different orbital view angles, position angles, and distances - such as from the twin STEREO spacecraft.

  1. Chapter 24: Programmatic Interfaces - IDL VOlib

    NASA Astrophysics Data System (ADS)

    Miller, C. J.

    In this chapter, we describe a library for working with the VO using IDL (the Interactive Data Language). IDL is a software environment for data analysis, visualization, and cross-platform application development. It is widely used in astronomy, including at NASA (e.g. http://seadas.gsfc.nasa.gov/), the Sloan Digital Sky Survey (http://www.sdss.org), and the Spitzer Infrared Spectrograph Instrument (http://ssc.spitzer.caltech.edu/archanaly/contributed/smart/). David Stern, the founder of Research Systems, Inc. (RSI), began the development of IDL while working with NASA's Mars Mariner 7 and 9 data at the Laboratory for Atmospheric and Space Physics at the University of Colorado. In 1981, IDL was rewritten in assembly language and FORTRAN for VAX/VMS. IDL's usage has expanded over the last decade into the fields of medical imaging and engineering, among many others. IDL's programming style carries over much of this FORTRAN legacy, and has a familiar feel to many astronomers who learned their trade using FORTRAN. The spread of IDL usage amongst astronomers can in part be attributed to the wealth of publicly available astronomical libraries. The Goddard Space Flight Center (GSFC) maintains a list of astronomy-related IDL libraries, including the well-known Astronomy User's Library (hereafter ASTROLIB2). We will use some of these GSFC IDL libraries. We note that while IDL is a licensed software product, the source code of user-written procedures is typically freely available to the community. To make the most of this chapter, the reader should understand the main data discovery, access, and analysis protocols before reading it. In the next section, we provide an overview of some of the NVO terminology with which the reader should be familiar. The IDL library discussed here is specifically for use with the Virtual Observatory and is named VOlib. IDL's VOlib is available at http://nvo.noao.edu and is included with the software distribution for this book.

  2. Computer network defense system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urias, Vincent; Stout, William M. S.; Loverro, Caleb

    A method and apparatus for protecting virtual machines. A computer system creates a copy of a group of the virtual machines in an operating network in a deception network to form a group of cloned virtual machines in the deception network when the group of the virtual machines is accessed by an adversary. The computer system creates an emulation of components from the operating network in the deception network. The components are accessible by the group of the cloned virtual machines as if the group of the cloned virtual machines was in the operating network. The computer system moves network connections for the group of the virtual machines in the operating network used by the adversary from the group of the virtual machines in the operating network to the group of the cloned virtual machines, enabling protecting the group of the virtual machines from actions performed by the adversary.

  3. A multi-group and preemptable scheduling of cloud resource based on HTCondor

    NASA Astrophysics Data System (ADS)

    Jiang, Xiaowei; Zou, Jiaheng; Cheng, Yaodong; Shi, Jingyan

    2017-10-01

    Due to the features of virtual machines - flexibility, easy control and varied system environments - more and more fields, including high energy physics, utilize virtualization technology to construct distributed systems from virtual resources. This paper introduces a method used in high energy physics that supports multiple resource groups and preemptable cloud resource scheduling, combining virtual machines with HTCondor (a batch system). It makes resource control more flexible and more efficient and makes resource scheduling independent of job scheduling. Firstly, resources belong to different experiment groups, and user-groups map to resource-groups (the same as experiment-groups) one-to-one or many-to-one. To keep these group mappings simple to manage, we designed a permission controlling component to ensure that the different resource-groups get suitable jobs. Secondly, to allocate resources elastically to the appropriate resource-group, resources must be scheduled much as jobs are, so this paper designs a cloud resource scheduler that maintains a resource queue and allocates an appropriate number of virtual resources to the requesting resource-group. Thirdly, because resources can be occupied for a long time, they sometimes need to be preempted; this paper adds a preemption function to the resource scheduler that implements preemption based on group priority. Preemption is soft: when virtual resources are preempted, jobs are not killed but are held and rematched later. This is implemented with the help of HTCondor, by storing held-job information in the scheduler, releasing the job to idle status, and performing a second match. At IHEP (Institute of High Energy Physics), we have built a batch system based on HTCondor with a virtual resource pool based on OpenStack, and this paper shows cases from the JUNO and LHAASO experiments. The results indicate that multi-group and preemptable resource scheduling efficiently supports multiple groups and soft preemption. Additionally, the permission controlling component has been used in the local computing cluster, supporting the JUNO, CMS and LHAASO experiments, and its scale will be expanded to more experiments in the first half of the year, including DYW, BES and others - evidence that the permission controlling component is efficient.
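
    A toy Python sketch of soft, group-priority preemption is given below; the quota model, the host dictionaries, and the 'held' state are assumptions for illustration and do not reproduce the authors' HTCondor configuration.

```python
def rebalance(hosts, group, priority, quota):
    """Soft preemption sketch: give `group` a slot if it is under its
    quota, preferring a free host; otherwise hold (not kill) a job of
    the lowest-priority group that is over its own quota."""
    counts = {}
    for h in hosts:
        counts[h["group"]] = counts.get(h["group"], 0) + 1
    if counts.get(group, 0) >= quota[group]:
        return None                              # requester already at quota
    free = next((h for h in hosts if h["group"] is None), None)
    if free is not None:
        free["group"] = group
        return free
    victims = [h for h in hosts
               if counts[h["group"]] > quota[h["group"]]
               and priority[h["group"]] < priority[group]]
    if not victims:
        return None                              # nothing preemptable
    victim = min(victims, key=lambda h: priority[h["group"]])
    victim["job_state"] = "held"                 # soft: held, rematched later
    victim["group"] = group
    return victim

hosts = [{"group": "juno", "job_state": "run"},
         {"group": "juno", "job_state": "run"}]
print(rebalance(hosts, "lhaaso",
                priority={"juno": 1, "lhaaso": 2},
                quota={"juno": 1, "lhaaso": 1}))
```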

  4. A Tale Of 160 Scientists, Three Applications, a Workshop and a Cloud

    NASA Astrophysics Data System (ADS)

    Berriman, G. B.; Brinkworth, C.; Gelino, D.; Wittman, D. K.; Deelman, E.; Juve, G.; Rynge, M.; Kinney, J.

    2013-10-01

    The NASA Exoplanet Science Institute (NExScI) hosts the annual Sagan Workshops, thematic meetings aimed at introducing researchers to the latest tools and methodologies in exoplanet research. The theme of the Summer 2012 workshop, held from July 23 to July 27 at Caltech, was to explore the use of exoplanet light curves to study planetary system architectures and atmospheres. A major part of the workshop was to use hands-on sessions to instruct attendees in the use of three open source tools for the analysis of light curves, especially from the Kepler mission. Each hands-on session involved the 160 attendees using their laptops to follow step-by-step tutorials given by experts. One of the applications, PyKE, is a suite of Python tools designed to reduce and analyze Kepler light curves; these tools can be invoked from the Unix command line or a GUI in PyRAF. The Transit Analysis Package (TAP) uses Markov Chain Monte Carlo (MCMC) techniques to fit light curves under the Interactive Data Language (IDL) environment, and Transit Timing Variations (TTV) uses IDL tools and Java-based GUIs to confirm and detect exoplanets from timing variations in light curve fitting. Rather than attempt to run these diverse applications on the inevitable wide range of environments on attendees' laptops, they were run instead on the Amazon Elastic Compute Cloud (EC2). The cloud offers features ideal for this type of short-term need: computing and storage services are made available on demand for as long as needed, and a processing environment can be customized and replicated as needed. The cloud environment included an NFS file server virtual machine (VM), 20 client VMs for use by attendees, and a VM to enable ftp downloads of the attendees' results. The file server was configured with a 1 TB Elastic Block Storage (EBS) volume (network-attached storage mounted as a device) containing the application software and attendees' home directories. The clients were configured to mount the applications and home directories from the server via NFS. All VMs were built with CentOS version 5.8. Attendees connected their laptops to one of the client VMs using the Virtual Network Computing (VNC) protocol, which enabled them to interact with a remote desktop GUI during the hands-on sessions. We will describe the mechanisms for handling security, failovers, and licensing of commercial software. In particular, IDL licenses were managed through a server at Caltech, connected to the IDL instances running on Amazon EC2 via a Secure Shell (ssh) tunnel. The system operated flawlessly during the workshop.

  5. Virtual Manufacturing Techniques Designed and Applied to Manufacturing Activities in the Manufacturing Integration and Technology Branch

    NASA Technical Reports Server (NTRS)

    Shearrow, Charles A.

    1999-01-01

    One of the identified goals of EM3 is to implement virtual manufacturing by the end of the year 2000. To realize this goal of a true virtual manufacturing enterprise, the initial development of a machinability database and the infrastructure must be completed. This will consist of containing the existing EM-NET problems and developing machine, tooling, and common materials databases. To integrate the virtual manufacturing enterprise with normal day-to-day operations, a parallel virtual manufacturing machinability database, virtual manufacturing database, virtual manufacturing paradigm, implementation/integration procedure, and testable verification models must be constructed. The common and virtual machinability databases will include the four distinct areas of machine tools, available tooling, common machine tool loads, and a materials database. The machine tools database will include the machine envelope, special machine attachments, tooling capacity, location within NASA-JSC or with a contractor, and availability/scheduling. The tooling database will include available standard tooling, custom in-house tooling, tool properties, and availability. The common materials database will include material thickness ranges, strengths, types, and their availability. The virtual manufacturing databases will consist of virtual machines and virtual tooling directly related to the common and machinability databases. The items to be completed are the design and construction of the machinability databases, a virtual manufacturing paradigm for NASA-JSC, an implementation timeline, a VNC model of one bridge mill, and troubleshooting of existing software and hardware problems with EN4NET. The final step of this virtual manufacturing project will be to integrate other production sites into the databases, bringing JSC's EM3 into a position of becoming a clearing house for NASA's digital manufacturing needs and creating a true virtual manufacturing enterprise.

  6. Modeling and simulation of five-axis virtual machine based on NX

    NASA Astrophysics Data System (ADS)

    Li, Xiaoda; Zhan, Xianghui

    2018-04-01

    Virtual technology plays a growing role in the machinery manufacturing industry. In this paper, the Siemens NX software is used to model a virtual CNC machine tool, and the parameters of the virtual machine are defined according to the actual parameters of the machine tool, so that the virtual simulation can be carried out without loss of simulation accuracy. We describe how to use the machine builder of the CAM module to define the kinematic chain and machine components. The virtual machine simulation can warn users of tool collisions and overcutting during machining, and can evaluate and predict whether the process plan is sound.

  7. Finding idle machines in a workstation-based distributed system

    NASA Technical Reports Server (NTRS)

    Theimer, Marvin M.; Lantz, Keith A.

    1989-01-01

    The authors describe the design and performance of scheduling facilities for finding idle hosts in a workstation-based distributed system. They focus on the tradeoffs between centralized and decentralized architectures with respect to scalability, fault tolerance, and simplicity of design, as well as several implementation issues of interest when multicast communication is used. They conclude that the principal tradeoff between the two approaches is that a centralized architecture can be scaled to a significantly greater degree and can more easily monitor global system statistics, whereas a decentralized architecture is simpler to implement.

  8. Machine-Learning Based Co-adaptive Calibration: A Perspective to Fight BCI Illiteracy

    NASA Astrophysics Data System (ADS)

    Vidaurre, Carmen; Sannelli, Claudia; Müller, Klaus-Robert; Blankertz, Benjamin

    "BCI illiteracy" is one of the biggest problems and challenges in BCI research. It means that BCI control cannot be achieved by a non-negligible number of subjects (estimated 20% to 25%). There are two main causes for BCI illiteracy in BCI users: either no SMR idle rhythm is observed over motor areas, or this idle rhythm is not attenuated during motor imagery, resulting in a classification performance lower than 70% (criterion level) already for offline calibration data. In a previous work of the same authors, the concept of machine learning based co-adaptive calibration was introduced. This new type of calibration provided substantially improved performance for a variety of users. Here, we use a similar approach and investigate to what extent co-adapting learning enables substantial BCI control for completely novice users and those who suffered from BCI illiteracy before.

  9. Effects of virtualization on a scientific application - Running a hyperspectral radiative transfer code on virtual machines.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tikotekar, Anand A; Vallee, Geoffroy R; Naughton III, Thomas J

    2008-01-01

    The topic of system-level virtualization has recently begun to receive interest for high performance computing (HPC). This is in part due to the isolation and encapsulation offered by the virtual machine. These traits enable applications to customize their environments and maintain consistent software configurations in their virtual domains. Additionally, there are mechanisms that can be used for fault tolerance, such as live virtual machine migration. Given these attractive benefits of virtualization, a fundamental question arises: how does this affect my scientific application? We use this as the premise for our paper and observe a real-world scientific code running on a Xen virtual machine. We studied the effects of running a radiative transfer simulation, Hydrolight, on a virtual machine. We discuss our methodology and report observations regarding the usage of virtualization with this application.

  10. Model for Assembly Line Re-Balancing Considering Additional Capacity and Outsourcing to Face Demand Fluctuations

    NASA Astrophysics Data System (ADS)

    Samadhi, TMAA; Sumihartati, Atin

    2016-02-01

    The most critical stage in the garment industry is the sewing process, because it generally consists of a number of operations and a large number of sewing machines for each operation. It therefore requires a balancing method that can assign tasks to workstations with balanced workloads. Many studies on assembly line balancing assume a new assembly line, but in reality, due to demand fluctuations and increases, re-balancing is needed. To cope with such fluctuating demand, additional capacity can be obtained by investing in spare sewing machines and paying for sewing services through outsourcing. This study develops an assembly line balancing (ALB) model for an existing line to cope with fluctuating demand. Capacity redesign is chosen when the fluctuating demand exceeds the available capacity, through a combination of investment in new machines and outsourcing, while minimizing the cost of idle capacity in the future. The objective of the model is to minimize the total cost of the assembly line, consisting of operating costs, machine cost, added-capacity cost, losses due to idle capacity, and outsourcing costs. The model developed is an integer programming model. It is tested on a set of data for one year of demand with an existing fleet of 41 sewing machines. The result shows that additional capacity of up to 76 machines is required when demand increases by 60% over the average, with equal cost parameters.
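
    The cost trade-off in the model can be illustrated by brute-force enumeration over the two capacity levers, shown in the Python sketch below; all cost coefficients and capacities are invented placeholders, and the real model is an integer program with more decision variables and constraints.

```python
def plan_capacity(demand, base_capacity, unit_capacity,
                  machine_cost=900.0, outsource_cost=2.5,
                  idle_cost=0.4, max_new=80):
    """Enumerate (new machines, outsourced units) plans and keep the
    cheapest one, mirroring the trade-off in the paper's objective:
    machine investment vs. outsourcing vs. idle-capacity losses."""
    best = None
    for m in range(max_new + 1):
        cap = base_capacity + m * unit_capacity
        out = max(0, demand - cap)       # shortfall goes to outsourcing
        idle = max(0, cap - demand)      # excess capacity sits idle
        cost = m * machine_cost + out * outsource_cost + idle * idle_cost
        if best is None or cost < best[0]:
            best = (cost, m, out)
    return best                          # (total cost, new machines, outsourced)

# a 60% surge over an average demand previously met in-house:
print(plan_capacity(demand=16000, base_capacity=10000, unit_capacity=150))
```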

  11. Coronal Magnetism and the FORWARD SolarSoft IDL Package

    NASA Astrophysics Data System (ADS)

    Gibson, S. E.

    2014-12-01

    The FORWARD suite of SolarSoft IDL codes is a community resource for model-data comparison, with a particular emphasis on analyzing coronal magnetic fields. FORWARD may be used both to synthesize a broad range of coronal observables, and to access and compare to existing data. FORWARD works with numerical model datacubes, interfaces with the web-served Predictive Science Inc MAS simulation datacubes and the SolarSoft IDL Potential Field Source Surface (PFSS) package, and also includes several analytic models (more can be added). It connects to the Virtual Solar Observatory and other web-served observations to download data in a format directly comparable to model predictions. It utilizes the CHIANTI database in modeling UV/EUV lines, and links to the CLE polarimetry synthesis code for forbidden coronal lines. FORWARD enables "forward-fitting" of specific observations, and helps to build intuition into how the physical properties of coronal magnetic structures translate to observable properties.

  12. Managing virtual machines with Vac and Vcycle

    NASA Astrophysics Data System (ADS)

    McNab, A.; Love, P.; MacMahon, E.

    2015-12-01

    We compare the Vac and Vcycle virtual machine lifecycle managers and our experiences in providing production job execution services for ATLAS, CMS, LHCb, and the GridPP VO at sites in the UK, France and at CERN. In both the Vac and Vcycle systems, the virtual machines are created outside of the experiment's job submission and pilot framework. In the case of Vac, a daemon runs on each physical host and manages a pool of virtual machines on that host, and a peer-to-peer UDP protocol is used to achieve the desired target shares between experiments across the site. In the case of Vcycle, a daemon manages a pool of virtual machines on an Infrastructure-as-a-Service cloud system such as OpenStack, and has within itself enough information to create the types of virtual machines that achieve the desired target shares. Both systems allow unused shares of one experiment to be temporarily taken up by other experiments with work to be done. The virtual machine lifecycle is managed with a minimum of information, gathered from the virtual machine creation mechanism (such as libvirt or OpenStack) and using the proposed Machine/Job Features API from WLCG. We demonstrate that the same virtual machine designs can be used to run production jobs on Vac and Vcycle/OpenStack sites for ATLAS, CMS, LHCb, and GridPP, and that these technologies allow sites to be operated in a reliable and robust way.
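
    The target-share logic common to both systems can be sketched in a few lines of Python: start the next VM for the experiment that is furthest below its share and still has work queued. The function and its inputs are illustrative assumptions, not Vac or Vcycle code.

```python
def next_vm_type(running, target_share, backlog):
    """Pick which experiment's VM to create next: among experiments with
    queued work (backlog > 0), choose the one whose running fraction is
    furthest below its target share, so unused shares flow to
    experiments that have work to do."""
    candidates = [e for e, n in backlog.items() if n > 0]
    if not candidates:
        return None
    total = sum(running.values()) or 1
    return min(candidates,
               key=lambda e: running.get(e, 0) / total - target_share[e])

running = {"atlas": 6, "cms": 2, "lhcb": 0}
shares = {"atlas": 0.4, "cms": 0.3, "lhcb": 0.3}
print(next_vm_type(running, shares, {"atlas": 5, "cms": 9, "lhcb": 0}))  # cms
```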

  13. LHCb experience with running jobs in virtual machines

    NASA Astrophysics Data System (ADS)

    McNab, A.; Stagni, F.; Luzzi, C.

    2015-12-01

    The LHCb experiment has been running production jobs in virtual machines since 2013 as part of its DIRAC-based infrastructure. We describe the architecture of these virtual machines and the steps taken to replicate the WLCG worker node environment expected by user and production jobs. This relies on the uCernVM system for providing root images for virtual machines. We use the CernVM-FS distributed filesystem to supply the root partition files, the LHCb software stack, and the bootstrapping scripts necessary to configure the virtual machines for us. Using this approach, we have been able to minimise the amount of contextualisation which must be provided by the virtual machine managers. We explain the process by which the virtual machine is able to receive payload jobs submitted to DIRAC by users and production managers, and how this differs from payloads executed within conventional DIRAC pilot jobs on batch queue based sites. We describe our operational experiences in running production on VM based sites managed using Vcycle/OpenStack, Vac, and HTCondor Vacuum. Finally we show how our use of these resources is monitored using Ganglia and DIRAC.

  14. Software platform virtualization in chemistry research and university teaching

    PubMed Central

    2009-01-01

    Background: Modern chemistry laboratories operate with a wide range of software applications under different operating systems, such as Windows, LINUX or Mac OS X. Instead of installing software on different computers it is possible to install those applications on a single computer using Virtual Machine software. Software platform virtualization allows a single guest operating system to execute multiple other operating systems on the same computer. We apply and discuss the use of virtual machines in chemistry research and teaching laboratories.

    Results: Virtual machines are commonly used for cheminformatics software development and testing. Benchmarking multiple chemistry software packages we have confirmed that the computational speed penalty for using virtual machines is low and around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open source software in hands-on computer teaching labs.

    Conclusion: Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and development of software for different operating systems. In order to obtain maximum performance the virtualization software should be multi-core enabled and allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lead to lower maintenance and hardware costs especially in small research labs. The use of virtual machines can prevent software virus infections and security breaches when used as a sandbox system for internet access and software testing. Complex software setups can be created with virtual machines and are easily deployed later to multiple computers for hands-on teaching classes. We discuss the popularity of bioinformatics compared to cheminformatics as well as the missing cheminformatics education at universities worldwide. PMID:20150997

  15. Software platform virtualization in chemistry research and university teaching.

    PubMed

    Kind, Tobias; Leamy, Tim; Leary, Julie A; Fiehn, Oliver

    2009-11-16

    Modern chemistry laboratories operate with a wide range of software applications under different operating systems, such as Windows, LINUX or Mac OS X. Instead of installing software on different computers it is possible to install those applications on a single computer using Virtual Machine software. Software platform virtualization allows a single guest operating system to execute multiple other operating systems on the same computer. We apply and discuss the use of virtual machines in chemistry research and teaching laboratories. Virtual machines are commonly used for cheminformatics software development and testing. Benchmarking multiple chemistry software packages we have confirmed that the computational speed penalty for using virtual machines is low and around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open source software in hands-on computer teaching labs. Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and development of software for different operating systems. In order to obtain maximum performance the virtualization software should be multi-core enabled and allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lead to lower maintenance and hardware costs especially in small research labs. The use of virtual machines can prevent software virus infections and security breaches when used as a sandbox system for internet access and software testing. Complex software setups can be created with virtual machines and are easily deployed later to multiple computers for hands-on teaching classes. We discuss the popularity of bioinformatics compared to cheminformatics as well as the missing cheminformatics education at universities worldwide.

  16. Productivity improvement using discrete events simulation

    NASA Astrophysics Data System (ADS)

    Hazza, M. H. F. Al; Elbishari, E. M. Y.; Ismail, M. Y. Bin; Adesta, E. Y. T.; Rahman, Nur Salihah Binti Abdul

    2018-01-01

    The increasing complexity of manufacturing systems has increased the cost of investment in many industries, and theoretical feasibility studies alone are not enough to support investment decisions. The development of new advanced software therefore protects manufacturers from investing money in production lines that may not be sufficient and effective for their requirements in terms of machine utilization and productivity. Conducting a simulation with an accurate model reduces or eliminates the risk associated with a new investment. The aim of this research is to prove and highlight the importance of simulation in the decision-making process. Delmia Quest software was used to run a simulation of the production line. A simulation was first done for the existing production line and showed an estimated production rate of 261 units/day. The results were analysed based on utilization percentage and idle time. Two different scenarios were proposed, based on different objectives. The first scenario focuses on low-utilization machines and their idle time; it reduced the number of machines used by three, along with the workers who maintain them, without affecting the production rate. The second scenario increases the production rate by upgrading the curing machine, which led to a 7% increase in daily productivity, from 261 units to 281 units.

  17. Using virtual machine monitors to overcome the challenges of monitoring and managing virtualized cloud infrastructures

    NASA Astrophysics Data System (ADS)

    Bamiah, Mervat Adib; Brohi, Sarfraz Nawaz; Chuprat, Suriayati

    2012-01-01

    Virtualization is one of the hottest research topics nowadays. Several academic researchers and developers from the IT industry are designing approaches for solving the security and manageability issues of Virtual Machines (VMs) residing on virtualized cloud infrastructures. Moving an application from a physical to a virtual platform increases efficiency and flexibility and reduces management cost as well as effort. Cloud computing adopts the paradigm of virtualization: using this technique, memory, CPU and computational power are provided to clients' VMs by utilizing the underlying physical hardware. Besides these advantages, there are a few challenges in adopting virtualization, such as management of VMs and network traffic, unexpected additional cost, and resource allocation. The Virtual Machine Monitor (VMM), or hypervisor, is the tool used by cloud providers to manage the VMs on the cloud. There are several heterogeneous hypervisors provided by various vendors, including VMware, Hyper-V, Xen and Kernel Virtual Machine (KVM). Considering the challenge of VM management, this paper describes several techniques to monitor and manage virtualized cloud infrastructures.

  18. Hardware assisted hypervisor introspection.

    PubMed

    Shi, Jiangyong; Yang, Yuexiang; Tang, Chuan

    2016-01-01

    In this paper, we introduce hypervisor introspection, an out-of-box way to monitor the execution of hypervisors. Similar to virtual machine introspection, which has been proposed to protect virtual machines in an out-of-box way over the past decade, hypervisor introspection can be used to protect hypervisors, which are the basis of cloud security. Virtual machine introspection tools are usually deployed either in the hypervisor or in privileged virtual machines, which might also be compromised. By utilizing hardware support, including nested virtualization, EPT protection and #BP, we are able to monitor all hypercalls belonging to the virtual machines of one hypervisor, including those of the privileged virtual machine, even when the hypervisor is compromised. Furthermore, a hypercall injection method is used to simulate hypercall-based attacks and evaluate the performance of our method. Experimental results show that our method can effectively detect hypercall-based attacks with some performance cost. Lastly, we discuss future approaches for reducing the performance cost and preventing a compromised hypervisor from detecting the existence of our introspector, along with some new scenarios in which to apply our hypervisor introspection system.

  19. Research on vehicles and cargos matching model based on virtual logistics platform

    NASA Astrophysics Data System (ADS)

    Zhuang, Yufeng; Lu, Jiang; Su, Zhiyuan

    2018-04-01

    The highway less-than-truckload (LTL) vehicle and cargo matching problem is a joint optimization of vehicle routing and loading, and an active topic in operations research. Based on the requirements of a virtual logistics platform, this article sets up a matching model between idle vehicles and transportation orders for highway LTL transportation and designs a corresponding genetic algorithm, which is then implemented in Java. Simulation results show that the solution is satisfactory.
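
    The paper's Java implementation is not reproduced in this record; as a rough illustration of a genetic algorithm for this kind of matching problem, here is a minimal Python sketch in which a chromosome assigns each transportation order to an idle vehicle and fitness is driven by an invented cost matrix.

```python
import random

random.seed(0)
N_ORDERS, N_VEHICLES = 10, 4
# Invented cost of serving order i with vehicle j (e.g. deadhead distance).
COST = [[random.uniform(10, 100) for _ in range(N_VEHICLES)]
        for _ in range(N_ORDERS)]

def fitness(chrom):
    # Lower total cost -> higher fitness.
    return -sum(COST[i][v] for i, v in enumerate(chrom))

def crossover(a, b):
    cut = random.randrange(1, N_ORDERS)
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.1):
    return [random.randrange(N_VEHICLES) if random.random() < rate else v
            for v in chrom]

pop = [[random.randrange(N_VEHICLES) for _ in range(N_ORDERS)]
       for _ in range(50)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                      # keep the best assignments
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(40)]

best = max(pop, key=fitness)
print("best assignment:", best, "cost:", round(-fitness(best), 1))
```

    A real vehicle-cargo model would add capacity and routing constraints to the fitness function; the skeleton of selection, crossover and mutation stays the same.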

  20. Performance Analysis of the NAS Y-MP Workload

    NASA Technical Reports Server (NTRS)

    Bergeron, Robert J.; Kutler, Paul (Technical Monitor)

    1997-01-01

    This paper describes the performance characteristics of the computational workloads on the NAS Cray Y-MP machines, a Y-MP 832 and later a Y-MP 8128. Hardware measurements indicated that the Y-MP workload performance matured over time, ultimately sustaining an average throughput of 0.8 GFLOPS and a vector operation fraction of 87%. The measurements also revealed an operation rate exceeding 1 per clock period, a well-balanced architecture featuring strong utilization of vector functional units, and an efficient memory organization. Introduction of the larger-memory 8128 increased throughput by allowing more efficient utilization of CPUs. Throughput also depended on the metering of the batch queues; low-idle Saturday workloads required a buffer of small jobs to prevent memory starvation of the CPU. UNICOS required about 7% of total CPU time to service the 832 workloads; this overhead decreased to 5% for the 8128 workloads. While most of the system time went to servicing I/O requests, efficient scheduling prevented excessive idle time due to I/O wait. System measurements disclosed no obvious bottlenecks in the response of the machine and UNICOS to the workloads. In most cases, Cray-provided software tools were quite sufficient for measuring the performance of both the machine and the operating system.

  1. Computer Associates International, CA-ACF2/VM Release 3.1

    DTIC Science & Technology

    1987-09-09

    CA-ACF2/VM bibliography excerpt: International Business Machines Corporation, IBM Virtual Machine/Directory Maintenance Program Logic Manual, publication number LY20-0889; International Business Machines Corporation, IBM System/370 Principles of Operation, publication number GA22-7000; International Business Machines Corporation, IBM Virtual Machine/Directory Maintenance Installation and System Administrator's…

  2. An Effective Mechanism for Virtual Machine Placement using Aco in IAAS Cloud

    NASA Astrophysics Data System (ADS)

    Shenbaga Moorthy, Rajalakshmi; Fareentaj, U.; Divya, T. K.

    2017-08-01

    Cloud computing provides an effective way to dynamically provision numerous resources to meet customer demands. A major challenge for cloud providers is designing efficient mechanisms for optimal virtual machine placement (OVMP). Such mechanisms enable cloud providers to effectively utilize their available resources and obtain higher profits. In order to provide appropriate resources to clients, an optimal virtual machine placement algorithm is proposed. Virtual machine placement is an NP-hard problem, and such problems can be tackled with heuristic algorithms. In this paper, Ant Colony Optimization based virtual machine placement is proposed. Our proposed system focuses on minimizing the cost of each plan for hosting virtual machines in a multiple cloud provider environment; the response time of each cloud provider is monitored periodically so as to minimize delay in providing resources to users. The performance of the proposed algorithm is compared with a greedy mechanism. The proposed algorithm is simulated in the Eclipse IDE. The results clearly show that the proposed algorithm minimizes cost, response time and the number of migrations.
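
    The abstract does not include the algorithm itself; the following Python sketch shows generic ant colony optimization applied to VM placement under CPU capacity constraints, with all costs, capacities and ACO parameters invented for illustration.

```python
import random

random.seed(3)
N_VMS, N_HOSTS, ANTS, ITERS = 8, 3, 20, 50
vm_cpu   = [random.randint(1, 4) for _ in range(N_VMS)]
host_cap = [10, 10, 10]                      # invented CPU capacities
cost     = [[random.uniform(1, 5) for _ in range(N_HOSTS)]
            for _ in range(N_VMS)]           # invented per-placement cost
tau = [[1.0] * N_HOSTS for _ in range(N_VMS)]   # pheromone trails

def build_solution():
    """One ant places every VM, biased by pheromone and inverse cost."""
    load, plan = [0] * N_HOSTS, []
    for v in range(N_VMS):
        feasible = [h for h in range(N_HOSTS)
                    if load[h] + vm_cpu[v] <= host_cap[h]]
        if not feasible:
            return None, float("inf")
        w = [tau[v][h] / cost[v][h] for h in feasible]
        h = random.choices(feasible, weights=w)[0]
        load[h] += vm_cpu[v]
        plan.append(h)
    return plan, sum(cost[v][plan[v]] for v in range(N_VMS))

best_plan, best_cost = None, float("inf")
for _ in range(ITERS):
    for _ in range(ANTS):
        plan, c = build_solution()
        if plan and c < best_cost:
            best_plan, best_cost = plan, c
    # Evaporate, then reinforce the best-so-far placement.
    tau = [[0.9 * t for t in row] for row in tau]
    if best_plan:
        for v, h in enumerate(best_plan):
            tau[v][h] += 1.0 / best_cost

print("placement:", best_plan, "cost:", round(best_cost, 2))
```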

  3. Paging memory from random access memory to backing storage in a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Inglett, Todd A; Ratterman, Joseph D; Smith, Brian E

    2013-05-21

    Paging memory from random access memory (`RAM`) to backing storage in a parallel computer that includes a plurality of compute nodes, including: executing a data processing application on a virtual machine operating system in a virtual machine on a first compute node; providing, by a second compute node, backing storage for the contents of RAM on the first compute node; and swapping, by the virtual machine operating system in the virtual machine on the first compute node, a page of memory from RAM on the first compute node to the backing storage on the second compute node.

  4. The Dirt on E-Waste

    ERIC Educational Resources Information Center

    Schaffhauser, Dian

    2009-01-01

    Most smart technology leaders can name multiple efforts they have already taken or expect to pursue in their schools to "green up" IT operations, such as powering off idle computers and virtualizing the data center. One area that many of them may not be so savvy about, however, is hardware disposal: "What to do with the old stuff?" After all, it…

  5. The Virtual Solar Observatory and the Heliophysics Meta-Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Gurman, J. B.; Hourclé, J. A.; Bogart, R. S.; Tian, K.; Hill, F.; Suàrez-Sola, I.; Zarro, D. M.; Davey, A. R.; Martens, P. C.; Yoshimura, K.; Reardon, K. M.

    2006-12-01

    The Virtual Solar Observatory (VSO) has survived its infancy and provides metadata search and data identification for measurements from 45 instrument data sets held at 12 online archives, as well as flare and coronal mass ejection (CME) event lists. Like any toddler, the VSO is good at getting into anything and everything, and is now extending its grasp to more data sets, new missions, and new access methods using its application programming interface (API). We discuss and demonstrate recent changes, including developments for STEREO and SDO, and an IDL-callable interface for the VSO API. We urge the heliophysics community to help civilize this obstreperous youngster by providing input on ways to make the VSO even more useful for system science research in its role as part of the growing cluster of Heliophysics Virtual Observatories.

  6. Still Virtual After All These Years: Recent Developments in the Virtual Solar Observatory

    NASA Astrophysics Data System (ADS)

    Gurman, J. B.; Bogart, R. S.; Davey, A. R.; Hill, F.; Martens, P. C.; Zarro, D. M.; Team, T. v.

    2008-05-01

    While continuing to add access to data from new missions, including Hinode and STEREO, the Virtual Solar Observatory is also being enhanced as a research tool by the addition of new features such as the unified representation of catalogs and event lists (to allow joined searches in two or more catalogs) and workable representation and manipulation of large numbers of search results (as are expected from the Solar Dynamics Observatory database). Working with our RHESSI colleagues, we have also been able to improve the performance of IDL-callable vso_search and vso_get functions, to the point that use of those routines is a practical alternative to reproducing large subsets of mission data on one's own LAN.

  7. Still Virtual After All These Years: Recent Developments in the Virtual Solar Observatory

    NASA Technical Reports Server (NTRS)

    Gurman, Joseph B.; Bogart; Davey; Hill; Martens; Zarro

    2008-01-01

    While continuing to add access to data from new missions, including Hinode and STEREO, the Virtual Solar Observatory is also being enhanced as a research tool by the addition of new features such as the unified representation of catalogs and event lists (to allow joined searches in two or more catalogs) and workable representation and manipulation of large numbers of search results (as are expected from the Solar Dynamics Observatory database). Working with our RHESSI colleagues, we have also been able to improve the performance of IDL-callable vso_search and vso_get functions, to the point that use of those routines is a practical alternative to reproducing large subsets of mission data on one's own LAN.

  8. Future Cyborgs: Human-Machine Interface for Virtual Reality Applications

    DTIC Science & Technology

    2007-04-01

    Future Cyborgs: Human-Machine Interface for Virtual Reality Applications. Robert R. Powell, Major, USAF, April 2007 (Blue Horizons paper). Cited sources include Nicholas Negroponte, Being Digital (New York: Alfred A. Knopf, Inc., 1995), 123, and Andy Clark, Natural-Born Cyborgs (New York: Oxford…

  9. An imperialist competitive algorithm for virtual machine placement in cloud computing

    NASA Astrophysics Data System (ADS)

    Jamali, Shahram; Malektaji, Sepideh; Analoui, Morteza

    2017-05-01

    Cloud computing, the recently emerged revolution in the IT industry, is empowered by virtualisation technology. In this paradigm, a user's applications run on virtual machines (VMs). The process of selecting proper physical machines to host these virtual machines is called virtual machine placement, and it plays an important role in the resource utilisation and power efficiency of cloud computing environments. In this paper, we propose an imperialist competitive-based algorithm for the virtual machine placement problem, called ICA-VMPLC. ICA is chosen as the base optimisation algorithm because of its ease of neighbourhood movement, good convergence rate and suitable terminology. The proposed algorithm investigates the search space in a unique manner to efficiently obtain an optimal placement solution that simultaneously minimises power consumption and total resource wastage. Its final solution performance is compared with several existing methods, such as grouping genetic and ant colony-based algorithms, as well as a bin packing heuristic. The simulation results show that the proposed method is superior to the other tested algorithms in terms of power consumption, resource wastage, CPU usage efficiency and memory usage efficiency.

  10. Where to look? Automating attending behaviors of virtual human characters

    NASA Technical Reports Server (NTRS)

    Chopra Khullar, S.; Badler, N. I.

    2001-01-01

    This research proposes a computational framework for generating visual attending behavior in an embodied simulated human agent. Such behaviors directly control eye and head motions, and guide other actions such as locomotion and reach. The implementation of these concepts, referred to as the AVA, draws on empirical and qualitative observations known from psychology, human factors and computer vision. Deliberate behaviors, the analogs of scanpaths in visual psychology, compete with involuntary attention capture and lapses into idling or free viewing. Insights provided by implementing this framework are: a defined set of parameters that impact the observable effects of attention, a defined vocabulary of looking behaviors for certain motor and cognitive activity, a defined hierarchy of three levels of eye behavior (endogenous, exogenous and idling) and a proposed method of how these types interact.

  11. A Solution Method of Scheduling Problem with Worker Allocation by a Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Osawa, Akira; Ida, Kenichi

    In the scheduling problem with worker allocation (SPWA) proposed by Iima et al., every worker has the same skill level on every machine. In the real world, however, each worker has a different skill level on each machine. For that reason, we propose a new model of SPWA in which a worker has a different skill level on each machine. To solve the problem, we propose a new GA for SPWA consisting of three new procedures: shortening of idle time, repairing infeasible solutions into feasible ones, and a new selection method for the GA. The effectiveness of the proposed algorithm is demonstrated by numerical experiments using benchmark problems for job-shop scheduling.

  12. Comparative analysis of machine learning methods in ligand-based virtual screening of large compound libraries.

    PubMed

    Ma, Xiao H; Jia, Jia; Zhu, Feng; Xue, Ying; Li, Ze R; Chen, Yu Z

    2009-05-01

    Machine learning methods have been explored as ligand-based virtual screening tools for facilitating drug lead discovery. These methods predict compounds with specific pharmacodynamic, pharmacokinetic or toxicological properties based on structural and physicochemical properties derived from their structures. Increasing attention has been directed at these methods because of their capability to predict compounds of diverse structures and complex structure-activity relationships without requiring knowledge of the target 3D structure. This article reviews current progress in using machine learning methods for virtual screening of pharmacodynamically active compounds from large compound libraries, and analyzes and compares the reported performances of machine learning tools with those of structure-based and other ligand-based (such as pharmacophore and clustering) virtual screening methods. The feasibility of improving the performance of machine learning methods in screening large libraries is discussed.
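
    As a minimal illustration of the ligand-based machine learning approach the review covers, the sketch below trains a support vector machine on binary fingerprint vectors; the data are randomly generated stand-ins for real molecular fingerprints (e.g. ECFP) and activity labels, so the reported accuracy is only a placeholder.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-ins: 1000 "compounds" x 512-bit fingerprints.
X = rng.integers(0, 2, size=(1000, 512))
y = rng.integers(0, 2, size=1000)   # 1 = active, 0 = inactive (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
# On random labels this hovers near chance; real fingerprints and
# assay labels are what make the classifier informative.
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```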

  13. Integration of the virtual 3D model of a control system with the virtual controller

    NASA Astrophysics Data System (ADS)

    Herbuś, K.; Ociepka, P.

    2015-11-01

    Nowadays the design process includes simulation analysis of different components of a constructed object, which creates the need to integrate different virtual objects in order to simulate the whole technical system under investigation. The paper presents issues related to the integration of a virtual 3D model of a chosen control system with a virtual controller. The goal of the integration is to verify the operation of the adopted object in accordance with the established control program. The object of the simulation work is the drive system of a tunneling machine for trenchless work. In the first stage of the work, an interactive visualization of the functioning of the 3D virtual model of the tunneling machine was created using VR (Virtual Reality) class software. In the interactive application, procedures were created for controlling the translatory-motion drive system, the rotary-motion drive system and the manipulator drive system, together with a procedure for turning the crushing head, mounted on the last element of the manipulator, on and off. Procedures were also established for receiving input data from external software, on the basis of dynamic data exchange (DDE), which allow controlling the actuators of the particular control systems of the considered machine. In the next stage of the work, a program for the virtual controller was created in the ladder diagram (LD) language, based on the adopted work cycle of the tunneling machine. The element integrating the virtual model of the tunneling machine with the virtual controller is an application written in a high-level language (Visual Basic); its procedures collect data from the virtual controller running in simulation mode and transfer them to the interactive application, in which the operation of the adopted research object is verified. The work carried out allowed for the integration of the virtual model of the control system of the tunneling machine with the virtual controller, enabling the verification of its operation.

  14. Hybrid polylingual object model: an efficient and seamless integration of Java and native components on the Dalvik virtual machine.

    PubMed

    Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv

    2014-01-01

    JNI in the Android platform is often observed to have low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse CAR-compliant components in Android applications in a seamless and efficient way. A metadata injection mechanism is designed to support the automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled in the HPO-Dalvik virtual machine automatically. The experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, since no JNI bridging code is required.

  15. The ASSERT Virtual Machine Kernel: Support for Preservation of Temporal Properties

    NASA Astrophysics Data System (ADS)

    Zamorano, J.; de la Puente, J. A.; Pulido, J. A.; Urueña

    2008-08-01

    A new approach to building embedded real-time software has been developed in the ASSERT project. One of its key elements is the concept of a virtual machine preserving the non-functional properties of the system, especially real-time properties, all the way from high-level design models down to executable code. The paper describes one instance of the virtual machine concept that provides support for the preservation of temporal properties both at the source code level, by accepting only "legal" entities, i.e. software components with statically analysable real-time behaviour, and at run time, by monitoring the temporal behaviour of the system. The virtual machine has been validated on several pilot projects carried out by aerospace companies in the framework of the ASSERT project.

  16. Means and method of balancing multi-cylinder reciprocating machines

    DOEpatents

    Corey, John A.; Walsh, Michael M.

    1985-01-01

    A virtual balancing axis arrangement is described for multi-cylinder reciprocating piston machines that effectively balances out imbalanced forces and minimizes residual imbalance moments acting on the crankshaft of such machines, without requiring additional parallel-arrayed balancing shafts or complex and expensive gear arrangements. The novel virtual balancing axis arrangement can be designed into multi-cylinder reciprocating piston and crankshaft machines to substantially reduce vibrations induced during operation, with only a minimal number of additional component parts. Some of the required parts may already be available from the auxiliary equipment, such as the oil and water pumps used in certain types of reciprocating piston and crankshaft machines, so that by appropriate location and dimensioning in accordance with the teachings of the invention, the virtual balancing axis arrangement can be built into the machine at little or no additional cost.

  17. Hybrid Cloud Computing Environment for EarthCube and Geoscience Community

    NASA Astrophysics Data System (ADS)

    Yang, C. P.; Qin, H.

    2016-12-01

    The NSF EarthCube Integration and Test Environment (ECITE) has built a hybrid cloud computing environment that provides cloud resources from private cloud environments, using the cloud system software OpenStack and Eucalyptus, and also manages a public cloud, Amazon Web Services, allowing resource synchronizing and bursting between the private and public clouds. On the ECITE hybrid cloud platform, the EarthCube and geoscience communities can deploy and manage applications using base or customized virtual machine images, analyze big datasets using virtual clusters, and monitor virtual resource usage on the cloud in real time. Currently, a number of EarthCube projects have deployed or started migrating to this platform, such as CHORDS, BCube, CINERGI, OntoSoft, and some other EarthCube building blocks. To accomplish a deployment or migration, the administrator of the ECITE hybrid cloud platform prepares the specific needs (e.g. images, port numbers, usable cloud capacity) of each project in advance, based on communications between ECITE and the participant projects; the scientists or IT technicians in those projects then launch one or more virtual machines, access them to set up the computing environment if need be, and migrate their code, documents or data without having to deal with the heterogeneity in structure and operations among different cloud platforms.

  18. Cooperating reduction machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kluge, W.E.

    1983-11-01

    This paper presents a concept and a system architecture for the concurrent execution of program expressions of a concrete reduction language based on lambda-expressions. If formulated appropriately, these expressions are well-suited for concurrent execution, following a demand-driven model of computation. In particular, recursive program expressions with nonlinear expansion may, at run time, recursively be partitioned into a hierarchy of independent subexpressions which can be reduced by a corresponding hierarchy of virtual reduction machines. This hierarchy unfolds and collapses dynamically, with virtual machines recursively assuming the role of masters that create and eventually terminate, or synchronize with, slaves. The paper also proposes a nonhierarchically organized system of reduction machines, each featuring a stack architecture, that effectively supports the allocation of virtual machines to the real machines of the system in compliance with their hierarchical order of creation and termination. 25 references.

  19. Complementary Machine Intelligence and Human Intelligence in Virtual Teaching Assistant for Tutoring Program Tracing

    ERIC Educational Resources Information Center

    Chou, Chih-Yueh; Huang, Bau-Hung; Lin, Chi-Jen

    2011-01-01

    This study proposes a virtual teaching assistant (VTA) to share teacher tutoring tasks in helping students practice program tracing and proposes two mechanisms of complementing machine intelligence and human intelligence to develop the VTA. The first mechanism applies machine intelligence to extend human intelligence (teacher answers) to evaluate…

  20. "Pack[superscript2]": VM Resource Scheduling for Fine-Grained Application SLAs in Highly Consolidated Environment

    ERIC Educational Resources Information Center

    Sukwong, Orathai

    2013-01-01

    Virtualization enables the ability to consolidate multiple servers on a single physical machine, increasing the infrastructure utilization. Maximizing the ratio of server virtual machines (VMs) to physical machines, namely the consolidation ratio, becomes an important goal toward infrastructure cost saving in a cloud. However, the consolidation…

  1. Holding-time-aware asymmetric spectrum allocation in virtual optical networks

    NASA Astrophysics Data System (ADS)

    Lyu, Chunjian; Li, Hui; Liu, Yuze; Ji, Yuefeng

    2017-10-01

    Virtual optical networks (VONs) have been considered a promising solution to support current high-capacity dynamic traffic and achieve rapid application deployment. Since most network services in VONs (e.g., high-definition video, cloud computing, distributed storage) are provisioned by dedicated data centers and need different amounts of bandwidth in the two directions, the network traffic is mostly asymmetric. The common strategy, symmetric provisioning of traffic in optical networks, leads to a waste of spectrum resources under such traffic patterns. In this paper, we design a holding-time-aware asymmetric spectrum allocation module based on an SDON architecture, and an asymmetric spectrum allocation algorithm based on the module is proposed. To reduce the waste of spectrum resources, the algorithm attempts to reallocate the idle unidirectional spectrum slots in VONs, which arise from the asymmetry of services' bidirectional bandwidth. This part of the resources can be exploited by other requests, such as short-time non-VON requests. We also introduce a two-dimensional asymmetric resource model for maintaining idle spectrum resource information of VONs in the spectrum and time domains. Moreover, a simulation is designed to evaluate the performance of the proposed algorithm, and results show that the proposed asymmetric spectrum allocation algorithm can reduce resource waste and blocking probability.
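
    To make the two-dimensional resource model concrete, here is a toy Python sketch (invented, not from the paper) that keeps a spectrum-by-time occupancy grid for one fiber direction and first-fit allocates idle unidirectional slots to a short-lived request for its holding time.

```python
import numpy as np

SLOTS, HORIZON = 16, 24    # frequency slots x time steps (invented sizes)
grid = np.zeros((SLOTS, HORIZON), dtype=bool)   # False = idle, True = busy

def allocate(width, start, duration):
    """First-fit: find `width` contiguous idle slots over the holding time."""
    for s in range(SLOTS - width + 1):
        window = grid[s:s + width, start:start + duration]
        if not window.any():
            window[:] = True          # mark the slots occupied
            return s                  # lowest allocated slot index
    return None                       # request is blocked

# Symmetric provisioning would reserve slots 0-7 in both directions; in
# this direction only slots 0-3 are really used, so 4-7 stay idle and can
# be handed to a short-lived non-VON request for its holding time.
grid[0:4, :] = True
print("short request placed at slot:", allocate(width=2, start=5, duration=4))
# -> 4 (reuses the idle unidirectional slots instead of blocking)
```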

  2. Simplified Virtualization in a HEP/NP Environment with Condor

    NASA Astrophysics Data System (ADS)

    Strecker-Kellogg, W.; Caramarcu, C.; Hollowell, C.; Wong, T.

    2012-12-01

    In this work we will address the development of a simple prototype virtualized worker node cluster, using Scientific Linux 6.x as a base OS, KVM and the libvirt API for virtualization, and the Condor batch software to manage virtual machines. The discussion in this paper provides details on our experience with building, configuring, and deploying the various components from bare metal, including the base OS, creation and distribution of the virtualized OS images and the integration of batch services with the virtual machines. Our focus was on simplicity and interoperability with our existing architecture.

  3. Elevating Virtual Machine Introspection for Fine-Grained Process Monitoring: Techniques and Applications

    ERIC Educational Resources Information Center

    Srinivasan, Deepa

    2013-01-01

    Recent rapid malware growth has exposed the limitations of traditional in-host malware-defense systems and motivated the development of secure virtualization-based solutions. By running vulnerable systems as virtual machines (VMs) and moving security software from inside VMs to the outside, the out-of-VM solutions securely isolate the anti-malware…

  4. Hybrid PolyLingual Object Model: An Efficient and Seamless Integration of Java and Native Components on the Dalvik Virtual Machine

    PubMed Central

    Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv

    2014-01-01

    JNI in the Android platform is often observed to have low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse CAR-compliant components in Android applications in a seamless and efficient way. A metadata injection mechanism is designed to support the automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled in the HPO-Dalvik virtual machine automatically. The experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, since no JNI bridging code is required. PMID:25110745

  5. An incremental anomaly detection model for virtual machines.

    PubMed

    Zhang, Hancui; Chen, Shuyu; Liu, Jun; Zhou, Zhen; Wu, Tianshu

    2017-01-01

    The Self-Organizing Map (SOM) algorithm, an unsupervised learning method, has been applied in anomaly detection due to its capabilities of self-organizing and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Moreover, cloud platforms with large numbers of virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing nature, which leaves the algorithm with low accuracy and poor scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large scale and highly dynamic features of virtual machines on cloud platforms. To demonstrate the effectiveness, experiments on the common benchmark KDD Cup dataset and a real dataset have been performed. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on cloud platforms.
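
    The IISOM internals are not spelled out in this record; as a baseline illustration of SOM-based anomaly detection, here is a small pure-NumPy sketch that trains a map on "normal" metric vectors and flags samples whose quantization error exceeds a threshold (the data, map size and threshold rule are all invented).

```python
import numpy as np

rng = np.random.default_rng(1)
DIM, UNITS = 4, 36                    # metric vector size, map units (invented)
weights = rng.random((UNITS, DIM))    # random init, as in a plain SOM

def bmu(x):
    """Index of the best matching unit for sample x."""
    return np.argmin(np.linalg.norm(weights - x, axis=1))

# Train on "normal" VM metrics (synthetic: cpu, mem, disk, net in [0, 0.5]).
normal = rng.random((2000, DIM)) * 0.5
for t, x in enumerate(normal):
    lr = 0.5 * (1 - t / len(normal))            # decaying learning rate
    w = bmu(x)
    dist = np.abs(np.arange(UNITS) - w)         # crude 1-D neighbourhood
    h = np.exp(-dist / 2.0)[:, None]
    weights += lr * h * (x - weights)

# Anomaly score = quantization error; threshold taken from training data.
qe = lambda x: np.linalg.norm(weights[bmu(x)] - x)
threshold = np.quantile([qe(x) for x in normal[:200]], 0.99)
print("normal sample anomalous?", qe(rng.random(DIM) * 0.5) > threshold)
print("overloaded sample anomalous?", qe(np.ones(DIM)) > threshold)
```

    IISOM's contributions, per the abstract, replace the random initialization, the plain Euclidean distance and the exhaustive BMU search in such a baseline.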

  6. An incremental anomaly detection model for virtual machines

    PubMed Central

    Zhang, Hancui; Chen, Shuyu; Liu, Jun; Zhou, Zhen; Wu, Tianshu

    2017-01-01

    The Self-Organizing Map (SOM) algorithm, an unsupervised learning method, has been applied in anomaly detection due to its capabilities of self-organizing and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Moreover, cloud platforms with large numbers of virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing nature, which leaves the algorithm with low accuracy and poor scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large scale and highly dynamic features of virtual machines on cloud platforms. To demonstrate the effectiveness, experiments on the common benchmark KDD Cup dataset and a real dataset have been performed. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on cloud platforms. PMID:29117245

  7. Analysis towards VMEM File of a Suspended Virtual Machine

    NASA Astrophysics Data System (ADS)

    Song, Zheng; Jin, Bo; Sun, Yongqing

    With the popularity of virtual machines, forensic investigators are challenged with more complicated situations, among which discovering evidence in virtualized environments is of significant importance. This paper mainly analyzes the file suffixed with .vmem in VMware Workstation, which stores all of a guest's pseudo-physical memory in an image. The internal structure of the .vmem file is studied and disclosed. Key information about processes and threads of a suspended virtual machine is revealed. Further investigation into Windows XP SP3 heap contents is conducted and a proof-of-concept tool is provided. Different methods to obtain forensic memory images are introduced, with the advantages and limits of each analyzed. We conclude with an outlook.

  8. MISR Instrument Data Visualization

    NASA Technical Reports Server (NTRS)

    Nelson, David; Garay, Michael; Diner, David; Thompson, Charles; Hall, Jeffrey; Rheingans, Brian; Mazzoni, Dominic

    2008-01-01

    The MISR Interactive eXplorer (MINX) software functions both as a general-purpose tool to visualize Multiangle Imaging SpectroRadiometer (MISR) instrument data, and as a specialized tool to analyze properties of smoke, dust, and volcanic plumes. It includes high-level options to create map views of MISR orbit locations; scrollable, single-camera RGB (red-green-blue) images of MISR level 1B2 (L1B2) radiance data; and animations of the nine MISR camera images that provide a 3D perspective of the scenes that MISR has acquired. The plume height capability provides an accurate estimate of the injection height of plumes that is needed by air quality and climate modelers. MISR provides global high-quality stereo height information, and this program uses that information to perform detailed height retrievals of aerosol plumes. Users can interactively digitize smoke, dust, or volcanic plumes and automatically retrieve heights and winds, and can also archive MISR albedos and aerosol properties, as well as fire power and brightness temperatures associated with smoke plumes derived from Moderate Resolution Imaging Spectroradiometer (MODIS) data. Some of the specialized options in MINX enable the user to do other tasks. Users can display plots of top-of-atmosphere bidirectional reflectance factors (BRFs) versus camera angle for selected pixels. Images and animations can be saved to disk in various formats. Also, users can apply a geometric registration correction to warp camera images when the standard processing correction is inadequate. It is possible to difference the images of two MISR orbits that share a path (identical ground track), as well as to construct pseudo-color images by assigning different combinations of MISR channels (angle or spectral band) to the RGB display channels. This software is an interactive application written in IDL and compiled into an IDL Virtual Machine (VM) ".sav" file.

  9. Classifying Structures in the ISM with Machine Learning Techniques

    NASA Astrophysics Data System (ADS)

    Beaumont, Christopher; Goodman, A. A.; Williams, J. P.

    2011-01-01

    The processes which govern molecular cloud evolution and star formation often sculpt structures in the ISM: filaments, pillars, shells, outflows, etc. Because of their morphological complexity, these objects are often identified manually. Manual classification has several disadvantages; the process is subjective, not easily reproducible, and does not scale well to handle increasingly large datasets. We have explored to what extent machine learning algorithms can be trained to autonomously identify specific morphological features in molecular cloud datasets. We show that the Support Vector Machine algorithm can successfully locate filaments and outflows blended with other emission structures. When the objects of interest are morphologically distinct from the surrounding emission, this autonomous classification achieves >90% accuracy. We have developed a set of IDL-based tools to apply this technique to other datasets.
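
    The authors' IDL tools are not included in this record; as a generic illustration of the technique, the following Python sketch trains a support vector machine to separate "filament" pixels from background using invented per-pixel feature vectors (e.g. local intensity statistics).

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# Invented per-pixel features (e.g. intensity, local gradient, elongation).
background = rng.normal(0.0, 1.0, size=(500, 3))
filament = rng.normal(1.5, 1.0, size=(500, 3))   # shifted cluster
X = np.vstack([background, filament])
y = np.array([0] * 500 + [1] * 500)              # 1 = filament pixel

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)
clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
print(f"pixel classification accuracy: {clf.score(X_te, y_te):.2f}")
```

    In practice the training labels come from a small manually classified region, which is what makes the approach reproducible and scalable compared to classifying whole datasets by eye.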

  10. Feasibility of Virtual Machine and Cloud Computing Technologies for High Performance Computing

    DTIC Science & Technology

    2014-05-01

    …Hat Enterprise Linux; SaaS, software as a service; VM, virtual machine; vNUMA, virtual non-uniform memory access; WRF, weather research and forecasting…previously mentioned in Chapter I Section B1 of this paper, which is used to run the weather research and forecasting (WRF) model in their experiments…against a VMware virtualization solution of WRF. The experiment consisted of running WRF in a standard configuration between the D-VTM and VMware while…

  11. The HEPiX Virtualisation Working Group: Towards a Grid of Clouds

    NASA Astrophysics Data System (ADS)

    Cass, Tony

    2012-12-01

    The use of virtual machine images, as for example with Cloud services such as Amazon's Elastic Compute Cloud, is attractive for users as they have a guaranteed execution environment, something that cannot today be provided across sites participating in computing grids such as the Worldwide LHC Computing Grid. However, Grid sites often operate within computer security frameworks which preclude the use of remotely generated images. The HEPiX Virtualisation Working Group was set up with the objective of enabling the use of remotely generated virtual machine images at Grid sites and, to this end, has introduced the idea of trusted virtual machine images which are guaranteed to be secure and configurable by sites, such that security policy commitments can be met. This paper describes the requirements and details of these trusted virtual machine images and presents a model for their use to facilitate the integration of Grid- and Cloud-based computing environments for High Energy Physics.

  12. Status and Roadmap of CernVM

    NASA Astrophysics Data System (ADS)

    Berzano, D.; Blomer, J.; Buncic, P.; Charalampidis, I.; Ganis, G.; Meusel, R.

    2015-12-01

    Cloud resources nowadays contribute an essential share of the resources for computing in high-energy physics. Such resources can be provided either by private or public IaaS clouds (e.g. OpenStack, Amazon EC2, Google Compute Engine) or by volunteer computers (e.g. LHC@Home 2.0). In any case, experiments need to prepare a virtual machine image that provides the execution environment for the physics application at hand. The CernVM virtual machine, since version 3, is a minimal and versatile virtual machine image capable of booting different operating systems. The virtual machine image is less than 20 megabytes in size; the actual operating system is delivered on demand by the CernVM File System. CernVM 3 has matured from a prototype into a production environment. It is used, for instance, to run LHC applications in the cloud, to tune event generators using a network of volunteer computers, and as a container for the historic Scientific Linux 5 and Scientific Linux 4 based software environments in the course of the long-term data preservation efforts of the ALICE, CMS, and ALEPH experiments. We present experience and lessons learned from the use of CernVM at scale, and provide an outlook on upcoming developments. These developments include adding support for Scientific Linux 7, the use of container virtualization, such as that provided by Docker, and the streamlining of virtual machine contextualization towards the cloud-init industry standard.

  13. Human Machine Interfaces for Teleoperators and Virtual Environments

    NASA Technical Reports Server (NTRS)

    Durlach, Nathaniel I. (Compiler); Sheridan, Thomas B. (Compiler); Ellis, Stephen R. (Compiler)

    1991-01-01

    In Mar. 1990, a meeting organized around the general theme of teleoperation research into virtual environment display technology was conducted. This is a collection of conference-related fragments that will give a glimpse of the potential of the following fields and how they interplay: sensorimotor performance; human-machine interfaces; teleoperation; virtual environments; performance measurement and evaluation methods; and design principles and predictive models.

  14. Solid Oxide Fuel Cell Development for Auxiliary Power in Heavy Duty Vehicle Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daniel T. Hennessy

    2010-06-15

    The changing economic and environmental needs of the trucking industry are driving the use of auxiliary power unit (APU) technology for over-the-road haul trucks. The trucking industry in the United States remains key to the economy of the nation, and one of the major changes affecting the industry is the reduction of engine idling. Delphi Automotive Systems, LLC (Delphi) teamed with heavy-duty truck Original Equipment Manufacturers (OEMs) PACCAR Incorporated (PACCAR) and Volvo Trucks North America (VTNA) to define system-level requirements and develop an SOFC-based APU. The project defines system-level requirements, and subsequently designs and implements an optimized system architecture using an SOFC APU to demonstrate and validate that the APU will meet system-level goals. The primary focus is on APUs in the range of 3-5 kW for truck idling reduction. The fuels utilized were derived from low-sulfur diesel fuel. Key areas of study and development included sulfur remediation with reformer operation; stack sensitivity testing; testing of catalyst carbon plugging and combustion start plugging; system pre-combustion; and overall system and electrical integration. This development, once fully implemented and commercialized, has the potential to significantly reduce the fuel that idling Class 7/8 trucks consume. In addition, the significant amounts of NOx, CO2 and PM produced under engine idling conditions will be virtually eliminated, as will the noise pollution. The environmental impact will be significant, with the added benefit of fuel savings and payback for the vehicle operators and owners.

  15. Virtual C Machine and Integrated Development Environment for ATMS Controllers.

    DOT National Transportation Integrated Search

    2000-04-01

    The overall objective of this project is to develop a prototype virtual machine that fits on current Advanced Traffic Management Systems (ATMS) controllers and provides functionality for complex traffic operations. Prepared in cooperation with Utah S...

  16. System-Level Virtualization Research at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, Stephen L; Vallee, Geoffroy R; Naughton, III, Thomas J

    2010-01-01

    System-level virtualization, originally a technique for effectively sharing what were then considered large computing resources, faded from the spotlight as individual workstations gained popularity with a one machine, one user approach; today it is enjoying a rebirth. One reason for this resurgence is that the simple workstation has grown in capability to rival anything available in the past. Thus, computing centers are again looking at the price/performance benefit of sharing a single computing box via server consolidation. However, industry is concentrating only on the benefits of using virtualization for server consolidation (enterprise computing), whereas our interest is in leveraging virtualization to advance high-performance computing (HPC). While these two interests may appear to be orthogonal, one consolidating multiple applications and users on a single machine while the other requires all the power from many machines to be dedicated solely to its purpose, we propose that virtualization does provide attractive capabilities that may be exploited to the benefit of HPC interests. This raises two fundamental questions: is the concept of virtualization (a machine-sharing technology) really suitable for HPC, and if so, how does one go about leveraging these virtualization capabilities for the benefit of HPC? To address these questions, this document presents ongoing studies on the usage of system-level virtualization in an HPC context. These studies include an analysis of the benefits of system-level virtualization for HPC, a presentation of research efforts based on virtualization for system availability, and a presentation of research efforts for the management of virtual systems. The basis for this document was material presented by Stephen L. Scott at the Collaborative and Grid Computing Technologies meeting held in Cancun, Mexico on April 12-14, 2007.

  17. An element search ant colony technique for solving virtual machine placement problem

    NASA Astrophysics Data System (ADS)

    Srija, J.; Rani John, Rose; Kanaga, Grace Mary, Dr.

    2017-09-01

    Data centres in the cloud environment play a key role in providing infrastructure for ubiquitous computing, pervasive computing, mobile computing, etc. These computing techniques try to utilize the available resources in order to provide services; hence, maintaining resource utilization without wasted power consumption has become a challenging task for researchers. In this paper we propose a direct-guidance ant colony system for the effective mapping of virtual machines to physical machines with maximal resource utilization and minimal power consumption. The proposed algorithm is compared with an existing ant colony approach to the virtual machine placement problem and proves to provide better results than the existing technique.

  18. Performance Analysis of Ivshmem for High-Performance Computing in Virtual Machines

    NASA Astrophysics Data System (ADS)

    Ivanovic, Pavle; Richter, Harald

    2018-01-01

    High-performance computing (HPC) is rarely accomplished via virtual machines (VMs). In this paper, we present a remake of ivshmem which can change this. Ivshmem was a shared memory (SHM) device between virtual machines on the same server, with SHM-access synchronization included, until about 5 years ago, when newer versions of Linux and its virtualization library libvirt evolved. We restored that SHM-access synchronization feature, because it is indispensable for HPC, and made ivshmem runnable with contemporary versions of Linux, libvirt, KVM, QEMU and especially MPICH, an implementation of MPI, the standard HPC communication library. Additionally, MPICH was transparently modified by us to include ivshmem, resulting in a three- to ten-fold performance improvement compared to TCP/IP. Furthermore, we have transparently replaced MPI_PUT, a single-sided MPICH communication mechanism, by our own MPI_PUT wrapper. As a result, our ivshmem even surpasses non-virtualized SHM data transfers for block lengths greater than 512 KBytes, showing the benefits of virtualization. All improvements were possible without using SR-IOV.
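
    For reference, this is what the one-sided mechanism looks like at the application level: a small mpi4py sketch of MPI_Put into a window allocated on another rank. It illustrates standard MPI RMA usage, not the authors' modified MPICH or their MPI_PUT wrapper.

```python
# Run with: mpiexec -n 2 python put_demo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
N = 8

# Each rank exposes a window of N doubles.
win = MPI.Win.Allocate(N * MPI.DOUBLE.Get_size(), comm=comm)
local = np.frombuffer(win.tomemory(), dtype="d")
local[:] = -1.0

win.Fence()                        # open the RMA access epoch
if rank == 0:
    data = np.arange(N, dtype="d")
    win.Put([data, MPI.DOUBLE], target_rank=1)  # one-sided write to rank 1
win.Fence()                        # close the epoch; transfer is complete

if rank == 1:
    print("rank 1 window:", local)  # now 0..7
win.Free()
```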

  19. Lightweight scheduling of elastic analysis containers in a competitive cloud environment: a Docked Analysis Facility for ALICE

    NASA Astrophysics Data System (ADS)

    Berzano, D.; Blomer, J.; Buncic, P.; Charalampidis, I.; Ganis, G.; Meusel, R.

    2015-12-01

    During the last years, several Grid computing centres chose virtualization as a better way to manage diverse use cases with self-consistent environments on the same bare infrastructure. The maturity of control interfaces (such as OpenNebula and OpenStack) opened the possibility of easily changing the amount of resources assigned to each use case by simply turning virtual machines on and off. Some of those private clouds use, in production, copies of the Virtual Analysis Facility, a fully virtualized and self-contained batch analysis cluster capable of expanding and shrinking automatically upon need; however, resource starvation occurs frequently, as expansion has to compete with other virtual machines running long-living batch jobs. Such batch nodes cannot relinquish their resources in a timely fashion: the more jobs they run, the longer it takes to drain and shut them off, and making one-job virtual machines introduces a non-negligible virtualization overhead. By improving several components of the Virtual Analysis Facility we have realized an experimental “Docked” Analysis Facility for ALICE, which leverages containers instead of virtual machines to provide performance and security isolation. We will present the techniques we have used to address practical problems, such as software provisioning through CVMFS, as well as our considerations on the maturity of containers for High Performance Computing. As the abstraction layer is thinner, our Docked Analysis Facilities may feature more fine-grained sizing, down to single-job node containers: we will show how this approach positively impacts automatic cluster resizing by deploying lightweight pilot containers instead of replacing central queue polls.

  20. Using CORBA to integrate manufacturing cells to a virtual enterprise

    NASA Astrophysics Data System (ADS)

    Pancerella, Carmen M.; Whiteside, Robert A.

    1997-01-01

    It is critical in today's enterprises that manufacturing facilities are not isolated from design, planning, and other business activities and that information flows easily and bidirectionally between these activities. It is also important and cost-effective that COTS software, databases, and corporate legacy codes are well integrated into the information architecture. Further, much of the information generated during manufacturing must be dynamically accessible to engineering and business operations, both in a restricted corporate intranet and on the internet. The software integration strategy in the Sandia Agile Manufacturing Testbed supports these enterprise requirements. We are developing a CORBA-based distributed object software system for manufacturing. Each physical machining device is a CORBA object and exports a common IDL interface to allow rapid and dynamic insertion, deletion, and upgrading within the manufacturing cell. Cell management CORBA components access manufacturing devices without knowledge of any device-specific implementation. To support information flow from design to manufacturing, planning data is accessible to machinists on the shop floor. CORBA allows manufacturing components to be easily accessible to the enterprise. Dynamic clients can be created using web browsers and portable Java GUIs. A CORBA-OLE adapter allows integration with PC desktop applications. Other commercial software can access CORBA network objects in the information architecture through vendor APIs.

  1. Efficient Checkpointing of Virtual Machines using Virtual Machine Introspection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aderholdt, Ferrol; Han, Fang; Scott, Stephen L

    Cloud computing environments rely heavily on system-level virtualization. This is due to the inherent benefits of virtualization, including fault tolerance through checkpoint/restart (C/R) mechanisms. Because clouds are the abstraction of large data centers, and large data centers have a higher potential for failure, it is imperative that a C/R mechanism for such an environment provide minimal latency as well as a small checkpoint file size. Recently, there has been much research into C/R with respect to virtual machines (VMs), providing excellent solutions to reduce either checkpoint latency or checkpoint file size; however, these approaches do not provide both. This paper presents a method of checkpointing VMs by utilizing virtual machine introspection (VMI). Through the usage of VMI, we are able to determine which pages of memory within the guest are used or free, and are thus better able to reduce the number of pages written to disk during a checkpoint. We have validated this work by using various benchmarks to measure the latency along with the checkpoint size. With respect to checkpoint file size, our approach results in file sizes within 24% or less of the actual used memory within the guest. Additionally, the checkpoint latency of our approach is up to 52% faster than KVM's default method.
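
    The paper's VMI tooling is not given here; as a toy illustration of the underlying idea (write only used guest pages to the checkpoint file), consider the following sketch, where a free-page bitmap stands in for what introspection would recover from the guest's memory management structures.

```python
import numpy as np

PAGE = 4096
N_PAGES = 1024                      # toy guest with 4 MiB of RAM
ram = np.zeros((N_PAGES, PAGE), dtype=np.uint8)

# Pretend introspection found that only these pages are in use.
used = np.zeros(N_PAGES, dtype=bool)
used[:100] = True
ram[:100] = 0xAB                    # some guest data

def checkpoint(path):
    """Write only used pages, each prefixed by its page frame number."""
    with open(path, "wb") as f:
        for pfn in np.flatnonzero(used):
            f.write(int(pfn).to_bytes(4, "little"))
            f.write(ram[pfn].tobytes())

checkpoint("vm.ckpt")
# A full dump would be N_PAGES * PAGE bytes; this writes ~100 pages
# plus small headers, which is the source of the size reduction.
```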

  2. Can a virtual reality assessment of fine motor skill predict successful central line insertion?

    PubMed

    Mohamadipanah, Hossein; Parthiban, Chembian; Nathwani, Jay; Rutherford, Drew; DiMarco, Shannon; Pugh, Carla

    2016-10-01

    Due to the increased use of peripherally inserted central catheter lines, central lines are not performed as frequently. The aim of this study is to evaluate whether a virtual reality (VR)-based assessment of fine motor skills can be used as a valid and objective assessment of central line skills. Surgical residents (N = 43) from 7 general surgery programs performed a subclavian central line in a simulated setting. They then participated in a force discrimination task in a VR environment. Hand movements during the subclavian central line simulation were tracked by electromagnetic sensors. Gross movements as monitored by the electromagnetic sensors were compared with the fine motor metrics calculated from the force discrimination tasks in the VR environment. Long periods of inactivity (idle time) during needle insertion and a lack of smooth movements, as detected by the electromagnetic sensors, showed a significant correlation with poor force discrimination in the VR environment. Long needle insertion times also correlated with poor force discrimination performance in the VR environment. This study shows that force discrimination in a defined VR environment correlates with needle insertion time, idle time, and hand smoothness when performing subclavian central line placement. Fine motor force discrimination may serve as a valid and objective assessment of the skills required for successful needle insertion when placing central lines. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. System-Level Virtualization for High Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vallee, Geoffroy R; Naughton, III, Thomas J; Engelmann, Christian

    2008-01-01

    System-level virtualization has been a research topic since the 70's but regained popularity during the past few years because of the availability of efficient solutions such as Xen and the implementation of hardware support in commodity processors (e.g. Intel-VT, AMD-V). However, a majority of system-level virtualization projects are guided by the server consolidation market. As a result, current virtualization solutions appear not to be suitable for high performance computing (HPC), which is typically based on large-scale systems. On the other hand, there is significant interest in exploiting virtual machines (VMs) within HPC for a number of other reasons. By virtualizing the machine, one is able to run a variety of operating systems and environments as needed by the applications. Virtualization allows users to isolate workloads, improving security and reliability. It is also possible to support non-native environments and/or legacy operating environments through virtualization. In addition, it is possible to balance workloads, use migration techniques to relocate applications from failing machines, and isolate faulty systems for repair. This document presents the challenges for the implementation of a system-level virtualization solution for HPC. It also presents a brief survey of the different approaches and techniques to address these challenges.

  4. The Machine / Job Features Mechanism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alef, M.; Cass, T.; Keijser, J. J.

    Within the HEPiX virtualization group and the Worldwide LHC Computing Grid’s Machine/Job Features Task Force, a mechanism has been developed which provides access to detailed information about the current host and the current job to the job itself. This allows user payloads to access meta information, independent of the current batch system or virtual machine model. The information can be accessed either locally via the filesystem on a worker node, or remotely via HTTP(S) from a webserver. This paper describes the final version of the specification from 2016, which was published as an HEP Software Foundation technical note, and the design of the implementations of this version for batch and virtual machine platforms. We discuss early experiences with these implementations and how they can be exploited by experiment frameworks.

  5. The machine/job features mechanism

    NASA Astrophysics Data System (ADS)

    Alef, M.; Cass, T.; Keijser, J. J.; McNab, A.; Roiser, S.; Schwickerath, U.; Sfiligoi, I.

    2017-10-01

    Within the HEPiX virtualization group and the Worldwide LHC Computing Grid’s Machine/Job Features Task Force, a mechanism has been developed which provides access to detailed information about the current host and the current job to the job itself. This allows user payloads to access meta information, independent of the current batch system or virtual machine model. The information can be accessed either locally via the filesystem on a worker node, or remotely via HTTP(S) from a webserver. This paper describes the final version of the specification from 2016 which was published as an HEP Software Foundation technical note, and the design of the implementations of this version for batch and virtual machine platforms. We discuss early experiences with these implementations and how they can be exploited by experiment frameworks.
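
    The access pattern described here is deliberately simple: each feature is a key readable either as a file under a directory on the worker node or as a URL on a webserver, with environment variables pointing a payload at the right base. A hedged payload-side sketch (the variable and key names follow our reading of the specification; consult the HSF technical note for the definitive list):

```python
import os
import urllib.request

def read_feature(base, key):
    """Read one machine/job feature value, whether the base is a
    local directory or an HTTP(S) URL, as the specification allows."""
    if base.startswith(("http://", "https://")):
        with urllib.request.urlopen(f"{base}/{key}") as resp:
            return resp.read().decode().strip()
    with open(os.path.join(base, key)) as f:
        return f.read().strip()

# Example: a payload asking how many CPUs it was allocated.
# ("allocated_cpu" is treated here as an illustrative key name.)
jobfeatures = os.environ.get("JOBFEATURES")
if jobfeatures:
    print("Allocated CPUs:", read_feature(jobfeatures, "allocated_cpu"))
```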

  6. A three phase optimization method for precopy based VM live migration.

    PubMed

    Sharma, Sangeeta; Chawla, Meenu

    2016-01-01

    Virtual machine live migration is a method of moving a virtual machine across hosts within a virtualized datacenter. It provides significant benefits for administrators to manage the datacenter efficiently, and it reduces service interruption by transferring the virtual machine without stopping it at the source. Transfer of a large number of virtual machine memory pages results in long migration time as well as downtime, which also affects overall system performance. This situation becomes unbearable when migration takes place over a slower network or across a long distance within a cloud. In this paper, the precopy-based virtual machine live migration method is thoroughly analyzed to trace out the issues responsible for its performance drops. To address these issues, this paper proposes a three phase optimization (TPO) method, which works as follows: (i) reduce the transfer of memory pages in the first phase, (ii) reduce the transfer of duplicate pages by classifying frequently and non-frequently updated pages, and (iii) reduce the data sent in the last iteration of migration by applying a simple RLE compression technique. As a result, each phase significantly reduces total pages transferred, total migration time, and downtime, respectively. The proposed TPO method is evaluated using different representative workloads on a Xen virtualized environment. Experimental results show that the TPO method reduces total pages transferred by 71 %, total migration time by 70 %, and downtime by 3 % for a higher workload, and that it does not impose significant overhead compared to the traditional precopy method. A comparison of the TPO method with other methods also supports its effectiveness. The TPO and precopy methods are also tested at different numbers of iterations; the TPO method gives better performance even with fewer iterations.
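
    Phase (iii) applies simple RLE compression to the data sent in the last iteration. A minimal byte-level RLE sketch of the kind that might be applied to a dirty memory page (the paper's exact encoding is not specified here; this is only the generic technique):

```python
def rle_encode(page: bytes) -> bytes:
    """Run-length encode a memory page as (count, byte) pairs.
    Zero-filled or repetitive pages compress extremely well."""
    out = bytearray()
    i = 0
    while i < len(page):
        run = 1
        while i + run < len(page) and page[i + run] == page[i] and run < 255:
            run += 1
        out += bytes((run, page[i]))
        i += run
    return bytes(out)

def rle_decode(data: bytes) -> bytes:
    out = bytearray()
    for i in range(0, len(data), 2):
        out += bytes([data[i + 1]]) * data[i]
    return bytes(out)

page = b"\x00" * 4000 + bytes(range(1, 7)) + b"\x00" * 90  # mostly-zero 4 KiB page
enc = rle_encode(page)
assert rle_decode(enc) == page
print(f"{len(page)} bytes -> {len(enc)} bytes")
```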

  7. Self-paced brain-computer interface control of ambulation in a virtual reality environment.

    PubMed

    Wang, Po T; King, Christine E; Chui, Luis A; Do, An H; Nenadic, Zoran

    2012-10-01

    Spinal cord injury (SCI) often leaves affected individuals unable to ambulate. Electroencephalogram (EEG) based brain-computer interface (BCI) controlled lower extremity prostheses may restore intuitive and able-body-like ambulation after SCI. To test its feasibility, the authors developed and tested a novel EEG-based, data-driven BCI system for intuitive and self-paced control of the ambulation of an avatar within a virtual reality environment (VRE). Eight able-bodied subjects and one with SCI underwent the following 10-min training session: subjects alternated between idling and walking kinaesthetic motor imageries (KMI) while their EEG was recorded and analysed to generate subject-specific decoding models. Subjects then performed a goal-oriented online task, repeated over five sessions, in which they utilized the KMI to control the linear ambulation of an avatar and make ten sequential stops at designated points within the VRE. The average offline training performance across subjects was 77.2 ± 11.0%, ranging from 64.3% (p = 0.00176) to 94.5% (p = 6.26 × 10^(-23)), with chance performance being 50%. The average online performance was 8.5 ± 1.1 (out of 10) successful stops and 303 ± 53 s completion time (perfect = 211 s). All subjects achieved performances significantly different from those of a random walk (p < 0.05) in 44 of the 45 online sessions. By using a data-driven machine learning approach to decode users' KMI, this BCI-VRE system enabled intuitive and purposeful self-paced control of ambulation after only 10 minutes of training. The ability to achieve such BCI control with minimal training indicates that the implementation of future BCI-lower extremity prosthesis systems may be feasible.
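
    The decoding model is data-driven: epochs labeled as idling or walking KMI train a subject-specific binary classifier. A toy sketch of such a pipeline on synthetic band-power features, using linear discriminant analysis (the study's actual feature extraction and classifier are not reproduced here):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for labeled training epochs:
# log band-power features per EEG channel, one row per epoch.
n_epochs, n_features = 200, 16
idle = rng.normal(0.0, 1.0, (n_epochs // 2, n_features))
walk = rng.normal(0.4, 1.0, (n_epochs // 2, n_features))  # shifted class mean
X = np.vstack([idle, walk])
y = np.array([0] * (n_epochs // 2) + [1] * (n_epochs // 2))

# Subject-specific decoding model, scored offline like the training session.
clf = LinearDiscriminantAnalysis()
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"offline decoding accuracy: {acc:.1%} (chance = 50%)")
```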

  8. Optimizing Virtual Network Functions Placement in Virtual Data Center Infrastructure Using Machine Learning

    NASA Astrophysics Data System (ADS)

    Bolodurina, I. P.; Parfenov, D. I.

    2018-01-01

    We have elaborated a neural network model of virtual network flow identification based on the statistical properties of flows circulating in the data center network and on characteristics that describe the content of packets transmitted through network objects. This enabled us to establish the optimal set of attributes for identifying virtual network functions. Using the data obtained in our research, we have established an algorithm for optimizing the placement of virtual network functions. Our approach uses a hybrid virtualization method combining virtual machines and containers, which makes it possible to reduce the infrastructure load and the response time in the virtual data center network. The algorithmic solution is based on neural networks, which allows it to scale to any number of network function copies.

  9. Protection of Mission-Critical Applications from Untrusted Execution Environment: Resource Efficient Replication and Migration of Virtual Machines

    DTIC Science & Technology

    2015-09-28

    (Fragments recovered from the report documentation page:) The performance of log-and-replay can degrade significantly for VMs configured with multiple virtual CPUs, since the shared memory communication … Whether based on checkpoint replication or log-and-replay, existing HA approaches use in-memory backups; the backup VM sits in the memory of a … efficiently. Subject terms: high-availability virtual machines, live migration, memory and traffic overheads, application suspension, Java.

  10. Harness That S.O.B.: Distributing Remote Sensing Analysis in a Small Office/Business

    NASA Astrophysics Data System (ADS)

    Kramer, J.; Combe, J.; McCord, T. B.

    2009-12-01

    Researchers in a small office/business (SOB) operate with limited funding, equipment, and software availability. To mitigate these issues, we developed a distributed computing framework that: 1) leverages open source software to implement functionality otherwise reliant on proprietary software and 2) harnesses the unused power of (semi-)idle office computers with mixed operating systems (OSes). This abstract outlines some reasons for the effort, its conceptual basis and implementation, and provides brief speedup results. The Multiple-Endmember Linear Spectral Unmixing Model (MELSUM) [1] processes remote-sensing (hyper-)spectral images. The algorithm is computationally expensive, sometimes taking a full week or more for a 1 million pixel/100 wavelength image. Analysis of pixels is independent, so a large benefit can be gained from parallel processing techniques. Job concurrency is limited by the number of active processing units. MELSUM was originally written in the Interactive Data Language (IDL). Despite its multi-threading capabilities, an IDL instance executes on a single machine, so concurrency is limited by that machine's number of central processing units (CPUs). Network distribution can access more CPUs to provide a greater speedup, while also taking advantage of often underutilized existing equipment. Appropriately integrating open source software magnifies the impact by avoiding the purchase of additional licenses. Our method of distribution breaks into four conceptual parts: 1) the top- or task-level user interface; 2) a mid-level program that manages hosts and jobs, called the distribution server; 3) a low-level executable for individual pixel calculations; and 4) a control program to synchronize sequential sub-tasks. Each part is a separate OS process, passing information via shell commands and/or temporary files. While the control and low-level executables are short-lived, the top-level program and distribution server run (at least) for the entirety of a task. Although any language that supports "spawning" of OS processes can serve as the top-level interface, our solution, d-MELSUM, has been integrated with the IDL code. Doing so extracts the core calculation from IDL, but otherwise preserves IDL features and functionality. The distribution server is an extension of the ADE [2] mobile robot software, written in Java. Network connections rely on a secure shell (SSH) implementation, whether natively available (e.g., Linux or OS X) or user installed (e.g., OpenSSH available via Cygwin on Windows). Both the low-level and control programs are relatively small C++ programs (~54K, or 1500 lines, total) that were developed in-house and use GNU's g++ compiler. The low-level code also relies on Linear Algebra PACKage (LAPACK) libraries for pixel calculations. Although performance depends to some degree on data size, CPU speed, and network communication rate and latency, results have generally demonstrated a time reduction by a factor proportional to the number of open connections (one per CPU). For example, the task mentioned above requiring a week to process took 18 hours with d-MELSUM, using 10 CPUs on 2 computers. [1] J.-Ph. Combe, et al., PSS 56, 2008. [2] J. Kramer and M. Scheutz, IROS 2006, 2006.
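
    The division of labor described above, a server handing independent pixel chunks to SSH-reachable hosts with one in-flight job per CPU, can be sketched compactly. Host names and the worker command below are hypothetical stand-ins for d-MELSUM's actual components:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# One entry per CPU: hosts may repeat to use several cores on one machine.
HOSTS = ["node1", "node1", "node2", "win-box"]   # hypothetical host names
WORKER = "melsum_worker"                          # hypothetical low-level binary

def run_chunk(args):
    host, chunk_id = args
    # Each job is an independent pixel range, so chunks can run anywhere.
    cmd = ["ssh", host, WORKER, f"--chunk={chunk_id}"]
    return chunk_id, subprocess.run(cmd, capture_output=True).returncode

# Farm out 100 pixel chunks, keeping one in-flight job per connection.
with ThreadPoolExecutor(max_workers=len(HOSTS)) as pool:
    jobs = ((HOSTS[i % len(HOSTS)], i) for i in range(100))
    for chunk_id, rc in pool.map(run_chunk, jobs):
        if rc != 0:
            print(f"chunk {chunk_id} failed; resubmit elsewhere")
```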

  11. RHE: A JVM Courseware

    ERIC Educational Resources Information Center

    Liu, S.; Tang, J.; Deng, C.; Li, X.-F.; Gaudiot, J.-L.

    2011-01-01

    Java Virtual Machine (JVM) education has become essential in training embedded software engineers as well as virtual machine researchers and practitioners. However, due to the lack of suitable instructional tools, it is difficult for students to obtain any kind of hands-on experience and to attain any deep understanding of JVM design. To address…

  12. CFCC: A Covert Flows Confinement Mechanism for Virtual Machine Coalitions

    NASA Astrophysics Data System (ADS)

    Cheng, Ge; Jin, Hai; Zou, Deqing; Shi, Lei; Ohoussou, Alex K.

    Normally, virtualization technology is adopted to construct the infrastructure of a cloud computing environment. Resources are managed and organized dynamically through virtual machine (VM) coalitions in accordance with the requirements of applications. Enforcing mandatory access control (MAC) on the VM coalitions will greatly improve the security of VM-based cloud computing. However, existing MAC models lack a mechanism to confine covert flows and can hardly eliminate covert channels. In this paper, we propose a covert flows confinement mechanism for virtual machine coalitions (CFCC), which introduces dynamic conflicts of interest based on the activity history of VMs, each of which is attached with a label. The proposed mechanism can be used to confine covert flows between VMs in different coalitions. We implement a prototype system, evaluate its performance, and show that our mechanism is practical.
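
    Dynamic conflicts of interest driven by each labeled VM's activity history resemble a Chinese-Wall-style policy: a flow is denied when the source VM has already touched a coalition that conflicts with the destination's. A toy enforcement sketch (labels, coalitions, and the conflict relation are invented for illustration):

```python
# Conflict-of-interest classes: coalitions whose data must never mix.
CONFLICTS = {frozenset({"coalition_a", "coalition_b"})}

history: dict[str, set[str]] = {}   # VM label -> coalitions it has touched

def allow_flow(src_vm: str, dst_coalition: str) -> bool:
    """Permit a flow only if none of the coalitions in the source VM's
    history conflicts with the destination coalition."""
    for seen in history.get(src_vm, set()):
        if frozenset({seen, dst_coalition}) in CONFLICTS:
            return False
    history.setdefault(src_vm, set()).add(dst_coalition)
    return True

print(allow_flow("vm1", "coalition_a"))   # True: first contact
print(allow_flow("vm1", "coalition_b"))   # False: conflicts with history
print(allow_flow("vm1", "coalition_c"))   # True: no conflict defined
```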

  13. Impact of Machine Virtualization on Timing Precision for Performance-critical Tasks

    NASA Astrophysics Data System (ADS)

    Karpov, Kirill; Fedotova, Irina; Siemens, Eduard

    2017-07-01

    In this paper we present a measurement study characterizing the impact of hardware virtualization on basic software timing as well as on precise sleep operations of an operating system. We investigated how timer hardware is shared among heavily CPU-, I/O- and network-bound tasks on a virtual machine as well as on the host machine. VMware ESXi and QEMU/KVM were chosen as commonly used examples of the hypervisor- and host-based models. Based on statistical parameters of the retrieved distributions, our results provide a very good estimation of timing behavior. Such estimates are essential for real-time and performance-critical applications such as image processing or real-time control.
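
    The quantity being characterized, how far a requested sleep overshoots under virtualization and load, is straightforward to probe empirically. A minimal sketch of such a measurement (run it on the host and inside a guest to compare the distributions):

```python
import statistics
import time

def sleep_overshoot(requested_s=0.001, samples=1000):
    """Measure how far time.sleep() overshoots the requested interval."""
    overshoots = []
    for _ in range(samples):
        t0 = time.perf_counter()
        time.sleep(requested_s)
        overshoots.append(time.perf_counter() - t0 - requested_s)
    return overshoots

o = sleep_overshoot()
print(f"median overshoot: {statistics.median(o) * 1e6:8.1f} us")
print(f"p99 overshoot:    {sorted(o)[int(0.99 * len(o))] * 1e6:8.1f} us")
```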

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    V.T. Krivoshein; A.V. Makarov

    The sequence of pushing coke ovens is one of the most important aspects of battery operation. The sequence must satisfy a number of technical and process conditions: (1) achieve maximum heating-wall life by avoiding destructive expansion pressure in freshly charged ovens and during pushing of the finished coke; (2) ensure uniform brickwork temperature and prevent overheating by compensating for the high thermal flux in freshly charged ovens due to accumulated heat in adjacent ovens that are in the second half of the coking cycle; (3) ensure the most favorable working conditions and safety for operating personnel; (4) provide additional opportunities for repair personnel to perform various types of work, such as replacing coke-machine rails, without interrupting coal production; (5) perform the maximum number of coke-machine operations simultaneously: pushing, charging, and cleaning doors, frames, and standpipe elbows; and (6) reduce electricity consumption by minimizing idle travel of coke machines.

  15. VAX CLuster upgrade: Report of a CPC task force

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanson, J.; Berry, H.; Kessler, P.

    The CSCF VAX cluster provides interactive computing for 100 users during prime time, plus a considerable amount of daytime and overnight batch processing. While this cluster represents less than 10% of the VAX computing power at BNL (6 MIPS out of 70), it has served as an important center for this larger network, supporting special hardware and software too expensive to maintain on every machine. In addition, it is the only unrestricted facility available to VAX/VMS users (other machines are typically dedicated to special projects). This committee's analysis shows that the CPUs on the CSCF cluster are currently badly oversaturated, frequently giving extremely poor interactive response. Short batch jobs (a necessary part of interactive work) typically take 3 to 4 times as long to execute as they would on an idle machine. There is also an immediate need for more scratch disk space and user permanent file space.

  16. Software architecture standard for simulation virtual machine, version 2.0

    NASA Technical Reports Server (NTRS)

    Sturtevant, Robert; Wessale, William

    1994-01-01

    The Simulation Virtual Machine (SVM) is an Ada architecture which eases the effort involved in real-time software maintenance and sustaining engineering. The Software Architecture Standard defines the infrastructure from which all the simulation models are built. SVM was developed for and used in the Space Station Verification and Training Facility.

  17. Cloud services for the Fermilab scientific stakeholders

    DOE PAGES

    Timm, S.; Garzoglio, G.; Mhashilkar, P.; ...

    2015-12-23

    As part of the Fermilab/KISTI cooperative research project, Fermilab has successfully run an experimental simulation workflow at scale on a federation of Amazon Web Services (AWS), FermiCloud, and local FermiGrid resources. We used the CernVM-FS (CVMFS) file system to deliver the application software. We established Squid caching servers in AWS as well, using the Shoal system to let each individual virtual machine find the closest squid server. We also developed an automatic virtual machine conversion system so that we could transition virtual machines made on FermiCloud to Amazon Web Services. We used this system to successfully run a cosmic ray simulation of the NOvA detector at Fermilab, making use of both AWS spot pricing and network bandwidth discounts to minimize the cost. On FermiCloud we were also able to run the workflow at the scale of 1000 virtual machines, using a private network routable inside of Fermilab. We present in detail the technological improvements that were used to make this work a reality.

  18. Cloud services for the Fermilab scientific stakeholders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timm, S.; Garzoglio, G.; Mhashilkar, P.

    As part of the Fermilab/KISTI cooperative research project, Fermilab has successfully run an experimental simulation workflow at scale on a federation of Amazon Web Services (AWS), FermiCloud, and local FermiGrid resources. We used the CernVM-FS (CVMFS) file system to deliver the application software. We established Squid caching servers in AWS as well, using the Shoal system to let each individual virtual machine find the closest squid server. We also developed an automatic virtual machine conversion system so that we could transition virtual machines made on FermiCloud to Amazon Web Services. We used this system to successfully run a cosmic ray simulation of the NOvA detector at Fermilab, making use of both AWS spot pricing and network bandwidth discounts to minimize the cost. On FermiCloud we were also able to run the workflow at the scale of 1000 virtual machines, using a private network routable inside of Fermilab. We present in detail the technological improvements that were used to make this work a reality.

  19. Runtime Performance and Virtual Network Control Alternatives in VM-Based High-Fidelity Network Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B; Perumalla, Kalyan S; Henz, Brian J

    2012-01-01

    In prior work (Yoginath and Perumalla, 2011; Yoginath, Perumalla and Henz, 2012), the motivation, challenges and issues were articulated in favor of virtual time ordering of Virtual Machines (VMs) in network simulations hosted on multi-core machines. Two major components in the overall virtualization challenge are (1) virtual timeline establishment and scheduling of VMs, and (2) virtualization of inter-VM communication. Here, we extend prior work by presenting scaling results for the first component, with experiment results on up to 128 VMs scheduled in virtual time order on a single 12-core host. We also explore the solution space of design alternatives for the second component, and present performance results from a multi-threaded, multi-queue implementation of inter-VM network control for synchronized execution with VM scheduling, incorporated in our NetWarp simulation system.
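
    Virtual-time ordering amounts to always dispatching the VM whose virtual clock is furthest behind, which is naturally expressed with a priority queue. A toy sketch of that ordering discipline (the NetWarp scheduler is far more involved; the quantum and horizon here are invented):

```python
import heapq

def schedule(vm_names, quantum=1.0, horizon=5.0):
    """Always dispatch the VM with the smallest virtual clock, so no VM
    races ahead of the others in simulated time."""
    queue = [(0.0, name) for name in vm_names]   # (virtual time, VM)
    heapq.heapify(queue)
    while queue:
        vtime, name = heapq.heappop(queue)
        print(f"t={vtime:4.1f}  run {name} for one quantum")
        if vtime + quantum < horizon:
            heapq.heappush(queue, (vtime + quantum, name))

schedule(["vm0", "vm1", "vm2"])
```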

  20. minimega

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David Fritz, John Floren

    2013-08-27

    Minimega is a simple emulytics platform for creating testbeds of networked devices. The platform consists of easily deployable tools to facilitate bringing up large networks of virtual machines including Windows, Linux, and Android. Minimega attempts to allow experiments to be brought up quickly with nearly no configuration. Minimega also includes tools for simple cluster management, as well as tools for creating Linux based virtual machine images.

  1. Job Management and Task Bundling

    NASA Astrophysics Data System (ADS)

    Berkowitz, Evan; Jansen, Gustav R.; McElvain, Kenneth; Walker-Loud, André

    2018-03-01

    High Performance Computing is often performed on scarce and shared computing resources. To ensure computers are used to their full capacity, administrators often incentivize large workloads that are not possible on smaller systems. Measurements in Lattice QCD frequently do not scale to machine-size workloads. By bundling tasks together we can create large jobs suitable for gigantic partitions. We discuss METAQ and mpi_jm, software developed to dynamically group computational tasks together, that can intelligently backfill to consume idle time without substantial changes to users' current workflows or executables.
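
    Bundling tasks into a machine-size job is essentially bin packing against the allocation's node count and remaining wall time. A greedy sketch of the backfilling idea (METAQ and mpi_jm are more sophisticated; the task tuples are invented):

```python
# Each task: (name, nodes_needed, runtime_minutes)
tasks = [("measA", 16, 30), ("measB", 8, 55), ("measC", 8, 25), ("measD", 4, 50)]

def backfill(tasks, total_nodes=32, wall_minutes=60):
    """Greedily launch any queued task that fits in the idle nodes and
    in the time remaining (a single launch wave; nothing is freed)."""
    idle, launched = total_nodes, []
    for name, nodes, runtime in sorted(tasks, key=lambda t: -t[1]):
        if nodes <= idle and runtime <= wall_minutes:
            launched.append(name)
            idle -= nodes
    return launched, idle

launched, idle = backfill(tasks)
print(f"bundled: {launched}; idle nodes left: {idle}")
```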

  2. Web Service Distributed Management Framework for Autonomic Server Virtualization

    NASA Astrophysics Data System (ADS)

    Solomon, Bogdan; Ionescu, Dan; Litoiu, Marin; Mihaescu, Mircea

    Virtualization for the x86 platform has recently imposed itself as a new technology that can improve the usage of machines in data centers and decrease the cost and energy of running a high number of servers. Similar to virtualization, autonomic computing, and more specifically self-optimization, aims to improve server farm usage through provisioning and deprovisioning of instances as needed by the system. Autonomic systems are able to determine the optimal number of server machines - real or virtual - to use at a given time, and to add or remove servers from a cluster in order to achieve optimal usage. While provisioning and deprovisioning of servers is very important, the way the autonomic system is built also matters, as a robust and open framework is needed. One such management framework is Web Service Distributed Management (WSDM), an open standard of the Organization for the Advancement of Structured Information Standards (OASIS). This paper presents an open framework built on top of the WSDM specification, which aims to provide self-optimization for application servers residing on virtual machines.
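
    The self-optimization loop, measuring load and provisioning or deprovisioning instances toward an optimal count, can be sketched independently of the WSDM plumbing. The capacity figures and control step below are illustrative assumptions:

```python
import math

def target_instances(current_load, capacity_per_vm=100.0, headroom=0.25):
    """Optimal VM count: enough capacity for current load plus headroom."""
    return max(1, math.ceil(current_load * (1 + headroom) / capacity_per_vm))

def reconcile(running, current_load):
    """One control-loop step: how many VMs to add (+) or remove (-)."""
    return target_instances(current_load) - running

# Example: 3 VMs running, load spikes to 420 requests/s.
print(reconcile(running=3, current_load=420.0))   # -> 3 more VMs to provision
```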

  3. Virtual Machine Language Controls Remote Devices

    NASA Technical Reports Server (NTRS)

    2014-01-01

    Kennedy Space Center worked with Blue Sun Enterprises, based in Boulder, Colorado, to enhance the company's virtual machine language (VML) to control the instruments on the Regolith and Environment Science and Oxygen and Lunar Volatiles Extraction mission. Now the NASA-improved VML is available for crewed and uncrewed spacecraft, and has potential applications on remote systems such as weather balloons, unmanned aerial vehicles, and submarines.

  4. A Virtual Astronomical Research Machine in No Time (VARMiNT)

    NASA Astrophysics Data System (ADS)

    Beaver, John

    2012-05-01

    We present early results of using virtual machine software to help make astronomical research computing accessible to a wider range of individuals. Our Virtual Astronomical Research Machine in No Time (VARMiNT) is an Ubuntu Linux virtual machine with free, open-source software already installed and configured (and in many cases documented). The purpose of VARMiNT is to provide a ready-to-go astronomical research computing environment that can be freely shared between researchers, or between amateur and professional, teacher and student, etc., and to circumvent the often-difficult task of configuring a suitable computing environment from scratch. Thus we hope that VARMiNT will make it easier for individuals to engage in research computing even if they have no ready access to the facilities of a research institution. We describe our current version of VARMiNT and some of the ways it is being used at the University of Wisconsin - Fox Valley, a two-year teaching campus of the University of Wisconsin System, as a means to enhance student independent study research projects and to facilitate collaborations with researchers at other locations. We also outline some future plans and prospects.

  5. Human-machine interface for a VR-based medical imaging environment

    NASA Astrophysics Data System (ADS)

    Krapichler, Christian; Haubner, Michael; Loesch, Andreas; Lang, Manfred K.; Englmeier, Karl-Hans

    1997-05-01

    Modern 3D scanning techniques like magnetic resonance imaging (MRI) or computed tomography (CT) produce high-quality images of the human anatomy. Virtual environments open new ways to display and to analyze those tomograms. Compared with today's inspection of 2D image sequences, physicians are empowered to recognize spatial coherencies and examine pathological regions more easily, and diagnosis and therapy planning can be accelerated. For that purpose a powerful human-machine interface is required, which offers a variety of tools and features to enable both exploration and manipulation of the 3D data. Man-machine communication has to be intuitive and efficacious to avoid long familiarization times and to enhance familiarity with and acceptance of the interface. Hence, interaction capabilities in virtual worlds should be comparable to those in the real world, to allow utilization of our natural experiences. In this paper the integration of hand gestures and visual focus, two important aspects in modern human-computer interaction, into a medical imaging environment is shown. With the presented human-machine interface, including virtual reality displaying and interaction techniques, radiologists can be supported in their work. Further, virtual environments can even ease communication between specialists from different fields, or in educational and training applications.

  6. Open multi-agent control architecture to support virtual-reality-based man-machine interfaces

    NASA Astrophysics Data System (ADS)

    Freund, Eckhard; Rossmann, Juergen; Brasch, Marcel

    2001-10-01

    Projective Virtual Reality is a new and promising approach to intuitively operable man-machine interfaces for the commanding and supervision of complex automation systems. The user interface part of Projective Virtual Reality builds heavily on the latest Virtual Reality techniques, a task deduction component and automatic action planning capabilities. In order to realize man-machine interfaces for complex applications, not only must the Virtual Reality part be considered; the capabilities of the underlying robot and automation controller are also of great importance. This paper presents a control architecture that has proved to be an ideal basis for the realization of complex robotic and automation systems that are controlled by Virtual Reality based man-machine interfaces. The architecture does not just provide a well suited framework for the real-time control of a multi-robot system but also supports Virtual Reality metaphors and augmentations which ease the user's job of commanding and supervising a complex system. The developed control architecture has already been used for a number of applications. Its capability to integrate, in real time, information from sensors of different levels of abstraction helps to make the realized automation system very responsive to real-world changes. In this paper, the architecture is described comprehensively, its main building blocks are discussed, and one realization, built on an open source real-time operating system, is presented. The software design and the features of the architecture which make it generally applicable to the distributed control of automation agents in real-world applications are explained. Furthermore, its application to the commanding and control of experiments in the Columbus space laboratory, the European contribution to the International Space Station (ISS), is described as one example.

  7. Pyglidein - A Simple HTCondor Glidein Service

    NASA Astrophysics Data System (ADS)

    Schultz, D.; Riedel, B.; Merino, G.

    2017-10-01

    A major challenge for data processing and analysis at the IceCube Neutrino Observatory presents itself in connecting a large set of individual clusters together to form a computing grid. Most of these clusters do not provide a “standard” grid interface. Using a local account on each submit machine, HTCondor glideins can be submitted to virtually any type of scheduler. The glideins then connect back to a main HTCondor pool, where jobs can run normally with no special syntax. To respond to dynamic load, a simple server advertises the number of idle jobs in the queue and the resources they request. The submit script can query this server to optimize glideins to what is needed, or not submit if there is no demand. Configuring HTCondor dynamic slots in the glideins allows us to efficiently handle varying memory requirements as well as whole-node jobs. One step of the IceCube simulation chain, photon propagation in the ice, heavily relies on GPUs for faster execution. Therefore, one important requirement for any workload management system in IceCube is to handle GPU resources properly. Within the pyglidein system, we have successfully configured HTCondor glideins to use any GPU allocated to it, with jobs using the standard HTCondor GPU syntax to request and use a GPU. This mechanism allows us to seamlessly integrate our local GPU cluster with remote non-Grid GPU clusters, including specially allocated resources at XSEDE supercomputers.
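
    The submit-side logic is intentionally small: query the server that advertises idle jobs and their resource requests, then submit only as many glideins as demand warrants. A hedged sketch (the URL and JSON schema are invented; pyglidein's real client differs):

```python
import json
import urllib.request

def fetch_demand(url="http://pool.example.org/queue_status"):
    """Ask the advertising server how many idle jobs of each resource
    shape are waiting (hypothetical endpoint and JSON schema)."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)   # e.g. [{"idle": 40, "gpus": 1, "memory": 8000}]

def glideins_to_submit(demand, max_batch=10):
    """Submit nothing when there is no demand; otherwise cap the batch."""
    return [min(entry["idle"], max_batch) for entry in demand if entry["idle"] > 0]

# demand = fetch_demand()
# for entry, n in zip(demand, glideins_to_submit(demand)):
#     submit_glidein(count=n, request=entry)   # submit_glidein: hypothetical
```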

  8. Staghorn: An Automated Large-Scale Distributed System Analysis Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gabert, Kasimir; Burns, Ian; Elliott, Steven

    2016-09-01

    Conducting experiments on large-scale distributed computing systems is becoming significantly easier with the assistance of emulation. Researchers can now create a model of a distributed computing environment and then generate a virtual, laboratory copy of the entire system composed of potentially thousands of virtual machines, switches, and software. The use of real software, running at clock rate in full virtual machines, allows experiments to produce meaningful results without necessitating a full understanding of all model components. However, the ability to inspect and modify elements within these models is bound by the limitation that such modifications must compete with the model, either running in or alongside it. This inhibits entire classes of analyses from being conducted upon these models. We developed a mechanism to snapshot an entire emulation-based model as it is running. This allows us to "freeze time" and subsequently fork execution, replay execution, modify arbitrary parts of the model, or deeply explore the model. This snapshot includes capturing packets in transit and other input/output state along with the running virtual machines. We were able to build this system in Linux using Open vSwitch and Kernel Virtual Machines on top of Sandia's emulation platform Firewheel. This primitive opens the door to numerous subsequent analyses on models, including state space exploration, debugging distributed systems, performance optimizations, improved training environments, and improved experiment repeatability.

  9. The Implications of Virtual Machine Introspection for Digital Forensics on Nonquiescent Virtual Machines

    DTIC Science & Technology

    2011-06-01

    (Extraction residue: the excerpt here is a raw guest-OS process listing, repeated several times, recovered via virtual machine introspection; e.g. gnome-keyring-d, gnome-settings-, xrdb, metacity, gnome-panel, nautilus, bonobo-activati, gnome-vfs-daemo, eggcups, gnome-volume-ma.)

  10. Slot Machines: Pursuing Responsible Gaming Practices for Virtual Reels and Near Misses

    ERIC Educational Resources Information Center

    Harrigan, Kevin A.

    2009-01-01

    Since 1983, slot machines in North America have used a computer and virtual reels to determine the odds. Since at least 1988, a technique called clustering has been used to create a high number of near misses, failures that are close to wins. The result is that what the player sees does not represent the underlying probabilities and randomness,…
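
    The mechanism is concrete: each physical stop is reached by a different number of virtual reel stops, and clustering assigns extra virtual stops to positions adjacent to the jackpot, inflating near misses. A toy simulation under invented weights, showing how the displayed outcomes diverge from the underlying probabilities:

```python
import random

# Physical reel: position 0 is the jackpot symbol.
# Virtual-stop weights: positions adjacent to the jackpot (1 and 21)
# get many more virtual stops than other positions ("clustering").
weights = {0: 2, 1: 12, 21: 12}
virtual_map = [pos for pos in range(22) for _ in range(weights.get(pos, 3))]

def spin():
    return random.choice(virtual_map)   # uniform over virtual stops

n = 100_000
spins = [spin() for _ in range(n)]
print("jackpot rate:  ", spins.count(0) / n)                    # ~2.4%
print("near-miss rate:", (spins.count(1) + spins.count(21)) / n)  # ~29%
```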

  11. Virtual screening by a new Clustering-based Weighted Similarity Extreme Learning Machine approach

    PubMed Central

    Kudisthalert, Wasu

    2018-01-01

    Machine learning techniques are becoming popular in virtual screening tasks. One powerful machine learning algorithm is the Extreme Learning Machine (ELM), which has been applied to many applications and has recently been applied to virtual screening. We propose the Weighted Similarity ELM (WS-ELM), which is based on a single layer feed-forward neural network in conjunction with 16 different similarity coefficients as the activation function in the hidden layer. It is known that the performance of a conventional ELM is not robust due to random weight selection in the hidden layer. Thus, we propose a Clustering-based WS-ELM (CWS-ELM) that deterministically assigns weights by utilising clustering algorithms, i.e. k-means clustering and support vector clustering. The experiments were conducted on one of the most challenging datasets, the Maximum Unbiased Validation Dataset, which contains 17 activity classes carefully selected from PubChem. The proposed algorithms were then compared with other machine learning techniques such as support vector machines, random forests, and similarity searching. The results show that CWS-ELM in conjunction with support vector clustering yields the best performance when utilised together with the Sokal/Sneath(1) coefficient. Furthermore, the ECFP_6 fingerprint presents the best results in our framework compared to the other types of fingerprints, namely ECFP_4, FCFP_4, and FCFP_6. PMID:29652912
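
    An ELM is a single-hidden-layer network whose hidden weights are random and fixed, so only the output weights are solved, by least squares. A minimal numpy sketch of a plain ELM (WS-ELM additionally uses similarity coefficients against training fingerprints as the activation; that refinement is omitted here):

```python
import numpy as np

rng = np.random.default_rng(1)

def elm_train(X, y, n_hidden=50):
    """Random, fixed hidden layer + least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random hidden weights
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden activations
    beta = np.linalg.pinv(H) @ y                  # solve output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy screen: 200 compounds, 64-bit fingerprints, 1 = active.
X = rng.integers(0, 2, (200, 64)).astype(float)
y = (X[:, :8].sum(axis=1) > 4).astype(float)      # synthetic activity rule
model = elm_train(X[:150], y[:150])
pred = elm_predict(model, X[150:]) > 0.5
print("holdout accuracy:", (pred == y[150:].astype(bool)).mean())
```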

  12. Conservation of the abscission signaling peptide IDA during Angiosperm evolution: withstanding genome duplications and gain and loss of the receptors HAE/HSL2

    PubMed Central

    Stø, Ida M.; Orr, Russell J. S.; Fooyontphanich, Kim; Jin, Xu; Knutsen, Jonfinn M. B.; Fischer, Urs; Tranbarger, Timothy J.; Nordal, Inger; Aalen, Reidunn B.

    2015-01-01

    The peptide INFLORESCENCE DEFICIENT IN ABSCISSION (IDA), which signals through the leucine-rich repeat receptor-like kinases HAESA (HAE) and HAESA-LIKE2 (HSL2), controls different cell separation events in Arabidopsis thaliana. We hypothesize the involvement of this signaling module in abscission processes in other plant species, even though they may shed other organs than A. thaliana. As the first step toward testing this hypothesis from an evolutionary perspective, we have identified genes encoding putative orthologs of IDA and its receptors by BLAST searches of publicly available protein, nucleotide and genome databases for angiosperms. Genes encoding IDA or IDA-LIKE (IDL) peptides and HSL proteins were found in all investigated species, which were selected to represent each angiosperm order with available genomic sequences. The 12 amino acids representing the bioactive peptide in A. thaliana have remained virtually unchanged throughout the evolution of the angiosperms; however, the number of IDL and HSL genes varies between different orders and species. The phylogenetic analyses suggest that IDA, HSL2, and the related HSL1 gene were present in the species that gave rise to the angiosperms. HAE has arisen from HSL1 after a genome duplication that took place after the monocot-eudicot split. HSL1 has also independently been duplicated in the monocots, while HSL2 has been lost in gingers (Zingiberales) and grasses (Poales). IDA has been duplicated in eudicots to give rise to functionally divergent IDL peptides. We postulate that the high number of IDL homologs present in the core eudicots is a result of multiple whole genome duplications (WGD). We substantiate the involvement of IDA and HAE/HSL2 homologs in abscission by providing gene expression data on different organ separation events from various species. PMID:26579174

  13. sRNAtoolboxVM: Small RNA Analysis in a Virtual Machine.

    PubMed

    Gómez-Martín, Cristina; Lebrón, Ricardo; Rueda, Antonio; Oliver, José L; Hackenberg, Michael

    2017-01-01

    High-throughput sequencing (HTS) data for small RNAs (noncoding RNA molecules that are 20-250 nucleotides in length) can now be routinely generated by minimally equipped wet laboratories; however, the bottleneck in HTS-based research has now shifted to the analysis of such huge amounts of data. One of the reasons is that many analysis types require a Linux environment, but computers, system administrators, and bioinformaticians entail additional costs that often cannot be afforded by small to mid-sized groups or laboratories. Web servers are an alternative that can be used if the data is not subject to privacy issues (which is very often an important concern with medical data). However, in any case they are less flexible than stand-alone programs, limiting the number of workflows and analysis types that can be carried out. We show in this protocol how virtual machines can be used to overcome those problems and limitations. sRNAtoolboxVM is a virtual machine that can be executed on all common operating systems through virtualization programs like VirtualBox or VMware, providing the user with a high number of preinstalled programs like sRNAbench for small RNA analysis, without the need to maintain additional servers and/or operating systems.

  14. Performance Acceleration on Production Machines Using the Overall Equipment Effectiveness (OEE) Approach

    NASA Astrophysics Data System (ADS)

    Mansur, A.; Rayendra, R.; Mastur, MI

    2016-01-01

    Mistakes during operation can trigger a decrease in production levels and may lead to financial loss for the company. The factors that cause such losses include breakdown loss, set-up/adjustment loss, idling and minor stoppage loss, reduced speed loss, reduced yield loss, and rework loss. The objective of this research is to accelerate the performance of the JSW 330T machine at PT. YogyaPresisiTehnikatamaIndustri; the JSW 330T is the machine with the highest downtime. Effectiveness is measured using the Overall Equipment Effectiveness (OEE) approach. The results show that the JSW 330T has an average effectiveness (OEE) of 52.66%, an availability ratio of 73.43%, a performance efficiency rate of 83.58%, and a quality rate of 84.6%. From the six big losses calculation, the factor contributing most to the low OEE score is breakdown loss, at 58.85%, with a total time loss of 929.65 hours in a year.
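
    OEE is the product of the three reported ratios: OEE = availability × performance × quality. A quick check against the figures above (the small gap from the quoted 52.66% average presumably reflects averaging OEE per period rather than multiplying period-averaged ratios):

```python
availability = 0.7343   # availability ratio
performance  = 0.8358   # performance efficiency rate
quality      = 0.8460   # quality rate

oee = availability * performance * quality
print(f"OEE = {oee:.2%}")   # ~51.92%, vs. the reported average of 52.66%
```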

  15. An Analysis of Hardware-Assisted Virtual Machine Based Rootkits

    DTIC Science & Technology

    2014-06-01

    (Fragments recovered from the report documentation page:) … certain aspects of TPM implementation, just to name a few. HyperWall is an architecture proposed by Szefer and Lee to protect guest VMs from … The use of virtual machine (VM) technology has expanded rapidly since AMD and Intel implemented … Intel VT-x implementations of Blue Pill to identify commonalities in the respective versions' attack methodologies from both a functional and technical …

  16. Active tactile exploration using a brain-machine-brain interface.

    PubMed

    O'Doherty, Joseph E; Lebedev, Mikhail A; Ifft, Peter J; Zhuang, Katie Z; Shokur, Solaiman; Bleuler, Hannes; Nicolelis, Miguel A L

    2011-10-05

    Brain-machine interfaces use neuronal activity recorded from the brain to establish direct communication with external actuators, such as prosthetic arms. It is hoped that brain-machine interfaces can be used to restore the normal sensorimotor functions of the limbs, but so far they have lacked tactile sensation. Here we report the operation of a brain-machine-brain interface (BMBI) that both controls the exploratory reaching movements of an actuator and allows signalling of artificial tactile feedback through intracortical microstimulation (ICMS) of the primary somatosensory cortex. Monkeys performed an active exploration task in which an actuator (a computer cursor or a virtual-reality arm) was moved using a BMBI that derived motor commands from neuronal ensemble activity recorded in the primary motor cortex. ICMS feedback occurred whenever the actuator touched virtual objects. Temporal patterns of ICMS encoded the artificial tactile properties of each object. Neuronal recordings and ICMS epochs were temporally multiplexed to avoid interference. Two monkeys operated this BMBI to search for and distinguish one of three visually identical objects, using the virtual-reality arm to identify the unique artificial texture associated with each. These results suggest that clinical motor neuroprostheses might benefit from the addition of ICMS feedback to generate artificial somatic perceptions associated with mechanical, robotic or even virtual prostheses.

  17. Will Anything Useful Come Out of Virtual Reality? Examination of a Naval Application

    DTIC Science & Technology

    1993-05-01

    (Fragments recovered from the report documentation page:) The term virtual reality can encompass varying meanings, but some generally accepted attributes of a virtual environment are that it is immersive … technology, but at present there are few practical applications which utilize the broad range of virtual reality technology. This paper will discuss a … Subject terms: operability, operator functions, virtual reality, man-machine interface, decision aids/decision making, decision support, ASW.

  18. Migrating EO/IR sensors to cloud-based infrastructure as service architectures

    NASA Astrophysics Data System (ADS)

    Berglie, Stephen T.; Webster, Steven; May, Christopher M.

    2014-06-01

    The Night Vision Image Generator (NVIG), a product of US Army RDECOM CERDEC NVESD, is a visualization tool used widely throughout Army simulation environments to provide fully attributed synthesized, full motion video using physics-based sensor and environmental effects. The NVIG relies heavily on contemporary hardware-based acceleration and GPU processing techniques, which push the envelope of both enterprise and commodity-level hypervisor support for providing virtual machines with direct access to hardware resources. The NVIG has successfully been integrated into fully virtual environments where system architectures leverage cloudbased technologies to various extents in order to streamline infrastructure and service management. This paper details the challenges presented to engineers seeking to migrate GPU-bound processes, such as the NVIG, to virtual machines and, ultimately, Cloud-Based IAS architectures. In addition, it presents the path that led to success for the NVIG. A brief overview of Cloud-Based infrastructure management tool sets is provided, and several virtual desktop solutions are outlined. A discrimination is made between general purpose virtual desktop technologies compared to technologies that expose GPU-specific capabilities, including direct rendering and hard ware-based video encoding. Candidate hypervisor/virtual machine configurations that nominally satisfy the virtualized hardware-level GPU requirements of the NVIG are presented , and each is subsequently reviewed in light of its implications on higher-level Cloud management techniques. Implementation details are included from the hardware level, through the operating system, to the 3D graphics APls required by the NVIG and similar GPU-bound tools.

  19. Conceptual Design of a Basic Production Facility for the XM587E2/XM724 Electronic Time Fuzes

    DTIC Science & Technology

    1977-11-01

    (Fragments of assembly-process text recovered from the report:) … placed blue side up, and then staked. The spring pin is pressed into position and probed for the 1.644-0.010-inch dimension. See figure 33. 4.6.7.2 Parts … fitting subassembly. The detonator … (Station-sequence figure residue: IDLE, HOPPER FEED, PROBE, STAKE SPRING PIN, PROBE PRESENCE, STAKE LEAD, PROBE LEAD, ASSEMBLY, BLUE SIDE UP.) Safety provisions include automatic shutoffs; warning lights/alarms/signs/decals where necessary; electrical grounding of the machine; and noise levels below 85 decibels at …

  20. The Impact of Mobile Multimedia Applications on Data Center Consolidation

    DTIC Science & Technology

    2012-10-01

    (Fragments recovered from the report; the flattened table is Figure 3, "Dell Netbook Device Used in Experiments":)

        Application   Condition 1   Condition 2   Condition 3
        SPEECH        0.057 s       1.04 s        4.08 s
        FACE          0.30 s        3.92 s        N/A

    … and disk capacity are secondary. Our experiments use a Dell Latitude 2102 as the mobile device. This small netbook machine is more powerful than a … device incurs the highest power dissipation. Note that the netbook platform has a high baseline idle power dissipation (around 10 W) …

  1. A Virtual Sensor for Online Fault Detection of Multitooth-Tools

    PubMed Central

    Bustillo, Andres; Correa, Maritza; Reñones, Anibal

    2011-01-01

    The installation of suitable sensors close to the tool tip on milling centres is not possible in industrial environments. It is therefore necessary to design virtual sensors for these machines to perform online fault detection in many industrial tasks. This paper presents a virtual sensor for online fault detection of multitooth tools based on a Bayesian classifier. The device that performs this task applies mathematical models that function in conjunction with physical sensors. Only two experimental variables are collected from the milling centre that performs the machining operations: the electrical power consumption of the feed drive and the time required for machining each workpiece. The task of achieving reliable signals from a milling process is especially complex when multitooth tools are used, because each kind of cutting insert in the milling centre only works on each workpiece during a certain time window. Great effort has gone into designing a robust virtual sensor that can avoid re-calibration due to, e.g., maintenance operations. The virtual sensor developed as a result of this research is successfully validated under real conditions on a milling centre used for the mass production of automobile engine crankshafts. Recognition accuracy, calculated with a k-fold cross validation, had on average 0.957 of true positives and 0.986 of true negatives. Moreover, measured accuracy was 98%, which suggests that the virtual sensor correctly identifies new cases. PMID:22163766
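
    The virtual sensor classifies tool condition from just two signals, feed-drive power consumption and per-workpiece machining time. A toy Gaussian naive Bayes sketch of that idea on synthetic data (the paper's Bayesian classifier and validation are considerably richer):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(2)

# Columns: feed-drive power (kW), machining time per workpiece (s).
ok     = rng.normal([5.0, 60.0], [0.3, 2.0], (300, 2))
broken = rng.normal([5.8, 66.0], [0.5, 3.0], (300, 2))   # worn/broken insert
X = np.vstack([ok, broken])
y = np.array([0] * 300 + [1] * 300)

clf = GaussianNB().fit(X[::2], y[::2])        # train on every other workpiece
print("test accuracy:", clf.score(X[1::2], y[1::2]))
```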

  2. Modeling and Analysis Compute Environments, Utilizing Virtualization Technology in the Climate and Earth Systems Science domain

    NASA Astrophysics Data System (ADS)

    Michaelis, A.; Nemani, R. R.; Wang, W.; Votava, P.; Hashimoto, H.

    2010-12-01

    Given the increasing complexity of climate modeling and analysis tools, it is often difficult and expensive to build or recreate an exact replica of the software compute environment used in past experiments. With the recent development of new technologies for hardware virtualization, an opportunity exists to create full modeling, analysis and compute environments that are archivable, transferable and may be easily shared amongst a scientific community or presented to a bureaucratic body if the need arises. By encapsulating an entire modeling and analysis environment in a virtual machine image, others may quickly gain access to the fully built system used in past experiments, potentially easing the task and reducing the costs of reproducing and verifying past results produced by other researchers. Moreover, these virtual machine images may be used as a pedagogical tool for others who are interested in performing an academic exercise but don't yet possess the broad expertise required. We built two virtual machine images, one with the Community Earth System Model (CESM) and one with the Weather Research and Forecasting (WRF) model, then ran several small experiments to assess the feasibility, performance overhead costs, reusability, and transferability. We present a list of the pros and cons as well as lessons learned from utilizing virtualization technology in the climate and earth systems modeling domain.

  3. A virtual sensor for online fault detection of multitooth-tools.

    PubMed

    Bustillo, Andres; Correa, Maritza; Reñones, Anibal

    2011-01-01

    The installation of suitable sensors close to the tool tip on milling centres is not possible in industrial environments. It is therefore necessary to design virtual sensors for these machines to perform online fault detection in many industrial tasks. This paper presents a virtual sensor for online fault detection of multitooth tools based on a Bayesian classifier. The device that performs this task applies mathematical models that function in conjunction with physical sensors. Only two experimental variables are collected from the milling centre that performs the machining operations: the electrical power consumption of the feed drive and the time required for machining each workpiece. The task of achieving reliable signals from a milling process is especially complex when multitooth tools are used, because each kind of cutting insert in the milling centre only works on each workpiece during a certain time window. Great effort has gone into designing a robust virtual sensor that can avoid re-calibration due to, e.g., maintenance operations. The virtual sensor developed as a result of this research is successfully validated under real conditions on a milling centre used for the mass production of automobile engine crankshafts. Recognition accuracy, calculated with a k-fold cross validation, had on average 0.957 of true positives and 0.986 of true negatives. Moreover, measured accuracy was 98%, which suggests that the virtual sensor correctly identifies new cases.

  4. Alternative Fuels Data Center: Idle Reduction Related Links

    Science.gov Websites

    (Interactive-page residue from a links listing of idle-reduction technology providers; recoverable fragments:) … keeps the cab comfortable and the windshield free of snow and ice for hours without idling. Bergstrom, Inc. … more than 12 hours of idle-free temperature control, while also providing fuel savings … Idle Free Systems, Inc. is a provider of year-round idle elimination …

  5. A Comprehensive Availability Modeling and Analysis of a Virtualized Servers System Using Stochastic Reward Nets

    PubMed Central

    Kim, Dong Seong; Park, Jong Sou

    2014-01-01

    It is important to assess the availability of virtualized systems in IT business infrastructures. Previous work on availability modeling and analysis of virtualized systems used a simplified configuration and assumptions in which only one virtual machine (VM) runs on a virtual machine monitor (VMM) hosted on a physical server. In this paper, we show a comprehensive availability model using stochastic reward nets (SRN). The model takes into account (i) the detailed failure and recovery behaviors of multiple VMs, (ii) various other failure modes and corresponding recovery behaviors (e.g., hardware faults, failure and recovery due to Mandelbugs and aging-related bugs), and (iii) dependency between different subcomponents (e.g., between physical host failure and VMM) in a virtualized servers system. We also show numerical analyses of steady-state availability, downtime in hours per year, transaction loss, and sensitivity. This model provides a new finding on how to increase system availability by judiciously combining software rejuvenation at both the VM and VMM levels. PMID:25165732
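
    In the simplest version of such a model, a subsystem alternates between up and down states, giving steady-state availability MTBF/(MTBF + MTTR); the paper's SRN composes many such behaviors with dependencies. A numeric sketch for a VM stacked on a VMM and a host (rates invented, independence an illustrative simplification):

```python
def availability(mtbf_h, mttr_h):
    """Steady-state availability of a two-state failure/repair model."""
    return mtbf_h / (mtbf_h + mttr_h)

host = availability(mtbf_h=2000.0, mttr_h=8.0)    # invented rates
vmm  = availability(mtbf_h=1500.0, mttr_h=1.0)
vm   = availability(mtbf_h=500.0,  mttr_h=0.2)

system = host * vmm * vm       # series composition, assuming independence
print(f"system availability: {system:.5f}")
print(f"downtime: {(1 - system) * 8760:.1f} hours/year")
```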

  6. Cloud-based opportunities in scientific computing: insights from processing Suomi National Polar-Orbiting Partnership (S-NPP) Direct Broadcast data

    NASA Astrophysics Data System (ADS)

    Evans, J. D.; Hao, W.; Chettri, S.

    2013-12-01

    The cloud is proving to be a uniquely promising platform for scientific computing. Our experience with processing satellite data using Amazon Web Services highlights several opportunities for enhanced performance, flexibility, and cost effectiveness in the cloud relative to traditional computing -- for example: - Direct readout from a polar-orbiting satellite such as the Suomi National Polar-Orbiting Partnership (S-NPP) requires bursts of processing a few times a day, separated by quiet periods when the satellite is out of receiving range. In the cloud, by starting and stopping virtual machines in minutes, we can marshal significant computing resources quickly when needed, but not pay for them when not needed. To take advantage of this capability, we are automating a data-driven approach to the management of cloud computing resources, in which new data availability triggers the creation of new virtual machines (of variable size and processing power) which last only until the processing workflow is complete. - 'Spot instances' are virtual machines that run as long as one's asking price is higher than the provider's variable spot price. Spot instances can greatly reduce the cost of computing -- for software systems that are engineered to withstand unpredictable interruptions in service (as occurs when a spot price exceeds the asking price). We are implementing an approach to workflow management that allows data processing workflows to resume with minimal delays after temporary spot price spikes. This will allow systems to take full advantage of variably-priced 'utility computing.' - Thanks to virtual machine images, we can easily launch multiple, identical machines differentiated only by 'user data' containing individualized instructions (e.g., to fetch particular datasets or to perform certain workflows or algorithms) This is particularly useful when (as is the case with S-NPP data) we need to launch many very similar machines to process an unpredictable number of data files concurrently. Our experience shows the viability and flexibility of this approach to workflow management for scientific data processing. - Finally, cloud computing is a promising platform for distributed volunteer ('interstitial') computing, via mechanisms such as the Berkeley Open Infrastructure for Network Computing (BOINC) popularized with the SETI@Home project and others such as ClimatePrediction.net and NASA's Climate@Home. Interstitial computing faces significant challenges as commodity computing shifts from (always on) desktop computers towards smartphones and tablets (untethered and running on scarce battery power); but cloud computing offers significant slack capacity. This capacity includes virtual machines with unused RAM or underused CPUs; virtual storage volumes allocated (& paid for) but not full; and virtual machines that are paid up for the current hour but whose work is complete. We are devising ways to facilitate the reuse of these resources (i.e., cloud-based interstitial computing) for satellite data processing and related analyses. We will present our findings and research directions on these and related topics.
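
    The data-driven approach described above reduces to a simple trigger: new data arrival creates sized-to-fit virtual machines that live only as long as the workflow. A schematic sketch in which launch_vm and terminate_vm are hypothetical stand-ins for the provider API (e.g., EC2 calls via boto3):

```python
class _DoneVM:
    """Stand-in handle for a provisioned virtual machine."""
    def wait_until_done(self):
        pass                     # real code would poll the workflow status

def launch_vm(size, user_data):
    # Hypothetical wrapper around the provider API; 'user_data' carries the
    # individualized processing instructions mentioned in the abstract.
    print(f"launch {size} VM: {user_data}")
    return _DoneVM()

def terminate_vm(vm):
    print("terminate VM")        # stop paying as soon as the workflow ends

def size_for(granule_mb):
    """Pick an instance size from the data volume (thresholds invented)."""
    if granule_mb < 500:
        return "small"
    return "medium" if granule_mb <= 2000 else "large"

def on_new_data(granules):
    """Data-driven lifecycle: new data triggers short-lived, sized-to-fit VMs."""
    for name, mb in granules:
        vm = launch_vm(size_for(mb), user_data=f"process {name}")
        try:
            vm.wait_until_done()
        finally:
            terminate_vm(vm)

on_new_data([("granule_0001.h5", 850.0)])   # illustrative file name
```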

  7. A discrete Fourier transform for virtual memory machines

    NASA Technical Reports Server (NTRS)

    Galant, David C.

    1992-01-01

    An algebraic theory of the Discrete Fourier Transform is developed in great detail. Examination of the details of the theory leads to a computationally efficient fast Fourier transform for use on computers with virtual memory. Such an algorithm is of great use on modern desktop machines. A FORTRAN-coded version of the algorithm is given for the case when the length of the sequence to be transformed is a power of two.
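
    Although the paper's own algorithm is not reproduced here, the standard virtual-memory-friendly factorization is the four-step FFT: for N = P·Q, perform P-point DFTs down the columns, multiply by twiddle factors, perform Q-point DFTs along the rows, then transpose, so each pass sweeps memory with good locality. A numpy sketch of that factorization:

```python
import numpy as np

def four_step_fft(x, P, Q):
    """DFT of length N = P*Q via the four-step (Bailey) factorization.
    Each FFT pass works on short, contiguous rows/columns, which is what
    makes the approach friendly to paged (virtual) memory."""
    N = P * Q
    A = np.asarray(x, dtype=complex).reshape(P, Q)   # A[p, q] = x[Q*p + q]
    step1 = np.fft.fft(A, axis=0)                    # P-point DFTs per column
    twiddle = np.exp(-2j * np.pi *
                     np.outer(np.arange(P), np.arange(Q)) / N)
    step3 = np.fft.fft(step1 * twiddle, axis=1)      # Q-point DFTs per row
    return step3.T.ravel()                           # transpose and flatten

x = np.random.default_rng(3).normal(size=1024)
assert np.allclose(four_step_fft(x, 32, 32), np.fft.fft(x))
```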

  8. Productive High Performance Parallel Programming with Auto-tuned Domain-Specific Embedded Languages

    DTIC Science & Technology

    2013-01-02

    (Fragments recovered from the thesis front matter and body:) Glossary entries: JVM, Java Virtual Machine; KB, kilobyte; KDT, Knowledge Discovery Toolbox; LAPACK, Linear Algebra Package; LLVM, Low-Level Virtual Machine; LOC, lines of code. … (acknowledgements residue) … multi-timestep computations by blocking in both time and space. … (flattened table header residue: Implementation, Output Language, Approx LoC, DSL Type, Language, Parallelism; first row: Graphite …)

  9. Robust Airborne Networking Extensions (RANGE)

    DTIC Science & Technology

    2008-02-01

    (Fragments recovered from the report:) … the IMUNES [13] project, which provides entire network-stack virtualization and topology control inside a single FreeBSD machine. The emulated topology … (reference-list residue, e.g. "Multicast versus broadcast in a MANET," in ADHOC-NOW, 2004, pp. 14–27; [9] J. Mukherjee and R. Atwood, "Rendezvous point relocation in Protocol Independent …") … A computer with an Ethernet connection, or a Linux virtual machine on some other (e.g., Windows) operating system, should work. 2.1 Patching the source code …

  10. Lifelong personal health data and application software via virtual machines in the cloud.

    PubMed

    Van Gorp, Pieter; Comuzzi, Marco

    2014-01-01

    Personal Health Records (PHRs) should remain the lifelong property of patients, who should be able to show them conveniently and securely to selected caregivers and institutions. In this paper, we present MyPHRMachines, a cloud-based PHR system that takes a radically new architectural approach to health record portability. In MyPHRMachines, health-related data and the application software to view and/or analyze it are deployed separately in the PHR system. After uploading their medical data to MyPHRMachines, patients can access them again from remote virtual machines that contain the right software to visualize and analyze them without any need for conversion. Patients can share their remote virtual machine session with selected caregivers, who need only a Web browser to access the pre-loaded fragments of their lifelong PHR. We discuss a prototype of MyPHRMachines applied to two use cases, i.e., radiology image sharing and personalized medicine.

  11. The IDL astronomy user's library

    NASA Technical Reports Server (NTRS)

    Landsman, W. B.

    1992-01-01

    IDL (Interactive Data Language) is a commercial programming, plotting, and image display language that is widely used in astronomy. The IDL Astronomy User's Library is a central repository of over 400 astronomy-related IDL procedures accessible via anonymous FTP. The author will overview the use of IDL within the astronomical community and discuss recent enhancements to the IDL astronomy library. These enhancements include a fairly complete I/O package for FITS images and tables, an image deconvolution package, an image mosaic package, and access to the IDL OpenWindows/Motif widget interface. The IDL Astronomy Library is funded by NASA through the Astrophysics Software and Research Aids Program.

  12. Virtualization for Cost-Effective Teaching of Assembly Language Programming

    ERIC Educational Resources Information Center

    Cadenas, José O.; Sherratt, R. Simon; Howlett, Des; Guy, Chris G.; Lundqvist, Karsten O.

    2015-01-01

    This paper describes a virtual system that emulates an ARM-based processor machine, created to replace a traditional hardware-based system for teaching assembly language. The virtual system proposed here integrates, in a single environment, all the development tools necessary to deliver introductory or advanced courses on modern assembly language…

  13. Proposal of Modification Strategy of NC Program in the Virtual Manufacturing Environment

    NASA Astrophysics Data System (ADS)

    Narita, Hirohisa; Chen, Lian-Yi; Fujimoto, Hideo; Shirase, Keiichi; Arai, Eiji

    Virtual manufacturing will be a key technology in process planning, because there are currently no evaluation tools for cutting conditions. Therefore, a virtual machining simulator (VMSim) that can predict end milling processes has been developed. A strategy for modifying NC programs using VMSim is proposed in this paper.

  14. Active Gaming: Is "Virtual" Reality Right for Your Physical Education Program?

    ERIC Educational Resources Information Center

    Hansen, Lisa; Sanders, Stephen W.

    2012-01-01

    Active gaming is growing in popularity and the idea of increasing children's physical activity by using technology is largely accepted by physical educators. Teachers nationwide have been providing active gaming equipment such as virtual bikes, rhythmic dance machines, virtual sporting games, martial arts simulators, balance boards, and other…

  15. Simulation Platform: a cloud-based online simulation environment.

    PubMed

    Yamazaki, Tadashi; Ikeno, Hidetoshi; Okumura, Yoshihiro; Satoh, Shunji; Kamiyama, Yoshimi; Hirata, Yutaka; Inagaki, Keiichiro; Ishihara, Akito; Kannon, Takayuki; Usui, Shiro

    2011-09-01

    For multi-scale and multi-modal neural modeling, one needs to handle multiple neural models described at different levels seamlessly. Database technology will become more important for such studies, specifically for downloading and handling neural models seamlessly and effortlessly. To date, conventional neuroinformatics databases have been designed solely to archive model files, but a database should also give users a chance to validate models before downloading them. In this paper, we report our ongoing project to develop a cloud-based web service for online simulation called "Simulation Platform". Simulation Platform is a cloud of virtual machines running GNU/Linux. On each virtual machine, various software packages are pre-installed, including developer tools such as compilers and libraries, popular neural simulators such as GENESIS, NEURON and NEST, and scientific software such as Gnuplot, R and Octave. When a user posts a request, a virtual machine is assigned to the user, and the simulation starts on that machine. The user remotely accesses the machine through a web browser and carries out the simulation, without needing to install any software but a web browser on the user's own computer. Simulation Platform is therefore expected to eliminate the impediments to handling multiple neural models that require multiple software packages. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. Reprint of: Simulation Platform: a cloud-based online simulation environment.

    PubMed

    Yamazaki, Tadashi; Ikeno, Hidetoshi; Okumura, Yoshihiro; Satoh, Shunji; Kamiyama, Yoshimi; Hirata, Yutaka; Inagaki, Keiichiro; Ishihara, Akito; Kannon, Takayuki; Usui, Shiro

    2011-11-01

    For multi-scale and multi-modal neural modeling, one needs to handle multiple neural models described at different levels seamlessly. Database technology will become more important for such studies, specifically for downloading and handling neural models seamlessly and effortlessly. To date, conventional neuroinformatics databases have been designed solely to archive model files, but a database should also give users a chance to validate models before downloading them. In this paper, we report our ongoing project to develop a cloud-based web service for online simulation called "Simulation Platform". Simulation Platform is a cloud of virtual machines running GNU/Linux. On each virtual machine, various software packages are pre-installed, including developer tools such as compilers and libraries, popular neural simulators such as GENESIS, NEURON and NEST, and scientific software such as Gnuplot, R and Octave. When a user posts a request, a virtual machine is assigned to the user, and the simulation starts on that machine. The user remotely accesses the machine through a web browser and carries out the simulation, without needing to install any software but a web browser on the user's own computer. Simulation Platform is therefore expected to eliminate the impediments to handling multiple neural models that require multiple software packages. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Comparative study of state-of-the-art myoelectric controllers for multigrasp prosthetic hands.

    PubMed

    Segil, Jacob L; Controzzi, Marco; Weir, Richard F ff; Cipriani, Christian

    2014-01-01

    A myoelectric controller should provide an intuitive and effective human-machine interface that deciphers user intent in real time and is robust enough to operate in daily life. Many myoelectric control architectures have been developed, including pattern recognition systems, finite state machines, and more recently, postural control schemes. Here, we present a comparative study of two types of finite state machines and a postural control scheme using both virtual and physical assessment procedures with seven nondisabled subjects. The Southampton Hand Assessment Procedure (SHAP) was used to compare the effectiveness of the controllers during activities of daily living using a multigrasp artificial hand, and a virtual hand posture matching task was used to compare the controllers when reproducing six target postures. Performance with the postural control scheme was significantly better (p < 0.05) than with the finite state machines during the physical assessment when comparing within-subject averages using the SHAP percent difference metric. The virtual assessment showed significantly greater completion rates (97% and 99%) for the finite state machines, but movement time tended to be faster (2.7 s) for the postural control scheme. Our results substantiate that postural control schemes rival other state-of-the-art myoelectric controllers.

  18. Prospects for Evidence -Based Software Assurance: Models and Analysis

    DTIC Science & Technology

    2015-09-01

    virtual machine is much lighter than the workstation. The virtual machine doesn’t need to run anti- virus , firewalls, intrusion preven- tion systems...34] Maiorca, D., Corona , I., and Giacinto, G. Looking at the bag is not enough to find the bomb: An evasion of structural methods for malicious PDF...CCS ’13, ACM, pp. 119–130. [35] Maiorca, D., Giacinto, G., and Corona , I. A pattern recognition system for malicious PDF files detection. In

  19. Mortality Trajectories at Exceptionally High Ages: A Study of Supercentenarians

    PubMed Central

    Gavrilova, Natalia S.; Gavrilov, Leonid A.; Krut'ko, Vyacheslav N.

    2017-01-01

    The growing number of persons surviving to age 100 years and beyond raises questions about the shape of mortality trajectories at exceptionally high ages, and this problem may become significant for actuaries in the near future. However, such studies are scarce because of the difficulties in obtaining reliable age estimates at exceptionally high ages. The current view about mortality beyond age 110 years suggests that death rates do not grow with age and are virtually flat. The same assumption is made in the new actuarial VBT tables. In this paper, we test the hypothesis that the mortality of supercentenarians (persons living 110+ years) is constant and does not grow with age, and we analyze mortality trajectories at these exceptionally high ages. Death records of supercentenarians were taken from the International Database on Longevity (IDL). All ages of supercentenarians in the database were subjected to careful validation. We used IDL records for persons belonging to extinct birth cohorts (born before 1895), since the last deaths in the IDL were observed in 2007. We also compared our results based on IDL data with a more contemporary database maintained by the Gerontology Research Group (GRG). First we attempted to replicate findings by Gampe (2010), who analyzed IDL data and came to the conclusion that “human mortality after age 110 is flat.” We split the IDL data into two groups: cohorts born before 1885 and cohorts born in 1885 and later. Hazard rates were estimated using the standard procedure available in Stata software. We found that mortality in both groups grows with age, although in the older cohorts the growth was slower than in the more recent cohorts and not statistically significant. Mortality analysis of the more numerous 1884–1894 birth cohort using the Akaike goodness-of-fit criterion showed a better fit for the Gompertz model than for the exponential model (flat mortality). Mortality analyses with GRG data produced similar results. The remaining life expectancy for the 1884–1894 birth cohort shows a rapid decline with age. This decline is similar to the computer-simulated trajectory expected for the Gompertz model, rather than the extremely slow decline expected in the case of the exponential model. These results demonstrate that hazard rates after age 110 years do not stay constant and suggest that mortality deceleration at older ages is not a universal phenomenon. These findings may represent a challenge to the existing theories of aging and longevity, which predict constant mortality in the late stages of life. One way to reconcile the observed phenomenon with existing theoretical considerations is the possibility of mortality deceleration and a mortality plateau at very high but as yet unobservable ages. PMID:29170764

  20. Mortality Trajectories at Exceptionally High Ages: A Study of Supercentenarians.

    PubMed

    Gavrilova, Natalia S; Gavrilov, Leonid A; Krut'ko, Vyacheslav N

    2017-01-01

    The growing number of persons surviving to age 100 years and beyond raises questions about the shape of mortality trajectories at exceptionally high ages, and this problem may become significant for actuaries in the near future. However, such studies are scarce because of the difficulties in obtaining reliable age estimates at exceptionally high ages. The current view about mortality beyond age 110 years suggests that death rates do not grow with age and are virtually flat. The same assumption is made in the new actuarial VBT tables. In this paper, we test the hypothesis that the mortality of supercentenarians (persons living 110+ years) is constant and does not grow with age, and we analyze mortality trajectories at these exceptionally high ages. Death records of supercentenarians were taken from the International Database on Longevity (IDL). All ages of supercentenarians in the database were subjected to careful validation. We used IDL records for persons belonging to extinct birth cohorts (born before 1895), since the last deaths in the IDL were observed in 2007. We also compared our results based on IDL data with a more contemporary database maintained by the Gerontology Research Group (GRG). First we attempted to replicate findings by Gampe (2010), who analyzed IDL data and came to the conclusion that "human mortality after age 110 is flat." We split the IDL data into two groups: cohorts born before 1885 and cohorts born in 1885 and later. Hazard rates were estimated using the standard procedure available in Stata software. We found that mortality in both groups grows with age, although in the older cohorts the growth was slower than in the more recent cohorts and not statistically significant. Mortality analysis of the more numerous 1884-1894 birth cohort using the Akaike goodness-of-fit criterion showed a better fit for the Gompertz model than for the exponential model (flat mortality). Mortality analyses with GRG data produced similar results. The remaining life expectancy for the 1884-1894 birth cohort shows a rapid decline with age. This decline is similar to the computer-simulated trajectory expected for the Gompertz model, rather than the extremely slow decline expected in the case of the exponential model. These results demonstrate that hazard rates after age 110 years do not stay constant and suggest that mortality deceleration at older ages is not a universal phenomenon. These findings may represent a challenge to the existing theories of aging and longevity, which predict constant mortality in the late stages of life. One way to reconcile the observed phenomenon with existing theoretical considerations is the possibility of mortality deceleration and a mortality plateau at very high but as yet unobservable ages.
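
    The central model comparison (a flat exponential hazard versus a rising Gompertz hazard, judged by the Akaike information criterion) can be reconstructed in outline as follows. This Python sketch uses simulated ages at death past 110, not the IDL or GRG records, and is not the authors' Stata procedure:

        # Fit exponential and Gompertz hazards by maximum likelihood; compare AIC.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        a_true, b_true = 0.7, 0.2                   # assumed Gompertz parameters
        u = rng.uniform(size=500)
        x = np.log1p(-(b_true / a_true) * np.log(u)) / b_true  # years past 110

        def nll_exp(p):                             # hazard h(x) = a (flat)
            a = np.exp(p[0])
            return -np.sum(np.log(a) - a * x)

        def nll_gom(p):                             # hazard h(x) = a * exp(b x)
            a, b = np.exp(p)
            return -np.sum(np.log(a) + b * x - (a / b) * np.expm1(b * x))

        fit_e = minimize(nll_exp, [0.0])
        fit_g = minimize(nll_gom, [0.0, -1.0])
        aic_e = 2 * 1 + 2 * fit_e.fun               # AIC = 2k - 2 ln L
        aic_g = 2 * 2 + 2 * fit_g.fun
        print(f"AIC: exponential {aic_e:.1f}, Gompertz {aic_g:.1f} (lower is better)")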

  1. Nonlinear engine model for idle speed control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livshiz, M.; Sanvido, D.J.; Stiles, S.D.

    1994-12-31

    This paper describes a nonlinear model of an engine used for the design of idle speed control and prediction over a broad range of idle speeds and operating conditions. Idle speed control systems make use of both spark advance and the idle air actuator to control engine speed, improving the response to variations in the target idle speed and to load disturbances. The control system at idle can be represented by a multiple-input multiple-output (MIMO) nonlinear model, and knowledge of the nonlinearities helps improve performance over the whole range of engine speeds. The proposed simple nonlinear model of the engine at idle was applied to the design of optimal controllers and predictors for improved steady-state behavior, load rejection, and transitions to and from idle. The paper also presents in-vehicle results of engine speed prediction based on the described model.
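
    To make the setting concrete, the sketch below simulates a toy two-input idle-speed plant (idle-air command plus spark trim) under a load disturbance, with a simple PI loop on the air actuator. All coefficients are invented for illustration; this is not the model or controller from the paper:

        # Toy MIMO idle-speed loop: slow idle-air path, fast but limited spark path.
        import numpy as np

        def simulate(steps=300, target=800.0):
            n, integ, speeds = 700.0, 0.0, []       # rpm, PI integrator state
            for k in range(steps):
                load = 20.0 if k > 150 else 0.0     # step load disturbance
                err = target - n
                integ += err
                air = 0.02 * err + 0.002 * integ    # PI on idle-air actuator
                spark = np.clip(0.05 * err, -5, 5)  # fast spark trim, authority-limited
                # nonlinear plant: torque saturates with the air command
                n += 22.5 * np.tanh(air) + 2.0 * spark - 0.02 * (n - 700.0) - 0.5 * load
                speeds.append(n)
            return np.array(speeds)

        print(simulate()[-1])   # settles near the 800 rpm target despite the load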

  2. Final Report: Enabling Exascale Hardware and Software Design through Scalable System Virtualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bridges, Patrick G.

    2015-02-01

    In this grant, we enhanced the Palacios virtual machine monitor to increase its scalability and suitability for addressing exascale system software design issues. This included a wide range of research on core Palacios features, large-scale system emulation, fault injection, performance monitoring, and VMM extensibility. This research resulted in a large number of high-impact publications in well-known venues, the support of a number of students, and the graduation of two Ph.D. students and one M.S. student. In addition, our enhanced version of the Palacios virtual machine monitor has been adopted as a core element of the Hobbes operating system under active DOE-funded research and development.

  3. minimega v. 3.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crussell, Jonathan; Erickson, Jeremy; Fritz, David

    minimega is an emulytics platform for creating testbeds of networked devices. The platform consists of easily deployable tools to facilitate bringing up large networks of virtual machines, including Windows, Linux, and Android. minimega allows experiments to be brought up quickly with almost no configuration. minimega also includes tools for simple cluster management, as well as tools for creating Linux-based virtual machines. This release of minimega includes new emulated sensors for Android devices to improve the fidelity of testbeds that include mobile devices. Emulated sensors include GPS and

  4. Phenomenology tools on cloud infrastructures using OpenStack

    NASA Astrophysics Data System (ADS)

    Campos, I.; Fernández-del-Castillo, E.; Heinemeyer, S.; Lopez-Garcia, A.; Pahlen, F.; Borges, G.

    2013-04-01

    We present a new environment for computations in particle physics phenomenology employing recent developments in cloud computing. In this environment users can create and manage "virtual" machines on which the phenomenology codes/tools can be deployed easily in an automated way. We analyze the performance of this environment based on "virtual" machines versus the utilization of physical hardware. In this way we provide a qualitative result for the influence of the host operating system on the performance of a representative set of applications for phenomenology calculations.

  5. A Cooperative Approach to Virtual Machine Based Fault Injection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naughton III, Thomas J; Engelmann, Christian; Vallee, Geoffroy R

    Resilience investigations often employ fault injection (FI) tools to study the effects of simulated errors on a target system. It is important to keep the target system under test (SUT) isolated from the controlling environment in order to maintain control of the experiment. Virtual machines (VMs) have been used to aid these investigations due to the strong isolation properties of system-level virtualization. A key challenge for fault injection tools is to gain proper insight and context about the SUT. In VM-based FI tools, this challenge of target context is increased by the separation between host and guest (VM). We discuss an approach to VM-based FI that leverages virtual machine introspection (VMI) methods to gain insight into the target's context running within the VM. The key to this environment is the ability to provide basic information to the FI system that can be used to create a map of the target environment. We describe a proof-of-concept implementation and a demonstration of its use to introduce simulated soft errors into an iterative solver benchmark running in user space of a guest VM.

  6. Multiplexing Low and High QoS Workloads in Virtual Environments

    NASA Astrophysics Data System (ADS)

    Verboven, Sam; Vanmechelen, Kurt; Broeckhove, Jan

    Virtualization technology has introduced new ways of managing IT infrastructure. The flexible deployment of applications through self-contained virtual machine images has removed the barriers to multiplexing, suspending and migrating applications with their entire execution environment, allowing for a more efficient use of the infrastructure. These developments have given rise to an important challenge regarding the optimal scheduling of virtual machine workloads. In this paper, we specifically address the VM scheduling problem in which workloads that require guaranteed levels of CPU performance are mixed with workloads that do not require such guarantees. We introduce a framework to analyze this scheduling problem and evaluate to what extent such mixed service delivery is beneficial for a provider of virtualized IT infrastructure. Traditionally, providers offer IT resources under a guaranteed and fixed performance profile, which can lead to underutilization. The findings of our simulation study show that, through proper tuning of a limited set of parameters, the proposed scheduling algorithm allows for a significant increase in utilization without sacrificing performance dependability.

  7. Dual lead-crowning for helical gears with anti-twist tooth flanks on the internal gear honing machine

    NASA Astrophysics Data System (ADS)

    Tran, Van-Quyet; Wu, Yu-Ren

    2017-12-01

    For some specific purposes, a helical gear with a wide face width is used to mesh with two other gears simultaneously, as with the idle pinions in a vehicle differential. However, due to gear deformation, tooth edge contact and stress concentration might occur. Single lead-crowning is no longer suitable in such a case for obtaining an appropriate contact pattern position and improving the load distribution on the tooth surfaces. Therefore, a novel method is proposed in this paper to achieve wide-face-width helical gears with dual lead-crowned, anti-twisted tooth surfaces by controlling the swivel angle and the rotation angle of the honing wheel, respectively, on an internal gear honing machine. Numerical examples are presented to illustrate and verify the merits of the proposed method.

  8. In Vivo Pattern Classification of Ingestive Behavior in Ruminants Using FBG Sensors and Machine Learning.

    PubMed

    Pegorini, Vinicius; Karam, Leandro Zen; Pitta, Christiano Santos Rocha; Cardoso, Rafael; da Silva, Jean Carlos Cardozo; Kalinowski, Hypolito José; Ribeiro, Richardson; Bertotti, Fábio Luiz; Assmann, Tangriani Simioni

    2015-11-11

    Pattern classification of ingestive behavior in grazing animals is of great importance in studies of animal nutrition, growth and health. In this paper, a system to classify the chewing patterns of ruminants in in vivo experiments is developed. The proposal is based on data collected by optical fiber Bragg grating (FBG) sensors that are processed by machine learning techniques. The FBG sensors measure the biomechanical strain during jaw movements, and a decision tree is responsible for classifying the associated chewing pattern. In this study, patterns associated with the intake of dietary supplement, hay and ryegrass were considered. Additionally, two other important events for ingestive behavior were monitored: rumination and idleness. Experimental results show that the proposed approach is capable of differentiating the five patterns involved in the chewing process with an overall accuracy of 94%.
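
    As an illustration of the classification step only (synthetic per-event features stand in for the FBG strain signals, and this is not the authors' code), a decision tree separating the five classes might look like:

        # Decision-tree classification of chewing events from simple signal features.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(42)
        classes = ["supplement", "hay", "ryegrass", "rumination", "idleness"]

        # Hypothetical features per event: [peak strain, duration (s), chew rate (Hz)]
        def make_events(mean, n=200):
            return rng.normal(mean, [0.05, 0.1, 0.2], size=(n, 3))

        means = ([0.9, 0.4, 1.5], [0.7, 0.6, 1.2], [0.5, 0.5, 1.0],
                 [0.3, 0.8, 0.9], [0.05, 0.1, 0.1])
        X = np.vstack([make_events(m) for m in means])
        y = np.repeat(classes, 200)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = DecisionTreeClassifier(max_depth=5).fit(X_tr, y_tr)
        print(f"test accuracy: {clf.score(X_te, y_te):.2f}")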

  9. In Vivo Pattern Classification of Ingestive Behavior in Ruminants Using FBG Sensors and Machine Learning

    PubMed Central

    Pegorini, Vinicius; Karam, Leandro Zen; Pitta, Christiano Santos Rocha; Cardoso, Rafael; da Silva, Jean Carlos Cardozo; Kalinowski, Hypolito José; Ribeiro, Richardson; Bertotti, Fábio Luiz; Assmann, Tangriani Simioni

    2015-01-01

    Pattern classification of ingestive behavior in grazing animals is of great importance in studies of animal nutrition, growth and health. In this paper, a system to classify the chewing patterns of ruminants in in vivo experiments is developed. The proposal is based on data collected by optical fiber Bragg grating (FBG) sensors that are processed by machine learning techniques. The FBG sensors measure the biomechanical strain during jaw movements, and a decision tree is responsible for classifying the associated chewing pattern. In this study, patterns associated with the intake of dietary supplement, hay and ryegrass were considered. Additionally, two other important events for ingestive behavior were monitored: rumination and idleness. Experimental results show that the proposed approach is capable of differentiating the five patterns involved in the chewing process with an overall accuracy of 94%. PMID:26569250

  10. Throttle pneumatic impact mechanism equipped with afterburner idle-stroke chamber

    NASA Astrophysics Data System (ADS)

    Dedov, Alexey; Frantseva, Eleanor; Dmitriev, Mikhail

    2017-01-01

    Pneumatic impact mechanisms are widely used in construction, mining and other sectors of the economy. Such mechanisms form the basis for a wide range of machines of various types and sizes, from hand-held tools to mounted piling hammers with impact energies up to 10,000 J. This paper is aimed at the creation of a pneumatic impact mechanism with improved characteristics, including operation, energy use, weight and size, which is especially important in space-limited working conditions. The research methods include the development of a computer mathematical model that solves the governing system of equations, and the testing of a prototype on an experimental stand. As a result of the conducted research, a pneumatic impact mechanism with improved characteristics was developed, and an engineering method for calculating throttle pneumatic impact mechanisms with a preset impact energy from 1 to 20,000 J was investigated. This method allows the creation of percussive machines for a wide range of applications.

  11. Virtual reality for intelligent and interactive operating, training, and visualization systems

    NASA Astrophysics Data System (ADS)

    Freund, Eckhard; Rossmann, Juergen; Schluse, Michael

    2000-10-01

    Virtual Reality methods allow a new and intuitive way of communication between man and machine. The basic idea of Virtual Reality (VR) is the generation of artificial, computer-simulated worlds that the user can not only look at but also actively interact with, using a data glove and a data helmet. The main emphasis in the use of such techniques at the IRF is the development of a new generation of operator interfaces for the control of robots and other automation components, and of intelligent training systems for complex tasks. The basic idea of the methods developed at the IRF for the realization of Projective Virtual Reality is to let the user work in the virtual world as he would act in reality. The user's actions are recognized by the Virtual Reality system and, by means of new and intelligent control software, projected onto automation components such as robots, which then perform in reality the actions necessary to execute the user's task. In this operating mode the user no longer has to be a robot expert to generate tasks for robots or to program them, because the intelligent control software recognizes the user's intention and automatically generates the commands for nearly every automation component. Virtual Reality methods are thus ideally suited for universal man-machine interfaces for the control and supervision of a large class of automation components, and for interactive training and visualization systems. The Virtual Reality system of the IRF, COSIMIR/VR, forms the basis for several projects, starting with the control of space automation systems in the projects CIROS, VITAL and GETEX, continuing with the realization of a comprehensive development tool for the International Space Station, and, last but not least, the realistic simulation of fire extinguishing, forest machines and excavators, which will be presented in the final paper in addition to the key ideas of this Virtual Reality system.

  12. Dynamic provisioning of local and remote compute resources with OpenStack

    NASA Astrophysics Data System (ADS)

    Giffels, M.; Hauth, T.; Polgart, F.; Quast, G.

    2015-12-01

    Modern high-energy physics experiments rely on the extensive use of computing resources, both for the reconstruction of measured events and for Monte Carlo simulation. The Institut für Experimentelle Kernphysik (EKP) at KIT participates in both the CMS and Belle experiments with computing and storage resources. In the upcoming years, these requirements are expected to increase due to the growing amount of recorded data and the rising complexity of the simulated events. It is therefore essential to increase the available computing capabilities by tapping into all resource pools. At the EKP institute, powerful desktop machines are available to users. Due to the multi-core nature of modern CPUs, vast amounts of CPU time are not utilized by common desktop usage patterns. Other important providers of compute capability are classical HPC data centers at universities or national research centers. Due to the shared nature of these installations, the standardized software stack required by HEP applications cannot be installed. A viable way to overcome this constraint and offer a standardized software environment in a transparent manner is the use of virtualization technologies. The OpenStack project has become a widely adopted solution for virtualizing hardware and offering additional services like storage and virtual machine management. This contribution will report on the incorporation of the institute's desktop machines into a private OpenStack cloud. The additional compute resources provisioned via the virtual machines have been used for Monte Carlo simulation and data analysis. Furthermore, a concept to integrate shared, remote HPC centers into regular HEP job workflows will be presented: local and remote resources are merged to form a uniform, virtual compute cluster with a single point of entry for the user. Evaluations of the performance and stability of this setup and operational experiences will be discussed.

  13. Graphical user interface for image acquisition and processing

    DOEpatents

    Goldberg, Kenneth A.

    2002-01-01

    An event-driven, GUI-based image acquisition interface for the IDL programming environment, designed for CCD camera control and image acquisition directly into the IDL environment, where image manipulation and data analysis can be performed, together with a toolbox of real-time analysis applications. Running the image acquisition hardware directly from IDL removes the necessity of first saving images in one program and then importing the data into IDL for analysis in a second step. Bringing the data directly into IDL creates an opportunity for the implementation of IDL image processing and display functions in real time. The program allows control over the available charge-coupled device (CCD) detector parameters, data acquisition, file saving and loading, and image manipulation and processing, all from within IDL. The program is built using IDL's widget libraries to control the on-screen display and user interface.

  14. Locomotive Emission and Engine Idle Reduction Technology Demonstration Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John R. Archer

    2005-03-14

    In response to a United States Department of Energy (DOE) solicitation, the Maryland Energy Administration (MEA), in partnership with CSX Transportation, Inc. (CSXT), submitted a proposal to DOE to support the demonstration of Auxiliary Power Unit (APU) technology on fifty-six CSXT locomotives. The project purpose was to demonstrate the idle fuel savings, the nitrogen oxide (NOx) emissions reduction and the noise reduction capabilities of the APU. Fifty-six CSXT Baltimore Division locomotives were equipped with APUs, Engine Run Managers (ERM) and communications equipment to permit GPS tracking and data collection from the locomotives. Throughout the report there is mention of the percent time spent in the State of Maryland: the fifty-six locomotives spent most of their time inside the borders of Maryland, some spent all their time inside the state borders, and when a locomotive traveled beyond the Maryland border it was usually into an adjoining state. The locomotives were divided into four groups according to assignment: (1) Power Unit/Switcher Mate units, (2) Remote Control units, (3) SD50 Pusher units and (4) Other units. The primary data of interest were idle data plus the status of the locomotive: stationary or moving. Also collected were main engine off, idling or working. Idle data were collected by county location, by locomotive status (stationary or moving) and by type of idle (Idle 1, main engine idling, APU off; Idle 2, main engine off, APU on; Idle 3, main engine off, APU off; Idle 4, main engine idling, APU on). The desirable idle states are those with the main engine off, whether the APU is off or on. Measuring the time the main engine spends in these desirable states versus the total time it could spend in an idling state allows the calculation of Percent Idle Management Effectiveness (%IME), as sketched below. IME is the result of the operation of the APU plus the implementation of CSXT's Warm Weather Shutdown Policy; it is difficult to separate the two. The units demonstrated an IME of 64% at stationary idle for the test period. The data collected during calendar year 2004 demonstrated that 707,600 gallons of fuel were saved and 285 tons of NOx were not emitted as a result of idle management in stationary idle, which translates to 12,636 gallons and 5.1 tons of NOx per unit, respectively. The noise measurements demonstrated that at 150 feet from the locomotive, the loaded APU with the main engine shut down generated noise only marginally above the ambient noise level.
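
    The %IME bookkeeping can be illustrated with a few lines of Python; the formula is inferred from the report's wording (time with the main engine off versus total potential idle time), so treat it as an assumption rather than the report's exact calculation:

        # Percent Idle Management Effectiveness from per-state idle durations.
        # Idle 1: engine idling, APU off    Idle 2: engine off, APU on
        # Idle 3: engine off, APU off       Idle 4: engine idling, APU on
        def pct_ime(hours):
            """hours: dict with keys 'idle1'..'idle4' giving hours in each state."""
            engine_off = hours["idle2"] + hours["idle3"]    # desirable states
            return 100.0 * engine_off / sum(hours.values())

        print(pct_ime({"idle1": 2.5, "idle2": 4.0, "idle3": 2.4, "idle4": 1.1}))  # 64.0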

  15. Idling Reduction for Personal Vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2015-05-07

    Fact sheet on reducing engine idling in personal vehicles. Idling your vehicle--running your engine when you're not driving it--truly gets you nowhere. Idling reduces your vehicle's fuel economy, costs you money, and creates pollution. Idling for more than 10 seconds uses more fuel and produces more emissions that contribute to smog and climate change than stopping and restarting your engine does.

  16. Virtualization for the LHCb Online system

    NASA Astrophysics Data System (ADS)

    Bonaccorsi, Enrico; Brarda, Loic; Moine, Gary; Neufeld, Niko

    2011-12-01

    Virtualization has long been advertised by the IT industry as a way to cut costs, optimize resource usage and manage the complexity of large data centers. The great number and huge heterogeneity of hardware, both industrial and custom-made, has until now led to reluctance in adopting virtualization in the IT infrastructure of large experiment installations. Our experience in the LHCb experiment has shown that virtualization improves the availability and manageability of the whole system. We performed an evaluation of the available hypervisor/virtualization solutions and found that the Microsoft HV technology provides a high level of maturity and flexibility for our purpose. We present the results of these comparison tests, describing in detail the architecture of our virtualization infrastructure, with a special emphasis on security for services visible to the outside world. Security is achieved by a sophisticated combination of VLANs, firewalls and virtual routing; the costs and benefits of this solution are analysed. We have adapted our cluster management tools, notably Quattor, to the needs of virtual machines, which allows us to migrate services smoothly from physical machines to the virtualized infrastructure. The procedures for migration are also described. In the final part of the document we describe our recent R&D activities aimed at replacing the SAN backend for the virtualization with a cheaper iSCSI solution; this will allow all servers and related services to be moved to the virtualized infrastructure, except those doing hardware control via non-commodity PCI plug-in cards.

  17. Evaluating open-source cloud computing solutions for geosciences

    NASA Astrophysics Data System (ADS)

    Huang, Qunying; Yang, Chaowei; Liu, Kai; Xia, Jizhe; Xu, Chen; Li, Jing; Gui, Zhipeng; Sun, Min; Li, Zhenglong

    2013-09-01

    Many organizations are starting to adopt cloud computing to better utilize computing resources by taking advantage of its scalability, cost reduction, and ease of access. Many private or community cloud computing platforms are being built using open-source cloud solutions. However, little has been done to systematically compare and evaluate the features and performance of open-source solutions in supporting the geosciences. This paper provides a comprehensive study of three open-source cloud solutions: OpenNebula, Eucalyptus, and CloudStack. We compared a variety of features, capabilities, technologies and performance measures, including: (1) general features and supported services for cloud resource creation and management, (2) advanced capabilities for networking and security, and (3) the performance of the cloud solutions in provisioning and operating cloud resources, as well as the performance of virtual machines initiated and managed by the cloud solutions in supporting selected geoscience applications. Our study found that: (1) there are no significant performance differences in central processing unit (CPU), memory, or I/O among virtual machines created and managed by the different solutions; (2) OpenNebula has the fastest internal network, while both Eucalyptus and CloudStack have better virtual machine isolation and security strategies; (3) CloudStack has the fastest operations in handling virtual machines, images, snapshots, volumes and networking, followed by OpenNebula; and (4) the selected cloud computing solutions are capable of supporting concurrent intensive web applications, computing-intensive applications, and small-scale model simulations without intensive data communication.

  18. A performance study of live VM migration technologies: VMotion vs XenMotion

    NASA Astrophysics Data System (ADS)

    Feng, Xiujie; Tang, Jianxiong; Luo, Xuan; Jin, Yaohui

    2011-12-01

    Due to the growing demand for flexible resource management in cloud computing services, research on live virtual machine migration has attracted more and more attention. Live migration of virtual machines across hosts has become a powerful tool to facilitate system maintenance, load balancing, fault tolerance and so on. In this paper, we use a measurement-based approach to compare the performance of two major live migration technologies, VMotion and XenMotion, under controlled network conditions. The results show that VMotion transfers much less data than XenMotion when migrating identical VMs. However, in networks with moderate packet loss and delay, as is typical in a VPN (virtual private network) scenario used to connect data centers, XenMotion outperforms VMotion in total migration time. We hope that this study can help data center administrators choose suitable virtualization environments and optimize existing live migration mechanisms.

  19. Existing machine propulsion is transformed by state-of-the-art gearbox apparatus saves at least 50% energy

    NASA Astrophysics Data System (ADS)

    Abramov, V.

    2013-12-01

    This innovation (www.repowermachine.com) was a finalist for the Clean-tech and Energy category of Minnesota's 2012 TEKNE AWARDS. Vehicles are pushed by the force of friction between their wheels and the ground, or between propellers and water or air, according to Newton's third law of motion. The force of friction depends on vehicle weight, which determines the highest wheel or propeller torque needed to move the vehicle from a stop; it does not depend on motor power. Why does an existing 2,000 lb SUV use a 550 hp motor when the first vehicle had a 0.75 hp motor (Carl Benz's patent #37435, January 29, 1886, Germany)? Acceleration by throttle or magnetic field reaches the needed wheel torque too slowly, because accelerating an SUV from 0 to 100 mph in 5 seconds requires huge motor power. Such an acceleration system uses additional energy to raise the motor shaft's idle speed and reduces the highest torque available from a motor of a given physical volume, so motor power must be increased to match or exceed the power implied by vehicle weight. Therefore, no transmission torque multiplication is needed, and the conventional transmission acts as a second brake. Ships, locomotives, helicopters, CNC machine tools, etc., have motors that directly turn the wheels, propellers or spindles, or avoid gear-transmission designs altogether. How do you follow the physics of the lever to save energy? An existing machine's propulsion is transformed by a gearbox comprising the fewest gears and shafts, chosen from the above-mentioned 1,000 state-of-the-art gearbox apparatus designs. It is installed in place of the transmission in the existing propulsion, which is thereby transformed into non-accelerated propulsion. This cuts about 80% of the mechanical energy that the acceleration system wastes in the form of motor heat and cuts the time to reach each speed to 1-2 seconds. It produces all needed speeds while using only the idle speed of a cheaper motor of reduced power and cost, which replaces the existing motor as well. There is also an opportunity to eliminate vehicle/machine road traffic in cities, which creates additional unquantified GHG emissions. The method is claimed to create 144 forward / 72 reverse torque/overdrive speeds with one gear fewer than a heavy-duty truck gearbox of 18 forward / 2 reverse speeds plus 10 compound gearboxes, improving vehicle maneuverability, and to reduce motor size by up to 5x5x5x5x5x5 = 15,625 times using 7 shafts. Accordingly, an SUV with non-accelerated propulsion comprising a GAEES of 24 overdrive speeds would use only the idle speed, and hence torque, of a 20 hp motor, which would be sufficient to move the SUV from a stop. Heavy-duty truck: a chosen GAEEF of 36 torque/overdrive and 18 reverse speeds using 20 gears / 5 shafts (compared with the existing 18 torque / 2 reverse speeds using 29 gears / 4 shafts) reduces the motor power from 400 hp to 50 hp, increasing energy economy 400/50 = 8 times. Public transportation: an existing cruise ship or locomotive with a chosen GAEES of 64 torque/overdrive and 32 reverse speeds using 22 gears / 7 shafts allows a reduction from 3,000 hp to 200 hp, an energy economy of 3000/200 = 15 times.

  20. Learning for VMM + WTA Embedded Classifiers

    DTIC Science & Technology

    2016-03-31

    enabling correct classification of each novel acoustic signal (generator, idle car, and idle truck). The classification structure requires, after...measured on our SoC FPAA IC. The test input is composed of signals from urban environment for 3 objects (generator, idle car, and idle truck...classifier results from a rural truck data set, an urban generator set, and urban idle car dataset. Solid lines represent our extracted background

  1. Cloudy Solar Software - Enhanced Capabilities for Finding, Pre-processing, and Visualizing Solar Data

    NASA Astrophysics Data System (ADS)

    Istvan Etesi, Laszlo; Tolbert, K.; Schwartz, R.; Zarro, D.; Dennis, B.; Csillaghy, A.

    2010-05-01

    In our project "Extending the Virtual Solar Observatory (VSO)" we have combined some of the features available in Solar Software (SSW) to produce an integrated environment for data analysis, supporting the complete workflow from data location, retrieval, preparation, and analysis to creating publication-quality figures. Our goal is an integrated analysis experience in IDL, easy-to-use but flexible enough to allow more sophisticated procedures such as multi-instrument analysis. To that end, we have made the transition from a locally oriented setting where all the analysis is done on the user's computer, to an extended analysis environment where IDL has access to services available on the Internet. We have implemented a form of Cloud Computing that uses the VSO search and a new data retrieval and pre-processing server (PrepServer) that provides remote execution of instrument-specific data preparation. We have incorporated the interfaces to the VSO search and the PrepServer into an IDL widget (SHOW_SYNOP) that provides user-friendly searching and downloading of raw solar data and optionally sends search results for pre-processing to the PrepServer prior to downloading the data. The raw and pre-processed data can be displayed with our plotting suite, PLOTMAN, which can handle different data types (light curves, images, and spectra) and perform basic data operations such as zooming, image overlays, solar rotation, etc. PLOTMAN is highly configurable and suited for visual data analysis and for creating publishable figures. PLOTMAN and SHOW_SYNOP work hand-in-hand for a convenient working environment. Our environment supports a growing number of solar instruments that currently includes RHESSI, SOHO/EIT, TRACE, SECCHI/EUVI, HINODE/XRT, and HINODE/EIS.

  2. Charliecloud

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Priedhorsky, Reid; Randles, Tim

    Charliecloud is a set of scripts to let users run a virtual cluster of virtual machines (VMs) on a desktop or supercomputer. Key functions include: 1. Creating (typically by installing an operating system from vendor media) and updating VM images; 2. Running a single VM; 3. Running multiple VMs in a virtual cluster. The virtual machines can talk to one another over the network and (in some cases) the outside world. This is accomplished by calling external programs such as QEMU and the Virtual Distributed Ethernet (VDE) suite. The goal is to let users have a virtual cluster containing nodes where they have privileged access, while isolating that privilege within the virtual cluster so it cannot affect the physical compute resources. Host configuration enforces security; this is not included in Charliecloud, though security guidelines are included in its documentation and Charliecloud is designed to facilitate such configuration. Charliecloud manages passing information from host computers into and out of the virtual machines, such as parameters of the virtual cluster, input data specified by the user, output data from virtual compute jobs, VM console display, and network connections (e.g., SSH or X11). Parameters for the virtual cluster (number of VMs, RAM and disk per VM, etc.) are specified by the user or gathered from the environment (e.g., SLURM environment variables). Example job scripts are included. These include computation examples (such as a "hello world" MPI job) as well as performance tests. They also include a security test script to verify that the virtual cluster is appropriately sandboxed. Tests include: 1. Pinging hosts inside and outside the virtual cluster to explore connectivity; 2. Port scans (again inside and outside) to see what services are available; 3. Sniffing tests to see what traffic is visible to running VMs; 4. IP address spoofing to test network functionality in this case; 5. File access tests to make sure host access permissions are enforced. This test script is not a comprehensive scanner and does not test for specific vulnerabilities. Importantly, no information about physical hosts or network topology is included in this script (or any of Charliecloud); while part of a sensible test, such information is specified by the user when the test is run. That is, one cannot learn anything about the LANL network or computing infrastructure by examining Charliecloud code.

  3. Accessing SDO data in a pipeline environment using the VSO WSDL/SOAP interface

    NASA Astrophysics Data System (ADS)

    Suarez Sola, F. I.; Hourcle, J. A.; Amezcua, A.; Bogart, R.; Davey, A. R.; Gurman, J. B.; Hill, F.; Hughitt, V. K.; Martens, P. C.; Spencer, J.; Vso Team

    2010-12-01

    As part of the Virtual Solar Observatory (VSO) effort to support Solar Dynamics Observatory (SDO) data, the VSO has worked on bringing its WSDL document and SOAP interface up to date to make them compatible with the most widely used web services engines (e.g., Axis2, JWS, etc.). In this presentation we will explore the possibilities available for searching and/or fetching data within pipeline code. We will explain some of the intricacies of the WSDL/VSO-SDO interface and show how the vast amount of data available via the VSO can be tapped from IDL, Java, Perl or C in an uncomplicated way.

  4. Passenger vehicle idling in Vermont, Phase II.

    DOT National Transportation Integrated Search

    2014-08-01

    While trip-start and trip-end idling, including idling at intermediary stops along a route, cannot be completely eliminated, the duration of these discretionary idling events is largely controlled by the driver and can be considered part of travel or...

  5. Using PVM to host CLIPS in distributed environments

    NASA Technical Reports Server (NTRS)

    Myers, Leonard; Pohl, Kym

    1994-01-01

    It is relatively easy to enhance CLIPS (C Language Integrated Production System) to support multiple expert systems running in a distributed environment with heterogeneous machines. The task is minimized by using the PVM (Parallel Virtual Machine) code from Oak Ridge Labs to provide the distribution utility. PVM is a library of C and FORTRAN subprograms that supports distributed computing on many different UNIX platforms. A PVM daemon is easily installed on each CPU that enters the virtual machine environment. Any user with rsh or rexec access to a machine can use the one PVM daemon to obtain a generous set of distributed facilities. The ready availability of both CLIPS and PVM makes the combination of software particularly attractive for budget-conscious experimentation with heterogeneous distributed computing using multiple CLIPS executables. This paper presents a design that is sufficient to provide essential message passing functions in CLIPS and to enable the full range of PVM facilities.

  6. Virtual Mission Operations of Remote Sensors With Rapid Access To and From Space

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.; Stewart, Dave; Walke, Jon; Dikeman, Larry; Sage, Steven; Miller, Eric; Northam, James; Jackson, Chris; Taylor, John; Lynch, Scott; hide

    2010-01-01

    This paper describes network-centric operations, in which a virtual mission operations center autonomously receives sensor triggers and schedules space and ground assets using Internet-based technologies and service-oriented architectures. For proof-of-concept purposes, sensor triggers are received from the United States Geological Survey (USGS) to determine targets for space-based sensors. The Surrey Satellite Technology Limited (SSTL) Disaster Monitoring Constellation satellite, the United Kingdom Disaster Monitoring Constellation (UK-DMC), is used as the space-based sensor. The UK-DMC's availability is determined via machine-to-machine communications using SSTL's mission planning system. Access to and from the UK-DMC for tasking and sensor data is via SSTL's and Universal Space Network's (USN) ground assets. The availability and scheduling of USN's assets can also be performed autonomously via machine-to-machine communications. All communication, both on the ground and between ground and space, uses open Internet standards.

  7. Extending the Operational Envelope of a Turbofan Engine Simulation into the Sub-Idle Region

    NASA Technical Reports Server (NTRS)

    Chapman, Jeffryes W.; Hamley, Andrew J.; Guo, Ten-Huei; Litt, Jonathan S.

    2016-01-01

    In many non-linear gas turbine simulations, operation in the sub-idle region can lead to model instability. This paper lays out a method for extending the operational envelope of a map based gas turbine simulation to include the sub-idle region. This method develops a multi-simulation solution where the baseline component maps are extrapolated below the idle level and an alternate model is developed to serve as a safety net when the baseline model becomes unstable or unreliable. Sub-idle model development takes place in two distinct operational areas, windmilling/shutdown and purge/cranking/startup. These models are based on derived steady state operating points with transient values extrapolated between initial (known) and final (assumed) states. Model transitioning logic is developed to predict baseline model sub-idle instability, and transition smoothly and stably to the backup sub-idle model. Results from the simulation show a realistic approximation of sub-idle behavior as compared to generic sub-idle engine performance that allows the engine to operate continuously and stably from shutdown to full power.

  8. Extending the Operational Envelope of a Turbofan Engine Simulation into the Sub-Idle Region

    NASA Technical Reports Server (NTRS)

    Chapman, Jeffryes Walter; Hamley, Andrew J.; Guo, Ten-Huei; Litt, Jonathan S.

    2016-01-01

    In many non-linear gas turbine simulations, operation in the sub-idle region can lead to model instability. This paper lays out a method for extending the operational envelope of a map based gas turbine simulation to include the sub-idle region. This method develops a multi-simulation solution where the baseline component maps are extrapolated below the idle level and an alternate model is developed to serve as a safety net when the baseline model becomes unstable or unreliable. Sub-idle model development takes place in two distinct operational areas, windmilling/shutdown and purge/cranking/startup. These models are based on derived steady state operating points with transient values extrapolated between initial (known) and final (assumed) states. Model transitioning logic is developed to predict baseline model sub-idle instability, and transition smoothly and stably to the backup sub-idle model. Results from the simulation show a realistic approximation of sub-idle behavior as compared to generic sub-idle engine performance that allows the engine to operate continuously and stably from shutdown to full power.
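
    A toy sketch of such transitioning logic is given below. The thresholds and the linear blend are assumptions chosen for illustration, not the implementation described in the paper; the idea is simply to fade from the baseline map-based model to the backup sub-idle model as spool speed drops toward the unstable region:

        # Blend a baseline map-based model with a backup sub-idle model.
        def blended_output(n_spool, baseline, subidle, idle_frac=0.15, band=0.05):
            """n_spool: spool speed as a fraction of design speed."""
            if n_spool >= idle_frac + band:     # comfortably above idle: baseline
                return baseline(n_spool)
            if n_spool <= idle_frac:            # at or below idle: backup only
                return subidle(n_spool)
            w = (n_spool - idle_frac) / band    # fade zone: linear cross-blend
            return w * baseline(n_spool) + (1 - w) * subidle(n_spool)

        # e.g., with simple stand-in models for the two regimes:
        print(blended_output(0.18, lambda n: 100 * n, lambda n: 80 * n))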

  9. Effect of asynchrony on numerical simulations of fluid flow phenomena

    NASA Astrophysics Data System (ADS)

    Konduri, Aditya; Mahoney, Bryan; Donzis, Diego

    2015-11-01

    Designing scalable CFD codes on massively parallel computers is a challenge, mainly due to the large number of communications between processing elements (PEs) and their synchronization, which leads to idling of PEs. Indeed, communication will likely be the bottleneck in the scalability of codes on exascale machines. Our recent work on asynchronous computing for PDEs based on finite differences has shown that it is possible to relax synchronization between PEs at a mathematical level. Computations then proceed regardless of the status of communication, reducing the idle time of PEs and improving scalability. However, the accuracy of the schemes is greatly affected. We have proposed asynchrony-tolerant (AT) schemes to address this issue. In this work, we study the effect of asynchrony on the solution of fluid flow problems using standard and AT schemes. We show that asynchrony creates additional scales with low energy content. The specific wavenumbers affected can be shown to arise from two distinct effects: the randomness in the arrival of messages and the corresponding switching between schemes. Understanding these errors allows us to control them effectively, making the method feasible for solving turbulent flows at realistic conditions on future computing systems.
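
    The phenomenon can be reproduced in miniature. The toy solver below (our own construction, not the authors' asynchrony-tolerant schemes) advances the 1D heat equation with an explicit scheme in which each grid point may read neighbor values that are randomly up to two steps stale, mimicking late-arriving boundary data between PEs:

        # Explicit 1D heat equation with randomly delayed neighbor values.
        import numpy as np

        def heat(n=128, steps=1000, r=0.1, max_delay=2, seed=0):
            rng = np.random.default_rng(seed)
            hist = [np.sin(2 * np.pi * np.arange(n) / n)] * (max_delay + 1)
            for _ in range(steps):
                u = hist[-1]
                d = rng.integers(0, max_delay + 1, size=n)   # per-point staleness
                left = np.array([hist[-1 - d[i]][(i - 1) % n] for i in range(n)])
                right = np.array([hist[-1 - d[i]][(i + 1) % n] for i in range(n)])
                hist.append(u + r * (left - 2 * u + right))
                hist.pop(0)
            return hist[-1]

        err = heat(max_delay=2) - heat(max_delay=0)          # async vs synchronous
        print(f"rms asynchrony error: {np.sqrt(np.mean(err ** 2)):.3e}")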

  10. Exposure to whole-body vibration and seat transmissibility in a large sample of earth scrapers.

    PubMed

    Salmoni, Alan; Cann, Adam; Gillin, Kent

    2010-01-01

    It is often difficult to access a large sample of vehicles in various work environments, such as construction and mining, to evaluate worker exposure to vibration. Thus, the main purpose of the present research was to test vibration exposure in a relatively large number of earth scrapers; the second aim was to assess seat transmissibility in relation to the vibration exposure values. Thirty-three earth scrapers were assessed for both exposure to whole-body vibration and seat transmissibility. Two triaxial accelerometers, one placed on the seat and one on the floor directly below the seat, were used to gather whole-body vibration values (a(w)). Each machine was tested for a minimum of three complete work cycles: idling, scraping, travelling full, dumping, and travelling empty back to the scrape site. Results showed that idling and scraping produced low levels of vibration compared to travelling and dumping. When the a(w) values were compared to the EU safety standards for an eight-hour workday, the data (z axis) exceeded the exposure action value (0.5 m/s2) for all machines, and the exposure limit value (1.15 m/s2) for some. Implications: operators of the scrapers were being exposed to unsafe levels of whole-body vibration. When the seats were assessed to see whether they were attenuating operator exposure, many of the seat effective amplitude transmissibility (SEAT) values exceeded 1.0, meaning that some of the seats were actually amplifying the vibration present at the floor, particularly in the y axis. Travelways should be kept smooth, operating speeds reduced, and new seats, effective in all three axes, designed.
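
    For reference, the EU comparison uses the standard eight-hour normalization of the measured acceleration; a minimal sketch with hypothetical numbers (not the study's data) follows:

        # Daily exposure A(8) = a_w * sqrt(T / 8 h) per EU Directive 2002/44/EC,
        # and SEAT transmissibility as the seat/floor acceleration ratio.
        import math

        def a8(a_w, hours):
            return a_w * math.sqrt(hours / 8.0)

        def seat(a_seat, a_floor):
            return a_seat / a_floor              # SEAT > 1: seat amplifies vibration

        exposure = a8(a_w=0.9, hours=6.0)        # hypothetical z-axis value, m/s^2
        print(f"A(8) = {exposure:.2f} m/s^2 vs action value 0.5, limit 1.15")
        print(f"SEAT = {seat(a_seat=1.1, a_floor=1.0):.2f}")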

  11. A Concept for Optimizing Behavioural Effectiveness & Efficiency

    NASA Astrophysics Data System (ADS)

    Barca, Jan Carlo; Rumantir, Grace; Li, Raymond

    Both humans and machines exhibit strengths and weaknesses that can be enhanced by merging the two entities. This research aims to provide a broader understanding of how closer interactions between these two entities can facilitate more optimal goal-directed performance through the use of artificial extensions of the human body. Such extensions may assist us in adapting to and manipulating our environments more effectively than any system known today. To demonstrate this concept, we have developed a simulation in which a semi-interactive virtual spider can be navigated through an environment consisting of several obstacles and a virtual predator capable of killing the spider. The virtual spider can be navigated through three different control systems that can be used to assist in optimising overall goal-directed performance. The first two control systems use an onscreen button interface and a touch sensor, respectively, to facilitate human navigation of the spider. The third is an autonomous navigation system based on machine intelligence embedded in the spider, which enables the spider to navigate and react to changes in its local environment. The results of this study indicate that machines should be allowed to override human control in order to maximise the benefits of collaboration between man and machine. This research further indicates that the development of strong machine intelligence, sensor systems that engage all human senses, extra-sensory input systems, physical remote manipulators, multiple intelligent extensions of the human body, as well as a tighter symbiosis between man and machine, can support an upgrade of the human form.
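
    The override finding can be pictured with a minimal arbitration rule; the Python sketch below is only a guess at such logic (the command names, threat test, and danger radius are invented, not taken from the study).

        def arbitrate(human_cmd, machine_cmd, predator_distance, danger_radius=2.0):
            """Prefer the human command, but let the embedded machine
            intelligence override when the predator is dangerously close."""
            if predator_distance < danger_radius:
                return machine_cmd               # machine overrides to evade
            return human_cmd

        print(arbitrate("forward", "evade_left", predator_distance=1.2))  # evade_left
        print(arbitrate("forward", "evade_left", predator_distance=5.0))  # forward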

  12. The influence of negative training set size on machine learning-based virtual screening.

    PubMed

    Kurczab, Rafał; Smusz, Sabina; Bojarski, Andrzej J

    2014-01-01

    The paper presents a thorough analysis of the influence of the number of negative training examples on the performance of machine learning methods. The impact of this rather neglected aspect of applying machine learning methods was examined for sets containing a fixed number of positive examples and a varying number of negative examples randomly selected from the ZINC database. An increase in the ratio of positive to negative training instances was found to greatly influence most of the investigated evaluation parameters of ML methods in simulated virtual screening experiments. In a majority of cases, substantial increases in precision and MCC were observed in conjunction with some decreases in hit recall. The analysis of the dynamics of those variations allowed us to recommend an optimal composition of training data. The study was performed on several protein targets, 5 machine learning algorithms (SMO, Naïve Bayes, IBk, J48 and Random Forest) and 2 types of molecular fingerprints (MACCS and CDK FP). The most effective classification was provided by the combination of CDK FP with the SMO or Random Forest algorithms. The Naïve Bayes models appeared to be hardly sensitive to changes in the number of negative instances in the training set. In conclusion, the ratio of positive to negative training instances should be taken into account during the preparation of machine learning experiments, as it might significantly influence the performance of a particular classifier. What is more, the optimization of negative training set size can be applied as a boosting-like approach in machine learning-based virtual screening.
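
    The experimental design can be sketched in a few lines: fix the positives, grow the randomly drawn negative set, and track precision, recall, and MCC. The Python sketch below uses synthetic fingerprint-like bit vectors and scikit-learn's Random Forest in place of the Weka classifiers named in the paper; it is illustrative only.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import matthews_corrcoef, precision_score, recall_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        pos = (rng.random((100, 166)) < 0.30).astype(int)     # 166-bit "MACCS-like" keys
        neg_pool = (rng.random((5000, 166)) < 0.25).astype(int)

        for n_neg in (100, 500, 2000):                        # growing negative set
            X = np.vstack([pos, neg_pool[:n_neg]])
            y = np.array([1] * len(pos) + [0] * n_neg)
            Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3,
                                                  stratify=y, random_state=0)
            pred = RandomForestClassifier(n_estimators=100,
                                          random_state=0).fit(Xtr, ytr).predict(Xte)
            print(n_neg, precision_score(yte, pred, zero_division=0),
                  recall_score(yte, pred), matthews_corrcoef(yte, pred))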

  13. The influence of negative training set size on machine learning-based virtual screening

    PubMed Central

    2014-01-01

    Background The paper presents a thorough analysis of the influence of the number of negative training examples on the performance of machine learning methods. Results The impact of this rather neglected aspect of applying machine learning methods was examined for sets containing a fixed number of positive examples and a varying number of negative examples randomly selected from the ZINC database. An increase in the ratio of positive to negative training instances was found to greatly influence most of the investigated evaluation parameters of ML methods in simulated virtual screening experiments. In a majority of cases, substantial increases in precision and MCC were observed in conjunction with some decreases in hit recall. The analysis of the dynamics of those variations allowed us to recommend an optimal composition of training data. The study was performed on several protein targets, 5 machine learning algorithms (SMO, Naïve Bayes, IBk, J48 and Random Forest) and 2 types of molecular fingerprints (MACCS and CDK FP). The most effective classification was provided by the combination of CDK FP with the SMO or Random Forest algorithms. The Naïve Bayes models appeared to be hardly sensitive to changes in the number of negative instances in the training set. Conclusions The ratio of positive to negative training instances should be taken into account during the preparation of machine learning experiments, as it might significantly influence the performance of a particular classifier. What is more, the optimization of negative training set size can be applied as a boosting-like approach in machine learning-based virtual screening. PMID:24976867

  14. Enterprise Cloud Architecture for Chinese Ministry of Railway

    NASA Astrophysics Data System (ADS)

    Shan, Xumei; Liu, Hefeng

    Enterprises like the PRC Ministry of Railways (MOR) face challenges ranging from a highly distributed computing environment to low utilization of legacy systems, and Cloud Computing is increasingly regarded as a workable solution to address this. This article describes a full-scale cloud solution with Intel Tashi as the virtual machine infrastructure layer, Hadoop HDFS as the computing platform, and a self-developed SaaS interface, gluing the virtual machines and HDFS together with the Xen hypervisor. The article closes by showing how on-demand deployment and execution of computing tasks were achieved for MOR's real working scenarios.

  15. Virtual Reality Enhanced Instructional Learning

    ERIC Educational Resources Information Center

    Nachimuthu, K.; Vijayakumari, G.

    2009-01-01

    Virtual Reality (VR) is the creation of a virtual 3D world in which one can feel and sense the world as if it were real. It allows engineers to design machines and educationists to design AV [audiovisual] equipment in real time, in a 3-dimensional hologram, as if the actual material were being made and worked upon. VR allows a least-cost (energy…

  16. 40 CFR 86.1228-85 - Transmissions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Test Procedures for New Gasoline-Fueled, Natural Gas-Fueled, Liquefied Petroleum Gas-Fueled and... manufacturer's recommendation to the ultimate purchaser. (b) Except for the first idle mode, idle modes less...; manual transmissions shall be in gear with the clutch disengaged, except first idle. The first idle mode...

  17. Virtual manufacturing work cell for engineering

    NASA Astrophysics Data System (ADS)

    Watanabe, Hideo; Ohashi, Kazushi; Takahashi, Nobuyuki; Kato, Kiyotaka; Fujita, Satoru

    1997-12-01

    The life cycles of products have been getting shorter. To keep up with this rapid turnover, manufacturing systems must be changed frequently as well. Engineering a manufacturing system involves several tasks, such as process planning, layout design, programming, and final testing on the actual machines; this development takes a long time and is expensive. To aid this engineering process, we have developed the virtual manufacturing workcell (VMW). This paper describes the concept of the VMW and a design method for computer-aided manufacturing engineering using the VMW (CAME-VMW) in relation to the engineering tasks above. The VMW holds all design data and reproduces the behavior of equipment and devices using a simulator with both logical and physical functionality: the former simulates sequence control, the latter motion control and shape movement in 3D space. The simulator can execute the same control software written for the actual machines, so behavior can be verified precisely before the workcell is constructed. The VMW creates an engineering workspace for several engineers and offers debugging tools such as virtual equipment and virtual controllers. We applied the VMW to the development of a transfer workcell for a vaporization machine in an actual manufacturing system producing plasma display panels (PDPs) and confirmed its effectiveness.

  18. CloVR: a virtual machine for automated and portable sequence analysis from the desktop using cloud computing.

    PubMed

    Angiuoli, Samuel V; Matalka, Malcolm; Gussman, Aaron; Galens, Kevin; Vangala, Mahesh; Riley, David R; Arze, Cesar; White, James R; White, Owen; Fricke, W Florian

    2011-08-30

    Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines to distribute pre-packaged, pre-configured software. We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports the use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. The CloVR VM and associated architecture lower the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high-throughput data processing.

  19. Identification of Program Signatures from Cloud Computing System Telemetry Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nichols, Nicole M.; Greaves, Mark T.; Smith, William P.

    Malicious cloud computing activity can take many forms, including running unauthorized programs in a virtual environment. Detecting these malicious activities while preserving the privacy of the user is an important research challenge. Prior work has shown the potential viability of using cloud service billing metrics as a mechanism for proxy identification of malicious programs. Previously, this novel detection method was evaluated in a synthetic and isolated computational environment. In this paper we demonstrate the ability of billing metrics to identify programs in an active cloud computing environment, including multiple virtual machines running on the same hypervisor. The open-source cloud computing platform OpenStack is used for private cloud management at Pacific Northwest National Laboratory. OpenStack provides a billing tool (Ceilometer) to collect system telemetry measurements. We identify four different programs running on four virtual machines under the same cloud user account. Programs were identified with up to 95% accuracy. This accuracy is dependent on the distinctiveness of telemetry measurements for the specific programs we tested. Future work will examine the scalability of this approach for a larger selection of programs to better understand the uniqueness needed to identify a program. Additionally, future work should address the separation of signatures when multiple programs are running on the same virtual machine.

  20. Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system

    NASA Astrophysics Data System (ADS)

    Meier, Konrad; Fleig, Georg; Hauth, Thomas; Janczyk, Michael; Quast, Günter; von Suchodoletz, Dirk; Wiebelt, Bernd

    2016-10-01

    Experiments in high-energy physics (HEP) rely on elaborate hardware, software and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and the Super-KEKB accelerators, respectively. These detectors share the requirement that enormous amounts of measurement data must be processed and analyzed, and that a comparable amount of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves to be challenging for HEP groups, due to their specialized software setup, which includes a custom OS (often Scientific Linux), libraries and applications. To overcome this hurdle, the EKP and the data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, in this hybrid setup no static partitioning of the cluster into a physical and a virtualized segment is required. As a unique feature, the placement of the virtual machine on the cluster nodes is scheduled by Moab and the job lifetime is coupled to the lifetime of the virtual machine. This allows for a seamless integration with the jobs sent by other user groups and honors the fairshare policies of the cluster. The developed thin integration layer between OpenStack and Moab can be adapted to other batch servers and virtualization systems, making the concept applicable for other cluster operators as well. This contribution reports on the concept and implementation of an OpenStack-virtualized cluster used for HEP workflows. While the full cluster will be installed in spring 2016, a test-bed setup with 800 cores has been used to study the overall system performance, and dedicated HEP jobs were run in a virtualized environment over many weeks. Furthermore, the dynamic integration of the virtualized worker nodes, depending on the workload at the institute's computing system, will be described.
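
    The lifetime coupling can be pictured as a batch job whose payload boots a VM and tears it down when the job ends. The Python sketch below is only a schematic guess using generic OpenStack CLI calls; the image, flavor, and environment-variable names are invented, and it is not the integration layer described above.

        import atexit, subprocess

        def run(cmd):
            return subprocess.run(cmd, shell=True, check=True,
                                  capture_output=True, text=True).stdout.strip()

        # Boot the worker VM as the job payload; --wait blocks until it is active.
        vm_id = run("openstack server create --image hep-worker --flavor m1.large "
                    "--wait -f value -c id worker-${MOAB_JOBID}")

        # Deleting the VM at job exit couples the VM lifetime to the job lifetime.
        atexit.register(lambda: run(f"openstack server delete {vm_id}"))

        # ... here the job would wait on the HEP payload inside the VM (e.g. over
        # ssh); when the batch job ends, the VM is removed along with it.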

  1. The Virtual Solar Observatory: What Are We Up To Now?

    NASA Technical Reports Server (NTRS)

    Gurman, J. B.; Hill, F.; Suarez-Sola, F.; Bogart, R.; Amezcua, A.; Martens, P.; Hourcle, J.; Hughitt, K.; Davey, A.

    2012-01-01

    In the nearly ten years of a functional Virtual Solar Observatory (VSO), http://virtualsolar.org/, we have made it possible to query and access sixty-seven distinct solar data products and several event lists from nine spacecraft and fifteen observatories or observing networks. We have used existing VSO technology, and developed new software, for a distributed network of sites caching and serving SDO HMI and/or AIA data. We have also developed an application programming interface (API) that has enabled VSO search and data access capabilities in IDL, Python, and Java. We still have quite a bit of work to do, including completing the implementation of access to SDO EVE data and to some nineteen other data sets from space- and ground-based observatories. In addition, we have been developing a new graphical user interface that will enable the saving of user interface and search preferences. We solicit community input in prioritizing our task list, and in adding to it.
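
    As an illustration of the Python side of such API access, the sketch below queries the VSO through SunPy's Fido client (assuming a current sunpy installation; the instrument, wavelength, and time range are arbitrary examples).

        import astropy.units as u
        from sunpy.net import Fido, attrs as a

        # Search the VSO for ten minutes of SDO/AIA 171 Å records.
        result = Fido.search(a.Time("2012-03-04 00:00", "2012-03-04 00:10"),
                             a.Instrument("aia"), a.Wavelength(171 * u.angstrom))
        print(result)                        # summary of the matching records

        files = Fido.fetch(result[0, :2])    # download the first two records
        print(files)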

  2. Analysis Of Technology Options To Reduce The Fuel Consumption Of Idling Trucks

    DOT National Transportation Integrated Search

    2000-06-01

    Long-haul trucks idling overnight consume more than 838 million gallons (20 million barrels) of fuel annually. Idling also emits pollutants. Truck drivers idle their engines primarily to (1) heat or cool the cab and/or sleeper, (2) keep the fuel warm...

  3. The Virtual Xenbase: transitioning an online bioinformatics resource to a private cloud

    PubMed Central

    Karimi, Kamran; Vize, Peter D.

    2014-01-01

    As a model organism database, Xenbase has been providing informatics and genomic data on Xenopus (Silurana) tropicalis and Xenopus laevis frogs for more than a decade. The Xenbase database contains curated, as well as community-contributed and automatically harvested literature, gene and genomic data. A GBrowse genome browser, a BLAST+ server and stock center support are available on the site. When this resource was first built, all software services and components in Xenbase ran on a single physical server, with inherent reliability, scalability and inter-dependence issues. Recent advances in networking and virtualization techniques allowed us to move Xenbase to a virtual environment, and more specifically to a private cloud. To do so we decoupled the different software services and components, such that each would run on a different virtual machine. In the process, we also upgraded many of the components. The resulting system is faster and more reliable. System maintenance is easier, as individual virtual machines can now be updated, backed up and changed independently. We are also experiencing more effective resource allocation and utilization. Database URL: www.xenbase.org PMID:25380782

  4. Naval Applications of Virtual Reality,

    DTIC Science & Technology

    1993-01-01

    Expert Virtual Reality Special Report, pp. 67-72. Subject terms: man-machine interface; virtual reality; decision support; collective and individual performance. By Mark Gembicki and David Rousseau.

  5. Alternative Fuels Data Center: Dallas Police Department Reduces Vehicle Idling

    Science.gov Websites


  6. 40 CFR Appendix B to Subpart E of... - Tables

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Variable-Speed Engines Test segment Mode number Engine speed 1 Observed torque 2 (percent of max. observed...'s specifications. Idle speed is specified by the manufacturer. 2 Torque (non-idle): Throttle fully open for 100 percent points. Other non-idle points: ± 2 percent of engine maximum value. Torque (idle...

  7. BioImg.org: A Catalog of Virtual Machine Images for the Life Sciences

    PubMed Central

    Dahlö, Martin; Haziza, Frédéric; Kallio, Aleksi; Korpelainen, Eija; Bongcam-Rudloff, Erik; Spjuth, Ola

    2015-01-01

    Virtualization is becoming increasingly important in bioscience, enabling assembly and provisioning of complete computer setups, including operating system, data, software, and services packaged as virtual machine images (VMIs). We present an open catalog of VMIs for the life sciences, where scientists can share information about images and optionally upload them to a server equipped with a large file system and fast Internet connection. Other scientists can then search for and download images that can be run on the local computer or in a cloud computing environment, providing easy access to bioinformatics environments. We also describe applications where VMIs aid life science research, including distributing tools and data, supporting reproducible analysis, and facilitating education. BioImg.org is freely available at: https://bioimg.org. PMID:26401099

  8. BioImg.org: A Catalog of Virtual Machine Images for the Life Sciences.

    PubMed

    Dahlö, Martin; Haziza, Frédéric; Kallio, Aleksi; Korpelainen, Eija; Bongcam-Rudloff, Erik; Spjuth, Ola

    2015-01-01

    Virtualization is becoming increasingly important in bioscience, enabling assembly and provisioning of complete computer setups, including operating system, data, software, and services packaged as virtual machine images (VMIs). We present an open catalog of VMIs for the life sciences, where scientists can share information about images and optionally upload them to a server equipped with a large file system and fast Internet connection. Other scientists can then search for and download images that can be run on the local computer or in a cloud computing environment, providing easy access to bioinformatics environments. We also describe applications where VMIs aid life science research, including distributing tools and data, supporting reproducible analysis, and facilitating education. BioImg.org is freely available at: https://bioimg.org.

  9. Access, Equity, and Opportunity. Women in Machining: A Model Program.

    ERIC Educational Resources Information Center

    Warner, Heather

    The Women in Machining (WIM) program is a Machine Action Project (MAP) initiative that was developed in response to a local skilled metalworking labor shortage, despite a virtual absence of women and people of color from area shops. The project identified post-war stereotypes and other barriers that must be addressed if women are to have an equal…

  10. Automated Spatio-Temporal Analysis of Remotely Sensed Imagery for Water Resources Management

    NASA Astrophysics Data System (ADS)

    Bahr, Thomas

    2016-04-01

    Since 2012, the state of California has faced an extreme drought, which impacts water supply in many ways. Advanced remote sensing is an important technology to better assess water resources, monitor drought conditions and water supplies, plan for drought response and mitigation, and measure drought impacts. In the present case study, the latest time-series analysis capabilities are used to examine surface water in reservoirs located along the western flank of the Sierra Nevada region of California. The case study was performed using the COTS software package ENVI 5.3. Integration of custom processes and automation is supported by IDL (Interactive Data Language); ENVI analytics thus runs via the object-oriented, IDL-based ENVITask API. A time series of Landsat images (L-5 TM, L-7 ETM+, L-8 OLI) of the AOI was obtained for 1999 to 2015 (October acquisitions). Downloaded from the USGS EarthExplorer web site, the images were already georeferenced to a UTM Zone 10N (WGS-84) coordinate system. ENVITasks were used to pre-process the Landsat images as follows:
    • Triangulation-based gap-filling for the SLC-off Landsat-7 ETM+ images.
    • Spatial subsetting to the same geographic extent.
    • Radiometric correction to top-of-atmosphere (TOA) reflectance.
    • Atmospheric correction using QUAC®, which determines atmospheric correction parameters directly from the observed pixel spectra in a scene, without ancillary information.
    Spatio-temporal analysis was executed with the following tasks:
    • Creation of Modified Normalized Difference Water Index images (MNDWI, Xu 2006) to enhance open water features while suppressing noise from built-up land, vegetation, and soil.
    • Threshold-based classification of the water index images to extract the water features.
    • Classification aggregation as a post-classification cleanup process.
    • Export of the respective water classes to vector layers for further evaluation in a GIS.
    • Animation of the classification series and export to a common video format.
    • Plotting of the time series of water surface area in square kilometers.
    The automated spatio-temporal analysis introduced here can be embedded in virtually any existing geospatial workflow for operational applications. Three integration options were implemented in this case study:
    • Integration within any ArcGIS environment, whether deployed on the desktop, in the cloud, or online. Execution uses a customized ArcGIS script tool. A Python script file retrieves the parameters from the user interface and runs the precompiled IDL code; that IDL code interfaces between the Python script and the relevant ENVITasks.
    • Publishing the spatio-temporal analysis tasks as services via the ENVI Services Engine (ESE). ESE is a cloud-based image analysis solution to publish and deploy advanced ENVI image and data analytics to existing enterprise infrastructures. For this purpose the entire IDL code can be encapsulated in a single ENVITask.
    • Integration in an existing geospatial workflow using the Python-to-IDL Bridge. This mechanism allows calling IDL code within Python on a user-defined platform.
    The results of this case study verify the drastic decrease of the amount of surface water in the AOI, indicative of the major drought that is pervasive throughout California. Accordingly, the time-series analysis was correlated successfully with the daily reservoir elevations of the Don Pedro reservoir (station DNP, operated by CDEC).
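
    The water-index step reduces to a band-ratio formula, MNDWI = (Green − SWIR) / (Green + SWIR), thresholded to a water mask. A minimal NumPy stand-in for the corresponding ENVITask calls, with made-up pixel values:

        import numpy as np

        def mndwi(green, swir):
            green, swir = green.astype(float), swir.astype(float)
            return (green - swir) / (green + swir + 1e-12)   # guard zero division

        green = np.array([[0.12, 0.030], [0.090, 0.025]])  # green-band reflectance
        swir = np.array([[0.04, 0.090], [0.035, 0.080]])   # SWIR-band reflectance
        water_mask = mndwi(green, swir) > 0.0              # common MNDWI threshold
        print(water_mask.sum(), "water pixels")            # -> 2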

  11. A genetic algorithm for a bi-objective mathematical model for dynamic virtual cell formation problem

    NASA Astrophysics Data System (ADS)

    Moradgholi, Mostafa; Paydar, Mohammad Mahdi; Mahdavi, Iraj; Jouzdani, Javid

    2016-09-01

    Nowadays, with the increasing pressure of the competitive business environment and the demand for diverse products, manufacturers are forced to seek solutions that reduce production costs and raise product quality. Cellular manufacturing systems (CMSs), as a means to this end, have been a point of attraction for both researchers and practitioners. Limitations of the cell formation problem (CFP), one of the important topics in CMS, have led to the introduction of the virtual CMS (VCMS). This research addresses a bi-objective dynamic virtual cell formation problem (DVCFP) with the objective of finding the optimal formation of cells, considering material handling costs, fixed machine installation costs, and variable production costs of machines and workforce. Furthermore, we consider different skills on different machines in workforce assignment over a multi-period planning horizon. The bi-objective model is transformed into a single-objective fuzzy goal programming model and, to show its performance, numerical examples are solved using the LINGO software. In addition, a genetic algorithm (GA) is customized to tackle large-scale instances of the problem, demonstrating the performance of the solution method.

  12. Integration of virtualized worker nodes in standard batch systems

    NASA Astrophysics Data System (ADS)

    Büge, Volker; Hessling, Hermann; Kemp, Yves; Kunze, Marcel; Oberst, Oliver; Quast, Günter; Scheurer, Armin; Synge, Owen

    2010-04-01

    Current experiments in HEP use only a limited number of operating system flavours. Their software might only be validated on one single OS platform. Resource providers might prefer other operating systems for the installation of the batch infrastructure. This is especially the case if a cluster is shared with other communities, or with communities that have stricter security requirements. One solution would be to statically divide the cluster into separated sub-clusters. In such a scenario, no opportunistic distribution of the load can be achieved, resulting in poor overall utilization efficiency. Another approach is to make the batch system aware of virtualization, and to provide each community with its favoured operating system in a virtual machine. Here, the scheduler has full flexibility, resulting in a better overall efficiency of the resources. In our contribution, we present a lightweight concept for the integration of virtual worker nodes into standard batch systems. The virtual machines are started on the worker nodes just before jobs are executed there. No meta-scheduling is introduced. We demonstrate two prototype implementations, one based on the Sun Grid Engine (SGE), the other using Maui/Torque as a batch system. Both solutions support local as well as Grid job submission. The hypervisors currently used are Xen and KVM; a port to another system is easily envisageable. To better handle different virtual machines on the physical hosts, the management solution VmImageManager was developed. We present first experience from running the two prototype implementations. In a last part, we show the potential future use of this lightweight concept when integrated into high-level (i.e. Grid) workflows.

  13. Grid heterogeneity in in-silico experiments: an exploration of drug screening using DOCK on cloud environments.

    PubMed

    Yim, Wen-Wai; Chien, Shu; Kusumoto, Yasuyuki; Date, Susumu; Haga, Jason

    2010-01-01

    Large-scale in-silico screening is a necessary part of drug discovery and Grid computing is one answer to this demand. A disadvantage of using Grid computing is the heterogeneous computational environments characteristic of a Grid. In our study, we have found that for the molecular docking simulation program DOCK, different clusters within a Grid organization can yield inconsistent results. Because DOCK in-silico virtual screening (VS) is currently used to help select chemical compounds to test with in-vitro experiments, such differences have little effect on the validity of using virtual screening before subsequent steps in the drug discovery process. However, it is difficult to predict whether the accumulation of these discrepancies over sequentially repeated VS experiments will significantly alter the results if VS is used as the primary means for identifying potential drugs. Moreover, such discrepancies may be unacceptable for other applications requiring more stringent thresholds. This highlights the need for establishing a more complete solution to provide the best scientific accuracy when executing an application across Grids. One possible solution to platform heterogeneity in DOCK performance explored in our study involved the use of virtual machines as a layer of abstraction. This study investigated the feasibility and practicality of using virtual machine and recent cloud computing technologies in a biological research application. We examined the differences and variations of DOCK VS variables, across a Grid environment composed of different clusters, with and without virtualization. The uniform computer environment provided by virtual machines eliminated inconsistent DOCK VS results caused by heterogeneous clusters, however, the execution time for the DOCK VS increased. In our particular experiments, overhead costs were found to be an average of 41% and 2% in execution time for two different clusters, while the actual magnitudes of the execution time costs were minimal. Despite the increase in overhead, virtual clusters are an ideal solution for Grid heterogeneity. With greater development of virtual cluster technology in Grid environments, the problem of platform heterogeneity may be eliminated through virtualization, allowing greater usage of VS, and will benefit all Grid applications in general.

  14. Alternative Fuels Data Center: Idle Reduction Laws and Incentives

    Science.gov Websites


  15. 40 CFR 85.2220 - Preconditioned two speed idle test-EPA 91.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Preconditioned two speed idle test-EPA... Warranty Short Tests § 85.2220 Preconditioned two speed idle test—EPA 91. (a) General requirements—(1...-speed mode followed immediately by a first-chance idle mode. (ii) The second-chance test as described...

  16. 40 CFR 85.2220 - Preconditioned two speed idle test-EPA 91.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 18 2011-07-01 2011-07-01 false Preconditioned two speed idle test-EPA... Warranty Short Tests § 85.2220 Preconditioned two speed idle test—EPA 91. (a) General requirements—(1...-speed mode followed immediately by a first-chance idle mode. (ii) The second-chance test as described...

  17. An Argument for Partial Admissibility of Polygraph Results in Trials by Courts-Martial

    DTIC Science & Technology

    1990-04-01

    Fundamentals of the polygraph technique: the polygraph machine (the cardiosphygmograph, the pneumograph, the galvanometer)… Anyone observing a polygraph machine for the first time could easily conclude it is a survivor of the Spanish Inquisition. The lengths of wire and coils get the immediate attention of the subject. However, the various polygraph machines in use today cause virtually no discomfort.

  18. Remote Data Exploration with the Interactive Data Language (IDL)

    NASA Technical Reports Server (NTRS)

    Galloy, Michael

    2013-01-01

    A difficulty for many NASA researchers is that the data to analyze are often located remotely from the scientist and are too large to transfer for local analysis. Researchers have developed the Data Access Protocol (DAP) for accessing remote data. Presently one can use DAP from within IDL, but the existing IDL-DAP interface is both limited and cumbersome. A more powerful and user-friendly interface to DAP for IDL has been developed. Users are able to browse remote data sets graphically, select partial data to retrieve, import that data and make customized plots, and have an interactive IDL command line session simultaneous with the remote visualization. All of these IDL-DAP tools are easily and seamlessly usable by any IDL user. IDL and DAP are both widely used in science, but were not easily used together: the earlier IDL DAP bindings were incomplete and had numerous bugs that prevented their serious use. For example, the existing bindings did not read DAP Grid data, which is the organization of nearly all NASA datasets currently served via DAP. This project uniquely provides a fully featured, user-friendly interface to DAP from IDL, both from the command line and from a GUI application. The DAP Explorer GUI application makes browsing a dataset more user-friendly, while also providing the capability to run user-defined functions on specified data. Methods for running remote functions on the DAP server were investigated, and a technique for accomplishing this task was decided upon.
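
    The access pattern itself is language-independent; as a point of comparison only (not the IDL interface described above), the Python sketch below reads a remote DAP Grid variable with the pydap client, against pydap's public test server.

        from pydap.client import open_url

        # Open a remote dataset via DAP; only metadata is transferred here.
        ds = open_url("http://test.opendap.org/dap/data/nc/coads_climatology.nc")
        print(list(ds.keys()))            # variables in the dataset

        sst = ds["SST"]                   # a DAP Grid: array plus coordinate maps
        print(sst.shape)                  # e.g. (12, 90, 180)
        subset = sst[0, 40:42, 80:82]     # server-side subsetting: only this
        print(subset.array[:])            #   slice is actually downloaded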

  19. Teaching Cybersecurity Using the Cloud

    ERIC Educational Resources Information Center

    Salah, Khaled; Hammoud, Mohammad; Zeadally, Sherali

    2015-01-01

    Cloud computing platforms can be highly attractive to conduct course assignments and empower students with valuable and indispensable hands-on experience. In particular, the cloud can offer teaching staff and students (whether local or remote) on-demand, elastic, dedicated, isolated, (virtually) unlimited, and easily configurable virtual machines.…

  20. Ant-Based Cyber Defense (also known as

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glenn Fink, PNNL

    2015-09-29

    ABCD is a four-level hierarchy with human supervisors at the top, a top-level agent called a Sergeant controlling each enclave, Sentinel agents located at each monitored host, and mobile Sensor agents that swarm through the enclaves to detect cyber malice and misconfigurations. The code comprises four parts: (1) the core agent framework, (2) the user interface and visualization, (3) test-range software to create a network of virtual machines including a simulated Internet and user and host activity emulation scripts, and (4) a test harness to allow the safe running of adversarial code within the framework of monitored virtual machines.

  1. Prototyping Faithful Execution in a Java virtual machine.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tarman, Thomas David; Campbell, Philip LaRoche; Pierson, Lyndon George

    2003-09-01

    This report presents the implementation of a stateless scheme for Faithful Execution, the design for which is presented in a companion report, ''Principles of Faithful Execution in the Implementation of Trusted Objects'' (SAND 2003-2328). We added a simple cryptographic capability to an already simplified class loader and its associated Java Virtual Machine (JVM) to provide a byte-level implementation of Faithful Execution. The extended class loader and JVM we refer to collectively as the Sandia Faithfully Executing Java architecture (or JavaFE for short). This prototype is intended to enable exploration of more sophisticated techniques which we intend to implement in hardware.
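
    As a rough analogue of the idea in Python rather than Java (and emphatically not the JavaFE implementation), the sketch below has a loader authenticate each code unit with an HMAC before the interpreter executes it, so altered bytes are refused; the key handling is purely illustrative.

        import hashlib, hmac

        KEY = b"demo-key-not-for-production"

        def seal(code_bytes):
            """Tag produced at load time by the trusted tool chain."""
            return hmac.new(KEY, code_bytes, hashlib.sha256).digest()

        def faithful_exec(code_bytes, tag):
            """Execute only code whose tag verifies; reject tampered bytes."""
            if not hmac.compare_digest(seal(code_bytes), tag):
                raise RuntimeError("integrity check failed: refusing to execute")
            exec(compile(code_bytes, "<sealed>", "exec"))

        code = b"print('faithfully executed')"
        faithful_exec(code, seal(code))           # runs
        # faithful_exec(code + b" ", seal(code))  # would raise: bytes were altered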

  2. Build and Execute Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guan, Qiang

    At exascale, the challenge becomes to develop applications that run at scale and use exascale platforms reliably, efficiently, and flexibly. Workflows become much more complex because they must seamlessly integrate simulation and data analytics. They must include down-sampling, post-processing, feature extraction, and visualization. Power and data transfer limitations require these analysis tasks to be run in-situ or in-transit. We expect successful workflows will comprise multiple linked simulations along with tens of analysis routines. Users will have limited development time at scale and, therefore, must have rich tools to develop, debug, test, and deploy applications. At this scale, successful workflows will compose linked computations from an assortment of reliable, well-defined computation elements, ones that can come and go as required, based on the needs of the workflow over time. We propose a novel framework that utilizes both virtual machines (VMs) and software containers to create a workflow system that establishes a uniform build and execution environment (BEE) beyond the capabilities of current systems. In this environment, applications will run reliably and repeatably across heterogeneous hardware and software. Containers, both commercial (Docker and Rocket) and open-source (LXC and LXD), define a runtime that isolates all software dependencies from the machine operating system. Workflows may contain multiple containers that run different operating systems, different software, and even different versions of the same software. We will run containers in open-source virtual machines (KVM) and emulators (QEMU) so that workflows run on any machine entirely in user-space. On this platform of containers and virtual machines, we will deliver workflow software that provides services, including repeatable execution, provenance, checkpointing, and future proofing. We will capture provenance about how containers were launched and how they interact to annotate workflows for repeatable and partial re-execution. We will coordinate the physical snapshots of virtual machines with parallel programming constructs, such as barriers, to automate checkpoint and restart. We will also integrate with HPC-specific container runtimes to gain access to accelerators and other specialized hardware to preserve native performance. Containers will link development to continuous integration. When application developers check code in, it will automatically be tested on a suite of different software and hardware architectures.

  3. Alternative Fuels Data Center: Students Reduce Vehicle Idling in San Antonio, Texas

    Science.gov Websites


  4. 40 CFR 85.2215 - Two speed idle test-EPA 91.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Two speed idle test-EPA 91. 85.2215... Tests § 85.2215 Two speed idle test—EPA 91. (a) General requirements—(1) Exhaust gas sampling algorithm...) of this section, consists of an idle mode followed by a high-speed mode. (ii) The second-chance high...

  5. 40 CFR 85.2215 - Two speed idle test-EPA 91.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 18 2011-07-01 2011-07-01 false Two speed idle test-EPA 91. 85.2215... Tests § 85.2215 Two speed idle test—EPA 91. (a) General requirements—(1) Exhaust gas sampling algorithm...) of this section, consists of an idle mode followed by a high-speed mode. (ii) The second-chance high...

  6. Prediction based proactive thermal virtual machine scheduling in green clouds.

    PubMed

    Kinger, Supriya; Kumar, Rajesh; Sharma, Anju

    2014-01-01

    Cloud computing has rapidly emerged as a widely accepted computing paradigm, but research on Cloud computing is still at an early stage. Cloud computing provides many advanced features, but it still has some shortcomings, such as relatively high operating costs and environmental hazards like an increasing carbon footprint. These hazards can be reduced to some extent by efficient scheduling of Cloud resources. The working temperature of a machine can be taken as a criterion for Virtual Machine (VM) scheduling. This paper proposes a new proactive technique that considers the current and maximum threshold temperatures of Server Machines (SMs) before making scheduling decisions, with the help of a temperature predictor, so that the maximum temperature is never reached. Different workload scenarios have been taken into consideration. The results obtained show that the proposed system is better than existing VM scheduling systems, which do not consider the current temperature of nodes before making scheduling decisions. Thus, a reduction in the need for cooling systems in a Cloud environment has been obtained and validated.
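
    The proactive rule can be sketched in a few lines: predict each server's temperature with the candidate VM placed on it, and schedule only where the prediction stays below the threshold. The Python sketch below uses an invented linear predictor and made-up temperatures purely for illustration.

        def predict_temp(current_temp, vm_load, heat_per_load=0.8):
            """Stand-in temperature predictor: linear in the added VM load."""
            return current_temp + heat_per_load * vm_load

        def place_vm(servers, vm_load, t_max=85.0):
            """servers: list of (name, current_temp). Pick the coolest feasible
            host; return None (defer) if every prediction crosses t_max."""
            feasible = [(predict_temp(t, vm_load), name)
                        for name, t in servers
                        if predict_temp(t, vm_load) < t_max]
            return min(feasible)[1] if feasible else None

        print(place_vm([("sm1", 80.0), ("sm2", 70.0), ("sm3", 84.5)],
                       vm_load=10.0))      # -> sm2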

  7. Discrete particle swarm optimization to solve multi-objective limited-wait hybrid flow shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Santosa, B.; Siswanto, N.; Fiqihesa

    2018-04-01

    This paper proposes a discrete Particle Swarm Optimization (PSO) algorithm to solve the limited-wait hybrid flow shop scheduling problem with multiple objectives. Flow shop scheduling represents the condition in which several machines are arranged in series and each job must be processed on each machine in the same sequence. The objective functions are minimizing completion time (makespan), total tardiness time, and total machine idle time. Flow shop scheduling models keep evolving to represent real production systems more accurately. Since flow shop scheduling is an NP-hard problem, the most suitable solution methods are metaheuristics. One such metaheuristic is Particle Swarm Optimization (PSO), an algorithm based on the behavior of a swarm. Originally, PSO was intended to solve continuous optimization problems; since flow shop scheduling is a discrete optimization problem, we modify PSO to fit the problem using a probability transition matrix mechanism. To handle the multi-objective problem, we use Pareto optimality (MPSO). The results of MPSO are better than those of plain PSO, because the MPSO solution set has a higher probability of finding the optimal solution and lies closer to it.
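
    The evaluation inside such a metaheuristic is easy to picture: decode a particle into a job permutation, then compute makespan and machine idle time for the permutation flow shop. A minimal Python sketch with arbitrary processing times (this is only the objective evaluation, not the PSO itself):

        def evaluate(seq, p):
            """seq: job permutation; p[job][machine]: processing times.
            Returns (makespan, total machine idle time between jobs)."""
            m = len(p[0])
            done = [0.0] * m        # completion time of the last job per machine
            idle = [0.0] * m
            for j in seq:
                for k in range(m):
                    start = max(done[k], done[k - 1] if k else 0.0)
                    idle[k] += start - done[k]    # machine k waits before job j
                    done[k] = start + p[j][k]
            return done[-1], sum(idle)

        times = [[3, 2, 4], [2, 4, 1], [4, 1, 3]]
        print(evaluate([0, 1, 2], times))   # -> (13.0, 8.0)
        print(evaluate([2, 0, 1], times))   # compare candidate permutations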

  8. Virtual Environment Training: Auxiliary Machinery Room (AMR) Watchstation Trainer.

    ERIC Educational Resources Information Center

    Hriber, Dennis C.; And Others

    1993-01-01

    Describes a project implemented at Newport News Shipbuilding that used Virtual Environment Training to improve the performance of submarine crewmen. Highlights include development of the Auxiliary Machine Room (AMR) Watchstation Trainer; Digital Video Interactive (DVI); screen layout; test design and evaluation; user reactions; authoring language;…

  9. The virtual machine (VM) scaler: an infrastructure manager supporting environmental modeling on IaaS clouds

    USDA-ARS?s Scientific Manuscript database

    Infrastructure-as-a-service (IaaS) clouds provide a new medium for deployment of environmental modeling applications. Harnessing advancements in virtualization, IaaS clouds can provide dynamic scalable infrastructure to better support scientific modeling computational demands. Providing scientific m...

  10. Global Detection of Live Virtual Machine Migration Based on Cellular Neural Networks

    PubMed Central

    Xie, Kang; Yang, Yixian; Zhang, Ling; Jing, Maohua; Xin, Yang; Li, Zhongxian

    2014-01-01

    To meet the demands of monitoring the operation of large-scale, autoscaling, and heterogeneous virtual resources in existing cloud computing, a new live virtual machine (VM) migration detection algorithm based on cellular neural networks (CNNs) is presented. By analyzing the detection process, the parameter relationship of the CNN is mapped to an optimization problem, which an improved particle swarm optimization algorithm based on bubble sort is used to solve. Experimental results demonstrate that the proposed method can display the VM migration process intuitively. Compared with the best-fit heuristic algorithm, this approach reduces the processing time, and emerging evidence indicates that the new approach is amenable to parallelism and to analog very-large-scale integration (VLSI) implementation, allowing VM migration detection to be performed better. PMID:24959631

  11. Global detection of live virtual machine migration based on cellular neural networks.

    PubMed

    Xie, Kang; Yang, Yixian; Zhang, Ling; Jing, Maohua; Xin, Yang; Li, Zhongxian

    2014-01-01

    To meet the demands of monitoring the operation of large-scale, autoscaling, and heterogeneous virtual resources in existing cloud computing, a new live virtual machine (VM) migration detection algorithm based on cellular neural networks (CNNs) is presented. By analyzing the detection process, the parameter relationship of the CNN is mapped to an optimization problem, which an improved particle swarm optimization algorithm based on bubble sort is used to solve. Experimental results demonstrate that the proposed method can display the VM migration process intuitively. Compared with the best-fit heuristic algorithm, this approach reduces the processing time, and emerging evidence indicates that the new approach is amenable to parallelism and to analog very-large-scale integration (VLSI) implementation, allowing VM migration detection to be performed better.

  12. The downside of downtime: The prevalence and work pacing consequences of idle time at work.

    PubMed

    Brodsky, Andrew; Amabile, Teresa M

    2018-05-01

    Although both media commentary and academic research have focused much attention on the dilemma of employees being too busy, this paper presents evidence of the opposite phenomenon, in which employees do not have enough work to fill their time and are left with hours of meaningless idle time each week. We conducted six studies that examine the prevalence and work pacing consequences of involuntary idle time. In a nationally representative cross-occupational survey (Study 1), we found that idle time occurs frequently across all occupational categories; we estimate that employers in the United States pay roughly $100 billion in wages for time that employees spend idle. Studies 2a-3b experimentally demonstrate that there are also collateral consequences of idle time; when workers expect idle time following a task, their work pace declines and their task completion time increases. This decline reverses the well-documented deadline effect, producing a deadtime effect, whereby workers slow down as a task progresses. Our analyses of work pace patterns provide evidence for a time discounting mechanism: workers discount idle time when it is relatively distant, but act to avoid it increasingly as it becomes more proximate. Finally, Study 4 demonstrates that the expectation of being able to engage in leisure activities during posttask free time (e.g., surfing the Internet) can mitigate the collateral work pace losses due to idle time. Through examination and discussion of the effects of idle time at work, we broaden theory on work pacing. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  13. CloVR: A virtual machine for automated and portable sequence analysis from the desktop using cloud computing

    PubMed Central

    2011-01-01

    Background Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines to distribute pre-packaged, pre-configured software. Results We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports the use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. Conclusion The CloVR VM and associated architecture lower the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high-throughput data processing. PMID:21878105

  14. Design and fabrication of complete dentures using CAD/CAM technology

    PubMed Central

    Han, Weili; Li, Yanfeng; Zhang, Yue; lv, Yuan; Zhang, Ying; Hu, Ping; Liu, Huanyue; Ma, Zheng; Shen, Yi

    2017-01-01

    Abstract The aim of the study was to test the feasibility of designing and fabricating complete dentures using commercially available computer-aided design and computer-aided manufacturing (CAD/CAM) technology, namely the 3Shape Dental System 2013 trial version, WIELAND V2.0.049 CAM software, and a WIELAND ZENOTEC T1 milling machine. The full-denture modeling process available in the trial version of 3Shape Dental System 2013 was used to design virtual complete dentures on the basis of 3-dimensional (3D) digital edentulous models generated from the physical models. The virtual complete dentures were exported to the WIELAND V2.0.049 CAM software. A WIELAND ZENOTEC T1 milling machine controlled by the CAM software was used to fabricate physical dentitions and baseplates by milling acrylic resin composite plates. The physical dentitions were bonded to the corresponding baseplates to form the maxillary and mandibular complete dentures. Virtual complete dentures were successfully designed using the software through several steps, including generation of 3D digital edentulous models, model analysis, arrangement of artificial teeth, trimming of the relief area, and occlusal adjustment. Physical dentitions and baseplates were successfully fabricated from the designed virtual complete dentures using the milling machine controlled by the CAM software. Bonding the physical dentitions to the corresponding baseplates produced the final physical complete dentures. Our study demonstrates that complete dentures can be successfully designed and fabricated using CAD/CAM. PMID:28072686

  15. Idling Reduction for Emergency and Other Service Vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2015-05-07

    This is a fact sheet about reducing idling for emergency and service vehicles. Emergency vehicles, such as police cars, ambulances, and fire trucks, along with other service vehicles such as armored cars, are often exempt from laws that limit engine idling. However, these vehicles can save fuel and reduce emissions with technologies that allow them to perform vital services without idling.

  16. Idling speed control system of an internal combustion engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miyazaki, M.; Ishii, M.; Kako, H.

    1986-09-16

    This patent describes an idling speed control system of an internal combustion engine comprising: a valve device which controls the amount of intake air for the engine; an actuator which includes an electric motor for variably controlling the opening of the valve device; rotation speed detector means for detecting the rotation speed of the engine; idling condition detector means for detecting the idling condition of the engine; feedback control means, responsive to the detected output of the idling condition detector means, for generating feedback control pulses to intermittently drive the electric motor so that the detected rotation speed of the engine under the idling condition converges to a target idling rotation speed; and control means, responsive to the rotation speed detector means detecting an abnormally low rotation speed of the engine, for generating control pulses that do not overlap the feedback control pulses, to drive the electric motor in a predetermined direction.
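
    The claimed loop reduces to a pulsed feedback rule with a separate, non-overlapping path for abnormally low speed. The Python sketch below is only a schematic reading of the claim; the target speed, deadband, and pulse names are invented.

        def idle_controller(rpm, target=750, stall_rpm=400, deadband=20):
            """Choose the next actuator pulse from the measured engine speed."""
            if rpm < stall_rpm:
                return "recovery_pulse_open"     # separate anti-stall pulse path
            if rpm < target - deadband:
                return "feedback_pulse_open"     # open the intake valve slightly
            if rpm > target + deadband:
                return "feedback_pulse_close"    # close the intake valve slightly
            return "no_pulse"                    # within deadband: motor rests

        for rpm in (350, 600, 745, 900):
            print(rpm, "->", idle_controller(rpm))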

  17. 48 CFR 31.205-17 - Idle facilities and idle capacity costs.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... REGULATION GENERAL CONTRACTING REQUIREMENTS CONTRACT COST PRINCIPLES AND PROCEDURES Contracts With Commercial..., or sale, in accordance with sound business, economics, or security practices. Widespread idle...

  18. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    NASA Astrophysics Data System (ADS)

    Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.

    2012-12-01

    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.

  19. Idle waves in high-performance computing

    NASA Astrophysics Data System (ADS)

    Markidis, Stefano; Vencels, Juris; Peng, Ivy Bo; Akhmetova, Dana; Laure, Erwin; Henri, Pierre

    2015-01-01

    The vast majority of parallel scientific applications distributes computation among processes that are in a busy state when computing and in an idle state when waiting for information from other processes. We identify the propagation of idle waves through the processes of scientific applications with local information exchange between neighbouring processes. Idle waves are nondispersive and have a phase velocity inversely proportional to the average busy time. The physical mechanism enabling the propagation of idle waves is the local synchronization between two processes due to remote data dependency. This study provides a description of the large number of processes in parallel scientific applications as a continuous medium. This work is also a step towards an understanding of how localized idle periods can affect remote processes, leading to the degradation of global performance in parallel scientific applications.
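
    A toy model makes the mechanism concrete (an illustration of the idea, not the paper's code): if each rank exchanges data with its nearest neighbours every step, a rank cannot start step k before its neighbours finish step k-1, so a single injected delay travels outward at one rank per busy period, i.e. with a speed inversely proportional to the busy time.

        import numpy as np

        nproc, nsteps, busy, delay = 32, 40, 1.0, 10.0
        t = np.zeros((nsteps + 1, nproc))      # completion time of each step
        for k in range(1, nsteps + 1):
            for i in range(nproc):
                left, right = max(i - 1, 0), min(i + 1, nproc - 1)
                start = max(t[k-1, i], t[k-1, left], t[k-1, right])
                extra = delay if (k == 1 and i == nproc // 2) else 0.0
                t[k, i] = start + busy + extra
        idle = t[1:] - t[:-1] - busy           # waiting (or injected delay) per step
        # step at which the idle-wave front reaches each rank:
        print(np.argmax(idle > 0.5 * delay, axis=0))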

  20. The Performance of the NAS HSPs in 1st Half of 1994

    NASA Technical Reports Server (NTRS)

    Bergeron, Robert J.; Walter, Howard (Technical Monitor)

    1995-01-01

    During the first six months of 1994, the NAS (Numerical Aerodynamic Simulation) 16-CPU Y-MP C90 Von Neumann (VN) delivered an average throughput of 4.045 GFLOPS while the ACSF (Aeronautics Consolidated Supercomputer Facility) 8-CPU Y-MP C90 Eagle averaged 1.658 GFLOPS. The VN rate represents a machine efficiency of 26.3% whereas the Eagle rate corresponds to a machine efficiency of 21.6%. VN displayed a greater efficiency than Eagle primarily because the stronger workload demand for its CPU cycles allowed it to devote more time to user programs and less time to idle. An additional factor increasing VN efficiency was the ability of the UNICOS 8.0 Operating System to deliver a larger fraction of CPU time to user programs. Although measurements indicate increasing vector length for both workloads, insufficient vector lengths continue to hinder HSP (High Speed Processor) performance. To improve HSP performance, NAS should continue to encourage HSP users to modify their codes to increase program vector length.
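
    A back-of-envelope check of the quoted efficiencies (the peak rates are inferred here from efficiency = delivered / theoretical peak; they are not stated in the report):

        vn_delivered, vn_eff = 4.045, 0.263         # GFLOPS, fraction
        eagle_delivered, eagle_eff = 1.658, 0.216
        vn_peak = vn_delivered / vn_eff             # ~15.4 GFLOPS for 16 CPUs
        eagle_peak = eagle_delivered / eagle_eff    # ~7.7 GFLOPS for 8 CPUs
        # both imply ~0.96 GFLOPS per C90 CPU, so the two figures are consistent
        print(vn_peak, eagle_peak, vn_peak / 16, eagle_peak / 8)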

  1. The Virtual Xenbase: transitioning an online bioinformatics resource to a private cloud.

    PubMed

    Karimi, Kamran; Vize, Peter D

    2014-01-01

    As a model organism database, Xenbase has been providing informatics and genomic data on Xenopus (Silurana) tropicalis and Xenopus laevis frogs for more than a decade. The Xenbase database contains curated, as well as community-contributed and automatically harvested literature, gene and genomic data. A GBrowse genome browser, a BLAST+ server and stock center support are available on the site. When this resource was first built, all software services and components in Xenbase ran on a single physical server, with inherent reliability, scalability and inter-dependence issues. Recent advances in networking and virtualization techniques allowed us to move Xenbase to a virtual environment, and more specifically to a private cloud. To do so we decoupled the different software services and components, such that each would run on a different virtual machine. In the process, we also upgraded many of the components. The resulting system is faster and more reliable. System maintenance is easier, as individual virtual machines can now be updated, backed up and changed independently. We are also experiencing more effective resource allocation and utilization. Database URL: www.xenbase.org. © The Author(s) 2014. Published by Oxford University Press.

  2. Idle emissions from heavy-duty diesel vehicles: review and recent data.

    PubMed

    Khan, A B M S; Clark, Nigel N; Thompson, Gregory J; Wayne, W Scott; Gautam, Mridul; Lyons, Donald W; Hawelti, Daniel

    2006-10-01

    Heavy-duty diesel vehicle idling consumes fuel and reduces atmospheric quality, but idling cannot simply be proscribed, because cab heat or air-conditioning provides essential driver comfort. A comprehensive tailpipe emissions database to describe idling impacts is not yet available. This paper presents a substantial data set that incorporates results from the West Virginia University transient engine test cell, the E-55/59 Study and the Gasoline/Diesel PM Split Study. It covered 75 heavy-duty diesel engines and trucks, which were divided into two groups: vehicles with mechanical fuel injection (MFI) and vehicles with electronic fuel injection (EFI). Idle emissions of CO, hydrocarbon (HC), oxides of nitrogen (NOx), particulate matter (PM), and carbon dioxide (CO2) are reported. Idle CO2 emissions allowed the projection of fuel consumption during idling. Test-to-test variations were observed for repeat idle tests on the same vehicle because of measurement variation, accessory loads, and ambient conditions. Vehicles fitted with EFI, on average, emitted approximately 20 g/hr of CO, 6 g/hr of HC, 86 g/hr of NOx, 1 g/hr of PM, and 4636 g/hr of CO2 during idle. MFI-equipped vehicles emitted, on average, approximately 35 g/hr of CO, 23 g/hr of HC, 48 g/hr of NOx, 4 g/hr of PM, and 4484 g/hr of CO2 during idle. Vehicles with EFI emitted less idle CO, HC, and PM, which could be attributed to the efficient combustion and superior fuel atomization in EFI systems. Idle NOx, however, increased with EFI, which corresponds with the advancing of timing to improve idle combustion. Fuel injection management did not have any effect on CO2 and, hence, fuel consumption. Use of air conditioning without increasing engine speed increased idle CO2, NOx, PM, HC, and fuel consumption by 25% on average. When the engine speed was elevated from 600 to 1100 revolutions per minute, CO2 and NOx emissions and fuel consumption increased by >150%, whereas PM and HC emissions increased by approximately 100% and 70%, respectively. Six Detroit Diesel Corp. (DDC) Series 60 engines tested in an engine test cell were found to emit less CO, NOx, and PM and to consume fuel at only 75% of the level found in the chassis dynamometer data, because fan and compressor loads were absent in the engine test cell.
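
    To see how idle CO2 projects to fuel use (a rough illustration; the ~2.68 kg of CO2 per litre of diesel is a standard combustion approximation, not a value from the paper):

        CO2_PER_LITRE = 2680.0                 # g CO2 per litre of diesel burned
        for label, co2_g_per_hr in (("EFI", 4636.0), ("MFI", 4484.0)):
            litres = co2_g_per_hr / CO2_PER_LITRE
            print(f"{label}: ~{litres:.2f} L/hr at idle")   # ~1.7 L/hr either way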

  3. Research on axisymmetric aspheric surface numerical design and manufacturing technology

    NASA Astrophysics Data System (ADS)

    Wang, Zhen-zhong; Guo, Yin-biao; Lin, Zheng

    2006-02-01

    Even as traditional manual manufacturing has developed into today's numerical control (NC) machining, the key technologies for aspheric machining remain generating an exact machining path and machining aspheric lenses with high accuracy and efficiency. This paper presents a mathematical model relating a virtual cone to the aspheric surface equations, and discusses techniques for uniform grinding-wheel wear and error compensation in aspheric machining. Finally, based on the above, a software system for high-precision aspheric surface manufacturing is designed and implemented. This software system works out the grinding wheel path from input parameters and generates NC machining programs for aspheric surfaces.
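
    The abstract does not reproduce its surface equations; for orientation, an axisymmetric asphere is conventionally written with the even-asphere sag equation below (a standard textbook form, not taken from the paper), where c = 1/R is the vertex curvature, k the conic constant and the a_{2i} are aspheric coefficients; a virtual-cone model of this kind would map such a profile onto a cone parameterization.

        % standard even-asphere sag equation (conventional form)
        z(r) = \frac{c\,r^{2}}{1 + \sqrt{1 - (1+k)\,c^{2} r^{2}}}
               + \sum_{i=2}^{n} a_{2i}\, r^{2i}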

  4. Virtual Planning, Control, and Machining for a Modular-Based Automated Factory Operation in an Augmented Reality Environment

    PubMed Central

    Pai, Yun Suen; Yap, Hwa Jen; Md Dawal, Siti Zawiah; Ramesh, S.; Phoon, Sin Ye

    2016-01-01

    This study presents a modular-based implementation of augmented reality to provide an immersive experience in learning or teaching the planning phase, control system, and machining parameters of a fully automated work cell. The architecture of the system consists of three code modules that can operate independently or be combined to create a complete system that is able to guide engineers from the layout planning phase to the prototyping of the final product. The layout planning module determines the best possible arrangement in a layout for the placement of various machines, in this case a conveyor belt for transportation, a robot arm for pick-and-place operations, and a computer numerical control milling machine to generate the final prototype. The robotic arm module simulates the pick-and-place operation offline from the conveyor belt to a computer numerical control (CNC) machine utilising collision detection and inverse kinematics. Finally, the CNC module performs virtual machining based on the Uniform Space Decomposition method and axis-aligned bounding box collision detection. The conducted case study revealed that, given the situation, a semi-circle shaped arrangement is desirable, whereas the pick-and-place system and the final generated G-code produced maximum deviations of 3.83 mm and 5.8 mm, respectively. PMID:27271840
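
    The axis-aligned bounding box (AABB) test named in the abstract is a standard collision-detection primitive; a generic version (not the authors' code) is a one-liner:

        # Boxes collide iff their intervals overlap on every axis simultaneously.
        def aabb_overlap(min_a, max_a, min_b, max_b):
            """min_*/max_* are (x, y, z) corner tuples of two boxes."""
            return all(min_a[i] <= max_b[i] and min_b[i] <= max_a[i]
                       for i in range(3))

        # Example: a tool-tip box grazing a workpiece box.
        print(aabb_overlap((0, 0, 0), (1, 1, 1),
                           (0.9, 0.5, 0.5), (2, 2, 2)))   # True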

  5. Virtual Planning, Control, and Machining for a Modular-Based Automated Factory Operation in an Augmented Reality Environment.

    PubMed

    Pai, Yun Suen; Yap, Hwa Jen; Md Dawal, Siti Zawiah; Ramesh, S; Phoon, Sin Ye

    2016-06-07

    This study presents a modular-based implementation of augmented reality to provide an immersive experience in learning or teaching the planning phase, control system, and machining parameters of a fully automated work cell. The architecture of the system consists of three code modules that can operate independently or be combined to create a complete system that is able to guide engineers from the layout planning phase to the prototyping of the final product. The layout planning module determines the best possible arrangement in a layout for the placement of various machines, in this case a conveyor belt for transportation, a robot arm for pick-and-place operations, and a computer numerical control milling machine to generate the final prototype. The robotic arm module simulates the pick-and-place operation offline from the conveyor belt to a computer numerical control (CNC) machine utilising collision detection and inverse kinematics. Finally, the CNC module performs virtual machining based on the Uniform Space Decomposition method and axis-aligned bounding box collision detection. The conducted case study revealed that, given the situation, a semi-circle shaped arrangement is desirable, whereas the pick-and-place system and the final generated G-code produced maximum deviations of 3.83 mm and 5.8 mm, respectively.

  6. Virtual Planning, Control, and Machining for a Modular-Based Automated Factory Operation in an Augmented Reality Environment

    NASA Astrophysics Data System (ADS)

    Pai, Yun Suen; Yap, Hwa Jen; Md Dawal, Siti Zawiah; Ramesh, S.; Phoon, Sin Ye

    2016-06-01

    This study presents a modular-based implementation of augmented reality to provide an immersive experience in learning or teaching the planning phase, control system, and machining parameters of a fully automated work cell. The architecture of the system consists of three code modules that can operate independently or be combined to create a complete system that is able to guide engineers from the layout planning phase to the prototyping of the final product. The layout planning module determines the best possible arrangement in a layout for the placement of various machines, in this case a conveyor belt for transportation, a robot arm for pick-and-place operations, and a computer numerical control milling machine to generate the final prototype. The robotic arm module simulates the pick-and-place operation offline from the conveyor belt to a computer numerical control (CNC) machine utilising collision detection and inverse kinematics. Finally, the CNC module performs virtual machining based on the Uniform Space Decomposition method and axis-aligned bounding box collision detection. The conducted case study revealed that, given the situation, a semi-circle shaped arrangement is desirable, whereas the pick-and-place system and the final generated G-code produced maximum deviations of 3.83 mm and 5.8 mm, respectively.

  7. Fragment-based quantitative structure-activity relationship (FB-QSAR) for fragment-based drug design.

    PubMed

    Du, Qi-Shi; Huang, Ri-Bo; Wei, Yu-Tuo; Pang, Zong-Wen; Du, Li-Qin; Chou, Kuo-Chen

    2009-01-30

    In cooperation with fragment-based design, a new drug design method, the so-called "fragment-based quantitative structure-activity relationship" (FB-QSAR), is proposed. The essence of the new method is that the molecular framework in a family of drug candidates is divided into several fragments according to the substituents being investigated. The bioactivities of the molecules are correlated with the physicochemical properties of the molecular fragments through two sets of coefficients in the linear free energy equations: one set for the physicochemical properties and the other for the weight factors of the molecular fragments. Meanwhile, an iterative double least square (IDLS) technique is developed to solve for the two sets of coefficients from a training data set alternately and iteratively. The IDLS technique is a feedback procedure with machine learning ability. The standard two-dimensional quantitative structure-activity relationship (2D-QSAR) is a special case of FB-QSAR in which the whole molecule is treated as one entity. The FB-QSAR approach can remarkably enhance the predictive power and provide more structural insights into rational drug design. As an example, FB-QSAR is applied to build a predictive model of neuraminidase inhibitors for drug development against the H5N1 influenza virus. (c) 2008 Wiley Periodicals, Inc.
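
    The alternating idea behind IDLS can be sketched on synthetic data (the bilinear model, dimensions and convergence criterion below are illustrative assumptions, not the paper's exact formulation): hold the weight factors fixed and solve for the property coefficients by least squares, then hold the coefficients fixed and solve for the weights, and repeat.

        import numpy as np

        rng = np.random.default_rng(0)
        n_mol, n_frag, n_prop = 60, 4, 5
        X = rng.normal(size=(n_mol, n_frag, n_prop))     # fragment properties
        w_true = rng.normal(size=n_frag)                 # fragment weight factors
        c_true = rng.normal(size=n_prop)                 # property coefficients
        y = np.einsum("mjk,j,k->m", X, w_true, c_true)   # synthetic bioactivities

        w, c = np.ones(n_frag), np.ones(n_prop)
        for _ in range(50):                              # alternate the two solves
            A_c = np.einsum("mjk,j->mk", X, w)           # w fixed -> solve for c
            c = np.linalg.lstsq(A_c, y, rcond=None)[0]
            A_w = np.einsum("mjk,k->mj", X, c)           # c fixed -> solve for w
            w = np.linalg.lstsq(A_w, y, rcond=None)[0]
        resid = np.linalg.norm(np.einsum("mjk,j,k->m", X, w, c) - y)
        print(resid)   # residual shrinks toward zero as the alternation converges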

  8. Prediction of movement intention using connectivity within motor-related network: An electrocorticography study.

    PubMed

    Kang, Byeong Keun; Kim, June Sic; Ryun, Seokyun; Chung, Chun Kee

    2018-01-01

    Most brain-machine interface (BMI) studies have focused only on the active state, in which a BMI user performs specific movement tasks. Therefore, models developed for predicting movements were optimized only for the active state. The models may not be suitable in the idle state, during resting. This potential maladaptation could lead to a sudden accident or unintended movement resulting from prediction error. Prediction of movement intention is important to develop a more efficient and reasonable BMI system that could be selectively operated depending on the user's intention. Physical movement is performed through the serial change of brain states: idle, planning, execution, and recovery. The motor networks in the primary motor cortex and the dorsolateral prefrontal cortex are involved in these movement states. Neuronal communication differs between the states; therefore, connectivity may change depending on the state. In this study, we investigated the temporal dynamics of connectivity in the dorsolateral prefrontal cortex and primary motor cortex to predict movement intention. Movement intention was successfully predicted by connectivity dynamics, which may reflect changes in movement states. Furthermore, the dorsolateral prefrontal cortex is crucial in predicting movement intention, to which the primary motor cortex contributes. These results suggest that brain connectivity is an excellent approach for predicting movement intention.

  9. Virtual reality in surgical training.

    PubMed

    Lange, T; Indelicato, D J; Rosen, J M

    2000-01-01

    Virtual reality in surgery and, more specifically, in surgical training, faces a number of challenges in the future. These challenges are building realistic models of the human body, creating interface tools to view, hear, touch, feel, and manipulate these human body models, and integrating virtual reality systems into medical education and treatment. A final system would encompass simulators specifically for surgery, performance machines, telemedicine, and telesurgery. Each of these areas will need significant improvement for virtual reality to impact medicine successfully in the next century. This article gives an overview of, and the challenges faced by, current systems in the fast-changing field of virtual reality technology, and provides a set of specific milestones for a truly realistic virtual human body.

  10. Cloud flexibility using DIRAC interware

    NASA Astrophysics Data System (ADS)

    Fernandez Albor, Víctor; Seco Miguelez, Marcos; Fernandez Pena, Tomas; Mendez Muñoz, Victor; Saborido Silva, Juan Jose; Graciani Diaz, Ricardo

    2014-06-01

    Communities of different locations are running their computing jobs on dedicated infrastructures without the need to worry about software, hardware or even the site where their programs are going to be executed. Nevertheless, this usually implies that they are restricted to using certain types or versions of an Operating System, because either their software needs a definite version of a system library or a specific platform is required by the collaboration to which they belong. In this scenario, if a data center wants to serve software to incompatible communities, it has to split its physical resources among those communities. This splitting will inevitably lead to underuse of resources, because the data centers are bound to have periods when one or more of their subclusters are idle. It is in this situation where Cloud Computing provides the flexibility and reduction in computational cost that data centers are searching for. This paper describes a set of realistic tests that we ran on one such implementation. The tests comprise software from three different HEP communities (Auger, LHCb and QCD phenomenologists) and the Parsec Benchmark Suite running on one or more of three Linux flavors (SL5, Ubuntu 10.04 and Fedora 13). The implemented infrastructure has, at the cloud level, CloudStack, which manages the virtual machines (VM) and the hosts on which they run, and, at the user level, the DIRAC framework along with a VM extension that submits, monitors and keeps track of the user jobs and also requests CloudStack to start or stop the necessary VMs. In this infrastructure, the community software is distributed via CernVM-FS, which has been proven to be a reliable and scalable software distribution system. With the resulting infrastructure, users are allowed to send their jobs transparently to the Data Center. The main purpose of this system is the creation of a flexible, multiplatform cluster with a scalable method of software distribution for several VOs. Users from different communities need not care about the installation of the standard software that is available at the nodes, nor about the operating system of the host machine, which is transparent to the user.

  11. AstroCloud, a Cyber-Infrastructure for Astronomy Research: Data Access and Interoperability

    NASA Astrophysics Data System (ADS)

    Fan, D.; He, B.; Xiao, J.; Li, S.; Li, C.; Cui, C.; Yu, C.; Hong, Z.; Yin, S.; Wang, C.; Cao, Z.; Fan, Y.; Mi, L.; Wan, W.; Wang, J.

    2015-09-01

    The data access and interoperability module connects the observation proposals, data, virtual machines and software. According to the unique identifier of the PI (principal investigator), an email address or an internal ID, data can be collected by the PI's proposals or through the search interfaces, e.g. cone search. Files associated with the search results can easily be transported to cloud storage, including the storage attached to virtual machines, or to several commercial platforms such as Dropbox. Benefiting from the standards of the IVOA (International Virtual Observatory Alliance), VOTable-formatted search results can be sent to various kinds of VO software. Later efforts will try to integrate more data and connect archives and other astronomical resources.

  12. An efficient approach for improving virtual machine placement in cloud computing environment

    NASA Astrophysics Data System (ADS)

    Ghobaei-Arani, Mostafa; Shamsi, Mahboubeh; Rahmanian, Ali A.

    2017-11-01

    The ever increasing demand for cloud services requires more data centres. Power consumption in data centres is a challenging problem for cloud computing that has not been considered properly by data centre developers. Large data centres in particular struggle with power costs and greenhouse gas production, so employing power-efficient mechanisms is necessary to mitigate these effects. Virtual machine (VM) placement can be used as an effective method to reduce the power consumption of data centres. In this paper, by grouping both virtual and physical machines and taking the maximum absolute deviation into account during VM placement, both the power consumption and the service level agreement (SLA) violations in data centres are reduced. To this end, the best-fit decreasing algorithm is utilised in the simulation, reducing power consumption by about 5% compared to the modified best-fit decreasing algorithm while at the same time improving SLA violations by 6%. Finally, learning automata are used to trade off power consumption reduction against the SLA violation percentage.
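
    For reference, the classic best-fit-decreasing heuristic the paper builds on looks as follows (a generic bin-packing sketch with CPU demand as the only resource; the paper's grouping and deviation criteria are not reproduced here):

        def best_fit_decreasing(vm_demands, host_capacity):
            """Place VMs (name -> CPU demand) onto hosts, opening a new host
            only when no existing one fits; among fitting hosts pick the
            fullest one (the tightest, i.e. best, fit)."""
            hosts = []                       # remaining capacity per host
            placement = {}
            for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
                fitting = [h for h, free in enumerate(hosts) if free >= demand]
                if fitting:
                    h = min(fitting, key=lambda h: hosts[h])   # tightest fit
                else:
                    hosts.append(host_capacity)                # open new host
                    h = len(hosts) - 1
                hosts[h] -= demand
                placement[vm] = h
            return placement, len(hosts)

        print(best_fit_decreasing(
            {"vm1": 0.6, "vm2": 0.5, "vm3": 0.4, "vm4": 0.3}, 1.0))  # 2 hosts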

  13. In-vivo determination of chewing patterns using FBG and artificial neural networks

    NASA Astrophysics Data System (ADS)

    Pegorini, Vinicius; Zen Karam, Leandro; Rocha Pitta, Christiano S.; Ribeiro, Richardson; Simioni Assmann, Tangriani; Cardozo da Silva, Jean Carlos; Bertotti, Fábio L.; Kalinowski, Hypolito J.; Cardoso, Rafael

    2015-09-01

    This paper reports the process of pattern classification of the chewing process of ruminants. We propose a simplified signal processing scheme for optical fiber Bragg grating (FBG) sensors based on machine learning techniques. The FBG sensors measure the biomechanical forces during jaw movements, and an artificial neural network is responsible for the classification of the associated chewing pattern. In this study, three patterns associated with dietary supplement, hay and ryegrass were considered. Additionally, two other important events for ingestive behavior studies were monitored: rumination and idle periods. Experimental results show that the proposed approach for pattern classification has been capable of differentiating the materials involved in the chewing process with a small classification error.

  14. Alternative Fuels Data Center: Strategies to Conserve Fuel

    Science.gov Websites

    conserve fuel. Idle Reduction: find ways to save fuel and money by idling less. Driving: … save money. Parts and Equipment: learn about outfitting your fleet's vehicles with …

  15. Self-replicating machines in continuous space with virtual physics.

    PubMed

    Smith, Arnold; Turney, Peter; Ewaschuk, Robert

    2003-01-01

    JohnnyVon is an implementation of self-replicating machines in continuous two-dimensional space. Two types of particles drift about in a virtual liquid. The particles are automata with discrete internal states but continuous external relationships. Their internal states are governed by finite state machines, but their external relationships are governed by a simulated physics that includes Brownian motion, viscosity, and springlike attractive and repulsive forces. The particles can be assembled into patterns that can encode arbitrary strings of bits. We demonstrate that, if an arbitrary seed pattern is put in a soup of separate individual particles, the pattern will replicate by assembling the individual particles into copies of itself. We also show that, given sufficient time, a soup of separate individual particles will eventually spontaneously form self-replicating patterns. We discuss the implications of JohnnyVon for research in nanotechnology, theoretical biology, and artificial life.

  16. Noise and Vibration Risk Prevention Virtual Web for Ubiquitous Training

    ERIC Educational Resources Information Center

    Redel-Macías, María Dolores; Cubero-Atienza, Antonio J.; Martínez-Valle, José Miguel; Pedrós-Pérez, Gerardo; del Pilar Martínez-Jiménez, María

    2015-01-01

    This paper describes a new Web portal offering experimental labs for ubiquitous training of university engineering students in work-related risk prevention. The Web-accessible computer program simulates the noise and machine vibrations met in the work environment, in a series of virtual laboratories that mimic an actual laboratory and provide the…

  17. Summary of OEM Idling Recommendations from Vehicle Owner's Manuals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keel-Blackmon, Kristy; Curran, Scott; Lapsa, Melissa Voss

    The project upon which this report is based was conceived in 2012 during discussions between the East Tennessee Clean Fuels Coalition (ETCleanFuels) and Oak Ridge National Laboratory (ORNL), who both noted that a detailed summary of idling recommendations for a wide variety of engines and vehicles was not available in the literature. The two organizations agreed that ETCleanFuels would develop a first-of-its-kind collection of idling recommendations from the owner's manuals of modern production vehicles. Vehicle engine idling, a subject that has long been debated, is largely shrouded in misinformation. The justifications for idling seem to be many: driver comfort, waiting in lines, and talking on cell phones, to name a few. Assuredly, a great number of people idle because of the myths and misinformation surrounding this issue. This report addresses these myths by turning to statements taken directly from the automobile and engine manufacturers themselves.

  18. Using shadow page cache to improve isolated drivers performance.

    PubMed

    Zheng, Hao; Dong, Xiaoshe; Wang, Endong; Chen, Baoke; Zhu, Zhengdong; Liu, Chengzhe

    2015-01-01

    With the advantage of the reusability property of virtualization technology, users can reuse various types and versions of existing operating systems and drivers in a virtual machine, so as to customize their application environment. In order to prevent users' virtualization environments from being impacted by driver faults in the virtual machine, Chariot examines the correctness of a driver's write operations by combining capture of the driver's write operations with a driver-private access control table. However, this method needs to keep the write permission of the shadow page table as read-only, so as to capture the isolated driver's write operations through page faults, which adversely affects the driver's performance. Based on delaying the setting of frequently used shadow pages' write permissions to read-only, this paper proposes an algorithm that uses a shadow page cache to improve the performance of isolated drivers, and carefully studies the relationship between driver performance and the size of the shadow page cache. Experimental results show that, through the shadow page cache, the performance of isolated drivers can be greatly improved without impacting Chariot's reliability too much.

  19. Using Shadow Page Cache to Improve Isolated Drivers Performance

    PubMed Central

    Dong, Xiaoshe; Wang, Endong; Chen, Baoke; Zhu, Zhengdong; Liu, Chengzhe

    2015-01-01

    With the advantage of the reusability property of virtualization technology, users can reuse various types and versions of existing operating systems and drivers in a virtual machine, so as to customize their application environment. In order to prevent users' virtualization environments from being impacted by driver faults in the virtual machine, Chariot examines the correctness of a driver's write operations by combining capture of the driver's write operations with a driver-private access control table. However, this method needs to keep the write permission of the shadow page table as read-only, so as to capture the isolated driver's write operations through page faults, which adversely affects the driver's performance. Based on delaying the setting of frequently used shadow pages' write permissions to read-only, this paper proposes an algorithm that uses a shadow page cache to improve the performance of isolated drivers, and carefully studies the relationship between driver performance and the size of the shadow page cache. Experimental results show that, through the shadow page cache, the performance of isolated drivers can be greatly improved without impacting Chariot's reliability too much. PMID:25815373

  20. VirtualSpace: A vision of a machine-learned virtual space environment

    NASA Astrophysics Data System (ADS)

    Bortnik, J.; Sarno-Smith, L. K.; Chu, X.; Li, W.; Ma, Q.; Angelopoulos, V.; Thorne, R. M.

    2017-12-01

    Spaceborne instrumentation tends to come and go. A typical instrument will go through a phase of design and construction, be deployed on a spacecraft for several years while it collects data, and then be decommissioned and fade into obscurity. The data collected from that instrument will typically receive much attention while it is being collected, perhaps in the form of event studies, conjunctions with other instruments, or a few statistical surveys, but once the instrument or spacecraft is decommissioned, the data will be archived and receive progressively less attention with every passing year. This is the fate of all historical data, and will be the fate of data being collected by instruments even at the present time. But what if those instruments could come alive, and all be simultaneously present at any and every point in time and space? Imagine the scientific insights and societal gains that could be achieved with a grand (virtual) heliophysical observatory that consists of every current and historical mission ever deployed. We propose that this is not just fantasy but is eminently doable with the data currently available, with present computational resources, and with currently available algorithms. This project revitalizes existing data resources and lays the groundwork for incorporating data from every future mission to expand the scope and refine the resolution of the virtual observatory. We call this project VirtualSpace: a machine-learned virtual space environment.

  1. Virtual pools for interactive analysis and software development through an integrated Cloud environment

    NASA Astrophysics Data System (ADS)

    Grandi, C.; Italiano, A.; Salomoni, D.; Calabrese Melcarne, A. K.

    2011-12-01

    WNoDeS, an acronym for Worker Nodes on Demand Service, is software developed at CNAF-Tier1, the National Computing Centre of the Italian Institute for Nuclear Physics (INFN) located in Bologna. WNoDeS provides on-demand, integrated access to both Grid and Cloud resources through virtualization technologies. Besides the traditional use of computing resources in batch mode, users need to have interactive and local access to a number of systems. WNoDeS can dynamically provision these computers by instantiating Virtual Machines according to the users' requirements (computing, storage and network resources), through either the Open Cloud Computing Interface API or a web console. Interactive use is usually limited to activities in user space, i.e. where the machine configuration is not modified. In other instances the activity concerns development and testing of services and thus implies modification of the system configuration (and, therefore, root access to the resource). The former use case is a simple extension of the WNoDeS approach, where the resource is provided in interactive mode. The latter implies saving the virtual image at the end of each user session so that it can be presented to the user at subsequent requests. This work describes how the LHC experiments at INFN-Bologna are testing and making use of these dynamically created ad-hoc machines via WNoDeS to support flexible, interactive analysis and software development at the INFN Tier-1 Computing Centre.

  2. Electrical Machines Laminations Magnetic Properties: A Virtual Instrument Laboratory

    ERIC Educational Resources Information Center

    Martinez-Roman, Javier; Perez-Cruz, Juan; Pineda-Sanchez, Manuel; Puche-Panadero, Ruben; Roger-Folch, Jose; Riera-Guasp, Martin; Sapena-Baño, Angel

    2015-01-01

    Undergraduate courses in electrical machines often include an introduction to their magnetic circuits and to the various magnetic materials used in their construction and their properties. The students must learn to be able to recognize and compare the permeability, saturation, and losses of these magnetic materials, relate each material to its…

  3. MLViS: A Web Tool for Machine Learning-Based Virtual Screening in Early-Phase of Drug Discovery and Development

    PubMed Central

    Korkmaz, Selcuk; Zararsiz, Gokmen; Goksuluk, Dincer

    2015-01-01

    Virtual screening is an important step in the early phase of the drug discovery process. Since there are thousands of compounds, this step should be both fast and effective in order to distinguish drug-like and nondrug-like molecules. Statistical machine learning methods are widely used in drug discovery studies for classification purposes. Here, we aim to develop a new tool that can classify molecules as drug-like or nondrug-like based on various machine learning methods, including discriminant, tree-based, kernel-based, ensemble and other algorithms. To construct this tool, the performances of twenty-three different machine learning algorithms were first compared on ten different measures; the ten best-performing algorithms were then selected based on principal component and hierarchical cluster analysis results. Besides classification, this application also has the ability to create a heat map and dendrogram for visual inspection of the molecules through hierarchical cluster analysis. Moreover, users can connect to the PubChem database to download molecular information and to create two-dimensional structures of compounds. This application is freely available through www.biosoft.hacettepe.edu.tr/MLViS/. PMID:25928885

  4. Prediction Based Proactive Thermal Virtual Machine Scheduling in Green Clouds

    PubMed Central

    Kinger, Supriya; Kumar, Rajesh; Sharma, Anju

    2014-01-01

    Cloud computing has rapidly emerged as a widely accepted computing paradigm, but research on Cloud computing is still at an early stage. Cloud computing provides many advanced features, but it still has some shortcomings, such as relatively high operating cost and environmental hazards like an increasing carbon footprint. These hazards can be reduced to some extent by efficient scheduling of Cloud resources. The working temperature of the machine a VM would run on can be taken as a criterion for Virtual Machine (VM) scheduling. This paper proposes a new proactive technique that, with the help of a temperature predictor, considers the current and maximum threshold temperatures of Server Machines (SMs) before making scheduling decisions, so that the maximum temperature is never reached. Different workload scenarios have been taken into consideration. The results obtained show that the proposed system is better than existing VM scheduling systems, which do not consider the current temperature of nodes before making scheduling decisions. Thus, a reduction in the need for cooling systems in a Cloud environment has been obtained and validated. PMID:24737962
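
    The proactive idea can be sketched in a few lines (an illustration, not the paper's system: the linear temperature predictor and all numbers are stand-in assumptions): a VM is scheduled only on a server machine whose predicted temperature stays below its maximum threshold.

        def predict_temp(current_temp, extra_load, alpha=8.0):
            # assumed linear model: degC rise per unit of added load
            return current_temp + alpha * extra_load

        def pick_host(hosts, vm_load):
            """hosts: list of (name, current_temp, max_temp); a name or None."""
            safe = [(name, predict_temp(t, vm_load), tmax)
                    for name, t, tmax in hosts
                    if predict_temp(t, vm_load) < tmax]
            # prefer the host with the largest remaining thermal headroom
            return max(safe, key=lambda s: s[2] - s[1])[0] if safe else None

        hosts = [("sm1", 55.0, 70.0), ("sm2", 48.0, 70.0), ("sm3", 67.0, 70.0)]
        print(pick_host(hosts, vm_load=1.0))   # sm2: coolest after the new load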

  5. 1001 Ways to run AutoDock Vina for virtual screening

    NASA Astrophysics Data System (ADS)

    Jaghoori, Mohammad Mahdi; Bleijlevens, Boris; Olabarriaga, Silvia D.

    2016-03-01

    Large-scale computing technologies have enabled high-throughput virtual screening involving thousands to millions of drug candidates. It is not trivial, however, for biochemical scientists to evaluate the technical alternatives and their implications for running such large experiments. Besides experience with the molecular docking tool itself, the scientist needs to learn how to run it on high-performance computing (HPC) infrastructures, and understand the impact of the choices made. Here, we review such considerations for a specific tool, AutoDock Vina, and use experimental data to illustrate the following points: (1) an additional level of parallelization increases virtual screening throughput on a multi-core machine; (2) capturing of the random seed is not enough (though necessary) for reproducibility on heterogeneous distributed computing systems; (3) the overall time spent on the screening of a ligand library can be improved by analysis of factors affecting execution time per ligand, including number of active torsions, heavy atoms and exhaustiveness. We also illustrate differences among four common HPC infrastructures: grid, Hadoop, small cluster and multi-core (virtual machine on the cloud). Our analysis shows that these platforms are suitable for screening experiments of different sizes. These considerations can guide scientists when choosing the best computing platform and set-up for their future large virtual screening experiments.
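
    One common way to realize the extra level of parallelization described in point (1), while also capturing the seed from point (2), is to run several single-CPU Vina processes side by side, one ligand each (a sketch under assumptions: the file names are placeholders, and the worker count and exhaustiveness are arbitrary choices):

        import subprocess
        from concurrent.futures import ThreadPoolExecutor

        LIGANDS = ["lig001.pdbqt", "lig002.pdbqt", "lig003.pdbqt"]  # placeholders

        def dock(ligand, seed):
            cmd = ["vina", "--receptor", "receptor.pdbqt",
                   "--ligand", ligand,
                   "--out", ligand.replace(".pdbqt", "_out.pdbqt"),
                   "--cpu", "1",                 # one core per process...
                   "--exhaustiveness", "8",
                   "--seed", str(seed)]          # ...with an explicit, logged seed
            return ligand, seed, subprocess.run(cmd, capture_output=True).returncode

        with ThreadPoolExecutor(max_workers=4) as pool:  # ...4 docks in parallel
            jobs = [(lig, 1000 + i) for i, lig in enumerate(LIGANDS)]
            for lig, seed, rc in pool.map(lambda a: dock(*a), jobs):
                print(lig, "seed", seed, "exit", rc)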

  6. 1001 Ways to run AutoDock Vina for virtual screening.

    PubMed

    Jaghoori, Mohammad Mahdi; Bleijlevens, Boris; Olabarriaga, Silvia D

    2016-03-01

    Large-scale computing technologies have enabled high-throughput virtual screening involving thousands to millions of drug candidates. It is not trivial, however, for biochemical scientists to evaluate the technical alternatives and their implications for running such large experiments. Besides experience with the molecular docking tool itself, the scientist needs to learn how to run it on high-performance computing (HPC) infrastructures, and understand the impact of the choices made. Here, we review such considerations for a specific tool, AutoDock Vina, and use experimental data to illustrate the following points: (1) an additional level of parallelization increases virtual screening throughput on a multi-core machine; (2) capturing of the random seed is not enough (though necessary) for reproducibility on heterogeneous distributed computing systems; (3) the overall time spent on the screening of a ligand library can be improved by analysis of factors affecting execution time per ligand, including number of active torsions, heavy atoms and exhaustiveness. We also illustrate differences among four common HPC infrastructures: grid, Hadoop, small cluster and multi-core (virtual machine on the cloud). Our analysis shows that these platforms are suitable for screening experiments of different sizes. These considerations can guide scientists when choosing the best computing platform and set-up for their future large virtual screening experiments.

  7. Virtual reality hardware and graphic display options for brain-machine interfaces

    PubMed Central

    Marathe, Amar R.; Carey, Holle L.; Taylor, Dawn M.

    2009-01-01

    Virtual reality hardware and graphic displays are reviewed here as a development environment for brain-machine interfaces (BMIs). Two desktop stereoscopic monitors and one 2D monitor were compared in a visual depth discrimination task and in a 3D target-matching task where able-bodied individuals used actual hand movements to match a virtual hand to different target hands. Three graphic representations of the hand were compared: a plain sphere, a sphere attached to the fingertip of a realistic hand and arm, and a stylized pacman-like hand. Several subjects had great difficulty using either stereo monitor for depth perception when perspective size cues were removed. A mismatch in stereo and size cues generated inappropriate depth illusions. This phenomenon has implications for choosing target and virtual hand sizes in BMI experiments. Target matching accuracy was about as good with the 2D monitor as with either 3D monitor. However, users achieved this accuracy by exploring the boundaries of the hand in the target with carefully controlled movements. This method of determining relative depth may not be possible in BMI experiments if movement control is more limited. Intuitive depth cues, such as including a virtual arm, can significantly improve depth perception accuracy with or without stereo viewing. PMID:18006069

  8. Haverhill, Mass. School Bus Company Reduces Idling Under Settlement

    EPA Pesticide Factsheets

    Coppola Bus, Inc., a Haverhill, Mass. company has reduced vehicle idling and therefore reduced diesel emissions, and paid an $18,000 penalty as part of a settlement with the U.S. Environmental Protection Agency for claims of excessive school bus idling.

  9. Idle speed and fuel vapor recovery control system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orzel, D.V.

    1993-06-01

    A method for controlling idling speed of an engine via bypass throttle connected in parallel to a primary engine throttle and for controlling purge flow through a vapor recovery system into an air/fuel intake of the engine is described, comprising the steps of: positioning the bypass throttle to decrease any difference between a desired engine idle speed and actual engine idle speed; and decreasing the purge flow when said bypass throttle position is less than a preselected fraction of a maximum bypass throttle position.

  10. myChEMBL: a virtual machine implementation of open data and cheminformatics tools.

    PubMed

    Ochoa, Rodrigo; Davies, Mark; Papadatos, George; Atkinson, Francis; Overington, John P

    2014-01-15

    myChEMBL is a completely open platform, which combines public domain bioactivity data with open source database and cheminformatics technologies. myChEMBL consists of a Linux (Ubuntu) Virtual Machine featuring a PostgreSQL schema with the latest version of the ChEMBL database, as well as the latest RDKit cheminformatics libraries. In addition, a self-contained web interface is available, which can be modified and improved according to user specifications. The VM is available at: ftp://ftp.ebi.ac.uk/pub/databases/chembl/VM/myChEMBL/current. The web interface and web services code is available at: https://github.com/rochoa85/myChEMBL.
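
    Inside the VM, the bundled database can be queried directly with psycopg2 and the results handled with RDKit; a minimal sketch (the database name, user and password below are assumptions; check the VM's documentation for the actual credentials):

        import psycopg2
        from rdkit import Chem

        conn = psycopg2.connect(dbname="chembl", user="chembl",
                                password="chembl", host="localhost")
        cur = conn.cursor()
        cur.execute("SELECT canonical_smiles FROM compound_structures LIMIT 5")
        for (smiles,) in cur.fetchall():
            mol = Chem.MolFromSmiles(smiles)   # parse each structure with RDKit
            if mol is not None:
                print(smiles, mol.GetNumHeavyAtoms())
        conn.close()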

  11. General-Purpose Front End for Real-Time Data Processing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    FRONTIER is a computer program that functions as a front end for any of a variety of other software of both the artificial intelligence (AI) and conventional data-processing types. As used here, front end signifies interface software needed for acquiring and preprocessing data and making the data available for analysis by the other software. FRONTIER is reusable in that it can be rapidly tailored to any such other software with minimum effort. Each component of FRONTIER is programmable and is executed in an embedded virtual machine, and each component can be reconfigured during execution. The virtual-machine implementation makes FRONTIER independent of the type of computing hardware on which it is executed.

  12. Examining Effects of Virtual Machine Settings on Voice over Internet Protocol in a Private Cloud Environment

    ERIC Educational Resources Information Center

    Liao, Yuan

    2011-01-01

    The virtualization of computing resources, as represented by the sustained growth of cloud computing, continues to thrive. Information Technology departments are building their private clouds due to the perception of significant cost savings by managing all physical computing resources from a single point and assigning them to applications or…

  13. New Virtual Field Trips. Revised Edition.

    ERIC Educational Resources Information Center

    Cooper, Gail; Cooper, Garry

    This book is an annotated guidebook, arranged by subject matter, of World Wide Web sites for K-12 students. The following chapters are included: (1) Virtual Time Machine (i.e., sites that cover topics in world history); (2) Tour the World (i.e., sites that include information about countries); (3) Outer Space; (4) The Great Outdoors; (5) Aquatic…

  14. Virtual Factory Framework for Supporting Production Planning and Control.

    PubMed

    Kibira, Deogratias; Shao, Guodong

    2017-01-01

    Developing optimal production plans for smart manufacturing systems is challenging because shop floor events change dynamically. A virtual factory incorporating engineering tools, simulation, and optimization generates and communicates performance data to guide wise decision making for different control levels. This paper describes such a platform specifically for production planning. We also discuss verification and validation of the constituent models. A case study of a machine shop is used to demonstrate data generation for production planning in a virtual factory.

  15. Piezoelectric shunt damping of a circular saw blade with autonomous power supply for noise and vibration reduction

    NASA Astrophysics Data System (ADS)

    Pohl, Martin; Rose, Michael

    2016-01-01

    Circular saws are widespread tools for machining metal, wood or even ceramics. Due to the thin blade and the excitation of the cutting edges by workpiece contact, circular saws are prone to vibration and intense noise emission. Damping the blade will lower the hearing protection requirements for users and possibly increase precision. Therefore, a new damping concept for circular saw blades is presented in this paper. It is based on negative-capacitance-shunted piezoelectric transducers applied to the saw blade core. The energy required by the electronics is harvested from the rotation by a generator, so that no change to the machine tool is required. All components are integrated into an autonomous saw tool. Finally, the system is experimentally investigated without rotation, while idling and under cutting conditions in a circular saw test stand at the Institute for Machine Tools and Production Engineering (IWF) at TU Braunschweig. The experimental investigation shows a good reduction of the vibration amplitude over a wide frequency range in the non-rotating condition. When rotating, the damping effect is lower and limited to some narrow frequency bands. The proposed reason for the reduced damping effect in the rotating condition is saturation of the electronic circuits due to the limited supply voltage.

  16. Real life cost and quality of life associated with continuous intraduodenal levodopa infusion compared with oral treatment in Parkinson patients.

    PubMed

    Lundqvist, Christofer; Beiske, Antonie Giæver; Reiertsen, Ola; Kristiansen, Ivar Sønbø

    2014-12-01

    Advanced-stage Parkinson's disease (PD) strongly affects quality of life (QoL). Continuous intraduodenal administration of levodopa (IDL) is efficacious, but entails high costs. This study aims to estimate these costs in routine care. Ten patients with advanced PD who switched from oral medication to IDL were assessed at baseline and subsequently at 3, 6, 9 and 12 months of follow-up. We used the Unified PD Rating Scale (UPDRS) for function and the 15D instrument for quality of life (QoL). Costs were assessed using quarterly structured patient questionnaires and hospital registries. Costs per quality-adjusted life year (QALY) were estimated for conventional treatment prior to the switch and for 1-year treatment with IDL. Probabilistic sensitivity analysis was based on bootstrapping. IDL significantly improved functional scores and was safe to use. One-year conventional oral treatment entailed 0.63 QALY while IDL entailed 0.68 (p > 0.05). The estimated total 1-year treatment cost was NOK419,160 on conventional treatment and NOK890,920 on IDL, representing a cost of NOK9.2 million (€1.18 million) per additional QALY. The incremental cost per unit of UPDRS improvement was NOK25,000 (€3,250). Medication was the dominant cost during IDL (45% of total costs), whereas it represented only 6.4% of the total for conventional treatment. IDL improves function but is not cost-effective using the recommended cost/QALY thresholds in Norway.
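
    The cost-per-QALY figure quoted above is an incremental cost-effectiveness ratio (ICER); recomputing it from the rounded numbers in the abstract (a consistency check, not data from the study):

        \mathrm{ICER} = \frac{\Delta C}{\Delta E}
                      = \frac{890{,}920 - 419{,}160}{0.68 - 0.63}
                      \approx \text{NOK } 9.4\ \text{million per QALY}

    The small difference from the reported NOK 9.2 million presumably reflects unrounded QALY values in the original analysis.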

  17. Exploring Infiniband Hardware Virtualization in OpenNebula towards Efficient High-Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pais Pitta de Lacerda Ruivo, Tiago; Bernabeu Altayo, Gerard; Garzoglio, Gabriele

    2014-11-11

    It has been widely accepted that software virtualization has a big negative impact on high-performance computing (HPC) application performance. This work explores the potential use of Infiniband hardware virtualization in an OpenNebula cloud towards the efficient support of MPI-based workloads. We have implemented, deployed, and tested an Infiniband network on the FermiCloud private Infrastructure-as-a-Service (IaaS) cloud. To avoid software virtualization and minimize the virtualization overhead, we employed a technique called Single Root Input/Output Virtualization (SRIOV). Our solution spanned modifications to the Linux hypervisor as well as the OpenNebula manager. We evaluated the performance of the hardware virtualization on up to 56 virtual machines connected by up to 8 DDR Infiniband network links, with micro-benchmarks (latency and bandwidth) as well as with an MPI-intensive application (the HPL Linpack benchmark).

  18. 40 CFR 85.2213 - Idle test-EPA 91.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 18 2011-07-01 2011-07-01 false Idle test-EPA 91. 85.2213 Section 85.2213 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED....2213 Idle test—EPA 91. (a) General requirements—(1) Exhaust gas sampling algorithm. The analysis of...

  19. 40 CFR 85.2213 - Idle test-EPA 91.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 19 2013-07-01 2013-07-01 false Idle test-EPA 91. 85.2213 Section 85.2213 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED....2213 Idle test—EPA 91. (a) General requirements—(1) Exhaust gas sampling algorithm. The analysis of...

  20. Seamless online science workflow development and collaboration using IDL and the ENVI Services Engine

    NASA Astrophysics Data System (ADS)

    Harris, A. T.; Ramachandran, R.; Maskey, M.

    2013-12-01

    The Exelis-developed IDL and ENVI software are ubiquitous tools in Earth science research environments. The IDL Workbench is used by the Earth science community for programming custom data analysis and visualization modules. ENVI is a software solution for processing and analyzing geospatial imagery that combines support for multiple Earth observation scientific data types (optical, thermal, multi-spectral, hyperspectral, SAR, LiDAR) with advanced image processing and analysis algorithms. The ENVI & IDL Services Engine (ESE) is an Earth science data processing engine that allows researchers to use open standards to rapidly create, publish and deploy advanced Earth science data analytics within any existing enterprise infrastructure. Although powerful in many ways, the tools lack collaborative features out of the box. Thus, as part of the NASA-funded project, Collaborative Workbench to Accelerate Science Algorithm Development, researchers at the University of Alabama in Huntsville and Exelis have developed plugins that allow seamless research collaboration from within the IDL Workbench. Such additional features within the IDL Workbench are possible because the IDL Workbench is built using the Eclipse Rich Client Platform (RCP). RCP applications allow custom plugins to be dropped in for extended functionality. Specific functionalities of the plugins include creating complex workflows based on IDL application source code, submitting workflows to be executed by ESE in the cloud, and sharing and cloning of workflows among collaborators. All these functionalities are available to scientists without leaving their IDL Workbench. Because ESE can interoperate with any middleware, scientific programmers can readily string together IDL processing tasks (or tasks written in other languages like C++, Java or Python) to create complex workflows for deployment within their current enterprise architecture (e.g. ArcGIS Server, GeoServer, Apache ODE or SciFlo from JPL). Using the collaborative IDL Workbench, coupled with ESE for execution in the cloud, asynchronous workflows could be executed in batch mode on large data in the cloud. We envision that a scientist will initially develop a scientific workflow locally on a small set of data. Once tested, the scientist will deploy the workflow to the cloud for execution. Depending on the results, the scientist may share the workflow and results, allowing them to be stored in a community catalog and instantly loaded into the IDL Workbench of other scientists. Thereupon, scientists can clone and modify or execute the workflow with different input parameters. The Collaborative Workbench will provide a platform for collaboration in the cloud, helping Earth scientists solve big-data problems in the Earth and planetary sciences.

  1. Virtual Network Configuration Management System for Data Center Operations and Management

    NASA Astrophysics Data System (ADS)

    Okita, Hideki; Yoshizawa, Masahiro; Uehara, Keitaro; Mizuno, Kazuhiko; Tarui, Toshiaki; Naono, Ken

    Virtualization technologies are widely deployed in data centers to improve system utilization. However, they increase the workload for operators, who have to manage the structure of virtual networks in data centers. A virtual-network management system that automates the integration of virtual-network configurations is provided. The proposed system collects the configurations from server virtualization platforms and VLAN-supported switches, and integrates these configurations according to a newly developed XML-based management information model for virtual-network configurations. Preliminary evaluations show that the proposed system helps operators by reducing, by about 40 percent, the time needed to acquire the configurations from devices and to correct inconsistencies in the operators' configuration management database. Further, the evaluations also show that the proposed system has excellent scalability: the system takes less than 20 minutes to acquire the virtual-network configurations from a large-scale network that includes 300 virtual machines. These results imply that the proposed system is effective for improving the configuration management process for virtual networks in data centers.

  2. Efficient operating system level virtualization techniques for cloud resources

    NASA Astrophysics Data System (ADS)

    Ansu, R.; Samiksha; Anju, S.; Singh, K. John

    2017-11-01

    Cloud computing is an advancing technology which provides the services of Infrastructure, Platform and Software. Virtualization and utility computing are the keys to Cloud computing. The number of cloud users is increasing day by day, so making resources available on demand to satisfy user requirements is the need of the hour. The technique by which resources, namely storage, processing power, memory and network or I/O, are abstracted is known as Virtualization. Various virtualization techniques are available for executing operating systems: Full System Virtualization and Para Virtualization. In Full Virtualization, the whole hardware architecture is duplicated virtually; no modifications are required in the Guest OS, as the OS deals with the VM hypervisor directly. In Para Virtualization, the OS must be modified to run in parallel with other OSes, and for the Guest OS to access the hardware, the host OS must provide a Virtual Machine Interface. OS virtualization has many advantages, such as migrating applications transparently, server consolidation, online OS maintenance and providing security. This paper briefly describes both virtualization techniques and discusses the issues in OS-level virtualization.

  3. Virtual Employment Test Bed Operational Research and Systems Analysis to Test Armaments Designs Early in the Life Cycle

    DTIC Science & Technology

    2014-06-01

    • Motion capture data used to determine position and orientation of a Soldier's head, turret and the M2 machine gun
    • Controlling and acquiring user/weapon...data from the M2 simulation machine gun
    • Controlling paintball guns used to fire at the GPK during an experimental run
    • Sending and receiving TCP...
    – Mounted, Armor/Cavalry, Combat Engineers, Field Artillery Cannon Crewmember, or MP duty assignment
    – Currently M2 .50 Caliber Machine Gun qualified

  4. CHARACTERIZATION OF THE FINE PARTICLE AND GASEOUS EMISSIONS DURING SCHOOL BUS IDLING

    EPA Science Inventory

    The particulate matter (PM) and gaseous emissions from six diesel school buses were determined over a simulated idling period typical of schools in the northeastern U.S. Testing was conducted for both continuous idle and hot restart conditions using particle and gas analyzers. Th...

  5. 40 CFR 85.2218 - Preconditioned idle test-EPA 91.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 18 2011-07-01 2011-07-01 false Preconditioned idle test-EPA 91. 85.2218 Section 85.2218 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS... Tests § 85.2218 Preconditioned idle test—EPA 91. (a) General requirements—(1) Exhaust gas sampling...

  6. 40 CFR 85.2212 - Idle test-EPA 81.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 19 2013-07-01 2013-07-01 false Idle test-EPA 81. 85.2212 Section 85.2212 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED....2212 Idle test—EPA 81. (a)(1) General calendar year applicability. The test procedure described in this...

  7. 40 CFR 85.2212 - Idle test-EPA 81.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 19 2012-07-01 2012-07-01 false Idle test-EPA 81. 85.2212 Section 85.2212 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED....2212 Idle test—EPA 81. (a)(1) General calendar year applicability. The test procedure described in this...

  8. 40 CFR 85.2212 - Idle test-EPA 81.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 18 2011-07-01 2011-07-01 false Idle test-EPA 81. 85.2212 Section 85.2212 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED....2212 Idle test—EPA 81. (a)(1) General calendar year applicability. The test procedure described in this...

  9. 46 CFR 252.20 - Subsidized and nonsubsidized voyages.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., idleness, delay or lay-up—(i) Report by operator. The operator shall report promptly to the Region Director any reduced crew period and any period of idleness, lay-up or delay occurring during or between... the event the nonsubsidized voyage follows a subsidized period of reduced crew, idleness or lay-up...

  10. EVALUATION OF FUEL CELL AUXILIARY POWER UNITS FOR HEAVY-DUTY DIESEL TRUCKS

    EPA Science Inventory

    A large number of heavy-duty trucks idle a significant amount. Heavy-duty line-haul truck engines idle about 30-50% of the time the engine is running. Drivers idle engines to power climate control devices (e.g., heaters and air conditioners) and sleeper compartment accessories (e...

  11. EFFECTS OF ENGINE SPEED AND ACCESSORY LOAD ON IDLING EMISSIONS FROM HEAVY-DUTY DIESEL TRUCK ENGINES

    EPA Science Inventory

    A nontrivial portion of heavy-duty vehicle emissions of nitrogen oxides (NOx) and particulate matter (PM) occurs during idling. Regulators and the environmental community are interested in curtailing truck idling emissions, but current emissions models do not characterize them ac...

  12. 40 CFR 86.1537 - Idle test run.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES (CONTINUED) Emission Regulations for Otto-Cycle...-Cycle Heavy-Duty Engines, New Otto-Cycle Light-Duty Trucks, and New Methanol-Fueled Natural Gas-Fueled, and Liquefied Petroleum Gas-Fueled Diesel-Cycle Light-Duty Trucks; Idle Test Procedures § 86.1537 Idle...

  13. 40 CFR 86.1537 - Idle test run.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES (CONTINUED) Emission Regulations for Otto-Cycle...-Cycle Heavy-Duty Engines, New Otto-Cycle Light-Duty Trucks, and New Methanol-Fueled Natural Gas-Fueled, and Liquefied Petroleum Gas-Fueled Diesel-Cycle Light-Duty Trucks; Idle Test Procedures § 86.1537 Idle...

  14. Effects of Habitat Management Treatments on Plant Community Composition and Biomass in a Montane Wetland

    EPA Science Inventory

    We evaluated the vegetative response of wetlands and adjacent upland grasslands to four treatment regimes (continuous idle, fall prescribed burning followed by idle, annual fall cattle grazing, and rotation of summer grazing and idle) commonly used by the USGS. . . Our results il...

  15. 40 CFR 86.165-12 - Air conditioning idle test procedure.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... at idle when CO2 emissions are measured without any air conditioning systems operating, followed by a ten-minute period at idle when CO2 emissions are measured with the air conditioning system operating... section, turn on the vehicle's air conditioning system. Set automatic air conditioning systems to a...

  16. 40 CFR 85.2218 - Preconditioned idle test-EPA 91.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 19 2012-07-01 2012-07-01 false Preconditioned idle test-EPA 91. 85... Tests § 85.2218 Preconditioned idle test—EPA 91. (a) General requirements—(1) Exhaust gas sampling algorithm. The analysis of exhaust gas concentrations begins ten seconds after the applicable test mode...

  17. 40 CFR 86.1537 - Idle test run.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES (CONTINUED) Emission Regulations for Otto-Cycle...-Cycle Heavy-Duty Engines, New Otto-Cycle Light-Duty Trucks, and New Methanol-Fueled Natural Gas-Fueled, and Liquefied Petroleum Gas-Fueled Diesel-Cycle Light-Duty Trucks; Idle Test Procedures § 86.1537 Idle...

  18. Point Cloud Based Change Detection - an Automated Approach for Cloud-based Services

    NASA Astrophysics Data System (ADS)

    Collins, Patrick; Bahr, Thomas

    2016-04-01

    The fusion of stereo photogrammetric point clouds with LiDAR data or terrain information derived from SAR interferometry has significant potential for 3D topographic change detection. In the present case study, the latest point cloud generation and analysis capabilities are used to examine a landslide that occurred in the village of Malin in Maharashtra, India, on 30 July 2014, and affected an area of ca. 44,000 m². It focuses on Pléiades high-resolution satellite imagery and the Airbus DS WorldDEM™, a product of the TanDEM-X mission. The case study was performed using the COTS software package ENVI 5.3. Integration of custom processes and automation is supported by IDL (Interactive Data Language); thus, ENVI analytics runs via the object-oriented and IDL-based ENVITask API. The pre-event topography is represented by the WorldDEM™ product, delivered on a 12 m x 12 m raster and based on the EGM2008 geoid (called the pre-DEM). For the post-event situation, a Pléiades 1B stereo image pair of the affected AOI was obtained. The ENVITask "GeneratePointCloudsByDenseImageMatching" was used to extract passive point clouds in LAS format from the panchromatic stereo datasets:
    • A dense image-matching algorithm is used to identify corresponding points in the two images.
    • A block adjustment is applied to refine the 3D coordinates that describe the scene geometry.
    • Additionally, the WorldDEM™ was input to constrain the range of heights in the matching area, and subsequently the length of the epipolar line.
    The "PointCloudFeatureExtraction" task was executed to generate the post-event digital surface model from the photogrammetric point clouds (called the post-DEM). Post-processing consisted of the following steps:
    • Adding the geoid component (EGM2008) to the post-DEM.
    • Reprojecting the pre-DEM to the UTM Zone 43N (WGS-84) coordinate system and resizing it.
    • Subtracting the pre-DEM from the post-DEM.
    • Filtering and threshold-based classification of the DEM difference to analyze the surface changes in 3D.
    The automated point cloud generation and analysis introduced here can be embedded in virtually any existing geospatial workflow for operational applications. Three integration options were implemented in this case study:
    • Integration within any ArcGIS environment, whether deployed on the desktop, in the cloud, or online. Execution uses a customized ArcGIS script tool. A Python script file retrieves the parameters from the user interface and runs the precompiled IDL code; that IDL code interfaces between the Python script and the relevant ENVITasks.
    • Publishing the point cloud processing tasks as services via the ENVI Services Engine (ESE). ESE is a cloud-based image analysis solution for publishing and deploying advanced ENVI image and data analytics to existing enterprise infrastructures. For this purpose the entire IDL code can be encapsulated in a single ENVITask.
    • Integration into an existing geospatial workflow using the Python-to-IDL Bridge, a mechanism that allows IDL code to be called within Python on a user-defined platform.
    The results of this case study allow a 3D estimation of the topographic changes within the tectonically active and anthropogenically invaded Malin area after the landslide event. Accordingly, the point cloud analysis was correlated successfully with modelled displacement contours of the slope. Based on optical satellite imagery, such point clouds of high precision and density distribution can be obtained within a few minutes to support the operational monitoring of landslide processes.
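
    The subtraction and threshold-classification steps of this workflow are easy to prototype outside ENVI; the NumPy sketch below mirrors them under the assumption of co-registered, same-size grids, with the 2 m threshold purely illustrative (the actual processing ran through the ENVITask API).

    ```python
    # Hedged sketch of the post-processing steps quoted above: add the geoid
    # to the post-event DSM, subtract the pre-event DEM, and classify the
    # difference by a height threshold. Inputs and threshold are illustrative.
    import numpy as np

    def dem_change(pre_dem, post_dem, geoid=None, threshold=2.0):
        """Return a -1/0/+1 map of surface loss / no change / gain."""
        post = post_dem + (geoid if geoid is not None else 0.0)  # e.g., EGM2008
        diff = post - pre_dem           # assumes both grids are co-registered
        change = np.zeros_like(diff, dtype=np.int8)
        change[diff > threshold] = 1    # accumulation (e.g., landslide deposit)
        change[diff < -threshold] = -1  # depletion (e.g., scarp area)
        return change
    ```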

  19. An Introduction to 3-D Sound

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Null, Cynthia H. (Technical Monitor)

    1997-01-01

    This talk will overview the basic technologies for creating virtual acoustic images and the potential for including spatial auditory displays in human-machine interfaces. Research into the perceptual error inherent in both natural and virtual spatial hearing is reviewed, since the development of improved technologies is tied to psychoacoustic research. This includes a discussion of Head-Related Transfer Function (HRTF) measurement techniques (the HRTF provides important perceptual cues within a virtual acoustic display). Many commercial applications of virtual acoustics have so far focused on games and entertainment; this review examines other types of applications, including aeronautic safety, voice communications, virtual reality, and room acoustic simulation. In particular, it examines the notion that realistic simulation is optimized within a virtual acoustic display when head motion and reverberation cues are included within a perceptual model.
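
    In its simplest form, the technique the talk describes renders a virtual source by convolving a mono signal with a measured left/right pair of head-related impulse responses (HRIRs). The sketch below is illustrative only; the HRIRs are random stand-ins, not measured data.

    ```python
    # Hedged sketch of basic binaural (3-D sound) rendering: convolve a mono
    # signal with left/right HRIRs to produce a two-channel output.
    import numpy as np

    def binaural_render(mono, hrir_left, hrir_right):
        """Convolve a mono signal with left/right HRIRs -> stereo output."""
        left = np.convolve(mono, hrir_left)
        right = np.convolve(mono, hrir_right)
        return np.stack([left, right], axis=-1)

    signal = np.random.randn(48000)           # 1 s of noise at 48 kHz
    hl, hr = np.random.randn(2, 256) * 0.01   # stand-ins for measured HRIRs
    stereo = binaural_render(signal, hl, hr)  # shape: (48255, 2)
    ```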

  20. Open access for ALICE analysis based on virtualization technology

    NASA Astrophysics Data System (ADS)

    Buncic, P.; Gheata, M.; Schutz, Y.

    2015-12-01

    Open access is one of the important levers for long-term data preservation for a HEP experiment. To guarantee the usability of data analysis tools beyond the experiment's lifetime, it is crucial that third-party users from the scientific community have access to the data and associated software. The ALICE Collaboration has developed a layer of lightweight components built on top of virtualization technology to hide the complexity and details of the experiment-specific software. Users can perform basic analysis tasks within CernVM, a lightweight generic virtual machine, paired with an ALICE-specific contextualization. Once the virtual machine is launched, a graphical user interface starts automatically without any additional configuration. This interface allows downloading the base ALICE analysis software and running a set of ALICE analysis modules. Currently the available tools include fully documented tutorials for ALICE analysis, such as the measurement of strange particle production or the nuclear modification factor in Pb-Pb collisions. The interface can easily be extended to include an arbitrary number of additional analysis modules. We present the current status of the tools used by ALICE through the CERN open access portal, and the plans for future extensions of this system.

  1. Energy-efficient Data-intensive Computing with a Fast Array of Wimpy Nodes

    DTIC Science & Technology

    2011-10-01

    sleep states provided by the Intel Atom chipset (between 2-4 W) to turn off machines and migrate workloads during idle periods and low utilization... [figure residue omitted; the plots showed IOPS (in thousands) versus thread count for single- and multi-get workloads, and latency (in microseconds) versus lookup rate for R1G8 through R64G8 configurations]

  2. Enhanced emotional responses during social coordination with a virtual partner

    PubMed Central

    Dumas, Guillaume; Kelso, J.A. Scott; Tognoli, Emmanuelle

    2016-01-01

    Emotion and motion, though seldom studied in tandem, are complementary aspects of social experience. This study investigates variations in emotional responses during movement coordination between a human and a Virtual Partner (VP), an agent whose virtual finger movements are driven by the Haken-Kelso-Bunz (HKB) equations of Coordination Dynamics. Twenty-one subjects were instructed to coordinate finger movements with the VP in either in-phase or antiphase patterns. By adjusting model parameters, we manipulated the 'intention' of the VP as cooperative or competitive with the human's instructed goal. Skin potential responses (SPR) were recorded to quantify the intensity of emotional response. At the end of each trial, subjects rated the VP's intention and whether they thought their partner was another human being or a machine. We found greater emotional responses when subjects reported that their partner was human and when coordination was stable. The finding that emotional responses are strongly influenced by dynamic features of the VP's behavior has implications for mental health, brain disorders, and the design of socially cooperative machines. PMID:27094374
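
    The VP's movements are generated from the HKB equations; as a rough illustration, the sketch below integrates the relative-phase form of the extended HKB model, phi_dot = delta_omega - a*sin(phi) - 2b*sin(2*phi). Parameter values are illustrative, not those used in the study.

    ```python
    # Hedged sketch: Euler integration of the relative-phase HKB equation.
    import numpy as np

    def hkb_relative_phase(phi0, delta_omega=0.0, a=1.0, b=1.0,
                           dt=0.01, steps=5000):
        phi = np.empty(steps)
        phi[0] = phi0
        for t in range(1, steps):
            dphi = delta_omega - a * np.sin(phi[t-1]) - 2 * b * np.sin(2 * phi[t-1])
            phi[t] = phi[t-1] + dt * dphi
        return phi

    # With b/a > 0.25 both in-phase (phi = 0) and antiphase (phi = pi) are
    # stable coordination patterns; as b/a drops below 0.25 only in-phase
    # survives, mirroring the stable/unstable patterns studied here.
    trajectory = hkb_relative_phase(phi0=3.0)
    ```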

  3. Introduction of Virtualization Technology to Multi-Process Model Checking

    NASA Technical Reports Server (NTRS)

    Leungwattanakit, Watcharin; Artho, Cyrille; Hagiya, Masami; Tanabe, Yoshinori; Yamamoto, Mitsuharu

    2009-01-01

    Model checkers find failures in software by exploring every possible execution schedule. Java PathFinder (JPF), a Java model checker, has been extended recently to cover networked applications by caching data transferred in a communication channel. A target process is executed by JPF, whereas its peer process runs on a regular virtual machine outside. However, non-deterministic target programs may produce different output data in each schedule, causing the cache to restart the peer process to handle the different set of data. Virtualization tools could help us restore previous states of peers, eliminating peer restart. This paper proposes the application of virtualization technology to networked model checking, concentrating on JPF.

  4. Achieving High Resolution Timer Events in Virtualized Environment.

    PubMed

    Adamczyk, Blazej; Chydzinski, Andrzej

    2015-01-01

    Virtual Machine Monitors (VMMs) have become popular in different application areas. Some applications may require generating timer events with high resolution and precision. This, however, may be challenging due to the complexity of VMMs. In this paper we focus on the timer functionality provided by five different VMMs: Xen, KVM, Qemu, VirtualBox and VMWare. First, we evaluate the resolutions and precisions of their timer events. The resolutions and precisions provided turn out to be far too low for some applications (e.g. networking applications with quality of service). Then, using Xen virtualization, we demonstrate an improved timer design that greatly enhances both the resolution and precision of the achieved timer events.
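
    A sketch of the measurement idea, not the authors' actual benchmark: schedule a nominally fixed-interval timer event inside the guest and record how far the achieved interval deviates from the requested one.

    ```python
    # Hedged sketch: estimate timer-event resolution/precision in a guest by
    # measuring the overshoot and jitter of repeated fixed-interval sleeps.
    import time
    import statistics

    def timer_jitter(interval_s=0.001, samples=1000):
        errors = []
        for _ in range(samples):
            start = time.perf_counter()
            time.sleep(interval_s)                   # the "timer event"
            achieved = time.perf_counter() - start
            errors.append(achieved - interval_s)
        return statistics.mean(errors), statistics.stdev(errors)

    mean_err, jitter = timer_jitter()
    print(f"mean overshoot {mean_err*1e6:.1f} us, jitter {jitter*1e6:.1f} us")
    ```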

  5. 40 CFR 86.1506 - Equipment required and specifications; overview.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... appear in §§ 86.1509 through 86.1511. (2) Fuel and analytical tests. Fuel requirements for idle exhaust... Natural Gas-Fueled, and Liquefied Petroleum Gas-Fueled Diesel-Cycle Light-Duty Trucks; Idle Test... for performing idle exhaust emission tests on Otto-cycle heavy-duty engines and Otto-cycle light-duty...

  6. Alternative Fuels Data Center: Idle Reduction Research and Development

    Science.gov Websites

    Researchers at Argonne National Laboratory analyzed the full fuel-cycle effects of current idle reduction technologies. They compared electrified parking spaces, APUs, and several combinations of these, and compared effects for the United...

  7. 40 CFR 85.2213 - Idle test-EPA 91.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 19 2012-07-01 2012-07-01 false Idle test-EPA 91. 85.2213 Section 85...) CONTROL OF AIR POLLUTION FROM MOBILE SOURCES Emission Control System Performance Warranty Short Tests § 85.2213 Idle test—EPA 91. (a) General requirements—(1) Exhaust gas sampling algorithm. The analysis of...

  8. Alternative Fuels Data Center: Heavy-Duty Truck Idle Reduction Technologies

    Science.gov Websites

    Both DOE and the U.S. Environmental Protection Agency (EPA) provide information on heavy-duty truck idle reduction technologies.

  9. A video, text, and speech-driven realistic 3-d virtual head for human-machine interface.

    PubMed

    Yu, Jun; Wang, Zeng-Fu

    2015-05-01

    A multiple-input-driven realistic facial animation system based on a 3-D virtual head for human-machine interface is proposed. The system can be driven independently by video, text, and speech, and thus can interact with humans through diverse interfaces. A combination of a parameterized model and a muscular model is used to obtain a tradeoff between computational efficiency and high realism of the 3-D facial animation. An online appearance model is used to track 3-D facial motion from video in the framework of particle filtering, and multiple measurements, i.e., pixel color values of the input image and Gabor wavelet coefficients of the illumination ratio image, are fused to reduce the influence of lighting and person dependence on the construction of the online appearance model. A tri-phone model is used to reduce the computational cost of visual co-articulation in speech-synchronized viseme synthesis without sacrificing performance. Objective and subjective experiments show that the system is suitable for human-machine interaction.
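
    As a rough, generic illustration of the tracking loop described above (not the authors' implementation), the sketch below performs one predict-weight-resample step of a particle filter over pose hypotheses; the appearance likelihood is a stub.

    ```python
    # Hedged, generic particle-filter step for pose tracking: propagate
    # pose particles, weight them by an appearance likelihood, resample.
    import numpy as np

    rng = np.random.default_rng(0)

    def particle_filter_step(particles, weights, observe, motion_noise=0.01):
        # 1) propagate each pose hypothesis with a simple random-walk model
        particles = particles + rng.normal(0, motion_noise, particles.shape)
        # 2) re-weight by how well the appearance model matches the frame
        weights = weights * np.array([observe(p) for p in particles])
        weights /= weights.sum()
        # 3) resample to concentrate particles on likely poses
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        return particles[idx], np.full(len(particles), 1.0 / len(particles))

    particles = rng.normal(0, 1, (500, 6))     # 6-DOF head-pose hypotheses
    weights = np.full(500, 1.0 / 500)
    likelihood = lambda p: np.exp(-np.sum(p**2))  # stub appearance score
    particles, weights = particle_filter_step(particles, weights, likelihood)
    ```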

  10. Strength computation of forged parts taking into account strain hardening and damage

    NASA Astrophysics Data System (ADS)

    Cristescu, Michel L.

    2004-06-01

    Modern non-linear simulation software, such as FORGE 3 (registered trademark of TRANSVALOR), is able to compute the residual stresses, strain hardening, and damage during the forging process. A thermally dependent elasto-visco-plastic law is used to simulate the behavior of the material of the hot-forged piece. A modified Lemaitre law coupled with elasticity, plasticity, and thermal effects is used to simulate the damage. After the simulation of the different steps of the forging process, the part is cooled and then virtually machined in order to obtain the finished part. An elastic computation is then performed to equilibrate the residual stresses, so that the true geometry of the finished part after machining is obtained. The response of the part to the loadings it will sustain during its life is then computed, taking into account the residual stresses, strain hardening, and damage that occur during forging. This process is illustrated by the forging, virtual machining, and stress analysis of an aluminium wheel hub.

  11. Dynamically programmable cache

    NASA Astrophysics Data System (ADS)

    Nakkar, Mouna; Harding, John A.; Schwartz, David A.; Franzon, Paul D.; Conte, Thomas

    1998-10-01

    Reconfigurable machines have recently been used as co-processors to accelerate the execution of certain algorithms or program subroutines. The problems with this approach include high reconfiguration time and limited partial reconfiguration. By far the most critical problems are: (1) the small on-chip memory, which results in slower execution time, and (2) small FPGA areas that cannot implement large subroutines. Dynamically Programmable Cache (DPC) is a novel architecture for embedded processors that offers solutions to these problems. To solve the memory access problem, DPC processors merge reconfigurable arrays with the data cache at various cache levels to create a multi-level reconfigurable machine. As a result, DPC machines have both higher data accessibility and higher FPGA memory bandwidth. To solve the limited-FPGA-resource problem, DPC processors implement a multi-context switching (virtualization) concept. Virtualization allows implementation of large subroutines with fewer FPGA cells. Additionally, DPC processors can parallelize the execution of several operations, resulting in faster execution time. In this paper, DPC machines are shown to be 5X faster than an Altera FLEX10K FPGA chip and 2X faster than a Sun Ultra1 SPARCstation for two different algorithms (convolution and motion estimation).

  12. Agreements in Virtual Organizations

    NASA Astrophysics Data System (ADS)

    Pankowska, Malgorzata

    This chapter is an attempt to explain the important impact that contract theory has on the concept of the virtual organization. The author believes that not enough research has been conducted on transferring the theoretical foundations of networking to the phenomena of virtual organizations and the open autonomic computing environment, so as to ensure their controllability and manageability. The main research problem of this chapter is to explain the significance of agreements for the governance of virtual organizations. The first part of the chapter explains the differences between virtual machines and virtual organizations, as background for describing the significance of the former for the development of the latter. Next, virtual organization development tendencies are presented, and problems of IT governance in a highly distributed organizational environment are discussed. The last part of the chapter covers an analysis of contract and agreement management for governance in open computing environments.

  13. Issues and prospects for the next generation of the spatial data transfer standard (SDTS)

    USGS Publications Warehouse

    Arctur, D.; Hair, D.; Timson, G.; Martin, E.P.; Fegeas, R.

    1998-01-01

    The Spatial Data Transfer Standard (SDTS) was designed to be capable of representing virtually any data model, rather than being a prescription for a single data model. It has fallen short of this ambitious goal for a number of reasons, which this paper investigates. In addition to issues that might have been anticipated in its design, a number of new issues have arisen since its initial development. These include the need to support explicit feature definitions, incremental update, value-added extensions, and change tracking within large, national databases. It is time to consider the next stage of evolution for SDTS. This paper suggests development of an Object Profile for SDTS that would integrate concepts for a dynamic schema structure, OpenGIS interface, and CORBA IDL.

  14. Rational improvement of gp41-targeting HIV-1 fusion inhibitors: an innovatively designed Ile-Asp-Leu tail with alternative conformations.

    PubMed

    Zhu, Yun; Su, Shan; Qin, Lili; Wang, Qian; Shi, Lei; Ma, Zhenxuan; Tang, Jianchao; Jiang, Shibo; Lu, Lu; Ye, Sheng; Zhang, Rongguang

    2016-09-26

    Peptides derived from the C-terminal heptad repeat (CHR) of HIV gp41 have been developed as effective fusion inhibitors against HIV-1, but they face the challenges of enhancing potency and stability. Here, we report a rationally designed novel HIV-1 fusion inhibitor derived from a CHR-derived peptide (Trp628~Gln653, named CP), but with an innovative Ile-Asp-Leu tail (IDL) that dramatically increases the inhibitory activity by up to 100-fold. We also determined the crystal structures of the artificial fusion peptides N36- and N43-L6-CP-IDL. Although the overall structures of both fusion peptides share the canonical six-helix bundle (6-HB) configuration, their IDL tails adopt two different conformations: a one-turn helix with N36, and a hook-like structure with the longer N43. Structural comparison showed that the hook-like IDL tail possesses a larger interaction interface with the NHR than the helical one. Further molecular dynamics simulations of the two 6-HBs and isolated CP-IDL peptides suggested that the hook-like form of the IDL tail can be stabilized by its binding to the NHR trimer. Therefore, CP-IDL has potential for further development as a new HIV fusion inhibitor, and this strategy could be widely used in developing artificial fusion inhibitors against HIV and other enveloped viruses.

  15. Case Study – Idling Reduction Technologies for Emergency Service Vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laughlin, Michael; Owens, Russell J.

    2016-01-01

    This case study explores the use of idle reduction technologies (IRTs) on emergency service vehicles in police, fire, and ambulance applications. Various commercially available IRT systems and approaches can decrease, or ultimately eliminate, engine idling. Fleets will thus save money on fuel, and will also decrease their criteria pollutant emissions, greenhouse gas emissions, and noise.

  16. Alternative Fuels Data Center: County Fleet Goes Big on Idle Reduction,

    Science.gov Websites


  17. 40 CFR 85.2210 - Engine restart 2500 rpm/idle test-EPA 81.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Engine restart 2500 rpm/idle test-EPA... Warranty Short Tests § 85.2210 Engine restart 2500 rpm/idle test—EPA 81. (a)(1) General calendar year... engines. (ii) In a state for which the Administrator has approved a State Implementation Plan revision...

  18. 40 CFR 85.2214 - Two speed idle test-EPA 81.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 18 2011-07-01 2011-07-01 false Two speed idle test-EPA 81. 85.2214... Tests § 85.2214 Two speed idle test—EPA 81. (a)(1) General calendar year applicability. The test... exhaust pipes originate from a common point. (4) The engine speed is increased to 2500 ±300 rpm, with...

  19. 40 CFR 85.2214 - Two speed idle test-EPA 81.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Two speed idle test-EPA 81. 85.2214... Tests § 85.2214 Two speed idle test—EPA 81. (a)(1) General calendar year applicability. The test... exhaust pipes originate from a common point. (4) The engine speed is increased to 2500 ±300 rpm, with...

  20. Productivity improvement through cycle time analysis

    NASA Astrophysics Data System (ADS)

    Bonal, Javier; Rios, Luis; Ortega, Carlos; Aparicio, Santiago; Fernandez, Manuel; Rosendo, Maria; Sanchez, Alejandro; Malvar, Sergio

    1996-09-01

    A cycle time (CT) reduction methodology has been developed at the Lucent Technologies facility (formerly AT&T) in Madrid, Spain. It is based on comparing the contribution of each process step in each technology with a target generated by a cycle time model. These target cycle times are obtained using capacity data for the machines processing those steps, queuing theory, and theory of constraints (TOC) principles (buffers to protect the bottleneck, and low cycle time/inventory everywhere else). An overall equipment efficiency (OEE)-like analysis is done for the machine groups with major differences between their target cycle times and real values. Comparisons between the current values of the parameters that determine their capacity (process times, availability, idles, reworks, etc.) and the engineering standards are made to detect the cause of each excess contribution to the cycle time. Several user-friendly graphical tools have been developed to track and analyze those capacity parameters. Two tools have proved especially important: ASAP (analysis of scheduling, arrivals and performance) and the Performer, which analyzes interrelation problems among machines, procedures, and direct labor. The Performer is designed for a detailed, daily analysis of an isolated machine. The extensive use of this tool by the whole labor force has produced impressive results in eliminating multiple small inefficiencies, with direct positive implications for OEE. As for ASAP, it shows the lots in process or in queue for different machines at the same time. ASAP is a powerful tool for analyzing product flow management and the assigned capacity for interdependent operations such as cleaning and oxidation/diffusion. Additional tools have been developed to track, analyze and improve process times and availability.
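
    For readers unfamiliar with how such cycle-time targets are generated, the sketch below combines a standard queueing approximation (Kingman's G/G/1 formula) with the usual OEE product. The formulas are textbook ones and the numbers are illustrative, not the paper's.

    ```python
    # Hedged sketch of a queueing-based cycle-time target plus an OEE value.
    def target_cycle_time(process_time, utilization, ca2=1.0, cs2=1.0):
        """Kingman: Wq ~ (rho/(1-rho)) * (ca^2 + cs^2)/2 * te; add te for CT."""
        rho = utilization
        wq = (rho / (1.0 - rho)) * ((ca2 + cs2) / 2.0) * process_time
        return wq + process_time          # queue time + raw process time

    def oee(availability, performance, quality):
        # OEE is the product of the three standard loss factors
        return availability * performance * quality

    print(target_cycle_time(process_time=1.5, utilization=0.85))  # hours
    print(oee(0.90, 0.95, 0.99))
    ```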

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCaskey, Alexander J.

    There is a lack of state-of-the-art HPC simulation tools for simulating general quantum computing. Furthermore, there are no real software tools that integrate current quantum computers into existing classical HPC workflows. This product, the Quantum Virtual Machine (QVM), solves this problem by providing an extensible framework for pluggable virtual, or physical, quantum processing units (QPUs). It enables the execution of low level quantum assembly codes and returns the results of such executions.
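
    The QVM's plugin idea can be pictured with a toy interface; the class and method names below are hypothetical illustrations of a pluggable-QPU abstraction, not the actual QVM API.

    ```python
    # Illustrative-only sketch of pluggable virtual/physical QPU backends.
    from abc import ABC, abstractmethod

    class QPU(ABC):
        """A virtual or physical quantum processing unit plugin."""
        @abstractmethod
        def execute(self, qasm: str) -> dict:
            """Run low-level quantum assembly, return measurement counts."""

    class SimulatorQPU(QPU):
        def execute(self, qasm: str) -> dict:
            # a real plugin would parse and simulate the circuit here
            return {"00": 512, "11": 512}

    def run(qpu: QPU, qasm: str) -> dict:
        return qpu.execute(qasm)   # same call works for any registered QPU

    print(run(SimulatorQPU(), "H 0; CNOT 0 1; MEASURE"))
    ```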

  2. The Fluke Security Project

    DTIC Science & Technology

    2000-04-01

    be an extension of Utah's nascent Quarks system, oriented to closely coupled cluster environments. However, the grant did not actually begin until... Intel x86, implemented ten virtual machine monitors and servers, including a virtual memory manager, a checkpointer, a process manager, a file server... Fluke, we developed a novel hierarchical processor scheduling framework called CPU inheritance scheduling [5]. This is a framework for scheduling

  3. The Integration of CloudStack and OCCI/OpenNebula with DIRAC

    NASA Astrophysics Data System (ADS)

    Méndez Muñoz, Víctor; Fernández Albor, Víctor; Graciani Diaz, Ricardo; Casajús Ramo, Adriàn; Fernández Pena, Tomás; Merino Arévalo, Gonzalo; José Saborido Silva, Juan

    2012-12-01

    The increasing availability of Cloud resources is arising as a realistic alternative to the Grid as a paradigm for enabling scientific communities to access large distributed computing resources. The DIRAC framework for distributed computing provides an easy way to access resources from both systems efficiently. This paper explains the integration of DIRAC with two open-source Cloud Managers: OpenNebula (taking advantage of the OCCI standard) and CloudStack. These are computing tools to manage the complexity and heterogeneity of distributed data center infrastructures, allowing virtual clusters to be created on demand, including public, private and hybrid clouds. This approach required developing an extension to the previous DIRAC Virtual Machine engine, which was developed for Amazon EC2, allowing connection with these new cloud managers. In the OpenNebula case, the development has been based on the CernVM Virtual Software Appliance with appropriate contextualization, while in the case of CloudStack, the infrastructure has been kept more general, which permits other Virtual Machine sources and operating systems to be used. In both cases, the CernVM File System has been used to facilitate software distribution to the computing nodes. With the resulting infrastructure, the cloud resources are transparent to the users through a friendly interface, such as the DIRAC Web Portal. The main purpose of this integration is to obtain a system that can manage cloud and grid resources at the same time. This particular feature pushes DIRAC toward a new conceptual denomination as interware, integrating different middleware. Users from different communities need not care about the installation of the standard software available at the nodes, nor about the operating system of the host machine, which is transparent to the user. This paper presents an analysis of the overhead of the virtual layer, with tests comparing the proposed approach with the existing Grid solution.

  4. Generating Contextual Descriptions of Virtual Reality (VR) Spaces

    NASA Astrophysics Data System (ADS)

    Olson, D. M.; Zaman, C. H.; Sutherland, A.

    2017-12-01

    Virtual reality holds great potential for science communication, education, and research. However, interfaces for manipulating data and environments in virtual worlds are limited and idiosyncratic. Furthermore, speech and vision are the primary modalities by which humans collect information about the world, but the linking of visual and natural language domains is a relatively new pursuit in computer vision. Machine learning techniques have been shown to be effective at image and speech classification, as well as at describing images with language (Karpathy 2016), but have not yet been used to describe potential actions. We propose a technique for creating a library of possible context-specific actions associated with 3D objects in immersive virtual worlds based on a novel dataset generated natively in virtual reality containing speech, image, gaze, and acceleration data. We will discuss the design and execution of a user study in virtual reality that enabled the collection and the development of this dataset. We will also discuss the development of a hybrid machine learning algorithm linking vision data with environmental affordances in natural language. Our findings demonstrate that it is possible to develop a model which can generate interpretable verbal descriptions of possible actions associated with recognized 3D objects within immersive VR environments. This suggests promising applications for more intuitive user interfaces through voice interaction within 3D environments. It also demonstrates the potential to apply vast bodies of embodied and semantic knowledge to enrich user interaction within VR environments. This technology would allow for applications such as expert knowledge annotation of 3D environments, complex verbal data querying and object manipulation in virtual spaces, and computer-generated, dynamic 3D object affordances and functionality during simulations.

  5. The perception of spatial layout in real and virtual worlds.

    PubMed

    Arthur, E J; Hancock, P A; Chrysler, S T

    1997-01-01

    As human-machine interfaces grow more immersive and graphically oriented, virtual environment systems become more prominent as the medium for human-machine communication. Often, virtual environments (VEs) are built to provide exact metrical representations of existing or proposed physical spaces. However, it is not known how individuals develop representational models of the spaces in which they are immersed, and how those models may be distorted with respect to both the virtual and real-world equivalents. To evaluate the process of model development, the present experiment examined participants' ability to reproduce a complex spatial layout of objects, having experienced them previously under different viewing conditions. The layout consisted of nine common objects arranged on a flat plane. These objects could be viewed in a free binocular virtual condition, a free binocular real-world condition, and a static monocular view of the real world. The first two allowed active exploration of the environment, while the latter condition allowed the participant only a passive opportunity to observe from a single viewpoint. Viewing condition was a between-subjects variable, with 10 participants randomly assigned to each condition. Performance was assessed using mapping accuracy and triadic comparisons of relative inter-object distances. Mapping results showed a significant effect of viewing condition in which, interestingly, the static monocular condition was superior to both the active virtual and real binocular conditions. Results for the triadic comparisons showed a significant gender-by-viewing-condition interaction in which males were more accurate than females. These results suggest that the situation model resulting from interaction with a virtual environment was indistinguishable from that resulting from interaction with real objects, at least within the constraints of the present procedure.

  6. Breadboard RL10-2B low-thrust operating mode (second iteration) test report

    NASA Technical Reports Server (NTRS)

    Kanic, Paul G.; Kaldor, Raymond B.; Watkins, Pia M.

    1988-01-01

    Cryogenic rocket engines that require a cooling process to thermally condition the engine to operating temperature can be made more efficient if the cooling propellants can be burned. Tank head idle and pumped idle modes can be used to burn the propellants employed for cooling, thereby providing useful thrust. Such idle modes require the use of a heat exchanger to vaporize oxygen prior to injection into the combustion chamber. During December 1988, Pratt and Whitney conducted a series of engine hot firings demonstrating the operation of two new, previously untested oxidizer heat exchanger designs. The program was a second iteration of previous low-thrust testing conducted in 1984, during which a first-generation heat exchanger design was used. Although operation was demonstrated at tank head idle and pumped idle, the engine experienced instability when propellants could not be supplied to the heat exchanger at design conditions.

  7. Performance of machine-learning scoring functions in structure-based virtual screening.

    PubMed

    Wójcikowski, Maciej; Ballester, Pedro J; Siedlecki, Pawel

    2017-04-25

    Classical scoring functions have reached a plateau in their performance in virtual screening and binding affinity prediction. Recently, machine-learning scoring functions trained on protein-ligand complexes have shown great promise in small tailored studies. They have also raised controversy, specifically concerning model overfitting and applicability to novel targets. Here we provide a new ready-to-use scoring function (RF-Score-VS) trained on 15,426 active and 893,897 inactive molecules docked to a set of 102 targets. We use the full DUD-E data sets along with three docking tools, five classical and three machine-learning scoring functions for model building and performance assessment. Our results show RF-Score-VS can substantially improve virtual screening performance: at the top 1% of ranked compounds, RF-Score-VS provides a 55.6% hit rate, whereas Vina provides only 16.2% (at smaller cutoffs the difference is even larger: at the top 0.1%, RF-Score-VS achieves an 88.6% hit rate versus 27.5% for Vina). In addition, RF-Score-VS provides much better prediction of measured binding affinity than Vina (Pearson correlation of 0.56 versus -0.18). Lastly, we tested RF-Score-VS on an independent test set from the DEKOIS benchmark and observed comparable results. We provide the full data sets to facilitate further research in this area (http://github.com/oddt/rfscorevs) as well as the ready-to-use RF-Score-VS (http://github.com/oddt/rfscorevs_binary).
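
    The evaluation metric is easy to reproduce in outline: train a random forest on docked-complex features and compute the hit rate among the top-ranked 1%. The sketch below uses synthetic stand-in data; RF-Score-VS itself is available at the URLs above.

    ```python
    # Hedged sketch of a machine-learning scoring function evaluated by
    # top-1% hit rate. Features and labels are synthetic stand-ins.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)
    X = rng.normal(size=(5000, 36))             # e.g., 36 contact-type counts
    y = (X[:, 0] + rng.normal(size=5000)) > 2   # sparse "active" labels

    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    scores = model.predict_proba(X)[:, 1]       # in practice: held-out targets

    top = np.argsort(scores)[::-1][: len(scores) // 100]   # top 1% by score
    print("top-1% hit rate:", y[top].mean())
    ```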

  8. Bridging Realty to Virtual Reality: Investigating Gender Effect and Student Engagement on Learning through Video Game Play in an Elementary School Classroom

    ERIC Educational Resources Information Center

    Annetta, Leonard; Mangrum, Jennifer; Holmes, Shawn; Collazo, Kimberly; Cheng, Meng-Tzu

    2009-01-01

    The purpose of this study was to examine students' learning of simple machines, a fifth-grade (ages 10-11) forces and motion unit, and student engagement using a teacher-created Multiplayer Educational Gaming Application. This mixed-method study collected pre-test/post-test results to determine student knowledge about simple machines. A survey…

  9. Breadboard RL10-11B low thrust operating mode

    NASA Technical Reports Server (NTRS)

    Kmiec, Thomas D.; Galler, Donald E.

    1987-01-01

    Cryogenic space engines require a cooling process to condition engine hardware to operating temperature before start. This can be accomplished most efficiently by burning propellants that would otherwise be dumped overboard after cooling the engine. The resultant low thrust operating modes are called Tank Head Idle and Pumped Idle. During February 1984, Pratt & Whitney conducted a series of tests demonstrating operation of the RL10 rocket engines at low thrust levels using a previously untried hydrogen/oxygen heat exchanger. The initial testing of the RL10-11B Breadboard Low Thrust Engine is described. The testing demonstrated operation at both tank head idle and pumped idle modes.

  10. Things That Work: Roles and Services of SPDF

    NASA Technical Reports Server (NTRS)

    McGuire, R. E.; Bilitza, D.; Candey, R. M.; Chimiak, R. A.; Cooper, J. F.; Garcia, L. N.; Han, D. B.; Harris, B. T.; Johnson, R. C.; King, J. H.

    2010-01-01

    The current Heliophysics Science Data Management Policy (HpSDMP) defines the roles of the Space Physics Data Facility (SPDF) project as a heliophysics active Final Archive (aFA), a focus for critical data infrastructure services, and a center of excellence for data and ancillary information services. This presentation will highlight (1) select current SPDF activities, (2) the lessons we are continuing to learn in how to usefully serve the heliophysics science community, and (3) SPDF's programmatic emphasis in the coming year. In cooperation with the Heliophysics Virtual discipline Observatories (VxOs), we are working closely with current missions, and with upcoming missions such as RBSP and MMS, to define effective approaches to ensure the long-term availability and archiving of mission data, as well as how SPDF services can complement active mission capabilities. We are working to make the Virtual Space Physics Observatory (VSPO) service comprehensive in all significant and NASA-relevant heliophysics data. We will highlight a new CDAWeb interface, a faster SSCWeb, availability of our data through VxO services such as Autoplot, a new capability to easily access our data from within IDL, and continuing improvements to CDF, including better handling of leap seconds.

  11. Architecture and Key Techniques of Augmented Reality Maintenance Guiding System for Civil Aircrafts

    NASA Astrophysics Data System (ADS)

    hong, Zhou; Wenhua, Lu

    2017-01-01

    Augmented reality technology is introduced into the maintenance field to strengthen the information available in real-world scenarios by integrating virtual maintenance-assistance information with those scenarios. This can lower the difficulty of maintenance, reduce maintenance errors, and improve the maintenance efficiency and quality of civil aviation crews. An architecture for an augmented reality virtual maintenance guiding system is proposed on the basis of the definition of augmented reality and an analysis of the characteristics of augmented reality virtual maintenance. Key techniques involved, such as standardization and organization of maintenance data, 3D registration, modeling of maintenance guidance information, and virtual maintenance man-machine interaction, are elaborated, and solutions are given.

  12. Achieving High Resolution Timer Events in Virtualized Environment

    PubMed Central

    Adamczyk, Blazej; Chydzinski, Andrzej

    2015-01-01

    Virtual Machine Monitors (VMMs) have become popular in different application areas. Some applications may require generating timer events with high resolution and precision. This, however, may be challenging due to the complexity of VMMs. In this paper we focus on the timer functionality provided by five different VMMs: Xen, KVM, Qemu, VirtualBox and VMWare. First, we evaluate the resolutions and precisions of their timer events. The resolutions and precisions provided turn out to be far too low for some applications (e.g. networking applications with quality of service). Then, using Xen virtualization, we demonstrate an improved timer design that greatly enhances both the resolution and precision of the achieved timer events. PMID:26177366

  13. A computer-based training system combining virtual reality and multimedia

    NASA Technical Reports Server (NTRS)

    Stansfield, Sharon A.

    1993-01-01

    Training new users of complex machines is often an expensive and time-consuming process. This is particularly true for special purpose systems, such as those frequently encountered in DOE applications. This paper presents a computer-based training system intended as a partial solution to this problem. The system extends the basic virtual reality (VR) training paradigm by adding a multimedia component which may be accessed during interaction with the virtual environment. The 3D model used to create the virtual reality is also used as the primary navigation tool through the associated multimedia. This method exploits the natural mapping between a virtual world and the real world that it represents to provide a more intuitive way for the student to interact with all forms of information about the system.

  14. DIRAC universal pilots

    NASA Astrophysics Data System (ADS)

    Stagni, F.; McNab, A.; Luzzi, C.; Krzemien, W.; Consortium, DIRAC

    2017-10-01

    In the last few years, new types of computing models, such as IaaS (Infrastructure as a Service) and IaaC (Infrastructure as a Client), have gained popularity. New resources may come as part of pledged resources, while others are opportunistic. Most, but not all, of these new infrastructures are based on virtualization techniques, and some of them present opportunities for multi-processor computing slots to the users. Virtual Organizations are therefore facing heterogeneity of the available resources, and the use of interware software like DIRAC to provide a transparent, uniform interface has become essential. The transparent access to the underlying resources is realized by implementing the pilot model. DIRAC's newest generation of generic pilots (the so-called Pilots 2.0) are the "pilots for all the skies", and were successfully released in production more than a year ago. They use a plugin mechanism that makes them easily adaptable. Pilots 2.0 have been used for fetching and running jobs on every type of resource, be it a Worker Node (WN) behind a CREAM/ARC/HTCondor/DIRAC Computing Element, a Virtual Machine running on IaaC infrastructures like Vac or BOINC, IaaS cloud resources managed by Vcycle, the LHCb High Level Trigger farm nodes, or any type of opportunistic computing resource. Make a machine a "pilot machine", and all diversity between resources disappears. This contribution describes how pilots are made suitable for different resources, and the recent steps taken towards a fully unified framework, including monitoring. The cases of multi-processor computing slots, either on real or virtual machines, with the whole node or a partition of it, are also discussed.

  15. A virtual simulator designed for collision prevention in proton therapy.

    PubMed

    Jung, Hyunuk; Kum, Oyeon; Han, Youngyih; Park, Hee Chul; Kim, Jin Sung; Choi, Doo Ho

    2015-10-01

    In proton therapy, collisions between the patient and the nozzle can occur because of the large nozzle structure and efforts to minimize the air gap. Software was therefore developed to predict such collisions between the nozzle and the patient using virtual treatment simulation. Three-dimensional (3D) modeling of the gantry inner floor, nozzle, and robotic couch was performed using SolidWorks, based on the manufacturer's machine data. To obtain patient body information, a 3D scanner was used immediately before CT scanning. Using the acquired images, a 3D image of the patient's body contour was reconstructed. The accuracy of the image was confirmed against the CT image of a humanoid phantom. The machine components and the virtual patient were combined in the treatment-room coordinate system, resulting in a virtual simulator. The simulator reproduced the motion of its components, such as rotation and translation of the gantry, nozzle, and couch, at real scale. Collisions, if any, were examined in both static and dynamic modes. The static mode assessed collisions only at fixed positions of the machine's components, while the dynamic mode operated whenever a component was in motion. A collision was identified if any voxels of two components, e.g., the nozzle and the patient or couch, overlapped when their volume locations were calculated. The event and collision point were visualized, and collision volumes were reported. All components were successfully assembled, and the motions were accurately controlled. The 3D shape of the phantom agreed with the CT images within a deviation of 2 mm. Collision situations were simulated within minutes, and the results were displayed and reported. The developed software will be useful in improving patient safety and the clinical efficiency of proton therapy.
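
    The voxel-overlap test at the core of the simulator reduces to a Boolean intersection of occupancy grids; a minimal sketch follows, with toy grids standing in for the voxelized nozzle and patient.

    ```python
    # Hedged sketch: flag a collision wherever two components' occupancy
    # masks, voxelized into the same treatment-room grid, overlap.
    import numpy as np

    def detect_collision(occ_a, occ_b):
        """Boolean occupancy grids -> (collided?, indices of overlap)."""
        overlap = occ_a & occ_b
        return overlap.any(), np.argwhere(overlap)

    nozzle = np.zeros((50, 50, 50), dtype=bool)
    patient = np.zeros_like(nozzle)
    nozzle[20:30, 20:30, 40:50] = True     # toy voxelization of the nozzle
    patient[25:35, 25:35, 45:50] = True    # toy voxelization of the patient

    hit, points = detect_collision(nozzle, patient)
    print("collision:", hit, "| overlapping voxels:", len(points))
    ```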

  16. Ultrafine particle concentrations in and around idling school buses

    NASA Astrophysics Data System (ADS)

    Zhang, Qunfang; Fischer, Heidi J.; Weiss, Robert E.; Zhu, Yifang

    2013-04-01

    Unnecessary school bus idling increases children's exposure to diesel exhaust, but to what extent children are exposed to ultrafine particles (UFPs, diameter < 100 nm) in and around idling school buses remains unclear. This study employed nine school buses and simulated five scenarios by varying emissions source, wind direction, and window position. The purpose was to investigate the impact of idling on UFP number concentration and PM2.5 mass concentration inside and near school buses. Near the school buses, total particle number concentration increased sharply from engine off to engine on under all scenarios, by a factor of up to 26. The impact of idling on UFP number concentration inside the school buses depended on wind direction and window position: wind direction was important and statistically significant while the effect of window positions depended on wind direction. Under certain scenarios, idling increased in-cabin total particle number concentrations by a factor of up to 5.8, with the significant increase occurring in the size range of 10-30 nm. No significant change of in-cabin PM2.5 mass concentration was observed due to idling, regardless of wind direction and window position, indicating that PM2.5 is not a good indicator for primary diesel exhaust particle exposure. The deposition rates based on total particle number concentration inside school bus cabins varied between 1.5 and 5.0 h-1 across nine tested buses under natural convection conditions, lower than those of passenger cars but higher than those of indoor environments.
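
    Deposition rates like those reported (1.5-5.0 h-1) are conventionally estimated by fitting a first-order decay N(t) = N0 exp(-kt) to the post-source concentration record; the sketch below does this with a log-linear fit on synthetic data and is not the authors' exact procedure.

    ```python
    # Hedged sketch: recover a first-order particle deposition rate k (h^-1)
    # from a decaying number-concentration time series via a log-linear fit.
    import numpy as np

    t = np.linspace(0, 2, 25)                 # hours after the source is off
    noise = np.random.default_rng(2).normal(0, 0.05, t.size)
    n = 1e4 * np.exp(-3.0 * t) * np.exp(noise)  # synthetic decay, true k = 3

    slope, log_n0 = np.polyfit(t, np.log(n), 1)  # ln N = ln N0 - k t
    print(f"deposition rate ~ {-slope:.2f} per hour")
    ```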

  17. How can machine-learning methods assist in virtual screening for hyperuricemia? A healthcare machine-learning approach.

    PubMed

    Ichikawa, Daisuke; Saito, Toki; Ujita, Waka; Oyama, Hiroshi

    2016-12-01

    Our purpose was to develop a new machine-learning approach (a virtual health check-up) to identifying those at high risk of hyperuricemia. Applying the system to general health check-ups is expected to reduce medical costs compared with administering an additional test. Data were collected during annual health check-ups performed in Japan between 2011 and 2013 (inclusive). We prepared training and test datasets from the health check-up data to build prediction models; these comprised 43,524 and 17,789 persons, respectively. Gradient-boosting decision tree (GBDT), random forest (RF), and logistic regression (LR) approaches were trained on the training dataset and then used to predict hyperuricemia in the test dataset. Undersampling was applied when building the prediction models to deal with the imbalanced class dataset. The results showed that the RF and GBDT approaches afforded the best performances in terms of sensitivity and specificity, respectively. The area under the curve (AUC) values of the models, which reflect the total discriminative ability of the classification, were 0.796 [95% confidence interval (CI): 0.766-0.825] for the GBDT, 0.784 [95% CI: 0.752-0.815] for the RF, and 0.785 [95% CI: 0.752-0.819] for the LR approach. No significant differences were observed between any pair of approaches. Only small changes in the AUCs occurred after applying undersampling to build the models. We developed a virtual health check-up that predicts the development of hyperuricemia using machine-learning methods. The GBDT, RF, and LR methods had similar predictive capability, and undersampling did not markedly improve predictive power.
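
    As a sketch of the modeling recipe (undersample the majority class, train a booster, report AUC), the code below uses synthetic data and scikit-learn defaults; it is illustrative, not the study's pipeline.

    ```python
    # Hedged sketch: undersampling + gradient boosting scored by ROC AUC.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(3)
    X = rng.normal(size=(4000, 10))
    y = (X[:, 0] + rng.normal(0, 2, 4000) > 2.2).astype(int)  # imbalanced

    # undersample: keep all positives and an equal-size draw of negatives
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    keep = np.concatenate([pos, rng.choice(neg, size=len(pos), replace=False)])
    model = GradientBoostingClassifier().fit(X[keep], y[keep])

    # in practice the AUC would be computed on a held-out test set
    print("AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))
    ```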

  18. The HARNESS Workbench: Unified and Adaptive Access to Diverse HPC Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sunderam, Vaidy S.

    2012-03-20

    The primary goal of the Harness WorkBench (HWB) project is to investigate innovative software environments that help enhance the overall productivity of applications science on diverse HPC platforms. Two complementary frameworks were designed: one, a virtualized command toolkit for application building, deployment, and execution that provides a common view across diverse HPC systems, in particular the DOE leadership computing platforms (Cray, IBM, SGI, and clusters); and two, a unified runtime environment that consolidates access to runtime services via an adaptive framework for execution-time and post-processing activities. A prototype of the first was developed based on the concept of a 'system-call virtual machine' (SCVM), to enhance the portability of the HPC application deployment process across heterogeneous high-end machines. The SCVM approach to portable builds is based on the insertion of toolkit-interpretable directives into original application build scripts. Modifications resulting from these directives preserve the semantics of the original build instruction flow. The execution of the build script is controlled by the toolkit, which intercepts build script commands in a manner transparent to the end user. We have applied this approach to a scientific production code (GAMESS-US) on the Cray XT5 machine. The second facet, termed Unibus, aims to facilitate provisioning and aggregation of multifaceted resources from the resource provider's and end user's perspectives. To achieve that, Unibus proposes a Capability Model and mediators (resource drivers) to virtualize access to diverse resources, and soft and successive conditioning to enable automatic and user-transparent resource provisioning. A proof-of-concept implementation has demonstrated the viability of this approach on high-end machines, grid systems and computing clouds.
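
    The directive mechanism can be pictured with a toy translator: scan the build script for toolkit-interpretable directives and rewrite the affected commands per platform, leaving everything else untouched. The '#HWB:' syntax and the platform table below are hypothetical, not the project's actual notation.

    ```python
    # Illustrative-only sketch of directive-driven build-script rewriting.
    PLATFORM_COMPILERS = {"cray-xt5": "cc", "generic-linux": "mpicc"}

    def translate(script_lines, platform):
        out = []
        for line in script_lines:
            if line.startswith("#HWB: compiler"):
                # directive expands to a platform-specific setting
                out.append(f"CC={PLATFORM_COMPILERS[platform]}")
            else:
                out.append(line)  # original instruction flow is preserved
        return out

    print("\n".join(translate(["#HWB: compiler", "$CC -o app app.c"],
                              "cray-xt5")))
    ```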

  19. Design of efficient molecular organic light-emitting diodes by a high-throughput virtual screening and experimental approach.

    PubMed

    Gómez-Bombarelli, Rafael; Aguilera-Iparraguirre, Jorge; Hirzel, Timothy D; Duvenaud, David; Maclaurin, Dougal; Blood-Forsythe, Martin A; Chae, Hyun Sik; Einzinger, Markus; Ha, Dong-Gwang; Wu, Tony; Markopoulos, Georgios; Jeon, Soonok; Kang, Hosuk; Miyazaki, Hiroshi; Numata, Masaki; Kim, Sunghan; Huang, Wenliang; Hong, Seong Ik; Baldo, Marc; Adams, Ryan P; Aspuru-Guzik, Alán

    2016-10-01

    Virtual screening is becoming a ground-breaking tool for molecular discovery due to the exponential growth of available computer time and constant improvement of simulation and machine learning techniques. We report an integrated organic functional material design process that incorporates theoretical insight, quantum chemistry, cheminformatics, machine learning, industrial expertise, organic synthesis, molecular characterization, device fabrication and optoelectronic testing. After exploring a search space of 1.6 million molecules and screening over 400,000 of them using time-dependent density functional theory, we identified thousands of promising novel organic light-emitting diode molecules across the visible spectrum. Our team collaboratively selected the best candidates from this set. The experimentally determined external quantum efficiencies for these synthesized candidates were as large as 22%.

  20. Using a Virtual Tablet Machine to Improve Student Understanding of the Complex Processes Involved in Tablet Manufacturing.

    PubMed

    Mattsson, Sofia; Sjöström, Hans-Erik; Englund, Claire

    2016-06-25

    Objective. To develop and implement a virtual tablet machine simulation to aid distance students' understanding of the processes involved in tablet production. Design. A tablet simulation was created enabling students to study the effects different parameters have on the properties of the tablet. Once results were generated, students interpreted and explained them on the basis of current theory. Assessment. The simulation was evaluated using written questionnaires and focus group interviews. Students appreciated the exercise and considered it to be motivational. Students commented that they found that the simulation, together with the online seminar and the writing of the report, was beneficial for their learning process. Conclusion. According to students' perceptions, the use of the tablet simulation contributed to their understanding of the compaction process.

  1. Using a Virtual Tablet Machine to Improve Student Understanding of the Complex Processes Involved in Tablet Manufacturing

    PubMed Central

    Sjöström, Hans-Erik; Englund, Claire

    2016-01-01

    Objective. To develop and implement a virtual tablet machine simulation to aid distance students’ understanding of the processes involved in tablet production. Design. A tablet simulation was created enabling students to study the effects different parameters have on the properties of the tablet. Once results were generated, students interpreted and explained them on the basis of current theory. Assessment. The simulation was evaluated using written questionnaires and focus group interviews. Students appreciated the exercise and considered it to be motivational. Students commented that they found that the simulation, together with the online seminar and the writing of the report, was beneficial for their learning process. Conclusion. According to students’ perceptions, the use of the tablet simulation contributed to their understanding of the compaction process. PMID:27402990

  2. Design of efficient molecular organic light-emitting diodes by a high-throughput virtual screening and experimental approach

    NASA Astrophysics Data System (ADS)

    Gómez-Bombarelli, Rafael; Aguilera-Iparraguirre, Jorge; Hirzel, Timothy D.; Duvenaud, David; MacLaurin, Dougal; Blood-Forsythe, Martin A.; Chae, Hyun Sik; Einzinger, Markus; Ha, Dong-Gwang; Wu, Tony; Markopoulos, Georgios; Jeon, Soonok; Kang, Hosuk; Miyazaki, Hiroshi; Numata, Masaki; Kim, Sunghan; Huang, Wenliang; Hong, Seong Ik; Baldo, Marc; Adams, Ryan P.; Aspuru-Guzik, Alán

    2016-10-01

    Virtual screening is becoming a ground-breaking tool for molecular discovery due to the exponential growth of available computer time and constant improvement of simulation and machine learning techniques. We report an integrated organic functional material design process that incorporates theoretical insight, quantum chemistry, cheminformatics, machine learning, industrial expertise, organic synthesis, molecular characterization, device fabrication and optoelectronic testing. After exploring a search space of 1.6 million molecules and screening over 400,000 of them using time-dependent density functional theory, we identified thousands of promising novel organic light-emitting diode molecules across the visible spectrum. Our team collaboratively selected the best candidates from this set. The experimentally determined external quantum efficiencies for these synthesized candidates were as large as 22%.

  3. Making extreme computations possible with virtual machines

    NASA Astrophysics Data System (ADS)

    Reuter, J.; Chokoufe Nejad, B.; Ohl, T.

    2016-10-01

    State-of-the-art algorithms generate scattering amplitudes for high-energy physics at leading order for high-multiplicity processes as compiled code (in Fortran, C or C++). For complicated processes the size of these code libraries can become tremendous (many GiB). We show that amplitudes can instead be translated to byte-code instructions, which reduces the size by an order of magnitude. The byte-code is interpreted by a virtual machine with runtimes comparable to compiled code and better scaling with additional legs. We study the properties of this algorithm, as an extension of the Optimizing Matrix Element Generator (O'Mega). The byte-code matrix elements are available as an alternative input for the event generator WHIZARD. The byte-code interpreter can be implemented very compactly, which will help with a future implementation on massively parallel GPUs.
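
    The core idea above, replacing enormous compiled expressions with compact byte-code run on a small interpreter, can be illustrated schematically. The toy opcodes and the example expression below are invented for this sketch and have nothing to do with the actual O'Mega instruction set.

    ```python
    # Encode an arithmetic expression once as byte-code and evaluate it in a
    # tiny stack-based virtual machine, instead of compiling generated source.
    LOAD, MUL, ADD = 0, 1, 2

    def run(bytecode, inputs):
        """Evaluate byte-code on a value stack; returns the single result."""
        stack = []
        for op, arg in bytecode:
            if op == LOAD:
                stack.append(inputs[arg])      # push an input value
            elif op == MUL:
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
            elif op == ADD:
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
        return stack.pop()

    # (x0 * x1) + x2, encoded once; the interpreter is reused per evaluation.
    program = [(LOAD, 0), (LOAD, 1), (MUL, None), (LOAD, 2), (ADD, None)]
    print(run(program, [2.0, 3.0, 4.0]))  # -> 10.0
    ```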

  4. Extraction of angle deterministic signals in the presence of stationary speed fluctuations with cyclostationary blind source separation

    NASA Astrophysics Data System (ADS)

    Delvecchio, S.; Antoni, J.

    2012-02-01

    This paper addresses the use of a cyclostationary blind source separation algorithm (namely RRCR) to extract angle-deterministic signals from mechanical rotating machines in the presence of stationary speed fluctuations. This means that only phase fluctuations while the machine is running in steady-state conditions are considered, while run-up or run-down speed variations are not taken into account. The machine is also assumed to run in idle conditions, so non-stationary phenomena due to the load are not considered. It is theoretically assessed that in such operating conditions the deterministic (periodic) signal in the angle domain becomes cyclostationary at first and second orders in the time domain. This fact justifies the use of the RRCR algorithm, which is able to directly extract the angle-deterministic signal from the time domain without performing any kind of interpolation. This is particularly valuable when angular resampling fails because of uncontrolled speed fluctuations. The capability of the proposed approach is verified by means of simulated signals and actual vibration signals captured on a pneumatic screwdriver handle. In this particular case not only can the extraction of the angle-deterministic part be performed but also the separation of the main sources of excitation (i.e. motor shaft imbalance, epicycloidal gear meshing and air pressure forces) affecting the user's hand during operation.

  5. A One-Year Case Study: Understanding the Rich Potential of Project-Based Learning in a Virtual Reality Class for High School Students

    ERIC Educational Resources Information Center

    Morales, Teresa M.; Bang, EunJin; Andre, Thomas

    2013-01-01

    This paper presents a qualitative case analysis of a new and unique, high school, student-directed, project-based learning (PBL), virtual reality (VR) class. In order to create projects, students learned, on an independent basis, how to program an industrial-level VR machine. A constraint was that students were required to produce at least one…

  6. Can Science Education Research Give an Answer to Questions Posed by History of Science and Technology? The Case of Steam Engine's Measurement

    ERIC Educational Resources Information Center

    Kanderakis, Nikos E.

    2009-01-01

    According to the principle of virtual velocities, if on a simple machine in equilibrium we suppose a slight virtual movement, then the ratio of weights or forces equals the inverse ratio of velocities or displacements. The product of the weight raised or force applied multiplied by the height or displacement plays a central role there. British…

  7. Hospital steam sterilizer usage: could we switch off to save electricity and water?

    PubMed

    McGain, Forbes; Moore, Graham; Black, Jim

    2016-07-01

    Steam sterilization in hospitals is an energy- and water-intensive process. Our aim was to identify opportunities to improve electricity and water use. The objectives were to find: the time sterilizers spent active, idle and off; the variability in sterilizer use with the time of day and day of the week; and opportunities to switch off sterilizers instead of idling when no loads were waiting, and the resultant electricity and water savings. We analysed one year of routine activity data for the four steam sterilizers in one hospital in Melbourne, Australia. We examined active sterilizer cycles, routine sterilizer switch-offs, and when sterilizers were active, idle and off. Several switch-off strategies were examined to identify electricity and water savings: switch off idle sterilizers when no loads are waiting, or switch off one sterilizer after 10:00 h and a second sterilizer after midnight on all days. Sterilizers were active for 13,430 (38%) sterilizer-hours, off for 4822 (14%) sterilizer-hours, and idle for 16,788 (48%) sterilizer-hours. All four sterilizers were simultaneously active 9% of the time, and two or more sterilizers were idle for 69% of the time. A sterilizer was idle for two hours or less 13% of the time and idle for more than 2 h 87% of the time. A strategy to switch off idle sterilizers would reduce electricity use by 66 MWh and water use by 1004 kl per year, saving 26% of electricity use and 13% of water use, resulting in financial savings of AUD$13,867 (UK£6,517) and a reduction of 79 tonnes of CO2 emissions per year. An alternative switch-off strategy of one sterilizer from 10:00 h onwards and a second from midnight would have saved 30 MWh and 456 kl of water. The methodology used here to assess how hospital sterilizer use could be improved could be applied in all hospitals and, more broadly, to other hospital equipment. © The Author(s) 2016.
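
    As a rough cross-check of the reported savings, the first switch-off strategy follows from simple arithmetic over the idle hours. The per-sterilizer idle draws below are hypothetical placeholders chosen only to show the shape of the calculation; they happen to land near the paper's totals (66 MWh, 1004 kl).

    ```python
    # Back-of-the-envelope version of the switch-off arithmetic reported above.
    # Idle electrical and water draws per sterilizer-hour are assumed values.
    idle_hours_switchable = 16788 * 0.87   # idle spells longer than 2 h
    idle_kw_per_sterilizer = 4.5           # assumed idle electrical draw, kW
    idle_water_l_per_hour = 69.0           # assumed idle water draw, litres/h

    mwh_saved = idle_hours_switchable * idle_kw_per_sterilizer / 1000
    kl_saved = idle_hours_switchable * idle_water_l_per_hour / 1000
    print(f"~{mwh_saved:.0f} MWh and ~{kl_saved:.0f} kl per year")
    ```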

  8. Methods For Self-Organizing Software

    DOEpatents

    Bouchard, Ann M.; Osbourn, Gordon C.

    2005-10-18

    A method for dynamically self-assembling and executing software is provided, containing machines that self-assemble execution sequences and data structures. In addition to ordered function calls (commonly found in other software methods), mutual selective bonding between bonding sites of machines actuates one or more of the bonding machines. Two or more machines can be virtually isolated by a construct, called an encapsulant, containing a population of machines and potentially other encapsulants that can only bond with each other. A hierarchical software structure can be created using nested encapsulants. Multi-threading is implemented by populations of machines in different encapsulants that are interacting concurrently. Machines and encapsulants can move in and out of other encapsulants, thereby changing the functionality. Bonding between machines' sites can be deterministic or stochastic, with bonding triggering a sequence of actions that can be implemented by each machine. A self-assembled execution sequence occurs as a sequence of stochastic binding between machines followed by their deterministic actuation. It is the sequence of bonding of machines that determines the execution sequence, so that the sequence of instructions need not be contiguous in memory.

  9. Idleness aversion and the need for justifiable busyness.

    PubMed

    Hsee, Christopher K; Yang, Adelle X; Wang, Liangyan

    2010-07-01

    There are many apparent reasons why people engage in activity, such as to earn money, to become famous, or to advance science. In this report, however, we suggest a potentially deeper reason: People dread idleness, yet they need a reason to be busy. Accordingly, we show in two experiments that without a justification, people choose to be idle; that even a specious justification can motivate people to be busy; and that people who are busy are happier than people who are idle. Curiously, this last effect is true even if people are forced to be busy. Our research suggests that many purported goals that people pursue may be merely justifications to keep themselves busy.

  10. A Novel Artificial Bee Colony Approach of Live Virtual Machine Migration Policy Using Bayes Theorem

    PubMed Central

    Xu, Gaochao; Hu, Liang; Fu, Xiaodong

    2013-01-01

    Green cloud data centers have become a research hotspot of virtualized cloud computing architecture. Since live virtual machine (VM) migration technology is widely used and studied in cloud computing, we have focused on the VM placement selection of live migration for power saving. We present a novel heuristic approach called PS-ABC. Its algorithm includes two parts. One combines the artificial bee colony (ABC) idea with a uniform random initialization idea, a binary search idea, and a Boltzmann selection policy to achieve an improved ABC-based approach with better global exploration and local exploitation abilities. The other uses the Bayes theorem to further optimize the improved ABC-based process to reach the final optimal solution faster. As a result, the whole approach achieves a longer-term efficient optimization for power saving. The experimental results demonstrate that PS-ABC evidently reduces the total incremental power consumption and better protects the performance of VM running and migrating compared with the existing research. It makes the result of live VM migration more effective and meaningful. PMID:24385877

  11. A novel artificial bee colony approach of live virtual machine migration policy using Bayes theorem.

    PubMed

    Xu, Gaochao; Ding, Yan; Zhao, Jia; Hu, Liang; Fu, Xiaodong

    2013-01-01

    Green cloud data centers have become a research hotspot of virtualized cloud computing architecture. Since live virtual machine (VM) migration technology is widely used and studied in cloud computing, we have focused on the VM placement selection of live migration for power saving. We present a novel heuristic approach called PS-ABC. Its algorithm includes two parts. One combines the artificial bee colony (ABC) idea with a uniform random initialization idea, a binary search idea, and a Boltzmann selection policy to achieve an improved ABC-based approach with better global exploration and local exploitation abilities. The other uses the Bayes theorem to further optimize the improved ABC-based process to reach the final optimal solution faster. As a result, the whole approach achieves a longer-term efficient optimization for power saving. The experimental results demonstrate that PS-ABC evidently reduces the total incremental power consumption and better protects the performance of VM running and migrating compared with the existing research. It makes the result of live VM migration more effective and meaningful.
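
    One ingredient of the PS-ABC approach described in the two records above is a Boltzmann selection policy. A minimal sketch of that single step, with invented hosts, fitness values, and temperatures (this is not the authors' implementation), might look like this:

    ```python
    # Boltzmann selection: pick a candidate VM placement with probability
    # proportional to exp(fitness / T); cooling T sharpens selection pressure.
    import math
    import random

    def boltzmann_select(candidates, fitness, temperature):
        """Higher fitness (e.g. larger power saving) is favoured; high
        temperature keeps early exploration close to uniform."""
        weights = [math.exp(fitness(c) / temperature) for c in candidates]
        return random.choices(candidates, weights=weights, k=1)[0]

    placements = ["host-A", "host-B", "host-C"]
    saving = {"host-A": 10.0, "host-B": 12.0, "host-C": 7.0}
    for T in (20.0, 2.0):  # cooling schedule, purely illustrative
        picks = [boltzmann_select(placements, saving.get, T) for _ in range(1000)]
        print(T, {h: picks.count(h) for h in placements})
    ```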

  12. Human Machine Interfaces for Teleoperators and Virtual Environments Conference

    NASA Technical Reports Server (NTRS)

    1990-01-01

    In a teleoperator system the human operator senses, moves within, and operates upon a remote or hazardous environment by means of a slave mechanism (a mechanism often referred to as a teleoperator). In a virtual environment system the interactive human-machine interface is retained but the slave mechanism and its environment are replaced by a computer simulation. Video is replaced by computer graphics. The auditory and force sensations imparted to the human operator are similarly computer generated. In contrast to a teleoperator system, where the purpose is to extend the operator's sensorimotor system in a manner that facilitates exploration and manipulation of the physical environment, in a virtual environment system the purpose is to train, inform, alter, or study the human operator, or to modify the state of the computer and the information environment. A major application in which the human operator is the target is that of flight simulation. Although flight simulators have been around for more than a decade, they have had little impact outside aviation, presumably because the application was so specialized and so expensive.

  13. Mathematical defense method of networked servers with controlled remote backups

    NASA Astrophysics Data System (ADS)

    Kim, Song-Kyoo

    2006-05-01

    The networked server defense model is focused on reliability and availability in security respects. The (remote) backup servers are hooked up by VPN (Virtual Private Network) over a high-speed optical network and replace broken main servers immediately. The networked servers can be represented as "machines", and the system then deals with main unreliable, spare, and auxiliary spare machines. During vacation periods, when the system performs mandatory routine maintenance, auxiliary machines are used for backups; the information on the system is naturally delayed. An analog of the N-policy is applied to restrict the usage of auxiliary machines to some reasonable quantity. The results are demonstrated in the network architecture by using stochastic optimization techniques.

  14. Hydrocarbon emissions from in-use commercial aircraft during airport operations.

    PubMed

    Herndon, Scott C; Rogers, Todd; Dunlea, Edward J; Jayne, John T; Miake-Lye, Richard; Knighton, Berk

    2006-07-15

    The emissions of selected hydrocarbons from in-use commercial aircraft at a major airport in the United States were characterized using proton-transfer reaction mass spectrometry (PTR-MS) and tunable infrared differential absorption spectroscopy (TILDAS) to probe the composition of diluted exhaust plumes downwind. The emission indices for formaldehyde, acetaldehyde, benzene, and toluene, as well as other hydrocarbon species, were determined through analysis of 45 intercepted plumes identified as being associated with specific aircraft. As would have been predicted for high-bypass turbine engines, the hydrocarbon emission index was greater in idle and taxiway acceleration plumes relative to approach and takeoff plumes. The opposite was seen in the total NOy emission index, which increased from idle to takeoff. Within the idle plumes sampled in this study, the median emission index for formaldehyde was 1.1 g of HCHO per kg of fuel. For the subset of hydrocarbons measured in this work, the idle emission levels relative to formaldehyde agree well with those of previous studies. The projected total unburned hydrocarbons (UHC) deduced from the range of in-use idle plumes analyzed in this work is greater than that predicted for a plausible range of engine types at the defined idle condition (7% of rated engine thrust) in the International Civil Aviation Organization (ICAO) databank reference.

  15. On Why It Is Impossible to Prove that the BDX930 Dispatcher Implements a Time-sharing System

    NASA Technical Reports Server (NTRS)

    Boyer, R. S.; Moore, J. S.

    1983-01-01

    The Software Implemented Fault Tolerance (SIFT) system is written in PASCAL except for about a page of machine code. The SIFT system implements a small time-sharing system in which PASCAL programs for separate application tasks are executed according to a schedule with real-time constraints. The PASCAL language has no provision for handling the notion of an interrupt such as the BDX930 clock interrupt. The PASCAL language also lacks the notion of running a PASCAL subroutine for a given amount of time, suspending it, saving away the suspension, and later activating the suspension. Machine code was used to overcome these inadequacies of PASCAL. Code which handles clock interrupts and suspends processes is called a dispatcher. The time-sharing/virtual-machine idea is completely destroyed by the reconfiguration task. After termination of the reconfiguration task, the tasks run by the dispatcher have no relation to those run before reconfiguration. It is impossible to view the dispatcher as a time-sharing system implementing virtual BDX930s running concurrently when one process can wipe out the others.

  16. Is There Still a Role for Irrigation and Debridement With Liner Exchange in Acute Periprosthetic Total Knee Infection?

    PubMed

    Duque, Andrés F; Post, Zachary D; Lutz, Rex W; Orozco, Fabio R; Pulido, Sergio H; Ong, Alvin C

    2017-04-01

    Periprosthetic joint infection (PJI) is an important cause of failure in total knee arthroplasty. Irrigation and debridement including liner exchange (I&D/L) success rates have varied for acute PJI. The purpose of this study is to present results of a specific protocol for I&D/L with retention of total knee arthroplasty components. Sixty-seven consecutive I&D/L patients were retrospectively evaluated. Inclusion criteria for I&D/L were as follows: fewer than 3 weeks of symptoms, no immunologic compromise, intact soft tissue sleeve, and well-fixed components. I&D/L consisted of extensive synovectomy; irrigation with 3 L each of betadine, Dakin's, bacitracin, and normal saline solutions; and exchange of the polyethylene component. Postoperatively, all patients were treated with intravenous antibiotics. Infection was considered eradicated if the wound healed without persistent drainage and there was no residual pain or evidence of infection. Forty-six patients (68.66%) had successful infection eradication regardless of bacterial strain. Those with methicillin-resistant Staphylococcus aureus (MRSA) had an 80% failure rate and those with Pseudomonas aeruginosa had a 66.67% failure rate. The success rate for bacteria other than MRSA and Pseudomonas was 85.25%. Our protocol for I&D/L was successful in the majority of patients who met strict criteria. We recommend that PJI patients with MRSA or P aeruginosa not undergo I&D/L and be treated with 2-stage revision. For nearly all other patients, our protocol avoids the cost and patient morbidity of a 2-stage revision. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. The TWINS Science Data System after the launch of TWINS 1

    NASA Astrophysics Data System (ADS)

    Goldstein, J.; Valek, P.; Skoug, R.; Delapp, D.; Redfern, J.; Carruth, B.; McComas, D.

    2007-05-01

    The Two Wide-angle Imaging Neutral-atom Spectrometers (TWINS) 1 satellite is in orbit and science data are expected to commence in the near future. TWINS-1 comprises half of the TWINS stereoscopic neutral atom imaging system that will advance our knowledge of the Earth's ring current. To support the expected data return, we have developed a Science Data System (SDS) for the TWINS mission. The TWINS SDS is an IDL- and Java-driven data interface that operates primarily via a web browser, and has as its spine an SQL-queryable database. Through this interface, TWINS science data will be provided to the TWINS team, the space science community, and the public. In this paper we present the current and future capabilities of the TWINS SDS, as well as how the SDS fits into virtual observatory infrastructure.

  18. Virtualization of open-source secure web services to support data exchange in a pediatric critical care research network

    PubMed Central

    Sward, Katherine A; Newth, Christopher JL; Khemani, Robinder G; Cryer, Martin E; Thelen, Julie L; Enriquez, Rene; Shaoyu, Su; Pollack, Murray M; Harrison, Rick E; Meert, Kathleen L; Berg, Robert A; Wessel, David L; Shanley, Thomas P; Dalton, Heidi; Carcillo, Joseph; Jenkins, Tammara L; Dean, J Michael

    2015-01-01

    Objectives To examine the feasibility of deploying a virtual web service for sharing data within a research network, and to evaluate the impact on data consistency and quality. Material and Methods Virtual machines (VMs) encapsulated an open-source, semantically and syntactically interoperable secure web service infrastructure along with a shadow database. The VMs were deployed to 8 Collaborative Pediatric Critical Care Research Network Clinical Centers. Results Virtual web services could be deployed in hours. The interoperability of the web services reduced format misalignment from 56% to 1% and demonstrated that 99% of the data transferred consistently using the data dictionary, while 1% needed human curation. Conclusions Use of virtualized open-source secure web service technology could enable direct electronic abstraction of data from hospital databases for research purposes. PMID:25796596

  19. Liberating Virtual Machines from Physical Boundaries through Execution Knowledge

    DTIC Science & Technology

    2015-12-01

    trivial infrastructures such as VM distribution networks, clients need to wait for an extended period of time before launching a VM. In cloud settings... hardware support. MobiDesk [28] efficiently supports virtual desktops in mobile environments by decoupling the user's workload from host systems and... experiment set-up. VMs are migrated between a pair of source and destination hosts, which are connected through a backend 10 Gbps network for

  20. Mapping, Awareness, and Virtualization Network Administrator Training Tool (MAVNATT) Architecture and Framework

    DTIC Science & Technology

    2015-06-01

    unit may set up and tear down the entire tactical infrastructure multiple times per day. This tactical network administrator training is a critical... language and runs on Linux- and Unix-based systems. All provisioning is based around the Nagios Core application, a powerful backend solution for network... start up a large number of virtual machines quickly. CORE supports the simulation of fixed and mobile networks. CORE is open-source, written in Python

  1. Humans and machines in space: The vision, the challenge, the payoff; Proceedings of the 29th Goddard Memorial Symposium, Washington, Mar. 14, 15, 1991

    NASA Astrophysics Data System (ADS)

    Johnson, Bradley; May, Gayle L.; Korn, Paula

    The present conference discusses the currently envisioned goals of human-machine systems in spacecraft environments, prospects for human exploration of the solar system, and plausible methods for meeting human needs in space. Also discussed are the problems of human-machine interaction in long-duration space flights, remote medical systems for space exploration, the use of virtual reality for planetary exploration, the alliance between U.S. Antarctic and space programs, and the economic and educational impacts of the U.S. space program.

  2. Fundamental Study about the Landscape Estimation and Analysis by CG

    NASA Astrophysics Data System (ADS)

    Nakashima, Yoshio; Miyagoshi, Takashi; Takamatsu, Mamoru; Sassa, Kazuhiro

    In recent years, it has been recognized that the color of advertising signboards and vending machines on the streets should be harmonized with the surrounding landscape. In this study, we investigated how the colors (red and white) of vending machines virtually installed by CG would affect the traditional landscape. Twenty subjects evaluated landscape samples in Hida-Furukawa using the semantic differential (SD) technique. The result of our experiment shows that the vending machines have a great influence on the surrounding landscape. On the other hand, we have confirmed that they can be harmonized with the surrounding landscape through an appropriate choice of color.

  3. A Framework for Analyzing the Whole Body Surface Area from a Single View

    PubMed Central

    Doretto, Gianfranco; Adjeroh, Donald

    2017-01-01

    We present a virtual reality (VR) framework for the analysis of whole human body surface area. Usual methods for determining the whole body surface area (WBSA) are based on well-known formulae and are characterized by large errors when the subject is obese or belongs to certain subgroups. For these situations, we believe that a computer vision approach can overcome these problems and provide a better estimate of this important body indicator. Unfortunately, using machine learning techniques to design a computer vision system able to provide a new body indicator that goes beyond the use of only body weight and height entails a long and expensive data acquisition process. A more viable solution is to use a dataset composed of virtual subjects. Generating a virtual dataset allowed us to build a population with different characteristics (obese, underweight, age, gender). However, synthetic data might differ from a real scenario, typical of the physician’s clinic. For this reason we developed a new virtual environment to facilitate the analysis of human subjects in 3D. This framework can simulate the acquisition process of a real camera, making it easy to analyze and to create training data for machine learning algorithms. With this virtual environment, we can easily simulate the real setup of a clinic, where a subject is standing in front of a camera or may assume a different pose with respect to the camera. We use this newly designed environment to analyze the whole body surface area (WBSA). In particular, we show that we can obtain accurate WBSA estimations with just one view, enabling the use of inexpensive depth sensors (e.g., the Kinect) for large-scale quantification of the WBSA from a single-view 3D map. PMID:28045895

  4. Virtual Machine Language 2.1

    NASA Technical Reports Server (NTRS)

    Riedel, Joseph E.; Grasso, Christopher A.

    2012-01-01

    VML (Virtual Machine Language) is an advanced computing environment that allows spacecraft to operate using mechanisms ranging from simple, time-oriented sequencing to advanced, multicomponent reactive systems. VML has developed in four evolutionary stages. VML 0 is a core execution capability providing multi-threaded command execution, integer data types, and rudimentary branching. VML 1 added named parameterized procedures, extensive polymorphism, data typing, branching, looping, issuance of commands using run-time parameters, and named global variables. VML 2 added for loops, data verification, telemetry reaction, and an open flight adaptation architecture. VML 2.1 contains major advances in control flow capabilities for executable state machines. On the resource requirements front, VML 2.1 features a reduced memory footprint in order to fit more capability into modestly sized flight processors, and endian-neutral data access for compatibility with Intel little-endian processors. Sequence packaging has been improved with object-oriented programming constructs and the use of implicit (rather than explicit) time tags on statements. Sequence event detection has been significantly enhanced with multi-variable waiting, which allows a sequence to detect and react to conditions defined by complex expressions with multiple global variables. This multi-variable waiting serves as the basis for implementing parallel rule checking, which in turn makes possible executable state machines. The new state machine feature in VML 2.1 allows the creation of sophisticated autonomous reactive systems without the need to develop expensive flight software. Users specify named states and transitions, along with the truth conditions required before taking transitions. Transitions with the same signal name allow separate state machines to coordinate actions: the conditions distributed across all state machines necessary to arm a particular signal are evaluated, and once found true, that signal is raised. The selected signal then causes all identically named transitions in all present state machines to be taken simultaneously. VML 2.1 has relevance to all potential space missions, both manned and unmanned. It was under consideration for use on Orion.
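
    A schematic sketch of the executable-state-machine pattern described above, in Python rather than VML, may make the signal mechanics concrete. The states, guards, and variable names are invented; the point is only that a signal is raised once the guard conditions distributed across the machines are all true, and then every identically named transition fires together.

    ```python
    # Named states, transitions guarded by expressions over shared globals,
    # and signals that fire same-named transitions in every machine at once.
    globals_store = {"tank_pressure": 0, "valve_open": False}

    class StateMachine:
        def __init__(self, name, state, transitions):
            # transitions: list of (from_state, signal, guard, to_state)
            self.name, self.state, self.transitions = name, state, transitions

        def armed_signals(self):
            """Signals whose guard is true in the current state."""
            return {sig for (src, sig, guard, _) in self.transitions
                    if src == self.state and guard(globals_store)}

        def take(self, signal):
            for src, sig, guard, dst in self.transitions:
                if src == self.state and sig == signal and guard(globals_store):
                    self.state = dst

    machines = [
        StateMachine("pressurizer", "filling",
                     [("filling", "READY",
                       lambda g: g["tank_pressure"] >= 5, "holding")]),
        StateMachine("valve_ctl", "closed",
                     [("closed", "READY",
                       lambda g: not g["valve_open"], "venting")]),
    ]

    globals_store["tank_pressure"] = 7
    # A signal is armed only when every machine naming it has armed it ...
    common = set.intersection(*(m.armed_signals() for m in machines))
    for sig in common:   # ... then all same-named transitions fire together.
        for m in machines:
            m.take(sig)
    print([(m.name, m.state) for m in machines])
    ```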

  5. 40 CFR 1033.530 - Duty cycles and calculations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... with two idle settings, eight propulsion notches, and at least one dynamic brake notch and tested using... The flattened weighting-factor fragment reconstructs to the following (columns: line-haul with dynamic brake, line-haul without dynamic brake, and switch duty cycles; the record is truncated after Notch 3):

        Setting            Line-haul (w/ dyn. brake)  Line-haul (w/o dyn. brake)  Switch
        Low Idle (A)       0.190                      0.190                       0.299
        Normal Idle (B)    0.190                      0.315                       0.299
        Dynamic Brake (C)  0.125                      (1)                         0.000
        Notch 1            0.065                      0.065                       0.124
        Notch 2            0.065                      0.065                       0.123
        Notch 3            0.052                      0…
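
    As an illustration of how duty-cycle weighting factors of this kind are applied, a cycle-weighted emission rate is the weighted sum of per-notch emission rates. The weights below are taken from the line-haul (with dynamic brake) column of the fragment; the per-notch NOx rates are invented, and the remaining notches are omitted because the table is truncated.

    ```python
    # Cycle-weighted emission rate = sum(weight_i * rate_i) over notch settings.
    # Weights are the partial column above (they do not sum to 1 because the
    # table fragment is truncated); the g/h rates are purely illustrative.
    weights = {"low_idle": 0.190, "normal_idle": 0.190, "dyn_brake": 0.125,
               "notch1": 0.065, "notch2": 0.065, "notch3": 0.052}
    nox_g_per_h = {"low_idle": 250.0, "normal_idle": 300.0, "dyn_brake": 1200.0,
                   "notch1": 1500.0, "notch2": 2300.0, "notch3": 3100.0}

    weighted = sum(weights[n] * nox_g_per_h[n] for n in weights)
    print(f"cycle-weighted NOx over these notches: {weighted:.0f} g/h")
    ```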

  6. pick_xwell, a program for interactive picking of crosswell seismic and radar data

    USGS Publications Warehouse

    Ellefsen, K.J.

    1999-01-01

    travel times can be plotted on the computer screen or printed to a file in PostScript format. The program is written in the IDL programming language, and it is executed, in command-line mode, within the IDL program. The IDL program must be run from an X-window terminal that is connected to a computer with the Unix operating system. The data must be in the SU format.

  7. Analysis of IUE spectra using the interactive data language

    NASA Technical Reports Server (NTRS)

    Joseph, C. L.

    1981-01-01

    The Interactive Data Language (IDL) is used to analyze high resolution spectra from the IUE. Like other interactive languages, IDL is designed for use by the scientist rather than the professional programmer, allowing him to conceive of his data as simple entities and to operate on this data with minimal difficulty. A package of programs created to analyze interstellar absorption lines is presented as an example of the graphical power of IDL.

  8. Flight evaluation of a hydromechanical backup control for the digital electronic engine control system in an F100 engine

    NASA Technical Reports Server (NTRS)

    Walsh, K. R.; Burcham, F. W.

    1984-01-01

    The backup control (BUC) features, the operation of the BUC system, the BUC control logic, and the BUC flight test results are described. The flight test results include: (1) transfers to the BUC at military and maximum power settings; (2) a military power acceleration showing comparisons between flight and simulation for BUC and primary modes; (3) steady-state idle power showing idle compressor speeds at different flight conditions; and (4) idle-to-military power BUC transients showing where compressor stalls occurred for different ramp rates and idle speeds. All the BUC transfers which occur during the DEEC flight program are initiated by the pilot. Automatic transfers to the BUC do not occur.

  9. Distributed computing methodology for training neural networks in an image-guided diagnostic application.

    PubMed

    Plagianakos, V P; Magoulas, G D; Vrahatis, M N

    2006-03-01

    Distributed computing is a process through which a set of computers connected by a network is used collectively to solve a single problem. In this paper, we propose a distributed computing methodology for training neural networks for the detection of lesions in colonoscopy. Our approach is based on partitioning the training set across multiple processors using a parallel virtual machine. In this way, interconnected computers of varied architectures can be used for the distributed evaluation of the error function and gradient values, and, thus, training neural networks utilizing various learning methods. The proposed methodology has large granularity and low synchronization, and has been implemented and tested. Our results indicate that the parallel virtual machine implementation of the training algorithms developed leads to considerable speedup, especially when large network architectures and training sets are used.
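
    The partitioning strategy described above can be sketched with ordinary Python multiprocessing standing in for the parallel virtual machine, and a linear least-squares model standing in for the neural network; both substitutions, and all parameters, are assumptions made for illustration.

    ```python
    # Partition the training set across workers; each returns its gradient
    # contribution, and the master sums them for a synchronous update.
    import numpy as np
    from multiprocessing import Pool

    def partial_gradient(args):
        """Least-squares gradient contribution of one data partition."""
        w, X, y = args
        return X.T @ (X @ w - y)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = rng.normal(size=(8000, 10))
        true_w = rng.normal(size=10)
        y = X @ true_w + 0.1 * rng.normal(size=8000)
        Xs, ys = np.array_split(X, 4), np.array_split(y, 4)
        w = np.zeros(10)
        with Pool(4) as pool:  # workers stand in for PVM processes on hosts
            for _ in range(200):  # one synchronization per descent step
                grads = pool.map(partial_gradient,
                                 [(w, Xi, yi) for Xi, yi in zip(Xs, ys)])
                w -= 0.1 * np.add.reduce(grads) / len(X)
        print("max weight error:", np.abs(w - true_w).max())
    ```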

  10. Performance of machine-learning scoring functions in structure-based virtual screening

    PubMed Central

    Wójcikowski, Maciej; Ballester, Pedro J.; Siedlecki, Pawel

    2017-01-01

    Classical scoring functions have reached a plateau in their performance in virtual screening and binding affinity prediction. Recently, machine-learning scoring functions trained on protein-ligand complexes have shown great promise in small tailored studies. They have also raised controversy, specifically concerning model overfitting and applicability to novel targets. Here we provide a new ready-to-use scoring function (RF-Score-VS) trained on 15 426 active and 893 897 inactive molecules docked to a set of 102 targets. We use the full DUD-E data sets along with three docking tools, five classical and three machine-learning scoring functions for model building and performance assessment. Our results show RF-Score-VS can substantially improve virtual screening performance: at the top 1% of the ranked library, RF-Score-VS provides a 55.6% hit rate, whereas Vina provides only 16.2% (for smaller fractions the difference is even larger: at the top 0.1%, RF-Score-VS achieves an 88.6% hit rate versus 27.5% for Vina). In addition, RF-Score-VS provides much better prediction of measured binding affinity than Vina (Pearson correlation of 0.56 and −0.18, respectively). Lastly, we tested RF-Score-VS on an independent test set from the DEKOIS benchmark and observed comparable results. We provide full data sets to facilitate further research in this area (http://github.com/oddt/rfscorevs) as well as the ready-to-use RF-Score-VS (http://github.com/oddt/rfscorevs_binary). PMID:28440302
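
    The "top x%" hit-rate metric quoted above is simple to compute: rank the library by score and take the fraction of actives among the best-scored x%. Below is a hedged sketch with synthetic scores and labels, not the RF-Score-VS data.

    ```python
    # Hit rate at the top x% of a score-ranked screening library.
    import numpy as np

    def hit_rate(scores, is_active, top_fraction):
        n_top = max(1, int(len(scores) * top_fraction))
        top = np.argsort(scores)[::-1][:n_top]   # best-scored molecules first
        return is_active[top].mean()

    rng = np.random.default_rng(0)
    is_active = rng.random(100000) < 0.02        # ~2% actives in the library
    # A useful scoring function ranks actives higher on average:
    scores = rng.normal(size=100000) + 2.0 * is_active
    print(f"top 1%   hit rate: {hit_rate(scores, is_active, 0.01):.1%}")
    print(f"top 0.1% hit rate: {hit_rate(scores, is_active, 0.001):.1%}")
    ```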

  11. Decentralized real-time simulation of forest machines

    NASA Astrophysics Data System (ADS)

    Freund, Eckhard; Adam, Frank; Hoffmann, Katharina; Rossmann, Juergen; Kraemer, Michael; Schluse, Michael

    2000-10-01

    To develop realistic forest machine simulators is a demanding task. A useful simulator has to provide a close-to-reality simulation of the forest environment as well as the simulation of the physics of the vehicle. Customers demand a highly realistic three-dimensional forestry landscape and the realistic simulation of the complex motion of the vehicle even in rough terrain, in order to be able to use the simulator for operator training under close-to-reality conditions. The realistic simulation of the vehicle, especially with the driver's seat mounted on a motion platform, greatly improves the effect of immersion into the virtual reality of a simulated forest and the achievable level of education of the driver. Thus, the connection of the real control devices of forest machines to the simulation system has to be supported, i.e., real control devices like the joysticks or the board computer system used to control the crane, the aggregate, etc. Beyond that, the fusion of the board computer system and the simulation system is realized by means of sensors, i.e., digital and analog signals. The decentralized system structure allows several virtual reality systems to evaluate and visualize the information of the control devices and the sensors. So, while the driver is practicing, the instructor can immerse into the same virtual forest to monitor the session from his own viewpoint. In this paper, we describe the realized structure as well as the necessary software and hardware components and application experiences.

  12. Cost analysis of equipment failure of a radiology department and possible choices about maintenance.

    PubMed

    Grisi, Guido; Dalla Palma, Ludovico; Rimondini, Allesandra; Palmolungo, Chiara; Cuttin Zernich, Roberto; Pozzi Mucelli, Roberto

    2002-01-01

    Our aim was to evaluate the economic impact of equipment failures in a radiology department with a view to guiding maintenance policy decisions. We assessed the negative economic impact caused by the interruption of activity of a radiodiagnostics section due to equipment failure, taking into account: the effects occurring during the first day of equipment down-time (assuming that the equipment failure occurs in the middle of the shift) and the effects during the following days until the repair of the failure; and the effects occurring in the short and long term. To exemplify the negative impact of inactivity due to equipment failure, we chose three radiology sections with different levels of technological and operational complexity (chest radiology, gastrointestinal radiology and remote-controlled diagnostics). For each, we evaluated the loss of contribution margin and the idle capacity costs (short- and long-term impact). The negative economic effects were: for chest radiology, 496.77 Euro on the first day, and 30.99 Euro from the second day onwards; for gastrointestinal radiology, 526.40 Euro for the first day, and 730.39 Euro from the second day onwards; for remote-controlled diagnostics, 786.25 Euro for the first day, and 927.67 Euro from the second day onwards. Our results indicate that the level of idle capacity costs (mainly equipment and staff) increases with the complexity of the equipment, whereas the contribution margin appears to fluctuate, because the charges are state-imposed and do not vary with the complexity of equipment. Moreover, our analysis shows that if the workload of a broken machine can easily be assigned to an additional shift using another machine, losses are considerably reduced from the second day onwards. Once the negative economic impact of equipment failures has been evaluated, the second step is to choose the best kind of maintenance. A sound calculation of the economic impact of equipment failures is very useful for guiding the head of department and the hospital manager in deciding whether to purchase maintenance services (or a long-term guarantee) from the equipment manufacturer, to set up an auxiliary centre for maintenance and repair, or to purchase a third-party maintenance contract.

  13. Virtual terrain: a security-based representation of a computer network

    NASA Astrophysics Data System (ADS)

    Holsopple, Jared; Yang, Shanchieh; Argauer, Brian

    2008-03-01

    Much research has been put forth in recent years toward the detection, correlation, and prediction of cyber attacks. As this body of research progresses, there is an increasing need for contextual information about a computer network to provide an accurate situational assessment. Typical approaches adopt contextual information as needed; yet such ad hoc effort may lead to unnecessary or even conflicting features. The concept of virtual terrain is, therefore, developed and investigated in this work. Virtual terrain is a common representation of crucial information about network vulnerabilities, accessibilities, and criticalities. A virtual terrain model encompasses operating systems, firewall rules, running services, missions, user accounts, and network connectivity. It is defined as a set of connected graphs with arc attributes defining dynamic relationships among vertices modeling network entities, such as services, users, and machines. The virtual terrain representation is designed to allow feasible development and maintenance of the model, as well as efficacy in the use of the model. This paper describes the considerations in developing the virtual terrain schema, exemplary virtual terrain models, and algorithms utilizing the virtual terrain model for situation and threat assessment.
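
    A toy rendering of the virtual terrain idea may help: a directed graph whose vertices are machines, services, and users, with arc attributes encoding relationships and per-node vulnerability and criticality data. The schema, attribute names, and CVE identifier below are all invented for illustration.

    ```python
    # Attributed graph of network entities, queried for threat assessment.
    import networkx as nx

    vt = nx.DiGraph()
    vt.add_node("web01", kind="machine", os="linux", criticality=0.9)
    vt.add_node("httpd", kind="service", cve=["CVE-2024-0001"])  # hypothetical
    vt.add_node("alice", kind="user", privilege="admin")
    vt.add_edge("httpd", "web01", relation="runs_on")
    vt.add_edge("alice", "web01", relation="account_on")
    vt.add_edge("internet", "httpd", relation="reachable", ports=[80, 443])

    # Situation query: which critical machines are reachable from outside?
    exposed = [m for m, d in vt.nodes(data=True)
               if d.get("kind") == "machine" and d.get("criticality", 0) > 0.5
               and nx.has_path(vt, "internet", m)]
    print(exposed)  # -> ['web01']
    ```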

  14. Work Truck Idling Reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2017-03-01

    Hybrid utility trucks, with auxiliary power sources for on-board equipment, significantly reduce unnecessary idling, resulting in fuel cost savings, less engine wear, and reductions in noise and emissions.

  15. A Machine Learning Approach to the Detection of Pilot's Reaction to Unexpected Events Based on EEG Signals

    PubMed Central

    Cyran, Krzysztof A.

    2018-01-01

    This work considers the problem of utilizing electroencephalographic signals in systems designed for monitoring and enhancing the performance of aircraft pilots. Systems with such capabilities are generally referred to as cognitive cockpits. This article describes the potential carried by such systems, especially in terms of increasing flight safety, and presents the neuropsychological background of the problem. The research focused mainly on the problem of discriminating between states of brain activity related to idle but focused anticipation of a visual cue and reaction to it. In particular, the problem of selecting a proper classification algorithm for such tasks is examined. For that purpose, an experiment involving 10 subjects was planned and conducted. Experimental electroencephalographic data were acquired using an Emotiv EPOC+ headset. The proposed methodology involved the use of a popular method in biomedical signal processing, the Common Spatial Pattern, the extraction of band-power features, and an extensive test of different classification algorithms: Linear Discriminant Analysis, k-nearest neighbors, Support Vector Machines with linear and radial basis function kernels, Random Forests, and Artificial Neural Networks. PMID:29849544
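
    A schematic version of such a comparison pipeline is sketched below: band-power features extracted from EEG epochs, then several of the listed classifiers scored by cross-validation. Synthetic epochs replace the Emotiv recordings, the CSP and neural-network stages are omitted for brevity, and every parameter is an assumption rather than the paper's choice.

    ```python
    import numpy as np
    from scipy.signal import welch
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    fs, n_epochs, n_channels, n_times = 128, 200, 14, 256  # Emotiv-like shape
    X_raw = rng.normal(size=(n_epochs, n_channels, n_times))
    y = rng.integers(0, 2, n_epochs)   # 0 = idle anticipation, 1 = reaction
    X_raw[y == 1] *= 1.3               # inject a power difference between classes

    def bandpower(epochs, lo, hi):
        """Mean Welch power in the [lo, hi] Hz band, per channel."""
        freqs, psd = welch(epochs, fs=fs, axis=-1)
        band = (freqs >= lo) & (freqs <= hi)
        return psd[..., band].mean(axis=-1)

    # Feature vector: alpha- and beta-band power for each channel.
    X = np.concatenate([bandpower(X_raw, 8, 13), bandpower(X_raw, 14, 30)], axis=1)

    classifiers = {
        "LDA": LinearDiscriminantAnalysis(),
        "kNN": KNeighborsClassifier(n_neighbors=5),
        "SVM (linear)": SVC(kernel="linear"),
        "SVM (RBF)": SVC(kernel="rbf"),
        "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
    }
    for name, clf in classifiers.items():
        print(f"{name}: {cross_val_score(clf, X, y, cv=5).mean():.2f}")
    ```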

  16. omniClassifier: a Desktop Grid Computing System for Big Data Prediction Modeling

    PubMed Central

    Phan, John H.; Kothari, Sonal; Wang, May D.

    2016-01-01

    Robust prediction models are important for numerous science, engineering, and biomedical applications. However, best-practice procedures for optimizing prediction models can be computationally complex, especially when choosing models from among hundreds or thousands of parameter choices. Computational complexity has further increased with the growth of data in these fields, concurrent with the era of “Big Data”. Grid computing is a potential solution to the computational challenges of Big Data. Desktop grid computing, which uses idle CPU cycles of commodity desktop machines, coupled with commercial cloud computing resources can enable research labs to gain easier and more cost-effective access to vast computing resources. We have developed omniClassifier, a multi-purpose prediction modeling application that provides researchers with a tool for conducting machine learning research within the guidelines of recommended best practices. omniClassifier is implemented as a desktop grid computing system using the Berkeley Open Infrastructure for Network Computing (BOINC) middleware. In addition to describing implementation details, we use various gene expression datasets to demonstrate the potential scalability of omniClassifier for efficient and robust Big Data prediction modeling. A prototype of omniClassifier can be accessed at http://omniclassifier.bme.gatech.edu/. PMID:27532062

  17. Humans and machines in space: The vision, the challenge, the payoff; AAS Goddard Memorial Symposium, 29th, Washington, DC, March 14-15, 1991

    NASA Astrophysics Data System (ADS)

    Johnson, Bradley; May, Gayle L.; Korn, Paula

    A recent symposium produced papers in the areas of solar system exploration, man-machine interfaces, cybernetics, virtual reality, telerobotics, life support systems, and the scientific and technological spinoff from the NASA space program. A number of papers also addressed the social and economic impacts of the space program. For individual titles, see A95-87468 through A95-87479.

  18. Software Support Measurement and Estimating for Oracle Database Applications Using Mark II Function Points

    DTIC Science & Technology

    1992-12-01

    [Table-of-contents fragment: V.3.3 Coefficient of Determination ... 37; V.3.4 F-Ratio ... 37; V.3.5 ...] Instructions are defined as lines of code or card images. Thus, a line containing two or more source statements counts as one instruction; a... understand the productivity paradox, recall the concept of virtual machines. When a higher-level machine groups together many instructions of a lower level

  19. Taiwan Ascii and Idl_save Data Archives (AIDA) for THEMIS

    NASA Astrophysics Data System (ADS)

    Lee, B.; Hsieh, W.; Shue, J.; Angelopoulos, V.; Glassmeier, K. H.; McFadden, J. P.; Larson, D.

    2008-12-01

    THEMIS (Time History of Events and their Macroscopic Interactions during Substorms) is a satellite mission that aims to determine where and how substorms are triggered. The space research team in Taiwan has been involved in data promotion and scientific research. Taiwan Ascii and Idl_save Data Archives (AIDA) for THEMIS is the main work of the data promotion. Taiwan AIDA is developed for those who are not familiar with the Interactive Data Language (IDL) data analysis and visualization software, and those who have some basic IDL concepts and techniques and want more flexibilities in reading and plotting the THEMIS data. Two kinds of data format are stored in Taiwan AIDA: one is ASCII format for most users and the other is IDL SAVE format for IDL users. The public can download THEMIS data in either format through the Taiwan AIDA web site, http://themis.ss.ncu.edu.tw/e_data_download.php. Taiwan AIDA provides (1) plasma data including number density, average temperature, and velocity of ions and electrons, (2) magnetic field data, and (3) state information including the position and velocity of five THEMIS probes. On the Taiwan AIDA web site there are two data-downloading options. The public can download a large amount of data for a particular instrument in the FTP equivalent option; the public can also download all the data for a particular date in the Data Search option.

  20. Information integration and diagnosis analysis of equipment status and production quality for machining process

    NASA Astrophysics Data System (ADS)

    Zan, Tao; Wang, Min; Hu, Jianzhong

    2010-12-01

    Machining status monitoring by multiple sensors can acquire and analyze machining process information to implement abnormality diagnosis and fault warning. Statistical quality control techniques are normally used to distinguish abnormal fluctuations from normal fluctuations through statistical methods. In this paper, by comparing the advantages and disadvantages of the two methods, the necessity and feasibility of their integration and fusion are introduced. An approach that integrates multi-sensor status monitoring and statistical process control, based on artificial intelligence, internet, and database techniques, is then brought forward. Based on virtual instrument techniques, the authors developed the machining quality assurance system MoniSysOnline, which has been used to monitor the grinding process. By analyzing the quality data and AE signal information from the wheel dressing process, the cause of machining quality fluctuation has been identified. The experiment result indicates that the approach is suitable for the status monitoring and analysis of the machining process.
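
    The statistical process control side of such a system can be sketched with a Shewhart X-bar chart: control limits estimated from in-control data flag abnormal fluctuations in a monitored quality characteristic. The data and limits below are synthetic and purely illustrative.

    ```python
    # Shewhart X-bar chart: 3-sigma limits from in-control subgroup means.
    import numpy as np

    rng = np.random.default_rng(1)
    baseline = rng.normal(10.0, 0.05, size=(25, 5))  # 25 in-control subgroups
    means = baseline.mean(axis=1)
    center = means.mean()
    sigma = means.std(ddof=1)
    ucl, lcl = center + 3 * sigma, center - 3 * sigma

    monitored = rng.normal(10.0, 0.05, size=(10, 5))
    monitored[7] += 0.4                              # simulated abnormal shift
    for i, subgroup in enumerate(monitored):
        m = subgroup.mean()
        if not lcl <= m <= ucl:
            print(f"subgroup {i}: mean {m:.3f} outside [{lcl:.3f}, {ucl:.3f}]")
    ```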

  1. Implementation of NASTRAN on the IBM/370 CMS operating system

    NASA Technical Reports Server (NTRS)

    Britten, S. S.; Schumacker, B.

    1980-01-01

    The NASA Structural Analysis (NASTRAN) computer program is operational on the IBM 360/370 series computers. While execution of NASTRAN has been described and implemented under the virtual storage operating systems of the IBM 370 models, the IBM 370/168 computer can also operate in a time-sharing mode under the virtual machine operating system using the Conversational Monitor System (CMS) subset. The changes required to make NASTRAN operational under the CMS operating system are described.

  2. Virtual reality hardware for use in interactive 3D data fusion and visualization

    NASA Astrophysics Data System (ADS)

    Gourley, Christopher S.; Abidi, Mongi A.

    1997-09-01

    Virtual reality has become a tool for use in many areas of research. We have designed and built a VR system for use in range data fusion and visualization. One major VR tool is the CAVE. This is the ultimate visualization tool, but it comes with a large price tag. Our design uses a unique CAVE whose graphics are powered by a desktop computer instead of a larger rack machine, making it much less costly. The system consists of a screen eight feet tall by twenty-seven feet wide, giving a variable field of view currently set at 160 degrees. A Silicon Graphics Indigo2 MaxImpact with the impact channel option is used for display. This gives the capability to drive three projectors at a resolution of 640 by 480 for displaying the virtual environment and one 640 by 480 display for a user control interface. This machine is also the first desktop package with built-in hardware texture mapping. This feature allows us to quickly fuse the range and intensity data and other multi-sensory data. The final goal is a complete 3D texture-mapped model of the environment. A dataglove, magnetic tracker, and spaceball are to be used for manipulation of the data and navigation through the virtual environment. This system gives several users the ability to interactively create 3D models from multiple range images.

  3. An adaptive process-based cloud infrastructure for space situational awareness applications

    NASA Astrophysics Data System (ADS)

    Liu, Bingwei; Chen, Yu; Shen, Dan; Chen, Genshe; Pham, Khanh; Blasch, Erik; Rubin, Bruce

    2014-06-01

    Space situational awareness (SSA) and defense space control capabilities are top priorities for groups that own or operate man-made spacecraft. Also, with the growing amount of space debris, there is an increased demand for contextual understanding, which necessitates the capability of collecting and processing a vast amount of sensor data. Cloud computing, which features scalable and flexible storage and computing services, has been recognized as an ideal candidate to meet the large-data contextual challenges of SSA. Cloud computing consists of physical service providers and middleware virtual machines together with infrastructure, platform, and software as a service (IaaS, PaaS, SaaS) models. However, the typical virtual machine (VM) abstraction is on a per-operating-system basis, which is too low a level and limits the flexibility of a mission application architecture. In response to this technical challenge, a novel adaptive process-based cloud infrastructure for SSA applications is proposed in this paper, and the design rationale and a prototype are examined in detail. The SSA Cloud (SSAC) conceptual capability will potentially support space situation monitoring and tracking, object identification, and threat assessment. Lastly, the benefits of more granular and flexible cloud computing resource allocation are illustrated for data processing and implementation considerations within a representative SSA system environment. We show that container-based virtualization performs better than hypervisor-based virtualization technology in an SSA scenario.

  4. Network Hardware Virtualization for Application Provisioning in Core Networks

    DOE PAGES

    Gumaste, Ashwin; Das, Tamal; Khandwala, Kandarp; ...

    2017-02-03

    Service providers and vendors are moving toward a network-virtualized core, whereby multiple applications would be treated on their own merit in programmable hardware. Such a network would have the advantage of being customized for user requirements and allow provisioning of next-generation services that are built specifically to meet user needs. In this article, we articulate the impact of network virtualization on networks that provide customized services and how a provider's business can grow with network virtualization. We outline a decision map that allows mapping of applications with technology that is supported in network-virtualization-oriented equipment. Analogies to the world of virtual machines and generic virtualization show that hardware supporting network virtualization will facilitate new customer needs while optimizing the provider network from the cost and performance perspectives. A key conclusion of the article is that growth would yield sizable revenue when providers plan ahead in terms of supporting network-virtualization-oriented technology in their networks. To be precise, providers have to incorporate into their growth plans network elements capable of new service deployments while protecting network neutrality. Finally, a simulation study validates our NV-induced model.

  5. Network Hardware Virtualization for Application Provisioning in Core Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gumaste, Ashwin; Das, Tamal; Khandwala, Kandarp

    Service providers and vendors are moving toward a network-virtualized core, whereby multiple applications would be treated on their own merit in programmable hardware. Such a network would have the advantage of being customized for user requirements and allow provisioning of next-generation services that are built specifically to meet user needs. In this article, we articulate the impact of network virtualization on networks that provide customized services and how a provider's business can grow with network virtualization. We outline a decision map that allows mapping of applications with technology that is supported in network-virtualization-oriented equipment. Analogies to the world of virtual machines and generic virtualization show that hardware supporting network virtualization will facilitate new customer needs while optimizing the provider network from the cost and performance perspectives. A key conclusion of the article is that growth would yield sizable revenue when providers plan ahead in terms of supporting network-virtualization-oriented technology in their networks. To be precise, providers have to incorporate into their growth plans network elements capable of new service deployments while protecting network neutrality. Finally, a simulation study validates our NV-induced model.

  6. Alternative Fuels Data Center

    Science.gov Websites

    Vehicles equipped with idle reduction technology may exceed gross vehicle weight limits by a set number of pounds to compensate for the additional weight of the idle reduction technology. Upon request, vehicle operators must provide proof that the idle reduction technology is fully functional. (Reference Alaska Statutes)

  7. Suitability of virtual prototypes to support human factors/ergonomics evaluation during the design.

    PubMed

    Aromaa, Susanna; Väänänen, Kaisa

    2016-09-01

    In recent years, the use of virtual prototyping has increased in product development processes, especially in the assessment of complex systems targeted at end-users. The purpose of this study was to evaluate the suitability of virtual prototyping to support human factors/ergonomics (HFE) evaluation during the design phase. Two different virtual prototypes were used: augmented reality (AR) and virtual environment (VE) prototypes of a maintenance platform of a rock crushing machine. Nineteen designers and other stakeholders were asked to assess the suitability of the prototype for HFE evaluation. Results indicate that the system model characteristics and user interface affect the experienced suitability. The VE system was rated as more suitable for supporting the assessment of visibility, reach, and the use of tools than the AR system. The findings of this study can be used as guidance for implementing virtual prototypes in the product development process. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Design of virtual SCADA simulation system for pressurized water reactor

    NASA Astrophysics Data System (ADS)

    Wijaksono, Umar; Abdullah, Ade Gafar; Hakim, Dadang Lukman

    2016-02-01

    The virtual SCADA system is a software-based Human-Machine Interface that can visualize the process of a plant. This paper describes the results of a virtual SCADA system design that aims to illustrate the operating principles of a Pressurized Water Reactor nuclear power plant. The simulation uses technical data of Nuclear Power Plant Unit Olkiluoto 3 in Finland. The device was developed using Wonderware InTouch and is equipped with manual books for each component, animation links, alarm systems, real-time and historical trending, and a security system. The results showed that, in general, the device can clearly demonstrate the principles of energy flow and energy conversion in Pressurized Water Reactors. This virtual SCADA simulation system can be used as an instructional medium for teaching the operating principles of Pressurized Water Reactors.
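
    The alarm-limit checking such an HMI performs reduces to comparing tag values against configured bounds. The sketch below is illustrative only; the tag names and limits are made up, not taken from the Olkiluoto 3 technical data.

      # Minimal sketch of the alarm-limit pattern a SCADA HMI implements;
      # tag names and limits are illustrative placeholders.
      LIMITS = {
          "primary_loop_temp_C": (280.0, 330.0),
          "pressurizer_pressure_bar": (150.0, 160.0),
      }

      def check_alarms(readings):
          alarms = []
          for tag, value in readings.items():
              lo, hi = LIMITS[tag]
              if not lo <= value <= hi:
                  alarms.append(f"ALARM {tag}={value} outside [{lo}, {hi}]")
          return alarms

      print(check_alarms({"primary_loop_temp_C": 345.2,
                          "pressurizer_pressure_bar": 155.0}))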

  9. Scientific Software

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The Interactive Data Language (IDL), developed by Research Systems, Inc., is a tool that lets scientists investigate their data without having to write a custom program for each study. IDL is based on the Mariner Mars spectral Editor (MMED) developed for studies from NASA's Mars spacecraft flights. The company has also developed the Environment for Visualizing Images (ENVI), an image processing system written in IDL for easily analyzing remotely sensed data. The Visible Human CD, another Research Systems product, is the first complete digital reference of photographic images for exploring human anatomy.

  10. Long-Haul Truck Idling Burns Up Profits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2015-08-12

    Long-haul truck drivers perform a vitally important service. In the course of their work, they must take rest periods as required by federal law. Most drivers remain in their trucks, which they keep running to provide power for heating, cooling, and other necessities. Such idling, however, comes at a cost; it is an expensive and polluting way to keep drivers safe and comfortable. Increasingly affordable alternatives to idling not only save money and reduce pollution, but also help drivers get a better night's rest.

  11. DU Processing Efficiency and Reclamation: Plasma Arc Melting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Imhoff, Seth D.; Aikin, Jr., Robert M.; Swenson, Hunter

    The work described here corresponds to one piece of a larger effort to increase material usage efficiency during DU processing operations. In order to achieve this goal, multiple technologies and approaches are being tested. These technologies occupy a spectrum of technology readiness levels (TRLs). Plasma arc melting (PAM), which utilizes a high-temperature plasma to melt materials, is one of the technologies being investigated. Depending on process conditions, there are potential opportunities for recycling and material reclamation. When last routinely operational, the LANL research PAM showed extremely promising results for recycling and reclamation of DU and DU alloys. The current TRL is lower because the machine has sat idle for nearly two decades and has proved difficult to restart. This report describes the existing results, promising techniques, and the process of bringing this technology back to readiness at LANL.

  12. Realizing a partial general quantum cloning machine with superconducting quantum-interference devices in a cavity QED

    NASA Astrophysics Data System (ADS)

    Fang, Bao-Long; Yang, Zhen; Ye, Liu

    2009-05-01

    We propose a scheme for implementing a partial general quantum cloning machine with superconducting quantum-interference devices coupled to a nonresonant cavity. By regulating the time parameters, our system can perform optimal symmetric (asymmetric) universal quantum cloning, optimal symmetric (asymmetric) phase-covariant cloning, and optimal symmetric economical phase-covariant cloning. In the scheme, the cavity is only virtually excited; thus, cavity decay is suppressed during the cloning operations.
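
    For reference, the standard benchmark fidelities for optimal symmetric 1-to-2 qubit cloning, against which such schemes are typically judged (textbook results, not derived in this abstract), are

      % universal cloning vs. phase-covariant cloning, 1 -> 2 qubits
      F_{\mathrm{univ}} = \frac{5}{6} \approx 0.833, \qquad
      F_{\mathrm{pc}} = \frac{1}{2} + \frac{1}{\sqrt{8}} \approx 0.854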

  13. A Unified Access Model for Interconnecting Heterogeneous Wireless Networks

    DTIC Science & Technology

    2015-05-01

    Defined Networking, OpenFlow, WiFi, LTE 16. SECURITY CLASSIFICATION OF: 17. LIMITATION OF ABSTRACT UU 18. NUMBER OF PAGES 18 19a. NAME OF...Machine Configurations with WiFi and LTE 4 2.3 Three Virtual Machine Configurations with WiFi and LTE 5 3. Results and Discussion 5 4. Summary and...WiFi and long-term evolution ( LTE ), and created a communication pathway between them via a central controller node. Our simulation serves as a

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuang, Yu; Wu, Lili; Department of Radiation Oncology, Cancer Hospital of Shantou University Medical College, Shantou, Guangdong

    Purpose: This study evaluated expected tumor control and normal tissue toxicity for prostate volumetric modulated arc therapy (VMAT) with and without radiation boosts to an intraprostatically dominant lesion (IDL), defined by ¹⁸F-choline positron emission tomography/computed tomography (PET/CT). Methods and Materials: Thirty patients with localized prostate cancer underwent ¹⁸F-choline PET/CT before treatment. Two VMAT plans, plan_79Gy and plan_100-105Gy, were compared for each patient. The whole-prostate planning target volume (PTV_prostate) prescription was 79 Gy in both plans, but plan_100-105Gy added simultaneous boost doses of 100 Gy and 105 Gy to the IDL, defined by 60% and 70% of maximum prostatic uptake on ¹⁸F-choline PET (IDL_SUV60% and IDL_SUV70%, respectively, with IDL_SUV70% nested inside IDL_SUV60% to potentially enhance tumor specificity of the maximum point dose). Plan evaluations included histopathological correspondence, isodose distributions, dose-volume histograms, tumor control probability (TCP), and normal tissue complication probability (NTCP). Results: Planning objectives and dose constraints proved feasible in 30 of 30 cases. Prostate sextant histopathology was available for 28 cases, confirming that IDL_SUV60% adequately covered all tumor-bearing prostate sextants in 27 cases and provided partial coverage in 1 case. Plan_100-105Gy had significantly higher TCP than plan_79Gy across all prostate regions for α/β ratios ranging from 1.5 Gy to 10 Gy (P<.001 for each case). There were no significant differences in bladder and femoral head NTCP between plans, and slightly lower rectal NTCP (endpoint: grade ≥ 2 late toxicity or rectal bleeding) was found for plan_100-105Gy. Conclusions: VMAT can potentially increase the likelihood of tumor control in primary prostate cancer while observing normal tissue tolerances through simultaneous delivery of a steep radiation boost to an ¹⁸F-choline PET-defined IDL.
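
    TCP figures of this kind are commonly computed with a linear-quadratic Poisson model. The Python sketch below shows the generic form of such a calculation with illustrative parameter values; it is not the model or the parameter set used in the study.

      import math

      def tcp_lq_poisson(total_dose, dose_per_fx, alpha=0.09, alpha_beta=1.5, n0=1e6):
          """Generic linear-quadratic Poisson TCP model (all parameters are
          illustrative placeholders, not the study's values)."""
          beta = alpha / alpha_beta
          sf = math.exp(-(alpha * total_dose + beta * total_dose * dose_per_fx))
          return math.exp(-n0 * sf)          # Poisson probability of zero survivors

      # Boost dose (100 Gy) versus whole-prostate prescription (79 Gy),
      # both assumed delivered in 39 fractions for this sketch.
      for d_total in (79.0, 100.0):
          print(d_total, "Gy ->", round(tcp_lq_poisson(d_total, d_total / 39), 4))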

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garzoglio, Gabriele

    The Fermilab Grid and Cloud Computing Department and the KISTI Global Science experimental Data hub Center are working on a multi-year Collaborative Research and Development Agreement. Building on the knowledge developed in the first year on how to provision and manage a federation of virtual machines through cloud management systems, in this second year we expanded the work on provisioning and federation, increasing both the scale and the diversity of solutions, and we began building on-demand services on the established fabric, introducing the Platform as a Service paradigm to assist with the execution of scientific workflows. We have enabled stakeholders' scientific workflows to run on multiple cloud resources at the scale of 1,000 concurrent machines. The demonstrations have been in the areas of (a) Virtual Infrastructure Automation and Provisioning, (b) Interoperability and Federation of Cloud Resources, and (c) On-demand Services for Scientific Workflows.

  16. Learning Motion Features for Example-Based Finger Motion Estimation for Virtual Characters

    NASA Astrophysics Data System (ADS)

    Mousas, Christos; Anagnostopoulos, Christos-Nikolaos

    2017-09-01

    This paper presents a methodology for estimating the motion of a character's fingers based on the use of motion features provided by a virtual character's hand. In the presented methodology, firstly, the motion data is segmented into discrete phases. Then, a number of motion features are computed for each motion segment of a character's hand. The motion features are pre-processed using restricted Boltzmann machines, and by using the different variations of semantically similar finger gestures in a support vector machine learning mechanism, the optimal weights for each feature assigned to a metric are computed. The advantages of the presented methodology in comparison to previous solutions are the following: First, we automate the computation of optimal weights that are assigned to each motion feature counted in our metric. Second, the presented methodology achieves an increase (about 17%) in correctly estimated finger gestures in comparison to a previous method.
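
    The weight-learning step described above can be approximated with a linear support vector machine, whose per-feature coefficients serve as metric weights. The sketch below uses synthetic stand-ins for the segment-level motion features; the paper's actual features, RBM pre-processing, and data are not reproduced.

      import numpy as np
      from sklearn.svm import LinearSVC

      # Toy stand-in for motion-feature vectors of gesture-segment pairs.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 6))                   # 6 motion features per pair
      y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # "semantically similar" label

      clf = LinearSVC(C=1.0, dual=False).fit(X, y)
      weights = np.abs(clf.coef_).ravel()
      weights /= weights.sum()                        # normalized metric weights
      print("feature weights:", np.round(weights, 3))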

  17. A Cloud-based Approach to Medical NLP

    PubMed Central

    Chard, Kyle; Russell, Michael; Lussier, Yves A.; Mendonça, Eneida A; Silverstein, Jonathan C.

    2011-01-01

    Natural Language Processing (NLP) enables access to deep content embedded in medical texts. To date, NLP has not fulfilled its promise of enabling robust clinical encoding, clinical use, quality improvement, and research. We submit that this is in part due to poor accessibility, scalability, and flexibility of NLP systems. We describe here an approach and system which leverages cloud-based approaches such as virtual machines and Representational State Transfer (REST) to extract, process, synthesize, mine, compare/contrast, explore, and manage medical text data in a flexibly secure and scalable architecture. Available architectures in which our Smntx (pronounced as semantics) system can be deployed include: virtual machines in a HIPAA-protected hospital environment, brought up to run analysis over bulk data and destroyed in a local cloud; a commercial cloud for a large complex multi-institutional trial; and within other architectures such as caGrid, i2b2, or NHIN. PMID:22195072

  18. A cloud-based approach to medical NLP.

    PubMed

    Chard, Kyle; Russell, Michael; Lussier, Yves A; Mendonça, Eneida A; Silverstein, Jonathan C

    2011-01-01

    Natural Language Processing (NLP) enables access to deep content embedded in medical texts. To date, NLP has not fulfilled its promise of enabling robust clinical encoding, clinical use, quality improvement, and research. We submit that this is in part due to poor accessibility, scalability, and flexibility of NLP systems. We describe here an approach and system which leverages cloud-based approaches such as virtual machines and Representational State Transfer (REST) to extract, process, synthesize, mine, compare/contrast, explore, and manage medical text data in a flexibly secure and scalable architecture. Available architectures in which our Smntx (pronounced as semantics) system can be deployed include: virtual machines in a HIPAA-protected hospital environment, brought up to run analysis over bulk data and destroyed in a local cloud; a commercial cloud for a large complex multi-institutional trial; and within other architectures such as caGrid, i2b2, or NHIN.
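
    A REST-style interaction of this kind reduces to a single HTTP call. The Python sketch below assumes a hypothetical endpoint and response shape, since the abstract does not document Smntx's actual interface.

      import requests

      # Hypothetical base URL and payload/response shape, for illustration only.
      BASE = "https://smntx.example.org/api"

      resp = requests.post(f"{BASE}/annotate",
                           json={"text": "Patient denies chest pain or dyspnea."},
                           timeout=30)
      resp.raise_for_status()
      for concept in resp.json().get("concepts", []):
          print(concept)                 # e.g., extracted clinical concepts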

  19. Hardware/software codesign for embedded RISC core

    NASA Astrophysics Data System (ADS)

    Liu, Peng

    2001-12-01

    This paper describes the hardware/software codesign method of the extendible embedded RISC core VIRGO, which is based on the MIPS-I instruction set architecture. VIRGO is described in the Verilog hardware description language, has a five-stage pipeline with a shared 32-bit cache/memory interface, and is controlled by a distributed control scheme. Every pipeline stage has one small controller, which controls the pipeline stage status and the cooperation among pipeline phases. Since the description uses a high-level language and the structure is distributed, the VIRGO core is highly extensible and can meet application requirements. Taking the high-definition television MPEG2 MP@HL decoder chip as an example, we constructed a hardware/software codesign virtual prototyping machine for researching the VIRGO core instruction set architecture, system-on-chip memory size requirements, system-on-chip software, etc. We can also evaluate the system-on-chip design and the RISC instruction set on the virtual prototyping machine platform.
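
    The flow of instructions through a five-stage pipeline can be mimicked in a few lines. The Python sketch below is a toy model only; it echoes the one-controller-per-stage idea but does not reflect VIRGO's Verilog implementation or any hazard handling.

      # Toy model of a five-stage pipeline (illustrative only).
      STAGES = ["IF", "ID", "EX", "MEM", "WB"]

      def run_pipeline(program, cycles):
          pipe = [None] * len(STAGES)        # instruction currently in each stage
          fetched = iter(program)
          for cycle in range(1, cycles + 1):
              pipe = [next(fetched, None)] + pipe[:-1]   # advance every stage
              state = ", ".join(f"{s}:{i or '-'}" for s, i in zip(STAGES, pipe))
              print(f"cycle {cycle}: {state}")

      run_pipeline(["lw", "add", "sub", "sw"], cycles=8)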

  20. Human intestinal transporter database: QSAR modeling and virtual profiling of drug uptake, efflux and interactions.

    PubMed

    Sedykh, Alexander; Fourches, Denis; Duan, Jianmin; Hucke, Oliver; Garneau, Michel; Zhu, Hao; Bonneau, Pierre; Tropsha, Alexander

    2013-04-01

    Membrane transporters mediate many biological effects of chemicals and play a major role in pharmacokinetics and drug resistance. The selection of viable drug candidates among biologically active compounds requires the assessment of their transporter interaction profiles. Using public sources, we have assembled and curated the largest, to our knowledge, human intestinal transporter database (>5,000 interaction entries for >3,700 molecules). This data was used to develop thoroughly validated classification Quantitative Structure-Activity Relationship (QSAR) models of transport and/or inhibition of several major transporters including MDR1, BCRP, MRP1-4, PEPT1, ASBT, OATP2B1, OCT1, and MCT1. QSAR models have been developed with advanced machine learning techniques such as Support Vector Machines, Random Forest, and k Nearest Neighbors using Dragon and MOE chemical descriptors. These models afforded high external prediction accuracies of 71-100% estimated by 5-fold external validation, and showed hit retrieval rates with up to 20-fold enrichment in the virtual screening of DrugBank compounds. The compendium of predictive QSAR models developed in this study can be used for virtual profiling of drug candidates and/or environmental agents with the optimal transporter profiles.
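
    A minimal version of one such model, a random forest classifier evaluated by 5-fold cross-validation, can be expressed with scikit-learn as below. The descriptor matrix is a synthetic stand-in; the curated transporter dataset and Dragon/MOE descriptors are not reproduced here.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      # Toy descriptor matrix standing in for Dragon/MOE descriptors.
      rng = np.random.default_rng(1)
      X = rng.normal(size=(400, 50))                  # 50 descriptors per molecule
      y = (X[:, :5].sum(axis=1) > 0).astype(int)      # substrate / non-substrate

      model = RandomForestClassifier(n_estimators=300, random_state=1)
      scores = cross_val_score(model, X, y, cv=5)     # 5-fold validation
      print("accuracy per fold:", np.round(scores, 3))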

  1. A Distributed Parallel Genetic Algorithm of Placement Strategy for Virtual Machines Deployment on Cloud Platform

    PubMed Central

    Dong, Yu-Shuang; Xu, Gao-Chao; Fu, Xiao-Dong

    2014-01-01

    The cloud platform provides various services to users, and more and more cloud centers provide infrastructure as their main mode of operation. To improve the utilization rate of the cloud center and to decrease operating costs, the cloud center provides services according to users' requirements by sharding the resources with virtualization. Considering both QoS for users and cost savings for cloud computing providers, we try to maximize performance and minimize energy cost as well. In this paper, we propose a distributed parallel genetic algorithm (DPGA) as a placement strategy for virtual machine deployment on a cloud platform. In the first stage, it executes the genetic algorithm in parallel and in a distributed fashion on several selected physical hosts. It then executes the second-stage genetic algorithm with the solutions obtained from the first stage as the initial population. The solution calculated by the second-stage genetic algorithm is the final solution of the proposed approach. The experimental results show that the proposed placement strategy for VM deployment can ensure QoS for users and is more effective and more energy efficient than other placement strategies on the cloud platform. PMID:25097872
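
    A single-population version of the placement GA can be sketched as follows: the chromosome encodes a host index per VM, and fitness favors feasible placements that keep few hosts active. This is a simplified, single-stage stand-in for the paper's two-stage distributed algorithm, with made-up demands and capacities.

      import random

      VMS = [2, 3, 1, 4, 2, 2, 3, 1]       # CPU demand per VM (illustrative)
      HOSTS, CAP = 4, 8                     # physical hosts, per-host capacity

      def fitness(plan):                    # fewer active hosts = less energy
          load = [0] * HOSTS
          for vm, host in zip(VMS, plan):
              load[host] += vm
          if any(l > CAP for l in load):    # infeasible: worse than any feasible
              return -(HOSTS + 1)
          return -sum(l > 0 for l in load)

      def evolve(pop_size=40, gens=100):
          pop = [[random.randrange(HOSTS) for _ in VMS] for _ in range(pop_size)]
          for _ in range(gens):
              pop.sort(key=fitness, reverse=True)
              survivors = pop[: pop_size // 2]
              children = []
              for _ in survivors:
                  a, b = random.sample(survivors, 2)
                  cut = random.randrange(len(VMS))
                  child = a[:cut] + b[cut:]               # one-point crossover
                  if random.random() < 0.2:               # mutation
                      child[random.randrange(len(VMS))] = random.randrange(HOSTS)
                  children.append(child)
              pop = survivors + children
          return max(pop, key=fitness)

      best = evolve()
      print("placement:", best, "active hosts:", -fitness(best))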

  2. Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud

    NASA Astrophysics Data System (ADS)

    Limosani, Antonio; Boland, Lucien; Coddington, Paul; Crosby, Sean; Huang, Joanna; Sevior, Martin; Wilson, Ross; Zhang, Shunde

    2014-06-01

    The Australian Government is making a AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocations of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.

  3. A distributed parallel genetic algorithm of placement strategy for virtual machines deployment on cloud platform.

    PubMed

    Dong, Yu-Shuang; Xu, Gao-Chao; Fu, Xiao-Dong

    2014-01-01

    The cloud platform provides various services to users, and more and more cloud centers provide infrastructure as their main mode of operation. To improve the utilization rate of the cloud center and to decrease operating costs, the cloud center provides services according to users' requirements by sharding the resources with virtualization. Considering both QoS for users and cost savings for cloud computing providers, we try to maximize performance and minimize energy cost as well. In this paper, we propose a distributed parallel genetic algorithm (DPGA) as a placement strategy for virtual machine deployment on a cloud platform. In the first stage, it executes the genetic algorithm in parallel and in a distributed fashion on several selected physical hosts. It then executes the second-stage genetic algorithm with the solutions obtained from the first stage as the initial population. The solution calculated by the second-stage genetic algorithm is the final solution of the proposed approach. The experimental results show that the proposed placement strategy for VM deployment can ensure QoS for users and is more effective and more energy efficient than other placement strategies on the cloud platform.

  4. Virtual reality and neuropsychological assessment: The reliability of a virtual kitchen to assess daily-life activities in victims of traumatic brain injury.

    PubMed

    Besnard, Jeremy; Richard, Paul; Banville, Frederic; Nolin, Pierre; Aubin, Ghislaine; Le Gall, Didier; Richard, Isabelle; Allain, Phillippe

    2016-01-01

    Traumatic brain injury (TBI) causes impairments affecting instrumental activities of daily living (IADL). However, few studies have considered virtual reality as an ecologically valid tool for the assessment of IADL in patients who have sustained a TBI. The main objective of the present study was to examine the use of the Nonimmersive Virtual Coffee Task (NI-VCT) for IADL assessment in patients with TBI. We analyzed the performance of 19 adults suffering from TBI and 19 healthy controls (HCs) in the real and virtual tasks of making coffee with a coffee machine, as well as in global IQ and executive functions. Patients performed worse than HCs on both real and virtual tasks and on all tests of executive functions. Correlation analyses revealed that NI-VCT scores were related to scores on the real task. Moreover, regression analyses demonstrated that performance on NI-VCT matched real-task performance. Our results support the idea that the virtual kitchen is a valid tool for IADL assessment in patients who have sustained a TBI.

  5. Virtualization - A Key Cost Saver in NASA Multi-Mission Ground System Architecture

    NASA Technical Reports Server (NTRS)

    Swenson, Paul; Kreisler, Stephen; Sager, Jennifer A.; Smith, Dan

    2014-01-01

    With science team budgets being slashed and a lack of adequate facilities for science payload teams to operate their instruments, there is a strong need for innovative new ground systems that can provide the necessary levels of capability (processing power, system availability, and redundancy) while maintaining a small footprint in terms of physical space, power utilization, and cooling. The ground system architecture presented here is based on heritage from several other projects currently in development or operations at Goddard, but was designed and built specifically to meet the needs of the Science and Planetary Operations Control Center (SPOCC) as a low-cost payload command, control, planning, and analysis operations center. The SPOCC architecture was, however, designed to be generic enough to be re-used partially or in whole by other labs and missions; since its inception, that has already happened in several cases. The SPOCC architecture leverages a highly available VMware-based virtualization cluster with shared SAS Direct-Attached Storage (DAS) to provide an extremely high-performing, low-power, small-footprint compute environment whose virtual machine resources are shared among the various tenant missions in the SPOCC. The storage is also expandable, allowing future missions to chain up to 7 additional 2U chassis of storage at an extremely competitive cost if they require additional archive or virtual machine storage space. The software architecture provides a fully redundant GMSEC-based message bus, built on the ActiveMQ middleware, to track all health and safety status within the SPOCC ground system. All virtual machines use the GMSEC system agents to report system host health over the GMSEC bus, and spacecraft payload health is monitored using the Hammers Integrated Test and Operations System (ITOS) Galaxy Telemetry and Command (T&C) system, which performs near-real-time limit checking and data processing on the downlinked data stream and injects messages into the GMSEC bus that are monitored to automatically page the on-call operator or Systems Administrator (SA) when an off-nominal condition is detected. This architecture, like the LTSP thin clients, is shared across all tenant missions. Other required IT security controls are implemented at the ground system level, including physical access controls; logical system-level authentication and authorization management; auditing and reporting; network management; and a NIST 800-53 FISMA-Moderate IT Security Plan, Risk Assessment, and Contingency Plan, helping multiple missions share the cost of compliance with agency-mandated directives. The SPOCC architecture provides science payload control centers and backup mission operations centers with a cost-effective, standardized approach to virtualizing and monitoring resources that traditionally occupied multiple racks of physical machines. The increased agility in deploying new virtual systems and thin-client workstations can provide significant savings in the personnel costs of maintaining the ground system. The savings in procurement, power, rack footprint, and cooling, together with the shared multi-mission design, greatly reduce the upfront cost for missions moving into the facility. Overall, the authors hope that this architecture will become a model for how future NASA operations centers are constructed.

  6. Hypersonic MHD Propulsion System Integration for the Mercury Lightcraft

    NASA Astrophysics Data System (ADS)

    Myrabo, L. N.; Rosa, R. J.

    2004-03-01

    Introduced herein are the design, systems integration, and performance analysis of an exotic magnetohydrodynamic (MHD) slipstream accelerator engine for a single-occupant ``Mercury'' lightcraft. This ultra-energetic, laser-boosted vehicle is designed to ride a `tractor beam' into space, transmitted from a future orbital network of satellite solar power stations. The lightcraft's airbreathing combined-cycle engine employs a rotary pulsed detonation thruster mode for lift-off and landing, and an MHD slipstream accelerator mode at hypersonic speeds. The latter engine transforms the transatmospheric acceleration path into a virtual electromagnetic `mass-driver' channel; the hypersonic momentum exchange process (with the atmosphere) enables engine specific impulses in the range of 6000 to 16,000 seconds, and propellant mass fractions as low as 10%. The single-stage-to-orbit, highly reusable lightcraft can accelerate at 3 Gs into low Earth orbit with its throttle just barely beyond `idle' power, or virtually `disappear' at 30 Gs and beyond. The objective of this advanced lightcraft design is to lay the technological foundations for safe, very low cost (e.g., 1000X below chemical rockets) air and space transportation for human life in the mid-21st century - a system that will be completely `green' and independent of Earth's limited fossil fuel reserves.

  7. Reliable Geographical Forwarding in Cognitive Radio Sensor Networks Using Virtual Clusters

    PubMed Central

    Zubair, Suleiman; Fisal, Norsheila

    2014-01-01

    The need to implement reliable data transfer in resource-constrained cognitive radio ad hoc networks is still an open issue in the research community. Although geographical forwarding schemes are characterized by low overhead and efficient, reliable data transfer in traditional wireless sensor networks, this potential has yet to be utilized for viable routing options in resource-constrained cognitive radio ad hoc networks in the presence of lossy links. In this paper, a novel geographical forwarding technique that does not restrict the choice of the next hop to the nodes in the selected route is presented. This is achieved by creating virtual clusters based on spectrum correlation, from which the next-hop choice is made based on link quality. The design maximizes the use of idle listening and receiver contention prioritization for energy efficiency, avoidance of routing hot spots, and stability. The validation result, which closely follows the simulation result, shows that the developed scheme advances packets farther toward the sink than the usual route-selection decisions of relevant ad hoc on-demand distance vector operations, while ensuring channel quality. Further simulation results have shown the enhanced reliability, lower latency, and energy efficiency of the presented scheme. PMID:24854362
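
    The core next-hop rule, geographic advance toward the sink weighted by link quality and chosen among virtual-cluster members, can be sketched as below. The node fields and the multiplicative weighting are assumptions for illustration, not the paper's exact metric.

      import math

      def advance(node, me, sink):
          # positive if the candidate is closer to the sink than we are
          d = lambda a, b: math.dist((a["x"], a["y"]), (b["x"], b["y"]))
          return d(me, sink) - d(node, sink)

      def pick_next_hop(me, sink, cluster):
          candidates = [n for n in cluster if advance(n, me, sink) > 0]
          return max(candidates,
                     key=lambda n: advance(n, me, sink) * n["link_quality"],
                     default=None)

      me, sink = {"x": 0, "y": 0}, {"x": 100, "y": 0}
      cluster = [{"x": 30, "y": 5, "link_quality": 0.9},
                 {"x": 55, "y": -10, "link_quality": 0.4}]
      print(pick_next_hop(me, sink, cluster))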

  8. Media-Augmented Exercise Machines

    NASA Astrophysics Data System (ADS)

    Krueger, T.

    2002-01-01

    Cardio-vascular exercise has been used to mitigate the muscle and cardiac atrophy associated with adaptation to micro-gravity environments. Several hours per day may be required. In confined spaces and on long duration missions, this kind of exercise is inevitably repetitive and rapidly becomes uninteresting. At the same time, there are pressures to accomplish as much as possible given the cost per hour for humans occupying orbiting or interplanetary vehicles. Media augmentation provides a means to overlap activities in time by supplementing the exercise with social, recreational, training, or collaborative activities, thereby reducing time pressures. In addition, the machine functions as an interface to a wide range of digital environments, allowing for spatial variety in an otherwise confined environment. We hypothesize that the adoption of media-augmented exercise machines will have a positive effect on psycho-social well-being on long duration missions. By organizing exercise machines, data acquisition hardware, computers, and displays into an interacting system, this proposal increases functionality with limited additional mass. This paper reviews preliminary work on a project to augment exercise equipment in a manner that addresses these issues and at the same time opens possibilities for additional benefits. A testbed augmented exercise machine uses a specially built cycle trainer as both input to a virtual environment and an output device from it, using spatialized sound, visual displays, vibration transducers, and variable resistance. The resulting interactivity increases the sense of engagement in the exercise and provides a rich experience of the digital environments. Activities in the virtual environment and accompanying physiological and psychological indicators may be correlated to track and evaluate the health of the crew.

  9. MO-FG-202-09: Virtual IMRT QA Using Machine Learning: A Multi-Institutional Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valdes, G; Scheuermann, R; Solberg, T

    Purpose: To validate a machine learning approach to Virtual IMRT QA for accurately predicting gamma passing rates using different QA devices at different institutions. Methods: A Virtual IMRT QA model was constructed using a machine learning algorithm based on 416 IMRT plans, in which QA measurements were performed using diode-array detectors and a 3% local/3 mm gamma criterion with a 10% threshold. An independent set of 139 IMRT measurements from a different institution, with QA data based on portal dosimetry using the same gamma index and 10% threshold, was used to further test the algorithm. Plans were characterized by 90 different complexity metrics. A weighted Poisson regression with Lasso regularization was trained to predict passing rates using the complexity metrics as input. Results: In addition to predicting passing rates with 3% accuracy for all composite plans using diode-array detectors, passing rates for portal dosimetry on a per-beam basis were predicted with an error <3.5% for 120 IMRT measurements. The remaining 19 measurements had large areas of low CU, where portal dosimetry has larger disagreement with the calculated dose and, as such, large errors were expected. These beams need to be further modeled to correct the under-response in low dose regions. Important features selected by the Lasso to predict gamma passing rates were: complete irradiated area outline (CIAO) area, jaw position, fraction of MLC leaves with gaps smaller than 20 mm or 5 mm, fraction of the area receiving less than 50% of the total CU, fraction of the area receiving dose from the penumbra, weighted average irregularity factor, and duty cycle, among others. Conclusion: We have demonstrated that Virtual IMRT QA can predict passing rates using different QA devices and across multiple institutions. Prediction of QA passing rates could have profound implications for the current IMRT process.
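
    A Poisson regression with a pure L1 (Lasso) penalty of the kind described can be set up as below, here with synthetic stand-ins for the 90 complexity metrics and without the weighting scheme the authors used.

      import numpy as np
      import statsmodels.api as sm

      # Toy data: 416 plans x 90 complexity metrics (synthetic placeholders).
      rng = np.random.default_rng(2)
      X = rng.normal(size=(416, 90))
      # A count-like response that depends on "metric 0" in this toy setup.
      y = rng.poisson(lam=np.exp(1.0 + 0.3 * X[:, 0]))

      model = sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson())
      result = model.fit_regularized(alpha=0.05, L1_wt=1.0)   # pure L1 penalty
      nonzero = np.flatnonzero(np.abs(result.params[1:]) > 1e-6)
      print("metrics retained by the Lasso:", nonzero)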

  10. Down-Time During Work-Time.

    PubMed

    Gupta, Deepak; Restum, Adnan; McKelvey, George

    2018-01-01

    An idle body can harbor an idle mind that often brews something appalling in emptiness. Refreshing one's mind during Down-Time (Me-Time) with "harmless" activities is a must whether at home or at the workplace.

  11. Estimation of fuel loss due to idling of vehicles at a signalized intersection in Chennai, India

    NASA Astrophysics Data System (ADS)

    Vasantha Kumar, S.; Gulati, Himanshu; Arora, Shivam

    2017-11-01

    Vehicles waiting at signalized intersections are generally found to be in idling condition; that is, drivers do not switch off their engines during red times. This idling of vehicles during red times at signalized intersections can lead to large economic losses, as a lot of fuel is consumed by vehicles in idling condition. The situation may be even worse in countries like India, where different vehicle types consume varying amounts of fuel. Only limited studies have been reported on estimating fuel loss due to idling of vehicles in India. In the present study, one of the busy intersections in Chennai, namely the Tidel Park Junction on Rajiv Gandhi Salai, was considered. Data collection was carried out on one approach road of the intersection during morning and evening peak hours on a typical working day by manually noting down the red timings of each cycle and the corresponding numbers of two-wheelers, three-wheelers, passenger cars, light commercial vehicles (LCV), and heavy motorized vehicles (HMV) in idling mode. Using the fuel consumption values for various vehicle types suggested by the Central Road Research Institute (CRRI), the total fuel loss during the study period was found to be Rs. 4,93,849/-. The installation of red timers, synchronization of signals, use of non-motorized transport for short trips, and public awareness are some of the measures on which the government needs to focus to save the fuel wasted at signalized intersections in major cities of India.
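
    The underlying arithmetic is a rate x time x count sum over vehicle types, as in this sketch. The per-type idle consumption rates below are assumed placeholders, not the CRRI values used in the study.

      # Back-of-the-envelope idle fuel loss for one red phase:
      # fuel = idle consumption rate x red time x vehicle count, per type.
      IDLE_RATE_ML_PER_SEC = {"two_wheeler": 0.15, "three_wheeler": 0.2,
                              "car": 0.3, "lcv": 0.4, "hmv": 0.9}  # placeholders

      def fuel_loss_ml(red_s, counts):
          return sum(IDLE_RATE_ML_PER_SEC[v] * red_s * n for v, n in counts.items())

      counts = {"two_wheeler": 22, "three_wheeler": 4, "car": 11, "lcv": 2, "hmv": 3}
      print(f"{fuel_loss_ml(90, counts):.0f} mL lost in one 90 s red phase")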

  12. Characterization of fine particle and gaseous emissions during school bus idling.

    PubMed

    Kinsey, J S; Williams, D C; Dong, Y; Logan, R

    2007-07-15

    The particulate matter (PM) and gaseous emissions from six diesel school buses were determined over a simulated waiting period typical of schools in the northeastern U.S. Testing was conducted for both continuous idle and hot restart conditions using a suite of on-line particle and gas analyzers installed in the U.S. Environmental Protection Agency's Diesel Emissions Aerosol Laboratory. The specific pollutants measured encompassed total PM-2.5 mass (PM < or = 2.5 microm in aerodynamic diameter), PM-2.5 number concentration, particle size distribution, particle-surface polycyclic aromatic hydrocarbons (PAHs), and a tracer gas (1,1,1,2,3,3,3-heptafluoropropane) in the diluted sample stream. Carbon monoxide (CO), carbon dioxide, nitrogen oxides (NO(x)), total hydrocarbons (THC), oxygen, formaldehyde, and the tracer gas were also measured in the raw exhaust. Results of the study showed little difference in the measured emissions between a 10 min post-restart idle and a 10 min continuous idle with the exception of THC and formaldehyde. However, an emissions pulse was observed during engine restart. A predictive equation was developed from the experimental data, which allows a comparison between continuous idle and hot restart for NO(x), CO, PM2.5, and PAHs and which considers factors such as the restart emissions pulse and periods when the engine is not running. This equation indicates that restart is the preferred operating scenario as long as there is no extended idling after the engine is restarted.
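
    The comparison the predictive equation formalizes can be reduced to the following sketch: continuous idling emits at a steady rate, while shutdown-plus-restart trades a restart pulse for zero emissions during engine-off periods. All values below are placeholders, not the study's measured emission rates.

      # Simplified idle-versus-restart comparison (placeholder values).
      def continuous_idle(rate_g_min, minutes):
          return rate_g_min * minutes

      def restart_total(rate_g_min, minutes_running, pulse_g):
          return pulse_g + rate_g_min * minutes_running   # engine off otherwise

      wait = 10                              # total waiting period, minutes
      nox = dict(rate_g_min=2.0, pulse_g=1.5)
      print("continuous idle:", continuous_idle(nox["rate_g_min"], wait), "g NOx")
      print("shutdown + restart:",
            restart_total(nox["rate_g_min"], 2, nox["pulse_g"]), "g NOx")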

  13. Virtualization and cloud computing in dentistry.

    PubMed

    Chow, Frank; Muftu, Ali; Shorter, Richard

    2014-01-01

    The use of virtualization and cloud computing has changed the way we use computers. Virtualization is a method of placing software called a hypervisor on the hardware of a computer or a host operating system. It allows a guest operating system to run on top of the physical computer as a virtual machine (i.e., a virtual computer). Virtualization allows multiple virtual computers to run on top of one physical computer and to share its hardware resources, such as printers, scanners, and modems. This increases the efficient use of the computer by decreasing costs (e.g., hardware, electricity, administration, and management), since only one physical computer needs to be running. This virtualization platform is the basis for cloud computing, and it has expanded into areas of server and storage virtualization. One commonly used dental storage system is cloud storage: patient information is encrypted as required by the Health Insurance Portability and Accountability Act (HIPAA) and stored on off-site private cloud services for a monthly service fee. As computing demands continue to increase, so too will the need for more storage and processing power. Virtual and cloud computing will be a way for dentists to minimize costs and maximize computer efficiency in the near future. This article provides some useful information on current uses of cloud computing.

  14. Helioviewer.org: Enhanced Solar & Heliospheric Data Visualization

    NASA Astrophysics Data System (ADS)

    Stys, J. E.; Ireland, J.; Hughitt, V. K.; Mueller, D.

    2013-12-01

    Helioviewer.org enables the simultaneous exploration of multiple heterogeneous solar data sets. In the latest iteration of this open-source web application, Hinode XRT and Yohkoh SXT join SDO, SOHO, STEREO, and PROBA2 as supported data sources. A newly enhanced user interface expands the utility of Helioviewer.org by adding annotations backed by data from the Heliophysics Events Knowledgebase (HEK). Helioviewer.org can now overlay solar feature and event data via interactive marker pins, extended regions, data labels, and information panels. An interactive timeline provides enhanced browsing and visualization of image data set coverage and solar events. The addition of a size-of-the-Earth indicator provides a sense of scale for solar and heliospheric features for education and public outreach purposes. Tight integration with the Virtual Solar Observatory and the SDO AIA cutout service enables solar physicists to seamlessly import science data into their SSW/IDL or SunPy/Python data analysis environments.

  15. Experimental clean combustor program: Diesel no. 2 fuel addendum, phase 3

    NASA Technical Reports Server (NTRS)

    Gleason, C. C.; Bahr, D. W.

    1979-01-01

    A CF6-50 engine equipped with an advanced, low-emission, double-annular combustor was operated for 4.8 hours with No. 2 diesel fuel. Fourteen steady-state operating conditions ranging from idle to full power were investigated. Engine/combustor performance and exhaust emissions were obtained and compared to JF-5 fueled test results. With one exception, fuel effects were very small and in agreement with previously obtained combustor test rig results. At high power operating conditions, the two fuels produced virtually the same peak metal temperatures and exhaust emission levels. At low power operating conditions, where only the pilot stage was fueled, smoke levels tended to be significantly higher with No. 2 diesel fuel. Additional development of this combustor concept is needed in the areas of exit temperature distribution, engine fuel control, and exhaust emission levels before it can be considered for production engine use.

  16. An Analysis Platform for Mobile Ad Hoc Network (MANET) Scenario Execution Log Data

    DTIC Science & Technology

    2016-01-01

    The platform is built on the following technologies. Backend: Java 1.8, mysql-connector-java-5.0.8.jar, Tomcat, VirtualBox, and a Kali MANET virtual machine. Frontend: LAMPP. Database: MySQL Server. The SEDAP database settings and structure are described in the report, and the backend Java functionality, including the web services, is placed in the webapps directory inside the Tomcat installation.

  17. Virtual reality and planetary exploration

    NASA Technical Reports Server (NTRS)

    Mcgreevy, Michael W.

    1992-01-01

    NASA-Ames is intensively developing virtual-reality (VR) capabilities that can extend and augment computer-generated and remote spatial environments. VR is envisioned not only as a basis for improving the human/machine interactions involved in planetary exploration, but also as a medium for more widespread sharing of the experience of exploration, thereby broadening the support base for lunar and planetary exploration endeavors. Imagery representative of Mars is being gathered for VR presentation at such terrestrial sites as Antarctica and Death Valley.

  18. A Lightweight Intelligent Virtual Cinematography System for Machinima Production

    DTIC Science & Technology

    2007-01-01

    A portmanteau of machine and cinema, machinima refers to the innovation of leveraging video game technology to greatly ease the creation of computer-generated films, including the challenge of selecting camera angles to capture the action of an a priori unknown script as aesthetically appropriate cinema. There are a number of challenges therein, which the report addresses with a lightweight intelligent virtual cinematography system.

  19. The use of physical and virtual manipulatives in an undergraduate mechanical engineering (Dynamics) course

    NASA Astrophysics Data System (ADS)

    Pan, Edward A.

    Science, technology, engineering, and mathematics (STEM) education is a national focus. Engineering education, as part of STEM education, needs to adapt to meet the needs of the nation in a rapidly changing world. Using computer-based visualization tools and corresponding 3D printed physical objects may help nontraditional students succeed in engineering classes. This dissertation investigated how adding physical or virtual learning objects (called manipulatives) to courses that require mental visualization of mechanical systems can aid student performance. Dynamics is one such course, and tends to be taught using lecture and textbooks with static diagrams of moving systems. Students often fail to solve the problems correctly and an inability to mentally visualize the system can contribute to student difficulties. This study found no differences between treatment groups on quantitative measures of spatial ability and conceptual knowledge. There were differences between treatments on measures of mechanical reasoning ability, in favor of the use of physical and virtual manipulatives over static diagrams alone. There were no major differences in student performance between the use of physical and virtual manipulatives. Students used the physical and virtual manipulatives to test their theories about how the machines worked, however their actual time handling the manipulatives was extremely limited relative to the amount of time they spent working on the problems. Students used the physical and virtual manipulatives as visual aids when communicating about the problem with their partners, and this behavior was also seen with Traditional group students who had to use the static diagrams and gesture instead. The explanations students gave for how the machines worked provided evidence of mental simulation; however, their causal chain analyses were often flawed, probably due to attempts to decrease cognitive load. Student opinions about the static diagrams and dynamic models varied by type of model (static, physical, virtual), but were generally favorable. The Traditional group students, however, indicated that the lack of adequate representation of motion in the static diagrams was a problem, and wished they had access to the physical and virtual models.

  20. Automated Inference of Chemical Discriminants of Biological Activity.

    PubMed

    Raschka, Sebastian; Scott, Anne M; Huertas, Mar; Li, Weiming; Kuhn, Leslie A

    2018-01-01

    Ligand-based virtual screening has become a standard technique for the efficient discovery of bioactive small molecules. Following assays to determine the activity of compounds selected by virtual screening, or other approaches in which dozens to thousands of molecules have been tested, machine learning techniques make it straightforward to discover the patterns of chemical groups that correlate with the desired biological activity. Defining the chemical features that generate activity can be used to guide the selection of molecules for subsequent rounds of screening and assaying, as well as help design new, more active molecules for organic synthesis. The quantitative structure-activity relationship machine learning protocols we describe here, using decision trees, random forests, and sequential feature selection, take as input the chemical structure of a single, known active small molecule (e.g., an inhibitor, agonist, or substrate) for comparison with the structure of each tested molecule. Knowledge of the atomic structure of the protein target and its interactions with the active compound are not required. These protocols can be modified and applied to any data set that consists of a series of measured structural, chemical, or other features for each tested molecule, along with the experimentally measured value of the response variable you would like to predict or optimize for your project, for instance, inhibitory activity in a biological assay or ΔG_binding. To illustrate the use of different machine learning algorithms, we step through the analysis of a dataset of inhibitor candidates from virtual screening that were tested recently for their ability to inhibit GPCR-mediated signaling in a vertebrate.
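
    A minimal version of the described protocol, a decision tree wrapped in forward sequential feature selection, can be expressed with scikit-learn as below, on synthetic stand-ins for the chemical features.

      import numpy as np
      from sklearn.feature_selection import SequentialFeatureSelector
      from sklearn.tree import DecisionTreeClassifier

      # Toy feature table: rows are tested molecules, columns are descriptors
      # relative to the known active compound (synthetic data only).
      rng = np.random.default_rng(3)
      X = rng.normal(size=(120, 12))
      y = (X[:, 2] - X[:, 7] > 0).astype(int)        # active / inactive call

      tree = DecisionTreeClassifier(max_depth=3, random_state=3)
      sfs = SequentialFeatureSelector(tree, n_features_to_select=4,
                                      direction="forward", cv=5).fit(X, y)
      print("selected descriptor columns:", np.flatnonzero(sfs.get_support()))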

  1. Frontal Alpha Oscillations and Attentional Control: A Virtual Reality Neurofeedback Study.

    PubMed

    Berger, Anna M; Davelaar, Eddy J

    2018-05-15

    Two competing views about alpha oscillations suggest that cortical alpha reflect either cortical inactivity or cortical processing efficiency. We investigated the role of alpha oscillations in attentional control, as measured with a Stroop task. We used neurofeedback to train 22 participants to increase their level of alpha amplitude. Based on the conflict/control loop theory, we selected to train prefrontal alpha and focus on the Gratton effect as an index of deployment of attentional control. We expected an increase or a decrease in the Gratton effect with increase in neural learning depending on whether frontal alpha oscillations reflect cortical idling or enhanced processing efficiency, respectively. In order to induce variability in neural learning beyond natural occurring individual differences, we provided half of the participants with feedback on alpha amplitude in a 3-dimensional (3D) virtual reality environment and the other half received feedback in a 2D environment. Our results showed variable neural learning rates, with larger rates in the 3D compared to the 2D group, corroborating prior evidence of individual differences in EEG-based learning and the influence of a virtual environment. Regression analyses revealed a significant association between the learning rate and changes on deployment of attentional control, with larger learning rates being associated with larger decreases in the Gratton effect. This association was not modulated by feedback medium. The study supports the view of frontal alpha oscillations being associated with efficient neurocognitive processing and demonstrates the utility of neurofeedback training in addressing theoretical questions in the non-neurofeedback literature. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  2. Measurements of major VOCs released into the closed cabin environment of different automobiles under various engine and ventilation scenarios.

    PubMed

    Kim, Ki-Hyun; Szulejko, Jan E; Jo, Hyo-Jae; Lee, Min-Hee; Kim, Yong-Hyun; Kwon, Eilhann; Ma, Chang-Jin; Kumar, Pawan

    2016-08-01

    Volatile organic compounds (VOCs) in automobile cabins were measured quantitatively to characterize their emissions under various idling scenarios, using three used automobiles (compact, intermediate sedan, and large sedan) under three idling conditions: (1) cold engine off and ventilation off, (2) exterior air ventilation with a warm idling engine, and (3) internal air recirculation with a warm idling engine. The ambient air outside the vehicle was also analyzed as a reference. A total of 24 VOCs (in six functional groups) were selected as target compounds. The concentrations of the 24 key target VOCs averaged 4.58 ± 3.62 ppb (range: 0.05 ppb (isobutyl alcohol) to 38.2 ppb (formaldehyde)). When concentrations are compared between operational modes, the 'idling engine' levels (5.24 ± 4.07 ppb) were 1.3-5 times higher than the 'engine off' levels (4.09 ± 3.23 ppb) across all three automobile classes. In summary, automobile in-cabin VOC emissions are highly contingent on changes in engine and ventilation modes. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Alternative Fuels Data Center

    Science.gov Websites

    Idle Reduction Requirement: A person who operates a diesel-powered motor vehicle in certain counties and townships may not cause or allow the motor vehicle, when it is not in motion, to idle for more than a specified period of time.

  4. 40 CFR 92.106 - Equipment for loading the engine.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    The regulation covers equipment for loading the engine: accuracy and precision requirements apply at all settings except idle and dynamic brake, with less accuracy and precision allowed at idle and dynamic brake settings. For engine testing using a dynamometer, the engine dynamometer system must be capable of controlling engine speed and torque.

  5. Electro-chemical grinding

    NASA Technical Reports Server (NTRS)

    Feagans, P. L.

    1972-01-01

    The electro-chemical grinding technique offers rotation speed control, constant feed rates, and contour control. Hypersonic engine parts made of nickel alloys can be almost 100% machined while keeping tool pressure at virtually zero. The technique eliminates galling and permits a constant surface finish and burr-free interrupted cutting.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simpson, Wayne; Borders, Tammie

    INL successfully developed a proof of concept for "Software Defined Anything" by emulating the laboratory's business applications that run on virtual machines. The work demonstrates to industry how this methodology can be used to improve security, automate and repeat processes, and improve consistency.

  7. Develop, Build, and Test a Virtual Lab to Support a Vulnerability Training System

    DTIC Science & Technology

    2004-09-01

    Dell PowerEdge 1650 dual-processor blade servers were configured as host machines with VMware and VNC running on a Linux RedHat 9 kernel. An Apache-Tomcat web server was configured as the external interface to the virtual lab.

  8. Increasing Realism in Virtual Marksmanship Simulators

    DTIC Science & Technology

    2012-12-01

    The simulator supports weapons including the M16 5.56 mm service rifle, the M2 .50-caliber machine gun, the M240 7.62 mm machine gun, and the M9 9 mm Beretta (MPI: Mean Point of Impact; NHQC: Navy Handgun Qualification Course). The ISMT has the capability to use a wide variety of weapons, including the .50-caliber machine gun (M2), and can provide immediate feedback to the instructor and trainee on weapon trigger pull, cant position, and barrel movement.

  9. Mega-Amp Opening Switch with Nested Electrodes/Pulsed Generator of Ion and Ion Cluster Beams

    DTIC Science & Technology

    1987-07-30

    The use of a plasma focus as a mega-amp opening switch has been demonstrated in two modes of operation: (a) single-shot mode and (b) repetitive mode. Comparison tests were run at the same energy level and under the same voltage and filling-pressure conditions but without field distortion elements. Misfirings of the plasma focus machine are also virtually eliminated by using field distortion elements (FDE) at the coaxial electrode breech. The tests are based on about 10000 shots and five plasma focus machines.

  10. IDL Object Oriented Software for Hinode/XRT Image Analysis

    NASA Astrophysics Data System (ADS)

    Higgins, P. A.; Gallagher, P. T.

    2008-09-01

    We have developed a set of object oriented IDL routines that enable users to search, download and analyse images from the X-Ray Telescope (XRT) on-board Hinode. In this paper, we give specific examples of how the object can be used and how multi-instrument data analysis can be performed. The XRT object is a highly versatile and powerful IDL object, which will prove to be a useful tool for solar researchers. This software utilizes the generic Framework object available within the GEN branch of SolarSoft.

  11. Handling knowledge via Concept Maps: a space weather use case

    NASA Astrophysics Data System (ADS)

    Messerotti, Mauro; Fox, Peter

    Concept Maps (Cmaps) are powerful means for knowledge coding in graphical form. As flexible software tools exist to manipulate the knowledge embedded in Cmaps in machine-readable form, such complex entities are suitable candidates not only for the representation of ontologies and semantics in Virtual Observatory (VO) architectures, but also for knowledge handling and knowledge discovery. In this work, we present a use case relevant to space weather applications, and we elaborate on its possible implementation and advanced use in Semantic Virtual Observatories dedicated to Sun-Earth connections. This analysis was carried out in the framework of the Electronic Geophysical Year (eGY) and represents an achievement synergized by the eGY Virtual Observatories Working Group.

  12. Idle emissions from heavy-duty diesel and natural gas vehicles at high altitude.

    PubMed

    McCormick, R L; Graboski, M S; Alleman, T L; Yanowitz, J

    2000-11-01

    Idle emissions of total hydrocarbon (THC), CO, NOx, and particulate matter (PM) were measured from 24 heavy-duty diesel-fueled (12 trucks and 12 buses) and 4 heavy-duty compressed natural gas (CNG)-fueled vehicles. The volatile organic fraction (VOF) of PM and aldehyde emissions were also measured for many of the diesel vehicles. Experiments were conducted at 1609 m above sea level using a full exhaust flow dilution tunnel method identical to that used for heavy-duty engine Federal Test Procedure (FTP) testing. Diesel trucks averaged 0.170 g/min THC, 1.183 g/min CO, 1.416 g/min NOx, and 0.030 g/min PM. Diesel buses averaged 0.137 g/min THC, 1.326 g/min CO, 2.015 g/min NOx, and 0.048 g/min PM. Results are compared to idle emission factors from the MOBILE5 and PART5 inventory models. The models significantly (45-75%) overestimate emissions of THC and CO in comparison with results measured from the fleet of vehicles examined in this study. Measured NOx emissions were significantly higher (30-100%) than model predictions. For the pre-1999 (pre-consent decree) truck engines examined in this study, idle NOx emissions increased with model year with a linear fit (r2 = 0.6). PART5 nationwide fleet average emissions are within 1 order of magnitude of emissions for the group of vehicles tested in this study. Aldehyde emissions for bus idling averaged 6 mg/min. The VOF averaged 19% of total PM for buses and 49% for trucks. CNG vehicle idle emissions averaged 1.435 g/min for THC, 1.119 g/min for CO, 0.267 g/min for NOx, and 0.003 g/min for PM. The g/min PM emissions are only a small fraction of g/min PM emissions during vehicle driving. However, idle emissions of NOx, CO, and THC are significant in comparison with driving emissions.

  13. Design of virtual SCADA simulation system for pressurized water reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wijaksono, Umar, E-mail: umar.wijaksono@student.upi.edu; Abdullah, Ade Gafar; Hakim, Dadang Lukman

    The virtual SCADA system is a software-based Human-Machine Interface that can visualize the process of a plant. This paper describes the results of a virtual SCADA system design that aims to illustrate the operating principles of a Pressurized Water Reactor nuclear power plant. The simulation uses technical data of Nuclear Power Plant Unit Olkiluoto 3 in Finland. The device was developed using Wonderware InTouch and is equipped with manual books for each component, animation links, alarm systems, real-time and historical trending, and a security system. The results showed that, in general, the device can clearly demonstrate the principles of energy flow and energy conversion in Pressurized Water Reactors. This virtual SCADA simulation system can be used as an instructional medium for teaching the operating principles of Pressurized Water Reactors.

  14. Incoherent dictionary learning for reducing crosstalk noise in least-squares reverse time migration

    NASA Astrophysics Data System (ADS)

    Wu, Juan; Bai, Min

    2018-05-01

    We propose to apply a novel incoherent dictionary learning (IDL) algorithm for regularizing the least-squares inversion in seismic imaging. The IDL is proposed to overcome the drawback of traditional dictionary learning algorithms in losing partial texture information. First, the noisy image is divided into overlapped image patches, and some random patches are extracted for dictionary learning. Then, we apply the IDL technique to minimize the coherency between atoms during dictionary learning. Finally, the sparse representation problem is solved by a sparse coding algorithm, and the image is restored from those sparse coefficients. By reducing the correlation among atoms, it is possible to preserve most of the small-scale features in the image while removing much of the long-wavelength noise. The application of the IDL method to regularization of seismic images from least-squares reverse time migration shows successful performance.
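    A minimal sketch in Python of the atom-decorrelation idea behind incoherent dictionary learning (a generic Gram-clipping scheme with illustrative parameters, not necessarily the authors' exact IDL update):

        import numpy as np

        def reduce_coherence(D, mu=0.3, iters=20):
            """Lower the mutual coherence of dictionary D (dim x n_atoms) by
            clipping off-diagonal Gram entries to +/- mu and mapping the
            clipped Gram back to a dictionary via its top eigenvectors."""
            dim, n_atoms = D.shape
            for _ in range(iters):
                D = D / np.linalg.norm(D, axis=0, keepdims=True)   # unit-norm atoms
                G = D.T @ D
                off = np.clip(G - np.eye(n_atoms), -mu, mu)        # clip coherences
                G = off + np.eye(n_atoms)
                w, V = np.linalg.eigh(G)                           # ascending eigenvalues
                top = np.argsort(w)[::-1][:dim]                    # keep dim largest
                D = np.sqrt(np.clip(w[top], 0, None))[:, None] * V[:, top].T
            return D / np.linalg.norm(D, axis=0, keepdims=True)

        rng = np.random.default_rng(0)
        D = reduce_coherence(rng.normal(size=(64, 128)))           # 8x8-patch dictionary

    Sparse coding over the decorrelated dictionary (e.g., with orthogonal matching pursuit) then restores the patches, as outlined in the abstract.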

  15. The fully programmable spacecraft: procedural sequencing for JPL deep space missions using VML (Virtual Machine Language)

    NASA Technical Reports Server (NTRS)

    Grasso, C. A.

    2002-01-01

    This paper lays out language constructs and capabilities, code features, and VML operations development concepts. The ability to migrate functionality to the spacecraft that is more traditionally implemented on the ground is examined.

  16. IPCS user's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGoldrick, P.R.

    1980-12-11

    The Interprocess Communications System (IPCS) was written to provide a virtual machine upon which the Supervisory Control and Diagnostic System (SCDS) for the Mirror Fusion Test Facility (MFTF) could be built. The hardware upon which the IPCS runs consists of nine minicomputers sharing some common memory.

  17. Alternative Fuels Data Center: Idle Reduction Benefits and Considerations

    Science.gov Websites

    Idle reduction saves money, protects public health and the environment, and increases U.S. energy security. Reducing idle time can also reduce engine wear and associated maintenance costs.

  18. Tracking Fallow Land in California Using USDA's Cropland Data Layer

    NASA Astrophysics Data System (ADS)

    Zakzeski, A.; Mueller, R.; Rosevelt, C.; Melton, F. S.; Johnson, L.; Verdin, J. P.; Thenkabail, P.; Jones, J.

    2013-12-01

    The agricultural landscape of California has become the focus of a new research project combining the efforts of the US Department of Agriculture (USDA) National Agricultural Statistics Service (NASS), the US Geological Survey (USGS), and the National Aeronautics and Space Administration (NASA). The project's goal is to provide quantitative early and in-season estimates, derived from satellite data, of the fallow/idle agricultural land throughout the State of California, since water resources have become so constrained due to inadequate precipitation and high temperatures. As part of the research effort, NASS has agreed to accelerate their established remote sensing program known as the Cropland Data Layer (CDL) in order to produce an idle mask derived over California as early as June, with continued iterations throughout the growing season through October. The Cropland Data Layer is a land cover classification product produced by combining up-to-date, field-level farm data from the Farm Service Agency's (FSA) 578 survey with a collection of satellite data over the growing season from both the Disaster Monitoring Constellation (DMC) and the newly launched Landsat-8 satellite. The combination of ground data and satellite data is used to derive a complex decision tree defining the phenological profiles of each type of agricultural land cover, including fallow and idle, throughout the state. Each CDL categorizes over a hundred types of land cover; however, for this project NASS creates a binary mask focusing solely on fallow/idle land cover. Each month NASS receives updates on field-level farm data from FSA and collects more satellite imagery; therefore, the accuracies of the CDL and the subsequent idle masks used in this project continually improve as the season progresses. These fallow/idle masks will be made available to the public in the future for other research efforts. Each monthly iteration of the 30 meter CDL and subsequent fallow mask over California allows NASS to fill a data information gap and provide fellow researchers with an early glimpse into the estimated amount of farm land classified as fallow or idle, which is used to further refine other fallow/idle identification algorithms. This capability is complemented by the production of early season estimates derived from satellite data only, using algorithms developed by NASA under this project.
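    The binary-mask step lends itself to a one-line raster operation; a toy numpy sketch (the class code follows published CDL legends, where "Fallow/Idle Cropland" is class 61, but should be verified against the legend for the product year):

        import numpy as np

        FALLOW_IDLE = 61   # CDL class code for "Fallow/Idle Cropland" (verify per year)

        def fallow_mask(cdl_array):
            """Collapse a categorical CDL raster into a binary fallow/idle mask."""
            return (cdl_array == FALLOW_IDLE).astype(np.uint8)

        cdl = np.array([[61, 1, 61], [24, 61, 36]])   # toy 30 m pixels
        print(fallow_mask(cdl))                       # 1 = fallow/idle, 0 = other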

  19. Virtual machine-based simulation platform for mobile ad-hoc network-based cyber infrastructure

    DOE PAGES

    Yoginath, Srikanth B.; Perumalla, Kalyan S.; Henz, Brian J.

    2015-09-29

    In modeling and simulating complex systems such as mobile ad-hoc networks (MANETs) in defense communications, it is a major challenge to reconcile multiple important considerations: the rapidity of unavoidable changes to the software (network layers and applications), the difficulty of modeling the critical, implementation-dependent behavioral effects, the need to sustain larger scale scenarios, and the desire for faster simulations. Here we present our approach in successfully reconciling them using a virtual time-synchronized virtual machine (VM)-based parallel execution framework that accurately lifts both the devices as well as the network communications to a virtual time plane while retaining full fidelity. At the core of our framework is a scheduling engine that operates at the level of a hypervisor scheduler, offering a unique ability to execute multi-core guest nodes over multi-core host nodes in an accurate, virtual time-synchronized manner. In contrast to other related approaches that suffer from either speed or accuracy issues, our framework provides MANET node-wise scalability, high fidelity of software behaviors, and time-ordering accuracy. The design and development of this framework is presented, and an actual implementation based on the widely used Xen hypervisor system is described. Benchmarks with synthetic and actual applications are used to identify the benefits of our approach. The time inaccuracy of traditional emulation methods is demonstrated, in comparison with the accurate execution of our framework verified by theoretically correct results expected from analytical models of the same scenarios. In the largest high fidelity tests, we are able to perform virtual time-synchronized simulation of 64-node VM-based full-stack, actual software behaviors of MANETs containing a mix of static and mobile (unmanned airborne vehicle) nodes, hosted on a 32-core host, with full fidelity of unmodified ad-hoc routing protocols, unmodified application executables, and user-controllable physical layer effects including inter-device wireless signal strength, reachability, and connectivity.
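    A toy Python illustration of the least-virtual-time-first dispatching idea at the heart of such a framework (the real scheduler lives inside the Xen hypervisor; the names, quanta, and structure here are hypothetical):

        import heapq

        def run(vms, quantum=1.0, horizon=10.0):
            """Advance whichever guest holds the globally smallest virtual time."""
            heap = [(0.0, name) for name in vms]           # (virtual time, vm id)
            heapq.heapify(heap)
            while heap:
                vtime, name = heapq.heappop(heap)          # least virtual time first
                if vtime >= horizon:
                    continue                               # retire finished guests
                print(f"t={vtime:4.1f}  dispatch {name}")
                heapq.heappush(heap, (vtime + quantum, name))

        run(["static-node", "uav-node"])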

  20. Virtual machine-based simulation platform for mobile ad-hoc network-based cyber infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B.; Perumalla, Kalyan S.; Henz, Brian J.

    In modeling and simulating complex systems such as mobile ad-hoc networks (MANETs) in defense communications, it is a major challenge to reconcile multiple important considerations: the rapidity of unavoidable changes to the software (network layers and applications), the difficulty of modeling the critical, implementation-dependent behavioral effects, the need to sustain larger scale scenarios, and the desire for faster simulations. Here we present our approach in successfully reconciling them using a virtual time-synchronized virtual machine (VM)-based parallel execution framework that accurately lifts both the devices as well as the network communications to a virtual time plane while retaining full fidelity. At the core of our framework is a scheduling engine that operates at the level of a hypervisor scheduler, offering a unique ability to execute multi-core guest nodes over multi-core host nodes in an accurate, virtual time-synchronized manner. In contrast to other related approaches that suffer from either speed or accuracy issues, our framework provides MANET node-wise scalability, high fidelity of software behaviors, and time-ordering accuracy. The design and development of this framework is presented, and an actual implementation based on the widely used Xen hypervisor system is described. Benchmarks with synthetic and actual applications are used to identify the benefits of our approach. The time inaccuracy of traditional emulation methods is demonstrated, in comparison with the accurate execution of our framework verified by theoretically correct results expected from analytical models of the same scenarios. In the largest high fidelity tests, we are able to perform virtual time-synchronized simulation of 64-node VM-based full-stack, actual software behaviors of MANETs containing a mix of static and mobile (unmanned airborne vehicle) nodes, hosted on a 32-core host, with full fidelity of unmodified ad-hoc routing protocols, unmodified application executables, and user-controllable physical layer effects including inter-device wireless signal strength, reachability, and connectivity.

  1. Operation of a brain-computer interface walking simulator for individuals with spinal cord injury

    PubMed Central

    2013-01-01

    Background Spinal cord injury (SCI) can leave the affected individuals with paraparesis or paraplegia, thus rendering them unable to ambulate. Since there are currently no restorative treatments for this population, novel approaches such as brain-controlled prostheses have been sought. Our recent studies show that a brain-computer interface (BCI) can be used to control ambulation within a virtual reality environment (VRE), suggesting that a BCI-controlled lower extremity prosthesis for ambulation may be feasible. However, the operability of our BCI has not yet been tested in an SCI population. Methods Five participants with paraplegia or tetraplegia due to SCI underwent a 10-min training session in which they alternated between kinesthetic motor imagery (KMI) of idling and walking while their electroencephalogram (EEG) was recorded. Participants then performed a goal-oriented online task, where they utilized KMI to control the linear ambulation of an avatar while making 10 sequential stops at designated points within the VRE. Multiple online trials were performed in a single day, and this procedure was repeated across 5 experimental days. Results Classification accuracy of idling and walking was estimated offline and ranged from 60.5% (p = 0.0176) to 92.3% (p = 1.36×10−20) across participants and days. Offline analysis revealed that the activation of mid-frontal areas, mostly in the μ and low β bands, was the most consistent feature for differentiating between idling and walking KMI. In the online task, participants achieved an average performance of 7.4±2.3 successful stops in 273±51 sec. These performances were purposeful, i.e., significantly different from the random walk Monte Carlo simulations (p<0.01), and all but one participant achieved purposeful control within the first day of the experiments. Finally, all participants were able to maintain purposeful control throughout the study, and their online performances improved over time. Conclusions The results of this study demonstrate that SCI participants can purposefully operate a self-paced BCI walking simulator to complete a goal-oriented ambulation task. The operation of the proposed BCI system requires short training, is intuitive, and robust against participant-to-participant and day-to-day neurophysiological variations. These findings indicate that BCI-controlled lower extremity prostheses for gait rehabilitation or restoration after SCI may be feasible in the future. PMID:23866985
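    A sketch in Python of the kind of band-power pipeline the offline analysis suggests (μ and low-β power feeding a linear classifier); the sampling rate, epoch length, classifier choice, and all data are assumptions, not the study's actual pipeline:

        import numpy as np
        from scipy.signal import welch
        from sklearn.linear_model import LogisticRegression

        FS = 256  # Hz, assumed sampling rate

        def band_power(eeg, lo, hi):
            """Mean PSD in [lo, hi] Hz per channel (eeg: channels x samples)."""
            f, pxx = welch(eeg, fs=FS, nperseg=FS)
            band = (f >= lo) & (f <= hi)
            return pxx[:, band].mean(axis=1)

        def features(eeg):
            # mu (8-12 Hz) and low-beta (13-20 Hz) power, the bands reported
            # above as most discriminative between idling and walking KMI
            return np.concatenate([band_power(eeg, 8, 12), band_power(eeg, 13, 20)])

        rng = np.random.default_rng(0)
        epochs = rng.normal(size=(40, 8, 2 * FS))      # synthetic 2-s, 8-channel epochs
        X = np.array([features(e) for e in epochs])
        y = np.repeat([0, 1], 20)                      # 0 = idle KMI, 1 = walk KMI
        clf = LogisticRegression(max_iter=1000).fit(X, y)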

  2. Virtual Distances Methodology as Verification Technique for AACMMs with a Capacitive Sensor Based Indexed Metrology Platform

    PubMed Central

    Acero, Raquel; Santolaria, Jorge; Brau, Agustin; Pueo, Marcos

    2016-01-01

    This paper presents a new verification procedure for articulated arm coordinate measuring machines (AACMMs) together with a capacitive sensor-based indexed metrology platform (IMP) based on the generation of virtual reference distances. The novelty of this procedure lies in the possibility of creating virtual points, virtual gauges and virtual distances through the indexed metrology platform’s mathematical model, taking as a reference the measurements of a ball bar gauge located in a fixed position of the instrument’s working volume. The measurements are carried out with the AACMM assembled on the IMP from the six rotating positions of the platform. In this way, an unlimited number and types of reference distances could be created without the need of using a physical gauge, therefore optimizing the testing time, the number of gauge positions and the space needed in the calibration and verification procedures. Four evaluation methods are presented to assess the volumetric performance of the AACMM. The results obtained proved the suitability of the virtual distances methodology as an alternative procedure for verification of AACMMs using the indexed metrology platform. PMID:27869722
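    The geometric idea of the virtual distances can be sketched in a few lines of Python: transform the measured ball-bar endpoints through a model of each indexed platform position and take point-to-point distances (the ideal 60-degree rotations and coordinates below are illustrative; the real IMP transforms are calibrated from the capacitive sensor readings):

        import numpy as np

        def rot_z(theta):
            c, s = np.cos(theta), np.sin(theta)
            return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

        # Measured ball-bar sphere centres in platform position 0 (toy values, mm)
        p1, p2 = np.array([100.0, 0.0, 50.0]), np.array([400.0, 0.0, 50.0])

        virtual_points = [(rot_z(np.radians(60 * k)) @ p1,
                           rot_z(np.radians(60 * k)) @ p2) for k in range(6)]
        virtual_distances = [np.linalg.norm(b - a) for a, b in virtual_points]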

  3. Simulation of Physical Experiments in Immersive Virtual Environments

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Wasfy, Tamer M.

    2001-01-01

    An object-oriented event-driven immersive virtual environment is described for the creation of virtual labs (VLs) for simulating physical experiments. Discussion focuses on a number of aspects of the VLs, including interface devices, software objects, and various applications. The VLs interface with output devices, including immersive stereoscopic screen(s) and stereo speakers; and a variety of input devices, including body tracking (head and hands), haptic gloves, wand, joystick, mouse, microphone, and keyboard. The VL incorporates the following types of primitive software objects: interface objects, support objects, geometric entities, and finite elements. Each object encapsulates a set of properties, methods, and events that define its behavior, appearance, and functions. A container object allows grouping of several objects. Applications of the VLs include viewing the results of the physical experiment, viewing a computer simulation of the physical experiment, simulation of the experiment's procedure, computational steering, and remote control of the physical experiment. In addition, the VL can be used as a risk-free (safe) environment for training. The implementation of virtual structures testing machines, virtual wind tunnels, and a virtual acoustic testing facility is described.

  4. Virtual Distances Methodology as Verification Technique for AACMMs with a Capacitive Sensor Based Indexed Metrology Platform.

    PubMed

    Acero, Raquel; Santolaria, Jorge; Brau, Agustin; Pueo, Marcos

    2016-11-18

    This paper presents a new verification procedure for articulated arm coordinate measuring machines (AACMMs) together with a capacitive sensor-based indexed metrology platform (IMP) based on the generation of virtual reference distances. The novelty of this procedure lies in the possibility of creating virtual points, virtual gauges and virtual distances through the indexed metrology platform's mathematical model, taking as a reference the measurements of a ball bar gauge located in a fixed position of the instrument's working volume. The measurements are carried out with the AACMM assembled on the IMP from the six rotating positions of the platform. In this way, an unlimited number and types of reference distances could be created without the need of using a physical gauge, therefore optimizing the testing time, the number of gauge positions and the space needed in the calibration and verification procedures. Four evaluation methods are presented to assess the volumetric performance of the AACMM. The results obtained proved the suitability of the virtual distances methodology as an alternative procedure for verification of AACMMs using the indexed metrology platform.

  5. Game controller modification for fMRI hyperscanning experiments in a cooperative virtual reality environment.

    PubMed

    Trees, Jason; Snider, Joseph; Falahpour, Maryam; Guo, Nick; Lu, Kun; Johnson, Douglas C; Poizner, Howard; Liu, Thomas T

    2014-01-01

    Hyperscanning, an emerging technique in which data from multiple interacting subjects' brains are simultaneously recorded, has become an increasingly popular way to address complex topics, such as "theory of mind." However, most previous fMRI hyperscanning experiments have been limited to abstract social interactions (e.g. phone conversations). Our new method utilizes a virtual reality (VR) environment used for military training, Virtual Battlespace 2 (VBS2), to create realistic avatar-avatar interactions and cooperative tasks. To control the virtual avatar, subjects use an MRI-compatible PlayStation 3 game controller, modified by removing all extraneous metal components and replacing any necessary ones with 3D printed plastic models. Control of both scanners' operation is initiated by a VBS2 plugin to sync scanner time to the known time within the VR environment. Our modifications include: modification of the game controller to be MRI compatible; design of the VBS2 virtual environment for cooperative interactions; and syncing of two MRI machines for simultaneous recording.

  6. Virtualization of open-source secure web services to support data exchange in a pediatric critical care research network.

    PubMed

    Frey, Lewis J; Sward, Katherine A; Newth, Christopher J L; Khemani, Robinder G; Cryer, Martin E; Thelen, Julie L; Enriquez, Rene; Shaoyu, Su; Pollack, Murray M; Harrison, Rick E; Meert, Kathleen L; Berg, Robert A; Wessel, David L; Shanley, Thomas P; Dalton, Heidi; Carcillo, Joseph; Jenkins, Tammara L; Dean, J Michael

    2015-11-01

    To examine the feasibility of deploying a virtual web service for sharing data within a research network, and to evaluate the impact on data consistency and quality. Virtual machines (VMs) encapsulated an open-source, semantically and syntactically interoperable secure web service infrastructure along with a shadow database. The VMs were deployed to 8 Collaborative Pediatric Critical Care Research Network Clinical Centers. Virtual web services could be deployed in hours. The interoperability of the web services reduced format misalignment from 56% to 1% and demonstrated that 99% of the data consistently transferred using the data dictionary and 1% needed human curation. Use of virtualized open-source secure web service technology could enable direct electronic abstraction of data from hospital databases for research purposes. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  7. Virtual reality and planetary exploration

    NASA Technical Reports Server (NTRS)

    McGreevy, Michael W.

    1992-01-01

    Exploring planetary environments is central to NASA's missions and goals. A new computing technology called Virtual Reality has much to offer in support of planetary exploration. This technology augments and extends human presence within computer-generated and remote spatial environments. Historically, NASA has been a leader in many of the fundamental concepts and technologies that comprise Virtual Reality. Indeed, Ames Research Center has a central role in the development of this rapidly emerging approach to using computers. This groundbreaking work has inspired researchers in academia, industry, and the military. Further, NASA's leadership in this technology has spun off new businesses, has caught the attention of the international business community, and has generated several years of positive international media coverage. In the future, Virtual Reality technology will enable greatly improved human-machine interactions for more productive planetary surface exploration. Perhaps more importantly, Virtual Reality technology will democratize the experience of planetary exploration and thereby broaden understanding of, and support for, this historic enterprise.

  8. Virtual reality and planetary exploration

    NASA Astrophysics Data System (ADS)

    McGreevy, Michael W.

    Exploring planetary environments is central to NASA's missions and goals. A new computing technology called Virtual Reality has much to offer in support of planetary exploration. This technology augments and extends human presence within computer-generated and remote spatial environments. Historically, NASA has been a leader in many of the fundamental concepts and technologies that comprise Virtual Reality. Indeed, Ames Research Center has a central role in the development of this rapidly emerging approach to using computers. This groundbreaking work has inspired researchers in academia, industry, and the military. Further, NASA's leadership in this technology has spun off new businesses, has caught the attention of the international business community, and has generated several years of positive international media coverage. In the future, Virtual Reality technology will enable greatly improved human-machine interactions for more productive planetary surface exploration. Perhaps more importantly, Virtual Reality technology will democratize the experience of planetary exploration and thereby broaden understanding of, and support for, this historic enterprise.

  9. Game controller modification for fMRI hyperscanning experiments in a cooperative virtual reality environment

    PubMed Central

    Trees, Jason; Snider, Joseph; Falahpour, Maryam; Guo, Nick; Lu, Kun; Johnson, Douglas C.; Poizner, Howard; Liu, Thomas T.

    2014-01-01

    Hyperscanning, an emerging technique in which data from multiple interacting subjects’ brains are simultaneously recorded, has become an increasingly popular way to address complex topics, such as “theory of mind.” However, most previous fMRI hyperscanning experiments have been limited to abstract social interactions (e.g. phone conversations). Our new method utilizes a virtual reality (VR) environment used for military training, Virtual Battlespace 2 (VBS2), to create realistic avatar-avatar interactions and cooperative tasks. To control the virtual avatar, subjects use an MRI-compatible PlayStation 3 game controller, modified by removing all extraneous metal components and replacing any necessary ones with 3D printed plastic models. Control of both scanners’ operation is initiated by a VBS2 plugin to sync scanner time to the known time within the VR environment. Our modifications include: modification of the game controller to be MRI compatible; design of the VBS2 virtual environment for cooperative interactions; and syncing of two MRI machines for simultaneous recording. PMID:26150964

  10. Man, mind, and machine: the past and future of virtual reality simulation in neurologic surgery.

    PubMed

    Robison, R Aaron; Liu, Charles Y; Apuzzo, Michael L J

    2011-11-01

    To review virtual reality in neurosurgery, including the history of simulation and virtual reality and some of the current implementations; to examine some of the technical challenges involved; and to propose a potential paradigm for the development of virtual reality in neurosurgery going forward. A search was made on PubMed using key words surgical simulation, virtual reality, haptics, collision detection, and volumetric modeling to assess the current status of virtual reality in neurosurgery. Based on previous results, investigators extrapolated the possible integration of existing efforts and potential future directions. Simulation has a rich history in surgical training, and there are numerous currently existing applications and systems that involve virtual reality. All existing applications are limited to specific task-oriented functions and typically sacrifice visual realism for real-time interactivity or vice versa, owing to numerous technical challenges in rendering a virtual space in real time, including graphic and tissue modeling, collision detection, and direction of the haptic interface. With ongoing technical advancements in computer hardware and graphic and physical rendering, incremental or modular development of a fully immersive, multipurpose virtual reality neurosurgical simulator is feasible. The use of virtual reality in neurosurgery is predicted to change the nature of neurosurgical education, and to play an increased role in surgical rehearsal and the continuing education and credentialing of surgical practitioners. Copyright © 2011 Elsevier Inc. All rights reserved.

  11. Idle reduction assessment for the New York State Department of Transportation region 4 fleet.

    DOT National Transportation Integrated Search

    2015-03-01

    Energetics Incorporated conducted a study to evaluate the operational, economic, and environmental impacts of advanced technologies to reduce idling in : the New York State Department of Transportation (NYSDOT) Region 4 fleet without compromising fun...

  12. Effects of idle reduction technologies on real world fuel use and exhaust emissions of idling long-haul trucks.

    PubMed

    Frey, H Christopher; Kuo, Po-Yao; Villa, Charles

    2009-09-01

    Idling long-haul freight trucks may consume nearly one billion gallons of diesel fuel per year in the U.S. There is a need for real-world data by which to quantify avoided fuel use and emissions attributable to idle reduction techniques of auxiliary power units (APUs) and shore-power (SP). Field data were obtained from 20 APU-equipped and SP-compatible trucks observed during 2.8 million miles of travel in 42 states. Base engine fuel use and emission rates varied depending on ambient temperature. APU and SP energy use and emission rates varied depending on electrical load. APUs reduced idling fuel use and CO2 emissions for single and team drivers by 22 and 5% annually, respectively. SP offers greater reductions in energy use of 48% for single drivers, as well as in emissions, except for SO2. APUs were cost-effective for single drivers with a large number of APU usage hours per year, but not for team drivers or for single drivers with low APU utilization rates. The findings support more accurate assessments of avoided fuel use and emissions, and recommendations to encourage greater APU utilization by single drivers and to further develop infrastructure for SP.

  13. Alternative Method to Simulate a Sub-idle Engine Operation in Order to Synthesize Its Control System

    NASA Astrophysics Data System (ADS)

    Sukhovii, Sergii I.; Sirenko, Feliks F.; Yepifanov, Sergiy V.; Loboda, Igor

    2016-09-01

    The steady-state and transient engine performances in control systems are usually evaluated by applying thermodynamic engine models. Most models operate between the idle and maximum power points; only recently have they begun to address the sub-idle operating range. The lack of information about the component maps at sub-idle modes presents a challenging problem. A common method to cope with the problem is to extrapolate the component performances to the sub-idle range. Precise extrapolation is also a challenge. As a rule, many scientists concern themselves only with particular aspects of the problem, such as combustion chamber light-up or turbine operation while the combustion chamber is turned off. However, there are no reports about a model that considers all of these aspects and simulates the engine starting. The proposed paper addresses a new method to simulate the starting. The method substitutes the non-linear thermodynamic model with a linear dynamic model, which is supplemented with a simplified static model. The latter model is the set of direct relations between parameters that are used in the control algorithms instead of the commonly used component performances. Specifically, this model consists of simplified relations between the gas path parameters and the corrected rotational speed.
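    A minimal sketch in Python of the substitution the method describes: a linear dynamic model for the rotor-speed response paired with a simplified static relation between gas-path parameters (all coefficients and the output map are illustrative assumptions, not the paper's identified model):

        import numpy as np

        A, B = np.array([[-1.8]]), np.array([[0.9]])   # illustrative linear dynamics

        def step(x, u, dt=0.01):
            """One Euler step of dx/dt = A x + B u (x: speed deviation, u: fuel flow)."""
            return x + dt * (A @ x + B @ u)

        def egt_from_speed(n_corr):
            """Hypothetical static map from corrected speed to exhaust gas temperature."""
            return 400.0 + 3.2 * n_corr - 0.01 * n_corr ** 2

        x = np.zeros(1)
        for _ in range(100):                           # 1 s of simulated sub-idle spool-up
            x = step(x, np.array([5.0]))
        print(x, egt_from_speed(float(x[0])))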

  14. Characterization of Gas-Phase Organics Using Proton Transfer Reaction Time-of-Flight Mass Spectrometry: Aircraft Turbine Engines.

    PubMed

    Kilic, Dogushan; Brem, Benjamin T; Klein, Felix; El-Haddad, Imad; Durdina, Lukas; Rindlisbacher, Theo; Setyan, Ari; Huang, Rujin; Wang, Jing; Slowik, Jay G; Baltensperger, Urs; Prevot, Andre S H

    2017-04-04

    Nonmethane organic gas emissions (NMOGs) from in-service aircraft turbine engines were investigated using a proton transfer reaction time-of-flight mass spectrometer (PTR-ToF-MS) at an engine test facility at Zurich Airport, Switzerland. Experiments consisted of 60 exhaust samples for seven engine types (used in commercial aviation) from two manufacturers at thrust levels ranging from idle to takeoff. Emission indices (EIs) for more than 200 NMOGs were quantified, and the functional group fractions (including acids, carbonyls, aromatics, and aliphatics) were calculated to characterize the exhaust chemical composition at different engine operation modes. Total NMOG emissions were highest at idling with an average EI of 7.8 g/kg fuel and were a factor of ∼40 lower at takeoff thrust. The relative contribution of pure hydrocarbons (particularly aromatics and aliphatics) of the engine exhaust decreased with increasing thrust while the fraction of oxidized compounds, for example, acids and carbonyls increased. Exhaust chemical composition at idle was also affected by engine technology. Older engines emitted a higher fraction of nonoxidized NMOGs compared to newer ones. Idling conditions dominated ground level organic gas emissions. Based on the EI determined here, we estimate that reducing idle emissions could substantially improve air quality near airports.

  15. Secure Autonomous Automated Scheduling (SAAS). Rev. 1.1

    NASA Technical Reports Server (NTRS)

    Walke, Jon G.; Dikeman, Larry; Sage, Stephen P.; Miller, Eric M.

    2010-01-01

    This report describes network-centric operations, where a virtual mission operations center autonomously receives sensor triggers, and schedules space and ground assets using Internet-based technologies and service-oriented architectures. For proof-of-concept purposes, sensor triggers are received from the United States Geological Survey (USGS) to determine targets for space-based sensors. The Surrey Satellite Technology Limited (SSTL) Disaster Monitoring Constellation satellite, the UK-DMC, is used as the space-based sensor. The UK-DMC's availability is determined via machine-to-machine communications using SSTL's mission planning system. Access to/from the UK-DMC for tasking and sensor data is via SSTL's and Universal Space Network's (USN) ground assets. The availability and scheduling of USN's assets can also be performed autonomously via machine-to-machine communications. All communication, both on the ground and between ground and space, uses open Internet standards.

  16. Performance prediction: A case study using a multi-ring KSR-1 machine

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He; Zhu, Jianping

    1995-01-01

    While computers with tens of thousands of processors have successfully delivered high performance power for solving some of the so-called 'grand-challenge' applications, the notion of scalability is becoming an important metric in the evaluation of parallel machine architectures and algorithms. In this study, the prediction of scalability and its application are carefully investigated. A simple formula is presented to show the relation between scalability, single processor computing power, and degradation of parallelism. A case study is conducted on a multi-ring KSR-1 shared virtual memory machine. Experimental and theoretical results show that the influence of topology variation of an architecture is predictable. Therefore, the performance of an algorithm on a sophisticated, hierarchical architecture can be predicted and the best algorithm-machine combination can be selected for a given application.
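    The abstract does not reproduce the formula; one common isospeed formulation from the same authors' line of work (an assumption here, not necessarily the paper's exact expression) relates scalability to the work needed to hold per-processor speed constant:

        \psi(p, p') = \frac{p' \, W}{p \, W'}

    where W is the problem size solved on p processors and W' the size required on p' processors to keep the average speed per processor fixed; \psi = 1 indicates ideal scalability, and \psi < 1 quantifies degradation of parallelism.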

  17. Grids, virtualization, and clouds at Fermilab

    DOE PAGES

    Timm, S.; Chadwick, K.; Garzoglio, G.; ...

    2014-06-11

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service infrastructure that utilized Xen virtualization, LVS network routing and MySQL circular replication to deliver highly available services that offered significant performance, reliability and serviceability improvements. This deployment was further enhanced through the deployment of a distributed redundant network core architecture and the physical distribution of the systems that host the virtual machines across multiple buildings on the Fermilab Campus. In 2010, building on the experience pioneered by FermiGrid in delivering production services in a virtual infrastructure, the Computing Sector commissioned the FermiCloud, General Physics Computing Facility and Virtual Services projects to serve as platforms for support of scientific computing (FermiCloud & GPCF) and core computing (Virtual Services). Lastly, this work will present the evolution of the Fermilab Campus Grid, Virtualization and Cloud Computing infrastructure together with plans for the future.

  18. Grids, virtualization, and clouds at Fermilab

    NASA Astrophysics Data System (ADS)

    Timm, S.; Chadwick, K.; Garzoglio, G.; Noh, S.

    2014-06-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service infrastructure that utilized Xen virtualization, LVS network routing and MySQL circular replication to deliver highly available services that offered significant performance, reliability and serviceability improvements. This deployment was further enhanced through the deployment of a distributed redundant network core architecture and the physical distribution of the systems that host the virtual machines across multiple buildings on the Fermilab Campus. In 2010, building on the experience pioneered by FermiGrid in delivering production services in a virtual infrastructure, the Computing Sector commissioned the FermiCloud, General Physics Computing Facility and Virtual Services projects to serve as platforms for support of scientific computing (FermiCloud & GPCF) and core computing (Virtual Services). This work will present the evolution of the Fermilab Campus Grid, Virtualization and Cloud Computing infrastructure together with plans for the future.

  19. The Virtual Climate Data Server (vCDS): An iRODS-Based Data Management Software Appliance Supporting Climate Data Services and Virtualization-as-a-Service in the NASA Center for Climate Simulation

    NASA Technical Reports Server (NTRS)

    Schnase, John L.; Tamkin, Glenn S.; Ripley, W. David III; Stong, Savannah; Gill, Roger; Duffy, Daniel Q.

    2012-01-01

    Scientific data services are becoming an important part of the NASA Center for Climate Simulation's mission. Our technological response to this expanding role is built around the concept of a Virtual Climate Data Server (vCDS), repetitive provisioning, image-based deployment and distribution, and virtualization-as-a-service. The vCDS is an iRODS-based data server specialized to the needs of a particular data-centric application. We use RPM scripts to build vCDS images in our local computing environment, our local Virtual Machine Environment, NASA's Nebula Cloud Services, and Amazon's Elastic Compute Cloud. Once provisioned into one or more of these virtualized resource classes, vCDSs can use iRODS's federation capabilities to create an integrated ecosystem of managed collections that is scalable and adaptable to changing resource requirements. This approach enables platform- or software-as-a-service deployment of vCDS and allows the NCCS to offer virtualization-as-a-service: a capacity to respond in an agile way to new customer requests for data services.

  20. Integration of Openstack cloud resources in BES III computing cluster

    NASA Astrophysics Data System (ADS)

    Li, Haibo; Cheng, Yaodong; Huang, Qiulan; Cheng, Zhenjing; Shi, Jingyan

    2017-10-01

    Cloud computing provides a new technical means for data processing in high energy physics experiments. However, the resources of each queue are fixed and resource usage is static in traditional job management systems. In order to make it simple and transparent for physicists to use, we developed a virtual cluster system (vpmanager) to integrate IHEPCloud and different batch systems such as Torque and HTCondor. Vpmanager provides dynamic virtual machine scheduling according to the job queue. The BES III use case results show that resource efficiency is greatly improved.

  1. A location selection policy of live virtual machine migration for power saving and load balancing.

    PubMed

    Zhao, Jia; Ding, Yan; Xu, Gaochao; Hu, Liang; Dong, Yushuang; Fu, Xiaodong

    2013-01-01

    Green cloud data centers have become a research hotspot of virtualized cloud computing architecture. Load balancing has also been one of the most important goals in cloud data centers. Since live virtual machine (VM) migration technology is widely used and studied in cloud computing, we have focused on location selection (migration policy) of live VM migration for power saving and load balancing. We propose a novel approach, MOGA-LS, which is a heuristic and self-adaptive multiobjective optimization algorithm based on the improved genetic algorithm (GA). This paper has presented the specific design and implementation of MOGA-LS such as the design of the genetic operators, fitness values, and elitism. We have introduced the Pareto dominance theory and the simulated annealing (SA) idea into MOGA-LS and have presented the specific process to get the final solution, and thus, the whole approach achieves a long-term efficient optimization for power saving and load balancing. The experimental results demonstrate that MOGA-LS evidently reduces the total incremental power consumption and better protects the performance of VM migration and achieves the balancing of system load compared with the existing research. It makes the result of live VM migration more effective and meaningful.
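    A toy Python sketch of the multiobjective placement idea (Pareto dominance over a power/imbalance pair with a simulated-annealing-style acceptance); this is a simplified stand-in, not the paper's MOGA-LS, and all loads and power coefficients are invented:

        import random

        VMS, HOSTS = 8, 3
        LOAD = [random.uniform(0.1, 0.3) for _ in range(VMS)]

        def fitness(ind):
            per_host = [sum(LOAD[v] for v in range(VMS) if ind[v] == h)
                        for h in range(HOSTS)]
            power = sum(100 + 150 * u for u in per_host if u > 0)  # idle + dynamic watts
            imbalance = max(per_host) - min(per_host)
            return power, imbalance

        def dominates(a, b):
            return all(x <= y for x, y in zip(a, b)) and a != b

        def mutate(ind, temp):
            child = ind[:]
            child[random.randrange(VMS)] = random.randrange(HOSTS)
            # SA-style acceptance: occasionally keep non-dominating children
            if dominates(fitness(child), fitness(ind)) or random.random() < temp:
                return child
            return ind

        pop = [[random.randrange(HOSTS) for _ in range(VMS)] for _ in range(20)]
        temp = 1.0
        for _ in range(50):
            pop = [mutate(ind, temp) for ind in pop]
            temp *= 0.9                                            # cooling schedule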

  2. A Location Selection Policy of Live Virtual Machine Migration for Power Saving and Load Balancing

    PubMed Central

    Xu, Gaochao; Hu, Liang; Dong, Yushuang; Fu, Xiaodong

    2013-01-01

    Green cloud data centers have become a research hotspot of virtualized cloud computing architecture. Load balancing has also been one of the most important goals in cloud data centers. Since live virtual machine (VM) migration technology is widely used and studied in cloud computing, we have focused on location selection (migration policy) of live VM migration for power saving and load balancing. We propose a novel approach, MOGA-LS, which is a heuristic and self-adaptive multiobjective optimization algorithm based on the improved genetic algorithm (GA). This paper has presented the specific design and implementation of MOGA-LS such as the design of the genetic operators, fitness values, and elitism. We have introduced the Pareto dominance theory and the simulated annealing (SA) idea into MOGA-LS and have presented the specific process to get the final solution, and thus, the whole approach achieves a long-term efficient optimization for power saving and load balancing. The experimental results demonstrate that MOGA-LS evidently reduces the total incremental power consumption and better protects the performance of VM migration and achieves the balancing of system load compared with the existing research. It makes the result of live VM migration more effective and meaningful. PMID:24348165

  3. Dynamic virtual machine allocation policy in cloud computing complying with service level agreement using CloudSim

    NASA Astrophysics Data System (ADS)

    Aneri, Parikh; Sumathy, S.

    2017-11-01

    Cloud computing provides services over the internet, supplying application resources and data to users based on their demand. Cloud computing is built on a consumer-provider model: the cloud provider supplies resources that consumers can access to build their applications as needed. A cloud data center is a bulk of resources on a shared pool architecture for cloud users to access. Virtualization is the heart of the cloud computing model; it provides virtual machines with application-specific configurations, and those applications are free to choose their own configuration. On one hand there is a huge number of resources, and on the other hand a huge number of requests has to be served effectively. Therefore, the resource allocation policy and scheduling policy play a very important role in allocating and managing resources in this cloud computing model. This paper proposes a load balancing policy using the Hungarian algorithm. The Hungarian algorithm provides a dynamic load balancing policy with a monitor component. The monitor component helps to increase cloud resource utilization by managing the Hungarian algorithm, monitoring its state and altering its state based on artificial intelligence. CloudSim, used in this proposal, is an extensible toolkit that simulates the cloud computing environment.
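    The assignment step itself is standard; a short Python sketch using SciPy's Hungarian-method solver (the cost matrix is a hypothetical predicted-load cost, and a monitor loop would rebuild it as utilization changes):

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        cost = np.array([[4.0, 1.0, 3.0],     # cost[i][j]: placing request i on VM j
                         [2.0, 0.5, 5.0],
                         [3.0, 2.0, 2.0]])
        req_idx, vm_idx = linear_sum_assignment(cost)   # minimum-cost 1-to-1 assignment
        print(list(zip(req_idx, vm_idx)), cost[req_idx, vm_idx].sum())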

  4. Synthetic hardware performance analysis in virtualized cloud environment for healthcare organization.

    PubMed

    Tan, Chee-Heng; Teh, Ying-Wah

    2013-08-01

    The main obstacles to mass adoption of cloud computing for database operations in healthcare organizations are the data security and privacy issues. In this paper, it is shown that IT services, particularly hardware performance evaluation in virtual machines, can be accomplished effectively without IT personnel gaining access to actual data for diagnostic and remediation purposes. The proposed mechanisms utilized hypothetical data from the TPC-H benchmark to achieve two objectives. First, the underlying hardware performance and consistency is monitored via a control system, which is constructed using TPC-H queries. Second, a mechanism to construct stress-testing scenarios is envisaged in the host, using a single or a combination of TPC-H queries, so that the resource threshold point can be verified, i.e., whether the virtual machine is still capable of serving critical transactions at this constraining juncture. This threshold point uses server run queue size as an input parameter, and it serves two purposes. First, it provides the boundary threshold to the control system, so that periodic learning of the synthetic data sets for performance evaluation does not reach the host's constraint level. Second, when the host undergoes a hardware change, stress-testing scenarios are simulated in the host by loading up to this resource threshold level, for subsequent response time verification against real and critical transactions.

  5. A 3-D mixed-reality system for stereoscopic visualization of medical dataset.

    PubMed

    Ferrari, Vincenzo; Megali, Giuseppe; Troia, Elena; Pietrabissa, Andrea; Mosca, Franco

    2009-11-01

    We developed a simple, light, and cheap 3-D visualization device based on mixed reality that can be used by physicians to see preoperative radiological exams in a natural way. The system allows the user to see stereoscopic "augmented images," which are created by mixing 3-D virtual models of anatomies obtained by processing preoperative volumetric radiological images (computed tomography or MRI) with real patient live images, grabbed by means of cameras. The interface of the system consists of a head-mounted display equipped with two high-definition cameras. Cameras are mounted in correspondence with the user's eyes and allow one to grab live images of the patient with the same point of view as the user. The system does not use any external tracker to detect movements of the user or the patient. The movements of the user's head and the alignment of the virtual patient with the real one are done using machine vision methods applied on pairs of live images. Experimental results, concerning frame rate and alignment precision between virtual and real patient, demonstrate that machine vision methods used for localization are appropriate for the specific application and that systems based on stereoscopic mixed reality are feasible and can be proficiently adopted in clinical practice.

  6. PISCES: An environment for parallel scientific computation

    NASA Technical Reports Server (NTRS)

    Pratt, T. W.

    1985-01-01

    The parallel implementation of scientific computing environment (PISCES) is a project to provide high-level programming environments for parallel MIMD computers. Pisces 1, the first of these environments, is a FORTRAN 77 based environment which runs under the UNIX operating system. The Pisces 1 user programs in Pisces FORTRAN, an extension of FORTRAN 77 for parallel processing. The major emphasis in the Pisces 1 design is in providing a carefully specified virtual machine that defines the run-time environment within which Pisces FORTRAN programs are executed. Each implementation then provides the same virtual machine, regardless of differences in the underlying architecture. The design is intended to be portable to a variety of architectures. Currently Pisces 1 is implemented on a network of Apollo workstations and on a DEC VAX uniprocessor via simulation of the task level parallelism. An implementation for the Flexible Computing Corp. FLEX/32 is under construction. An introduction to the Pisces 1 virtual computer and the FORTRAN 77 extensions is presented. An example of an algorithm for the iterative solution of a system of equations is given. The most notable features of the design are the provision for several granularities of parallelism in programs and the provision of a window mechanism for distributed access to large arrays of data.

  7. Suitability of digital camcorders for virtual reality image data capture

    NASA Astrophysics Data System (ADS)

    D'Apuzzo, Nicola; Maas, Hans-Gerd

    1998-12-01

    Today's consumer market digital camcorders offer features which make them appear quite interesting devices for virtual reality data capture. The paper compares a digital camcorder with an analogue camcorder and a machine vision type CCD camera and discusses the suitability of these three cameras for virtual reality applications. Besides the discussion of technical features of the cameras, this includes a detailed accuracy test in order to define the range of applications. In combination with the cameras, three different framegrabbers are tested. The geometric accuracy potential of all three cameras turned out to be surprisingly large, and no problems were noticed in the radiometric performance. On the other hand, some disadvantages have to be reported: from the photogrammetrists point of view, the major disadvantage of most camcorders is the missing possibility to synchronize multiple devices, limiting the suitability for 3-D motion data capture. Moreover, the standard video format contains interlacing, which is also undesirable for all applications dealing with moving objects or moving cameras. Further disadvantages are computer interfaces with functionality, which is still suboptimal. While custom-made solutions to these problems are probably rather expensive (and will make potential users turn back to machine vision like equipment), this functionality could probably be included by the manufacturers at almost zero cost.

  8. Cloud Computing Applications in Support of Earth Science Activities at Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew L.; Limaye, Ashutosh S.; Srikishen, Jayanthi

    2011-01-01

    Currently, the NASA Nebula Cloud Computing Platform is available to Agency personnel in a pre-release status as the system undergoes a formal operational readiness review. Over the past year, two projects within the Earth Science Office at NASA Marshall Space Flight Center have been investigating the performance and value of Nebula's "Infrastructure as a Service", or "IaaS" concept and applying cloud computing concepts to advance their respective mission goals. The Short-term Prediction Research and Transition (SPoRT) Center focuses on the transition of unique NASA satellite observations and weather forecasting capabilities for use within the operational forecasting community through partnerships with NOAA's National Weather Service (NWS). SPoRT has evaluated the performance of the Weather Research and Forecasting (WRF) model on virtual machines deployed within Nebula and used Nebula instances to simulate local forecasts in support of regional forecast studies of interest to select NWS forecast offices. In addition to weather forecasting applications, rapidly deployable Nebula virtual machines have supported the processing of high resolution NASA satellite imagery to support disaster assessment following the historic severe weather and tornado outbreak of April 27, 2011. Other modeling and satellite analysis activities are underway in support of NASA's SERVIR program, which integrates satellite observations, ground-based data and forecast models to monitor environmental change and improve disaster response in Central America, the Caribbean, Africa, and the Himalayas. Leveraging SPoRT's experience, SERVIR is working to establish a real-time weather forecasting model for Central America. Other modeling efforts include hydrologic forecasts for Kenya, driven by NASA satellite observations and reanalysis data sets provided by the broader meteorological community. Forecast modeling efforts are supplemented by short-term forecasts of convective initiation, determined by geostationary satellite observations processed on virtual machines powered by Nebula.

  9. Development and experimental test of support vector machines virtual screening method for searching Src inhibitors from large compound libraries.

    PubMed

    Han, Bucong; Ma, Xiaohua; Zhao, Ruiying; Zhang, Jingxian; Wei, Xiaona; Liu, Xianghui; Liu, Xin; Zhang, Cunlong; Tan, Chunyan; Jiang, Yuyang; Chen, Yuzong

    2012-11-23

    Src plays various roles in tumour progression, invasion, metastasis, angiogenesis and survival. It is one of the multiple targets of multi-target kinase inhibitors in clinical use and trials for the treatment of leukemia and other cancers. These successes and appearances of drug resistance in some patients have raised significant interest and efforts in discovering new Src inhibitors. Various in-silico methods have been used in some of these efforts. It is desirable to explore additional in-silico methods, particularly those capable of searching large compound libraries at high yields and reduced false-hit rates. We evaluated support vector machines (SVM) as virtual screening tools for searching Src inhibitors from large compound libraries. SVM trained and tested by 1,703 inhibitors and 63,318 putative non-inhibitors correctly identified 93.53%–95.01% of inhibitors and 99.81%–99.90% of non-inhibitors in 5-fold cross validation studies. SVM trained by 1,703 inhibitors reported before 2011 and 63,318 putative non-inhibitors correctly identified 70.45% of the 44 inhibitors reported since 2011, and predicted as inhibitors 44,843 (0.33%) of 13.56M PubChem, 1,496 (0.89%) of 168K MDDR, and 719 (7.73%) of 9,305 MDDR compounds similar to the known inhibitors. SVM showed comparable yield and reduced false hit rates in searching large compound libraries compared to the similarity-based and other machine-learning VS methods developed from the same set of training compounds and molecular descriptors. We tested three virtual hits of the same novel scaffold from in-house chemical libraries not reported as Src inhibitors, one of which showed moderate activity.
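    The shape of the screening setup can be sketched in a few lines with scikit-learn; the descriptors and labels below are synthetic stand-ins for the molecular descriptors and inhibitor/non-inhibitor labels of the study, and the kernel and C value are assumptions:

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(2000, 60))                  # placeholder descriptor vectors
        y = (X[:, :3].sum(axis=1) > 0.8).astype(int)     # imbalanced synthetic labels
        clf = SVC(kernel="rbf", C=10.0, class_weight="balanced")
        print(cross_val_score(clf, X, y, cv=5, scoring="balanced_accuracy").mean())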

  10. Optimized Hypervisor Scheduler for Parallel Discrete Event Simulations on Virtual Machine Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B; Perumalla, Kalyan S

    2013-01-01

    With the advent of virtual machine (VM)-based platforms for parallel computing, it is now possible to execute parallel discrete event simulations (PDES) over multiple virtual machines, in contrast to executing in native mode directly over hardware as is traditionally done over the past decades. While mature VM-based parallel systems now offer new, compelling benefits such as serviceability, dynamic reconfigurability and overall cost effectiveness, the runtime performance of parallel applications can be significantly affected. In particular, most VM-based platforms are optimized for general workloads, but PDES execution exhibits unique dynamics significantly different from other workloads. Here we first present results from experiments that highlight the gross deterioration of the runtime performance of VM-based PDES simulations when executed using traditional VM schedulers, quantitatively showing the bad scaling properties of the scheduler as the number of VMs is increased. The mismatch is fundamental in nature in the sense that any fairness-based VM scheduler implementation would exhibit this mismatch with PDES runs. We also present a new scheduler optimized specifically for PDES applications, and describe its design and implementation. Experimental results obtained from running PDES benchmarks (PHOLD and vehicular traffic simulations) over VMs show over an order of magnitude improvement in the run time of the PDES-optimized scheduler relative to the regular VM scheduler, with over 20× reduction in run time of simulations using up to 64 VMs. The observations and results are timely in the context of emerging systems such as cloud platforms and VM-based high performance computing installations, highlighting to the community the need for PDES-specific support, and the feasibility of significantly reducing the runtime overhead for scalable PDES on VM platforms.

  11. Security model for VM in cloud

    NASA Astrophysics Data System (ADS)

    Kanaparti, Venkataramana; Naveen K., R.; Rajani, S.; Padmvathamma, M.; Anitha, C.

    2013-03-01

    Cloud computing is a new approach that emerged to meet the ever-increasing demand for computing resources and to reduce operational costs and capital expenditure for IT services. As this new way of computation allows data and applications to be stored away from one's own corporate server, it brings more issues in security such as virtualization security, distributed computing, application security, identity management, access control and authentication. Even though virtualization forms the basis for cloud computing, it poses many threats in securing the cloud. As most security threats lie at the virtualization layer in the cloud, we propose this new Security Model for Virtual Machine in Cloud (SMVC), in which every process is authenticated by a Trusted-Agent (TA) in the hypervisor as well as in the VM. Our proposed model is designed to withstand attacks by unauthorized processes that pose a threat to applications related to data mining, OLAP systems, and image processing, which require huge resources in the cloud deployed on one or more VMs.

  12. The effective use of virtualization for selection of data centers in a cloud computing environment

    NASA Astrophysics Data System (ADS)

    Kumar, B. Santhosh; Parthiban, Latha

    2018-04-01

    Data centers are places that consist of networks of remote servers to store, access and process data. Cloud computing is a technology where users worldwide submit their tasks and service providers direct the requests to the data centers responsible for execution of the tasks. The servers in the data centers need to employ virtualization so that multiple tasks can be executed simultaneously. In this paper we propose an algorithm for data center selection based on the energy of the virtual machines created in each server. The virtualization energy in each server is calculated, and the total energy of the data center is obtained by the summation of the individual server energies. The tasks submitted are routed to the data center with the least energy consumption, which results in minimizing the operational expenses of a service provider.
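    The selection rule reduces to a few lines; a Python sketch with invented energy figures (per-VM energies are summed per server and per data center, and the request is routed to the minimum):

        datacenters = {                      # data center -> servers -> VM energies
            "dc-east": [[120.0, 95.0], [80.0]],
            "dc-west": [[60.0, 70.0, 40.0]],
        }
        total = {dc: sum(sum(server) for server in servers)
                 for dc, servers in datacenters.items()}
        target = min(total, key=total.get)   # least total virtualization energy
        print(target, total)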

  13. Using virtualization to protect the proprietary material science applications in volunteer computing

    NASA Astrophysics Data System (ADS)

    Khrapov, Nikolay P.; Rozen, Valery V.; Samtsevich, Artem I.; Posypkin, Mikhail A.; Sukhomlin, Vladimir A.; Oganov, Artem R.

    2018-04-01

    USPEX is world-leading software for computational materials design. In essence, USPEX splits a simulation into a large number of workunits that can be processed independently. This scheme ideally fits the desktop grid architecture. Workunit processing is done by a simulation package aimed at energy minimization. Many such packages are proprietary and must be protected from unauthorized access when running on a volunteer PC. In this paper we present an original approach based on virtualization. In a nutshell, the proprietary code and input files are stored in an encrypted folder and run inside a virtual machine image that is also password protected. The paper describes this approach in detail and discusses its application in the USPEX@home volunteer project.

  14. A nested virtualization tool for information technology practical education.

    PubMed

    Pérez, Carlos; Orduña, Juan M; Soriano, Francisco R

    2016-01-01

    A common problem of some information technology courses is the difficulty of providing practical exercises. Although different approaches have been followed to solve this problem, it remains an open issue, especially in security and computer network courses. This paper proposes NETinVM, a tool based on nested virtualization that includes a fully functional lab, comprising several computers and networks, in a single virtual machine. It also analyzes and evaluates how the tool has been used in different teaching environments. The results show that it makes it possible to perform demos, labs and practical exercises, greatly appreciated by the students, that would otherwise be unfeasible. Its portability also allows classroom activities, as well as the students' autonomous work, to be reproduced.

  15. Virtual hospital--a computer-aided platform to evaluate the sense of direction.

    PubMed

    Jiang, Ching-Fen; Li, Yuan-Shyi

    2007-01-01

    This paper presents a computer-aided platform, named Virtual Hospital (VH), to evaluate the wayfinding ability that is found to be impaired in elderly people with early dementia. The development of the VH takes advantage of virtual reality technology to make the evaluation of the sense of direction more convenient and accurate than the conventional approach. A pilot study was carried out to test its feasibility in differentiating the sense of direction between genders. The results, with significant differences in response time (p<0.05) and pointing error (p<0.01) between genders, suggest the potential of the VH for clinical use. Further improvement of the human-machine interface is necessary to make it easy for geriatric patients to use.

  16. SU-E-T-573: Normal Tissue Dose Effect of Prescription Isodose Level Selection in Lung Stereotactic Body Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Q; Lei, Y; Zheng, D

    Purpose: To evaluate dose fall-off in normal tissue for lung stereotactic body radiation therapy (SBRT) cases planned with different prescription isodose levels (IDLs), by calculating the dose dropping speed (DDS) in normal tissue on plans computed with both Pencil Beam (PB) and Monte Carlo (MC) algorithms. Methods: The DDS was calculated on 32 plans for 8 lung SBRT patients. For each patient, 4 dynamic conformal arc plans were individually optimized for prescription IDLs ranging from 60% to 90% of the maximum dose, in 10% increments, to conformally cover the PTV. Eighty non-overlapping rind structures, each of 1 mm thickness, were created layer by layer from each PTV surface. The average dose in each rind was calculated and fitted with a double exponential function (DEF) of the distance from the PTV surface, which models the steep- and moderate-slope portions of the average dose curve in normal tissue. The parameter characterizing the steep portion of the average dose curve in the DEF quantifies the DDS in the immediate normal tissue receiving high dose. Provided that the prescription dose covers the whole PTV, a greater DDS indicates better normal tissue sparing. The DDS was compared among plans with different prescription IDLs, for plans computed with both PB and MC algorithms. Results: For all patients, the DDS was lowest for the 90% prescription IDL and reached its highest plateau for 60% or 70% prescriptions. The trend was the same for both PB and MC plans. Conclusion: Among the range of prescription IDLs accepted by lung SBRT RTOG protocols, prescriptions to the 60% and 70% IDLs were found to provide the best normal tissue sparing.
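
    The fitting step can be illustrated as follows. The study's exact parameterization is not given in the abstract, so this sketch assumes the common form D(x) = A*exp(-a*x) + B*exp(-b*x), where x is distance from the PTV surface and the larger decay constant plays the role of the DDS; the data are synthetic.

      # Hedged sketch of a double-exponential fit to per-rind average doses.
      import numpy as np
      from scipy.optimize import curve_fit

      def double_exp(x, A, a, B, b):
          return A * np.exp(-a * x) + B * np.exp(-b * x)

      # Synthetic average doses for 80 rinds of 1 mm thickness.
      rng = np.random.default_rng(1)
      x = np.arange(1, 81)  # mm from the PTV surface
      dose = double_exp(x, 60.0, 0.5, 20.0, 0.02) + rng.normal(0, 0.3, x.size)

      popt, _ = curve_fit(double_exp, x, dose, p0=[50, 0.3, 10, 0.01], maxfev=5000)
      A, a, B, b = popt
      dds = max(a, b)  # steep-portion decay constant: greater means faster fall-off
      print(f"fitted DDS ~ {dds:.3f} per mm")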

  17. Optimization of the prescription isodose line for Gamma Knife radiosurgery using the shot within shot technique.

    PubMed

    Johnson, Perry B; Monterroso, Maria I; Yang, Fei; Mellon, Eric

    2017-11-25

    This work explores how the choice of prescription isodose line (IDL) affects the dose gradient, target coverage, and treatment time for Gamma Knife radiosurgery when a smaller shot is encompassed within a larger shot at the same stereotactic coordinates (shot within shot technique). Beam profiles for the 4, 8, and 16 mm collimator settings were extracted from the treatment planning system and characterized using Gaussian fits. The characterized data were used to create over 10,000 shot within shot configurations by systematically changing collimator weighting and choice of prescription IDL. Each configuration was quantified in terms of the dose gradient, target coverage, and beam-on time. By analyzing these configurations, it was found that there are regions of overlap in target size where a higher prescription IDL provides equivalent dose fall-off to a plan prescribed at the 50% IDL. Furthermore, the data indicate that treatment times within these regions can be reduced by up to 40%. An optimization strategy was devised to realize these gains. The strategy was tested for seven patients treated for 1-4 brain metastases (20 lesions total). For a single collimator setting, the gradient in the axial plane was steepest when prescribed to the 56-63% (4 mm), 62-70% (8 mm), and 77-84% (16 mm) IDL, respectively. Through utilization of the optimization technique, beam-on time was reduced by more than 15% in 16/20 lesions. The volume of normal brain receiving 12 Gy or above also decreased in many cases, and in only one instance increased by more than 0.5 cm³. This work demonstrates that IDL optimization using the shot within shot technique can reduce treatment times without degrading treatment plan quality.
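
    The shot-within-shot trade-off can be sketched numerically. The fragment below is a toy model, not the authors' characterization: each collimator's radial profile is approximated by a Gaussian with assumed widths and weights, the weighted sum is renormalized, and the radius and steepness of a chosen prescription IDL are read off.

      # Toy model of two concentric shots; widths/weights are assumptions.
      import numpy as np

      r = np.linspace(0, 20, 2001)  # radial distance, mm

      def shot_profile(r, sigma):
          """Gaussian approximation of one collimator's radial dose profile."""
          return np.exp(-r**2 / (2 * sigma**2))

      def combined(r, w4, w16, sigma4=2.0, sigma16=8.0):
          """Weighted 4 mm + 16 mm shot at the same coordinates, renormalized."""
          d = w4 * shot_profile(r, sigma4) + w16 * shot_profile(r, sigma16)
          return d / d.max()

      dose = combined(r, w4=0.7, w16=0.3)
      idl = 0.6  # prescription isodose level (fraction of max dose)
      i = np.argmin(np.abs(dose - idl))   # radius where dose falls to the IDL
      gradient = np.gradient(dose, r)[i]  # steepness of fall-off at that radius
      print(f"IDL radius ~ {r[i]:.1f} mm, gradient ~ {gradient:.3f} per mm")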

  18. Human Factors Consideration for the Design of Collaborative Machine Assistants

    NASA Astrophysics Data System (ADS)

    Park, Sung; Fisk, Arthur D.; Rogers, Wendy A.

    Recent improvements in technology have facilitated the use of robots and virtual humans not only in entertainment and engineering but also in the military (Hill et al., 2003), healthcare (Pollack et al., 2002), and education domains (Johnson, Rickel, & Lester, 2000). As active partners of humans, such machine assistants can take the form of a robot or a graphical representation and serve the role of a financial assistant, a health manager, or even a social partner. As a result, interactive technologies are becoming an integral component of people's everyday lives.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCaskey, Alexander J.

    There is a lack of state-of-the-art quantum computing simulation software that scales on heterogeneous systems like Titan. The Tensor Network Quantum Virtual Machine (TNQVM) provides a quantum simulator that leverages a distributed network of GPUs to simulate quantum circuits, drawing on recent results from tensor network theory.

  20. Hardware Support for Malware Defense and End-to-End Trust

    DTIC Science & Technology

    2017-02-01

    ... (IoT) sensors and actuators, mobile devices and servers; cloud-based, stand-alone, and traditional mainframes. The prototype developed demonstrated... virtual machines. For mobile platforms we developed and prototyped an architecture supporting separation of personalities on the same platform...

  1. 77 FR 9239 - California State Motor Vehicle and Nonroad Engine Pollution Control Standards; Truck Idling...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-16

    ... Pollution Control Standards; Truck Idling Requirements; Notice of Decision AGENCY: Environmental Protection... to meet its serious air pollution problems. Likewise, EPA has consistently recognized that California... and high concentrations of automobiles, create serious pollution problems." [37] Furthermore, no...

  2. Characterization of PTO and Idle Behavior for Utility Vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duran, Adam W.; Konan, Arnaud M.; Miller, Eric S.

    This report presents the results of analyses performed on utility vehicle data composed primarily of aerial lift bucket trucks sampled from the National Renewable Energy Laboratory's Fleet DNA database to characterize power takeoff (PTO) and idle operating behavior for utility trucks. Two major data sources were examined in this study: a 75-vehicle sample of Odyne electric PTO (ePTO)-equipped vehicles drawn from multiple fleets spread across the United States and 10 conventional PTO-equipped Pacific Gas and Electric fleet vehicles operating in California. Novel data mining approaches were developed to identify PTO and idle operating states for each of the datasets using telematics and controller area network/onboard diagnostics data channels. These methods were applied to the individual datasets and aggregated to develop utilization curves and distributions describing PTO and idle behavior in both absolute and relative operating terms. This report also includes background information on the source vehicles, development of the analysis methodology, and conclusions regarding the study's findings.
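
    The state-identification idea can be sketched as a simple rule-based labeler over telematics/CAN samples. The field names and thresholds below are assumptions for illustration, not the actual Fleet DNA channels or the report's method.

      # Hypothetical labeler for PTO/idle/driving states from CAN-style samples.
      def classify_state(sample):
          """Label one telematics sample as 'pto', 'idle', 'driving', or 'off'."""
          engine_on = sample["engine_rpm"] > 300        # assumed idle-speed floor
          stationary = sample["vehicle_speed_kph"] < 1.0
          if engine_on and sample["pto_engaged"]:
              return "pto"
          if engine_on and stationary:
              return "idle"
          return "driving" if engine_on else "off"

      log = [
          {"engine_rpm": 650, "vehicle_speed_kph": 0.0, "pto_engaged": True},
          {"engine_rpm": 700, "vehicle_speed_kph": 0.0, "pto_engaged": False},
          {"engine_rpm": 1500, "vehicle_speed_kph": 60.0, "pto_engaged": False},
      ]
      print([classify_state(s) for s in log])  # ['pto', 'idle', 'driving']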

  3. Request queues for interactive clients in a shared file system of a parallel computing system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin

    Interactive requests are processed from users of log-in nodes. A metadata server node is provided for use in a file system shared by one or more interactive nodes and one or more batch nodes. The interactive nodes comprise interactive clients to execute interactive tasks and the batch nodes execute batch jobs for one or more batch clients. The metadata server node comprises a virtual machine monitor; an interactive client proxy to store metadata requests from the interactive clients in an interactive client queue; a batch client proxy to store metadata requests from the batch clients in a batch client queue; and a metadata server to store the metadata requests from the interactive client queue and the batch client queue in a metadata queue based on an allocation of resources by the virtual machine monitor. The metadata requests can be prioritized, for example, based on one or more of a predefined policy and predefined rules.
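
    The queueing scheme can be illustrated with a small sketch. The code below is an assumed interpretation, not the patented implementation: two proxy queues are drained into a single metadata queue according to a resource share that stands in for the virtual machine monitor's allocation.

      # Assumed semantics: merge interactive and batch metadata requests into
      # one metadata queue, favoring interactive requests by a given share.
      from collections import deque

      def merge_requests(interactive_q, batch_q, interactive_share=0.75, slots=4):
          """Drain both proxy queues into one metadata queue, giving roughly
          interactive_share of every `slots` positions to interactive requests."""
          metadata_q = deque()
          take_interactive = max(1, round(interactive_share * slots))
          while interactive_q or batch_q:
              for _ in range(take_interactive):
                  if interactive_q:
                      metadata_q.append(interactive_q.popleft())
              for _ in range(slots - take_interactive):
                  if batch_q:
                      metadata_q.append(batch_q.popleft())
          return metadata_q

      iq = deque(f"i{n}" for n in range(4))
      bq = deque(f"b{n}" for n in range(4))
      print(list(merge_requests(iq, bq)))  # interactive requests favored 3:1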

  4. Geometric dimension model of virtual astronaut body for ergonomic analysis of man-machine space system

    NASA Astrophysics Data System (ADS)

    Qianxiang, Zhou

    2012-07-01

    It is very important to clarify the geometric characteristics of human body segments and to constitute an analysis model for ergonomic design and the application of ergonomic virtual humans. Typical anthropometric data of 1122 Chinese men aged 20-35 years were collected using a three-dimensional laser scanner for the human body. According to the correlations between different parameters, curve fits were made between seven trunk parameters and ten body parameters with the SPSS 16.0 software. It can be concluded that hip circumference and shoulder breadth are the most important parameters in the models, as these two parameters have high correlation with the other parameters of the human body. By comparison with conventional regression curves, the present regression equations based on the seven trunk parameters forecast the geometric dimensions of the head, neck, height and the four limbs with higher precision. Therefore, the result is greatly valuable for ergonomic design and analysis of man-machine systems, and will be very useful for astronaut body model analysis and application.
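
    As an illustration of the kind of regression described (the study's actual equations are not reproduced in the abstract), the following sketch fits a linear model predicting one body dimension from the two dominant trunk parameters, hip circumference and shoulder breadth, on made-up data.

      # Ordinary least squares on synthetic anthropometric data; all numbers
      # are invented for illustration.
      import numpy as np

      # Columns: hip circumference (cm), shoulder breadth (cm); target: arm length (cm).
      X = np.array([[92.0, 43.0], [98.5, 45.2], [88.0, 41.5], [105.0, 47.0], [95.3, 44.1]])
      y = np.array([74.2, 76.8, 72.9, 79.5, 75.6])

      # Add an intercept column and solve the least-squares problem.
      A = np.hstack([X, np.ones((X.shape[0], 1))])
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)
      print("coefficients (hip, shoulder, intercept):", np.round(coef, 3))

      pred = float(np.dot([100.0, 45.0, 1.0], coef))
      print(f"predicted arm length for hip=100, shoulder=45: {pred:.1f} cm")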

  5. A Virtual Geant4 Environment

    NASA Astrophysics Data System (ADS)

    Iwai, Go

    2015-12-01

    We describe the development of an environment for Geant4 consisting of an application and data that provide users with a more efficient way to access Geant4 applications without having to download and build the software locally. The environment is platform neutral and offers the users near-real time performance. In addition, the environment consists of data and Geant4 libraries built using low-level virtual machine (LLVM) tools which can produce bitcode that can be embedded in HTML and accessed via a browser. The bitcode is downloaded to the local machine via the browser and can then be configured by the user. This approach provides a way of minimising the risk of leaking potentially sensitive data used to construct the Geant4 model and application in the medical domain for treatment planning. We describe several applications that have used this approach and compare their performance with that of native applications. We also describe potential user communities that could benefit from this approach.

  6. The core legion object model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, M.; Grimshaw, A.

    1996-12-31

    The Legion project at the University of Virginia is an architecture for designing and building system services that provide the illusion of a single virtual machine to users: a virtual machine that provides secure shared object and shared name spaces, application-adjustable fault tolerance, improved response time, and greater throughput. Legion targets wide-area assemblies of workstations, supercomputers, and parallel supercomputers, and tackles problems not solved by existing workstation-based parallel processing tools; the system will enable fault tolerance, wide-area parallel processing, interoperability, heterogeneity, a single global name space, protection, security, efficient scheduling, and comprehensive resource management. This paper describes the core Legion object model, which specifies the composition and functionality of Legion's core objects: those objects that cooperate to create, locate, manage, and remove objects in the Legion system. The object model facilitates a flexible, extensible implementation, provides a single global name space, grants site autonomy to participating organizations, and scales to millions of sites and trillions of objects.

  7. An Extended Proof-Carrying Code Framework for Security Enforcement

    NASA Astrophysics Data System (ADS)

    Pirzadeh, Heidar; Dubé, Danny; Hamou-Lhadj, Abdelwahab

    The rapid growth of the Internet has resulted in increased attention to security to protect users from being victims of security threats. In this paper, we focus on security mechanisms that are based on Proof-Carrying Code (PCC) techniques. In a PCC system, a code producer sends code along with its safety proof to the consumer. The consumer executes the code only if the proof is valid. Although PCC has been shown to be a useful security framework, it suffers from the sheer size of typical proofs: proofs of even small programs can be considerably large. In this paper, we propose an extended PCC framework (EPCC) in which, instead of the proof, a proof generator for the program in question is transmitted. This framework enables the execution of the proof generator and the recovery of the proof on the consumer's side in a secure manner, using a newly created virtual machine called the VEP (Virtual Machine for Extended PCC).

  8. Development of a HIPAA-compliant environment for translational research data and analytics.

    PubMed

    Bradford, Wayne; Hurdle, John F; LaSalle, Bernie; Facelli, Julio C

    2014-01-01

    High-performance computing centers (HPC) traditionally have far less restrictive privacy management policies than those encountered in healthcare. We show how an HPC can be re-engineered to accommodate clinical data while retaining its utility in computationally intensive tasks such as data mining, machine learning, and statistics. We also discuss deploying protected virtual machines. A critical planning step was to engage the university's information security operations and the information security and privacy office. Access to the environment requires a double authentication mechanism. The first level of authentication requires access to the university's virtual private network and the second requires that the users be listed in the HPC network information service directory. The physical hardware resides in a data center with controlled room access. All employees of the HPC and its users take the university's local Health Insurance Portability and Accountability Act training series. In the first 3 years, researcher count has increased from 6 to 58.

  9. New Web Server - the Java Version of Tempest - Produced

    NASA Technical Reports Server (NTRS)

    York, David W.; Ponyik, Joseph G.

    2000-01-01

    A new software design and development effort has produced a Java (Sun Microsystems, Inc.) version of the award-winning Tempest software (refs. 1 and 2). In 1999, the Embedded Web Technology (EWT) team received a prestigious R&D 100 Award for Tempest, Java Version. In this article, "Tempest" will refer to the Java version of Tempest, a World Wide Web server for desktop or embedded systems. Tempest was designed at the NASA Glenn Research Center at Lewis Field to run on any platform for which a Java Virtual Machine (JVM, Sun Microsystems, Inc.) exists. The JVM acts as a translator between the native code of the platform and the byte code of Tempest, which is compiled in Java. These byte code files are Java executables with a ".class" extension. Multiple byte code files can be zipped together as a "*.jar" file for more efficient transmission over the Internet. Today's popular browsers, such as Netscape (Netscape Communications Corporation) and Internet Explorer (Microsoft Corporation) have built-in Virtual Machines to display Java applets.

  10. Machine learning-based assessment tool for imbalance and vestibular dysfunction with virtual reality rehabilitation system.

    PubMed

    Yeh, Shih-Ching; Huang, Ming-Chun; Wang, Pa-Chun; Fang, Te-Yung; Su, Mu-Chun; Tsai, Po-Yi; Rizzo, Albert

    2014-10-01

    Dizziness is a major consequence of imbalance and vestibular dysfunction. Compared to surgery and drug treatments, balance training is non-invasive and more desirable. However, training exercises are usually tedious, and existing assessment tools are insufficient for rapidly diagnosing a patient's severity. An interactive virtual reality (VR) game-based rehabilitation program that adopted Cawthorne-Cooksey exercises, together with a sensor-based measuring system, was introduced. To verify the therapeutic effect, a clinical experiment with 48 patients and 36 normal subjects was conducted. Quantified balance indices were measured and analyzed by statistical tools and a Support Vector Machine (SVM) classifier. In terms of balance indices, patients who completed the training process improved, and the difference between normal subjects and patients is clear. Further analysis with the SVM classifier shows that recognizing the differences between patients and normal subjects with good accuracy is feasible, and these results can be used to evaluate patients' severity and make rapid assessments. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
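
    To make the classification step concrete, here is a hedged sketch using scikit-learn; the study's actual balance indices, kernel choice, and data are not given in the abstract, so the two synthetic features below are placeholders.

      # SVM classification of synthetic balance indices (placeholder data).
      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      # Synthetic balance indices: patients (label 1) sway more than controls (0).
      controls = rng.normal(loc=[0.5, 0.4], scale=0.10, size=(36, 2))
      patients = rng.normal(loc=[0.9, 0.8], scale=0.15, size=(48, 2))
      X = np.vstack([controls, patients])
      y = np.array([0] * 36 + [1] * 48)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = SVC(kernel="rbf").fit(X_tr, y_tr)
      print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")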

  11. High-Performance Data Analysis Tools for Sun-Earth Connection Missions

    NASA Technical Reports Server (NTRS)

    Messmer, Peter

    2011-01-01

    The data analysis tool of choice for many Sun-Earth Connection missions is the Interactive Data Language (IDL) by ITT VIS. The increasing amount of data produced by these missions and the increasing complexity of image processing algorithms requires access to higher computing power. Parallel computing is a cost-effective way to increase the speed of computation, but algorithms oftentimes have to be modified to take advantage of parallel systems. Enhancing IDL to work on clusters gives scientists access to increased performance in a familiar programming environment. The goal of this project was to enable IDL applications to benefit from both computing clusters as well as graphics processing units (GPUs) for accelerating data analysis tasks. The tool suite developed in this project enables scientists now to solve demanding data analysis problems in IDL that previously required specialized software, and it allows them to be solved orders of magnitude faster than on conventional PCs. The tool suite consists of three components: (1) TaskDL, a software tool that simplifies the creation and management of task farms, collections of tasks that can be processed independently and require only small amounts of data communication; (2) mpiDL, a tool that allows IDL developers to use the Message Passing Interface (MPI) inside IDL for problems that require large amounts of data to be exchanged among multiple processors; and (3) GPULib, a tool that simplifies the use of GPUs as mathematical coprocessors from within IDL. mpiDL is unique in its support for the full MPI standard and its support of a broad range of MPI implementations. GPULib is unique in enabling users to take advantage of an inexpensive piece of hardware, possibly already installed in their computer, and achieve orders of magnitude faster execution time for numerically complex algorithms. TaskDL enables the simple setup and management of task farms on compute clusters. The products developed in this project have the potential to interact, so one can build a cluster of PCs, each equipped with a GPU, and use mpiDL to communicate between the nodes and GPULib to accelerate the computations on each node.
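
    mpiDL's IDL-side API is not reproduced here; as a language-neutral analogy, this mpi4py sketch shows the task-farm pattern that TaskDL and mpiDL target: a root process scatters independent work items to workers and gathers their results. The chunking scheme and workload are invented for illustration.

      # Task-farm analogy in Python/mpi4py (not the mpiDL API itself).
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      # Root prepares one chunk of work per rank (here: rows of numbers to sum).
      work = [list(range(r * 4, r * 4 + 4)) for r in range(size)] if rank == 0 else None
      chunk = comm.scatter(work, root=0)      # each rank gets its own chunk
      partial = sum(chunk)                    # independent, communication-free task
      results = comm.gather(partial, root=0)  # root collects all partial results

      if rank == 0:
          print("per-rank sums:", results)
      # Run with: mpiexec -n 4 python taskfarm.py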

  12. Autoplot: a Browser for Science Data on the Web

    NASA Astrophysics Data System (ADS)

    Faden, J.; Weigel, R. S.; West, E. E.; Merka, J.

    2008-12-01

    Autoplot (www.autoplot.org) is software for plotting data from many different sources and in many different file formats. Data from CDF, CEF, FITS, NetCDF, and OpenDAP can be plotted, along with many other sources such as ASCII tables and Excel spreadsheets. This is done by adapting these various data formats and APIs into a common data model that borrows from the netCDF and CDF data models. Autoplot uses a web browser metaphor to simplify use. The user specifies a parameter URL, for example a CDF file accessible via http with a parameter name appended, and the file resource is downloaded and the parameter is rendered in a scientifically meaningful way. When data span multiple files, the user can use a file name template in the URL to aggregate (combine) a set of remote files. So the problem of aggregating data across file boundaries is handled on the client side, allowing simple web servers to be used. The das2 graphics library provides rich controls for exploring the data. Scripting is supported through Python, providing not just programmatic control but also a way to calculate new parameters in a language that will look familiar to IDL and Matlab users. Autoplot is Java-based software, and will run on most computers without a burdensome installation process. It can also be used as an applet or as a servlet that serves static images. Autoplot was developed as part of the Virtual Radiation Belt Observatory (ViRBO) project, and is also being used for the Virtual Magnetospheric Observatory (VMO). It is expected that this flexible, general-purpose plotting tool will be useful for allowing a data provider to add instant visualization capabilities to a directory of files, or for general use in the Virtual Observatory environment.

  13. The effect of ambient conditions on carbon monoxide emissions from an idling gas turbine combustor. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Subramanian, A. K.

    1977-01-01

    A test program employing a gas turbine combustor is outlined, the results of which quantify the effects of changes in ambient temperature and humidity on carbon monoxide emissions at simulated idle operating conditions. A comparison of the experimental results with analytical results generated by a kinetic model of the combustion process, reflecting changing ambient conditions, is given. It is demonstrated that for the complete range of possible ambient variations, significant changes do occur in the amount of carbon monoxide emitted by a gas turbine at idle, and that the analytical model is reasonably successful in predicting these changes.

  14. Parallel computation for biological sequence comparison: comparing a portable model to the native model for the Intel Hypercube.

    PubMed

    Nadkarni, P M; Miller, P L

    1991-01-01

    A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations.

  15. 14 CFR 23.77 - Balked landing.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... of more than 6,000 pounds maximum weight and each normal, utility, and acrobatic category turbine... movement of the power controls from minimum flight-idle position; (2) The landing gear extended; (3) The... of movement of the power controls from the minimum flight idle position; (2) Landing gear extended...

  16. 40 CFR 86.1537 - Idle test run.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Heavy-Duty Engines, New Methanol-Fueled, Natural Gas-Fueled, and Liquefied Petroleum Gas-Fueled Diesel-Cycle Heavy-Duty Engines, New Otto-Cycle Light-Duty Trucks, and New Methanol-Fueled, Natural Gas-Fueled... dilute sampling. (6) For bag sampling, sample idle emissions long enough to obtain a sufficient bag...

  17. 40 CFR 1033.115 - Other requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... mode and non-hotel mode. (g) Idle controls. All new locomotives must be equipped with automatic engine... that will achieve equivalent idle control. (4) See § 1033.201 for provisions that allow you to obtain a... 1033.115 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS...

  18. 40 CFR 1033.115 - Other requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... mode and non-hotel mode. (g) Idle controls. All new locomotives must be equipped with automatic engine... that will achieve equivalent idle control. (4) See § 1033.201 for provisions that allow you to obtain a... 1033.115 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS...

  19. 40 CFR 1033.115 - Other requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... mode and non-hotel mode. (g) Idle controls. All new locomotives must be equipped with automatic engine... that will achieve equivalent idle control. (4) See § 1033.201 for provisions that allow you to obtain a... 1033.115 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS...

  20. 40 CFR 1033.115 - Other requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... mode and non-hotel mode. (g) Idle controls. All new locomotives must be equipped with automatic engine... that will achieve equivalent idle control. (4) See § 1033.201 for provisions that allow you to obtain a... 1033.115 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS...

  1. Learners' Dictionaries: State of the Art. Anthology Series 23.

    ERIC Educational Resources Information Center

    Tickoo, Makhan L., Ed.

    A collection of articles on dictionaries for advanced second language learners includes essays on the past, present, and future of learners' dictionaries; alternative dictionaries; dictionary construction; and dictionaries and their users. Titles include: "Idle Thoughts of an Idle Fellow; or Vaticinations on the Learners' Dictionary"…

  2. 40 CFR 85.2218 - Preconditioned idle test-EPA 91.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Preconditioned idle test-EPA 91. 85.2218 Section 85.2218 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF AIR POLLUTION FROM MOBILE SOURCES Emission Control System Performance Warranty Short...

  3. 40 CFR 85.2213 - Idle test-EPA 91.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Idle test-EPA 91. 85.2213 Section 85.2213 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF AIR POLLUTION FROM MOBILE SOURCES Emission Control System Performance Warranty Short Tests § 85...

  4. 40 CFR 85.2212 - Idle test-EPA 81.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Idle test-EPA 81. 85.2212 Section 85.2212 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF AIR POLLUTION FROM MOBILE SOURCES Emission Control System Performance Warranty Short Tests § 85...

  5. Essential Power Systems Workshop - OEM Perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bill Gouse

    2001-12-12

    In California, idling is largely done for climate control. This suggests that climate control devices alone could be used to reduce idling. Line-haul truck drivers surveyed require an average of 4-6 kW of power for the stereo, CB radio, light, refrigerator, and climate control found in the average truck. More power may be necessary for peak power demands. The amount of time line-haul trucks reported to have stopped is between 25 and 30 hours per week. It was not possible to accurately determine from the pilot survey the location, purpose, and duration of idling. Consulting driver logs or electronically monitoring trucks could yield more accurate data, including seasonal and geographic differences. Truck drivers were receptive to idling alternatives. Two-thirds of truck drivers surveyed support a program to reduce idling. Two-thirds of drivers reported they would purchase idling reduction technologies if the technology yielded a payback period of two years or less. Willingness to purchase auxiliary power units appears to be higher for owner-operators than for company drivers. With a 2-year payback period, 82% of owner-operators would be willing to buy an idle-reducing device, while 63% of company drivers thought their company would do the same. Contact with companies is necessary to discern whether this difference between owner-operators and companies is real or simply due to the perception of the company drivers. Truck stops appear to be a much more attractive option for electrification than rest areas, by a 48% to 21% margin. Much of this discrepancy may be due to perceived safety problems with rest areas. This survey did not properly differentiate between using these areas for breaks or overnight stays. The next, full survey will quantify where the truck drivers are staying overnight, where they go for breaks, and the duration of time they spend at each place. The nationwide survey, which is in progress, will indicate how applicable the results are to the US in general. In addition to the survey, we believe data loggers and focus groups will be necessary to collect the idling duration and location data necessary to compare auxiliary power units to truck stop electrification. Focus groups are recommended to better understand the driver response to APUs and electrification. The appearance and perception of the new systems will need further clarification, which could be accomplished with a demonstration for truck drivers.

  6. "Tactic": Traffic Aware Cloud for Tiered Infrastructure Consolidation

    ERIC Educational Resources Information Center

    Sangpetch, Akkarit

    2013-01-01

    Large-scale enterprise applications are deployed as distributed applications. These applications consist of many inter-connected components with heterogeneous roles and complex dependencies. Each component typically consumes 5-15% of the server capacity. Deploying each component as a separate virtual machine (VM) allows us to consolidate the…

  7. Airlift Operation Modeling Using Discrete Event Simulation (DES)

    DTIC Science & Technology

    2009-12-01

    Front-matter excerpts list the development tools and abbreviations: Java; Simkit; JRE, Java Runtime Environment; JVM, Java Virtual Machine; lbs, pounds; LAM, Load Allocation Mode; LRM, Landing Spot Reassignment Mode; LEGO, Listener Event... SOFTWARE DEVELOPMENT ENVIRONMENT: the following are the software tools and development environment used for constructing the models: Java.

  8. The Warsaw Ghetto: A Shattered Window on the Holocaust.

    ERIC Educational Resources Information Center

    Burstin, Barbara Stern

    1980-01-01

    Reviews literature about the Warsaw ghetto uprising in April, 1943, in which Jewish resistance fighters fought to the last against the Nazi war machine. The author notes that history textbooks at both high school and college levels give virtually no mention of the revolt. (Author/KC)

  9. Networked Resources.

    ERIC Educational Resources Information Center

    Nickerson, Gord

    1991-01-01

    Describes the use and applications of the communications program Telnet for remote log-in, a basic interactive resource-sharing service that enables users to connect to any machine on the Internet and conduct a session. The Virtual Terminal, the central component of Telnet, is also described, as well as problems with terminals, services…

  10. Lean Green Machines

    ERIC Educational Resources Information Center

    Villano, Matt

    2011-01-01

    Colleges and universities have been among the leaders nationwide in adopting green initiatives, partly due to their demographics, but also because they are facing their own budget pressures. Virtualization has become the poster child of many schools' efforts, because it provides significant bang for the buck. However, more and more higher…

  11. Unstable behaviour of RPT when testing turbine characteristics in the laboratory

    NASA Astrophysics Data System (ADS)

    Nielsen, T. K.; Fjørtoft Svarstad, M.

    2014-03-01

    A reversible pump turbine is a machine that can operate in three modes: pumping mode, turbine mode, and phase-compensating mode (idle speed). Reversible pump turbines have increasing importance for regulation purposes in obtaining power balance in electric power systems. Especially in grids dominated by thermal energy, reversible pump turbines improve the overall power regulating ability. Increased use of renewables (wind, wave and tidal power plants) will further demand better regulation ability of the traditional water power systems, enhancing the use of reversible pump turbines. A reversible pump turbine is known for having incredibly steep speed-flow characteristics: as the speed increases, the flow decreases more than that of a Francis turbine with the same specific speed. The steep characteristics might cause severe stability problems in turbine mode of operation. Stability at idle speed is a necessity for phasing the generator in to the electric grid. In the design process of a power plant, system dynamic simulations must be performed in order to check the system stability. The turbine characteristics will have to be modelled with certain accuracy even before one knows the exact turbine design and has measured characteristics. A representation of the RPT characteristics for system dynamic simulation purposes is suggested and compared with measured characteristics. The model shows good agreement with RPT characteristics measured in the Waterpower Laboratory. Because of the S-shaped characteristics, there is a stability issue involved when measuring these characteristics: without special measures, it is impossible to achieve stable conditions in certain operational points. The paper discusses the mechanism of using a throttle to achieve system stability, even if the turbine characteristics imply instability.

  12. Alternative Fuels Data Center

    Science.gov Websites

    Idle Reduction Equipment Excise Tax Exemption. Qualified on-board idle reduction devices and advanced insulation are exempt from the federal excise tax imposed on the retail sale of heavy-duty highway... SmartWay Technology Program Federal Excise Tax Exemption website. The exemption applies to equipment that...

  13. 41 CFR 101-25.109-1 - Identification of idle equipment.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 41 Public Contracts and Property Management 2 2013-07-01 2012-07-01 true Identification of idle equipment. 101-25.109-1 Section 101-25.109-1 Public Contracts and Property Management Federal Property Management Regulations System FEDERAL PROPERTY MANAGEMENT REGULATIONS SUPPLY AND PROCUREMENT 25-GENERAL 25.1...

  14. 41 CFR 101-25.109-1 - Identification of idle equipment.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 41 Public Contracts and Property Management 2 2012-07-01 2012-07-01 false Identification of idle equipment. 101-25.109-1 Section 101-25.109-1 Public Contracts and Property Management Federal Property Management Regulations System FEDERAL PROPERTY MANAGEMENT REGULATIONS SUPPLY AND PROCUREMENT 25-GENERAL 25.1...

  15. 41 CFR 101-25.109-1 - Identification of idle equipment.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 41 Public Contracts and Property Management 2 2014-07-01 2012-07-01 true Identification of idle equipment. 101-25.109-1 Section 101-25.109-1 Public Contracts and Property Management Federal Property Management Regulations System FEDERAL PROPERTY MANAGEMENT REGULATIONS SUPPLY AND PROCUREMENT 25-GENERAL 25.1...

  16. 41 CFR 101-25.109-1 - Identification of idle equipment.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 41 Public Contracts and Property Management 2 2011-07-01 2007-07-01 true Identification of idle equipment. 101-25.109-1 Section 101-25.109-1 Public Contracts and Property Management Federal Property Management Regulations System FEDERAL PROPERTY MANAGEMENT REGULATIONS SUPPLY AND PROCUREMENT 25-GENERAL 25.1...

  17. 40 CFR 86.1506 - Equipment required and specifications; overview.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... specifications appear in §§ 86.1509 through 86.1511. (2) Fuel and analytical tests. Fuel requirements for idle... Test Procedures § 86.1506 Equipment required and specifications; overview. (a) This subpart contains procedures for performing idle exhaust emission tests on Otto-cycle heavy-duty engines and Otto-cycle light...

  18. 40 CFR 86.1506 - Equipment required and specifications; overview.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... specifications appear in §§ 86.1509 through 86.1511. (2) Fuel and analytical tests. Fuel requirements for idle... Test Procedures § 86.1506 Equipment required and specifications; overview. (a) This subpart contains procedures for performing idle exhaust emission tests on Otto-cycle heavy-duty engines and Otto-cycle light...

  19. 40 CFR 86.1506 - Equipment required and specifications; overview.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... specifications appear in §§ 86.1509 through 86.1511. (2) Fuel and analytical tests. Fuel requirements for idle... Test Procedures § 86.1506 Equipment required and specifications; overview. (a) This subpart contains procedures for performing idle exhaust emission tests on Otto-cycle heavy-duty engines and Otto-cycle light...

  20. 41 CFR 109-25.109-1 - Identification of idle equipment.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false Identification of idle equipment. 109-25.109-1 Section 109-25.109-1 Public Contracts and Property Management Federal Property Management Regulations System (Continued) DEPARTMENT OF ENERGY PROPERTY MANAGEMENT REGULATIONS SUPPLY AND...
