Sample records for virtual machine technology

  1. Modeling and simulation of five-axis virtual machine based on NX

    NASA Astrophysics Data System (ADS)

    Li, Xiaoda; Zhan, Xianghui

    2018-04-01

    Virtual technology is playing a growing role in the machinery manufacturing industry. In this paper, the Siemens NX software is used to model a virtual CNC machine tool, and the parameters of the virtual machine are defined according to the actual parameters of the machine tool so that virtual simulation can be carried out without loss of accuracy. The paper describes how to use the machine builder of the CAM module to define the machine's kinematic chain and components. Simulation with the virtual machine can alert users to tool collisions and overcutting during machining, and can evaluate and forecast the soundness of the planned process.

  2. Virtual Manufacturing Techniques Designed and Applied to Manufacturing Activities in the Manufacturing Integration and Technology Branch

    NASA Technical Reports Server (NTRS)

    Shearrow, Charles A.

    1999-01-01

    One of the identified goals of EM3 is to implement virtual manufacturing by the time the year 2000 has ended. To realize this goal of a true virtual manufacturing enterprise, the initial development of a machinability database and the infrastructure must be completed. This will consist of the containment of the existing EM-NET problems and developing machine, tooling, and common materials databases. To integrate the virtual manufacturing enterprise with normal day-to-day operations, a parallel virtual manufacturing machinability database, virtual manufacturing database, virtual manufacturing paradigm, implementation/integration procedure, and testable verification models must be constructed. Common and virtual machinability databases will include the four distinct areas of machine tools, available tooling, common machine tool loads, and a materials database. The machine tools database will include the machine envelope, special machine attachments, tooling capacity, location within NASA-JSC or with a contractor, and availability/scheduling. The tooling database will include available standard tooling, custom in-house tooling, tool properties, and availability. The common materials database will include materials thickness ranges, strengths, types, and their availability. The virtual manufacturing databases will consist of virtual machines and virtual tooling directly related to the common and machinability databases. The items to be completed are the design and construction of the machinability databases, the virtual manufacturing paradigm for NASA-JSC, an implementation timeline, a VNC model of one bridge mill, and troubleshooting of existing software and hardware problems with EN4NET. The final step of this virtual manufacturing project will be to integrate other production sites into the databases, positioning JSC's EM3 as a clearing house for NASA's digital manufacturing needs and creating a true virtual manufacturing enterprise.

  3. Managing virtual machines with Vac and Vcycle

    NASA Astrophysics Data System (ADS)

    McNab, A.; Love, P.; MacMahon, E.

    2015-12-01

    We compare the Vac and Vcycle virtual machine lifecycle managers and our experiences in providing production job execution services for ATLAS, CMS, LHCb, and the GridPP VO at sites in the UK, France and at CERN. In both the Vac and Vcycle systems, the virtual machines are created outside of the experiment's job submission and pilot framework. In the case of Vac, a daemon runs on each physical host which manages a pool of virtual machines on that host, and a peer-to-peer UDP protocol is used to achieve the desired target shares between experiments across the site. In the case of Vcycle, a daemon manages a pool of virtual machines on an Infrastructure-as-a-Service cloud system such as OpenStack, and has within itself enough information to create the types of virtual machines to achieve the desired target shares. Both systems allow unused shares for one experiment to be temporarily taken up by other experiments with work to be done. The virtual machine lifecycle is managed with a minimum of information, gathered from the virtual machine creation mechanism (such as libvirt or OpenStack) and using the proposed Machine/Job Features API from WLCG. We demonstrate that the same virtual machine designs can be used to run production jobs on Vac and Vcycle/OpenStack sites for ATLAS, CMS, LHCb, and GridPP, and that these technologies allow sites to be operated in a reliable and robust way.
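
    To illustrate the target-share behaviour described above, the following minimal sketch (function and variable names are hypothetical, not taken from the Vac or Vcycle code) picks which experiment's VM a lifecycle manager might start next; experiments with no work are skipped so their share can be temporarily taken up by others.

    ```python
    # Hypothetical sketch of target-share balancing for a VM lifecycle manager.
    def next_vm_to_start(target_shares, running, has_work):
        """target_shares: {experiment: fraction}; running: {experiment: count};
        has_work: {experiment: bool}. Experiments without work are excluded,
        letting others temporarily take up the unused share."""
        candidates = {e: s for e, s in target_shares.items() if has_work.get(e)}
        if not candidates:
            return None
        total_running = sum(running.get(e, 0) for e in candidates) or 1
        norm = sum(candidates.values())
        # Start a VM for the experiment furthest below its renormalised target.
        deficit = {e: s / norm - running.get(e, 0) / total_running
                   for e, s in candidates.items()}
        return max(deficit, key=deficit.get)

    if __name__ == "__main__":
        shares = {"atlas": 0.4, "cms": 0.3, "lhcb": 0.2, "gridpp": 0.1}
        running = {"atlas": 5, "cms": 2, "lhcb": 1, "gridpp": 0}
        work = {"atlas": True, "cms": True, "lhcb": True, "gridpp": False}
        print(next_vm_to_start(shares, running, work))  # "lhcb" for these inputs
    ```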

  4. Human Machine Interfaces for Teleoperators and Virtual Environments

    NASA Technical Reports Server (NTRS)

    Durlach, Nathaniel I. (Compiler); Sheridan, Thomas B. (Compiler); Ellis, Stephen R. (Compiler)

    1991-01-01

    In Mar. 1990, a meeting organized around the general theme of teleoperation research into virtual environment display technology was conducted. This is a collection of conference-related fragments that will give a glimpse of the potential of the following fields and how they interplay: sensorimotor performance; human-machine interfaces; teleoperation; virtual environments; performance measurement and evaluation methods; and design principles and predictive models.

  5. Will Anything Useful Come Out of Virtual Reality? Examination of a Naval Application

    DTIC Science & Technology

    1993-05-01

    The term virtual reality can encompass varying meanings, but some generally accepted attributes of a virtual environment are that it is immersive...technology, but at present there are few practical applications which are utilizing the broad range of virtual reality technology. This paper will discuss an...Operability, operator functions, Virtual reality, Man-machine interface, Decision aids/decision making, Decision support, ASW.

  6. Feasibility of Virtual Machine and Cloud Computing Technologies for High Performance Computing

    DTIC Science & Technology

    2014-05-01

    Hat Enterprise Linux SaaS software as a service VM virtual machine vNUMA virtual non-uniform memory access WRF weather research and forecasting...previously mentioned in Chapter I Section B1 of this paper, which is used to run the weather research and forecasting (WRF) model in their experiments...against a VMware virtualization solution of WRF. The experiment consisted of running WRF in a standard configuration between the D-VTM and VMware while

  7. An imperialist competitive algorithm for virtual machine placement in cloud computing

    NASA Astrophysics Data System (ADS)

    Jamali, Shahram; Malektaji, Sepideh; Analoui, Morteza

    2017-05-01

    Cloud computing, the recently emerged revolution in IT industry, is empowered by virtualisation technology. In this paradigm, the user's applications run over some virtual machines (VMs). The process of selecting proper physical machines to host these virtual machines is called virtual machine placement. It plays an important role on resource utilisation and power efficiency of cloud computing environment. In this paper, we propose an imperialist competitive-based algorithm for the virtual machine placement problem called ICA-VMPLC. The base optimisation algorithm is chosen to be ICA because of its ease in neighbourhood movement, good convergence rate and suitable terminology. The proposed algorithm investigates search space in a unique manner to efficiently obtain optimal placement solution that simultaneously minimises power consumption and total resource wastage. Its final solution performance is compared with several existing methods such as grouping genetic and ant colony-based algorithms as well as bin packing heuristic. The simulation results show that the proposed method is superior to other tested algorithms in terms of power consumption, resource wastage, CPU usage efficiency and memory usage efficiency.
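
    The kind of two-part objective such placement heuristics minimise can be sketched as follows; the linear power model, constants and weighting below are assumptions for illustration, not the paper's exact formulation of ICA-VMPLC.

    ```python
    # Illustrative cost function for VM placement: power consumption plus
    # resource wastage. Constants and the wastage measure are assumptions.
    def placement_cost(placement, hosts, vms,
                       p_idle=160.0, p_busy=250.0, wastage_weight=1.0):
        """placement: {vm_id: host_id}; hosts: {host_id: (cpu_cap, mem_cap)};
        vms: {vm_id: (cpu_req, mem_req)}."""
        power, wastage = 0.0, 0.0
        for host_id, (cpu_cap, mem_cap) in hosts.items():
            vm_ids = [v for v, h in placement.items() if h == host_id]
            if not vm_ids:
                continue  # hosts with no VMs are assumed switched off
            cpu_used = sum(vms[v][0] for v in vm_ids) / cpu_cap
            mem_used = sum(vms[v][1] for v in vm_ids) / mem_cap
            # Linear power model between idle and fully loaded.
            power += p_idle + (p_busy - p_idle) * cpu_used
            # Imbalance between leftover CPU and memory counts as wastage.
            wastage += abs((1 - cpu_used) - (1 - mem_used))
        return power + wastage_weight * wastage
    ```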

  8. System-Level Virtualization Research at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, Stephen L; Vallee, Geoffroy R; Naughton, III, Thomas J

    2010-01-01

    System-level virtualization, originally a technique for effectively sharing what were then considered large computing resources, subsequently faded from the spotlight as individual workstations gained in popularity with a one machine, one user approach; today it is enjoying a rebirth. One reason for this resurgence is that the simple workstation has grown in capability to rival that of anything available in the past. Thus, computing centers are again looking at the price/performance benefit of sharing that single computing box via server consolidation. However, industry is only concentrating on the benefits of using virtualization for server consolidation (enterprise computing) whereas our interest is in leveraging virtualization to advance high-performance computing (HPC). While these two interests may appear to be orthogonal (one consolidating multiple applications and users on a single machine, the other requiring all the power from many machines to be dedicated solely to its purpose), we propose that virtualization does provide attractive capabilities that may be exploited to the benefit of HPC interests. This raises two fundamental questions: is the concept of virtualization (a machine sharing technology) really suitable for HPC, and if so, how does one go about leveraging these virtualization capabilities for the benefit of HPC? To address these questions, this document presents ongoing studies on the usage of system-level virtualization in a HPC context. These studies include an analysis of the benefits of system-level virtualization for HPC, a presentation of research efforts based on virtualization for system availability, and a presentation of research efforts for the management of virtual systems. The basis for this document was material presented by Stephen L. Scott at the Collaborative and Grid Computing Technologies meeting held in Cancun, Mexico on April 12-14, 2007.

  9. Modeling and Analysis Compute Environments, Utilizing Virtualization Technology in the Climate and Earth Systems Science domain

    NASA Astrophysics Data System (ADS)

    Michaelis, A.; Nemani, R. R.; Wang, W.; Votava, P.; Hashimoto, H.

    2010-12-01

    Given the increasing complexity of climate modeling and analysis tools, it is often difficult and expensive to build or recreate an exact replica of the software compute environment used in past experiments. With the recent development of new technologies for hardware virtualization, an opportunity exists to create full modeling, analysis and compute environments that are “archiveable”, transferable and may be easily shared amongst a scientific community or presented to a bureaucratic body if the need arises. By encapsulating an entire modeling and analysis environment in a virtual machine image, others may quickly gain access to the fully built system used in past experiments, potentially easing the task and reducing the costs of reproducing and verifying past results produced by other researchers. Moreover, these virtual machine images may be used as a pedagogical tool for others that are interested in performing an academic exercise but don't yet possess the broad expertise required. We built two virtual machine images, one with the Community Earth System Model (CESM) and one with the Weather Research and Forecasting (WRF) model, then ran several small experiments to assess the feasibility, performance overhead costs, reusability, and transferability. We present a list of the pros and cons as well as lessons learned from utilizing virtualization technology in the climate and earth systems modeling domain.

  10. Migrating EO/IR sensors to cloud-based infrastructure as service architectures

    NASA Astrophysics Data System (ADS)

    Berglie, Stephen T.; Webster, Steven; May, Christopher M.

    2014-06-01

    The Night Vision Image Generator (NVIG), a product of US Army RDECOM CERDEC NVESD, is a visualization tool used widely throughout Army simulation environments to provide fully attributed, synthesized, full motion video using physics-based sensor and environmental effects. The NVIG relies heavily on contemporary hardware-based acceleration and GPU processing techniques, which push the envelope of both enterprise and commodity-level hypervisor support for providing virtual machines with direct access to hardware resources. The NVIG has successfully been integrated into fully virtual environments where system architectures leverage cloud-based technologies to various extents in order to streamline infrastructure and service management. This paper details the challenges presented to engineers seeking to migrate GPU-bound processes, such as the NVIG, to virtual machines and, ultimately, cloud-based IaaS architectures. In addition, it presents the path that led to success for the NVIG. A brief overview of cloud-based infrastructure management tool sets is provided, and several virtual desktop solutions are outlined. A distinction is made between general-purpose virtual desktop technologies and technologies that expose GPU-specific capabilities, including direct rendering and hardware-based video encoding. Candidate hypervisor/virtual machine configurations that nominally satisfy the virtualized hardware-level GPU requirements of the NVIG are presented, and each is subsequently reviewed in light of its implications on higher-level cloud management techniques. Implementation details are included from the hardware level, through the operating system, to the 3D graphics APIs required by the NVIG and similar GPU-bound tools.
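
    On a KVM/libvirt stack, one common way to give a guest direct access to a GPU is to attach the physical PCI device to the domain as a hostdev. The sketch below shows the general approach using the libvirt Python bindings; the domain name and PCI address are placeholders, and the paper itself surveys several hypervisors rather than prescribing this one.

    ```python
    # Sketch of PCI GPU passthrough via libvirt; assumes IOMMU/VFIO is enabled
    # on the host and that the named guest exists. Names are illustrative.
    import libvirt

    HOSTDEV_XML = """
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
    </hostdev>
    """

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("nvig-render-node")   # hypothetical guest name
    # Persist the device attachment in the guest's configuration.
    dom.attachDeviceFlags(HOSTDEV_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    conn.close()
    ```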

  11. Research on axisymmetric aspheric surface numerical design and manufacturing technology

    NASA Astrophysics Data System (ADS)

    Wang, Zhen-zhong; Guo, Yin-biao; Lin, Zheng

    2006-02-01

    Although aspheric manufacturing has developed from traditional manual methods into modern numerical control (NC) machining, the key technologies remain generating an exact machining path and machining aspheric lenses with high accuracy and efficiency. This paper presents a mathematical model relating virtual cone and aspheric surface equations, and discusses techniques for uniform grinding wheel wear and error compensation in aspheric machining. Finally, a software system for high-precision aspheric surface manufacturing is designed and implemented on the basis of the above. The system computes the grinding wheel path from the input parameters and generates NC machining programs for aspheric surfaces.
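
    For reference, the standard sag equation for an axisymmetric aspheric surface, the general form such design software works with (the paper's specific virtual-cone model is not reproduced here), is:

    ```latex
    % Standard even-asphere sag equation for an axisymmetric surface
    z(r) = \frac{c\,r^{2}}{1 + \sqrt{1 - (1 + k)\,c^{2} r^{2}}}
           + \sum_{i=2}^{n} a_{2i}\, r^{2i}
    ```

    where c is the vertex curvature, k the conic constant, and a2i the higher-order aspheric coefficients.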

  12. Proposal of Modification Strategy of NC Program in the Virtual Manufacturing Environment

    NASA Astrophysics Data System (ADS)

    Narita, Hirohisa; Chen, Lian-Yi; Fujimoto, Hideo; Shirase, Keiichi; Arai, Eiji

    Virtual manufacturing will be a key technology in process planning, because there are currently no evaluation tools for cutting conditions. Therefore, a virtual machining simulator (VMSim), which can predict end milling processes, has been developed. A strategy for modifying NC programs using VMSim is proposed in this paper.

  13. Active Gaming: Is "Virtual" Reality Right for Your Physical Education Program?

    ERIC Educational Resources Information Center

    Hansen, Lisa; Sanders, Stephen W.

    2012-01-01

    Active gaming is growing in popularity and the idea of increasing children's physical activity by using technology is largely accepted by physical educators. Teachers nationwide have been providing active gaming equipment such as virtual bikes, rhythmic dance machines, virtual sporting games, martial arts simulators, balance boards, and other…

  14. An Analysis of Hardware-Assisted Virtual Machine Based Rootkits

    DTIC Science & Technology

    2014-06-01

    certain aspects of TPM implementation just to name a few. HyperWall is an architecture proposed by Szefer and Lee to protect guest VMs from... The use of virtual machine (VM) technology has expanded rapidly since AMD and Intel implemented... Intel VT-x implementations of Blue Pill to identify commonalities in the respective versions’ attack methodologies from both a functional and technical

  15. A performance study of live VM migration technologies: VMotion vs XenMotion

    NASA Astrophysics Data System (ADS)

    Feng, Xiujie; Tang, Jianxiong; Luo, Xuan; Jin, Yaohui

    2011-12-01

    Due to the growing demand for flexible resource management in cloud computing services, research on live virtual machine migration has attracted more and more attention. Live migration of virtual machines across different hosts has become a powerful tool to facilitate system maintenance, load balancing, fault tolerance and so on. In this paper, we use a measurement-based approach to compare the performance of two major live migration technologies under certain network conditions, i.e., VMotion and XenMotion. The results show that VMotion transfers much less data than XenMotion when migrating identical VMs. However, in networks with moderate packet loss and delay, which are typical of a VPN (virtual private network) scenario used to connect data centers, XenMotion outperforms VMotion in total migration time. We hope that this study can help data center administrators choose suitable virtualization environments and optimize existing live migration mechanisms.

  16. CFCC: A Covert Flows Confinement Mechanism for Virtual Machine Coalitions

    NASA Astrophysics Data System (ADS)

    Cheng, Ge; Jin, Hai; Zou, Deqing; Shi, Lei; Ohoussou, Alex K.

    Virtualization technology is normally adopted to construct the infrastructure of a cloud computing environment. Resources are managed and organized dynamically through virtual machine (VM) coalitions in accordance with the requirements of applications. Enforcing mandatory access control (MAC) on the VM coalitions will greatly improve the security of VM-based cloud computing. However, existing MAC models lack a mechanism to confine covert flows and cannot readily eliminate covert channels. In this paper, we propose a covert flows confinement mechanism for virtual machine coalitions (CFCC), which introduces dynamic conflicts of interest based on the activity history of VMs, each of which carries a label. The proposed mechanism can be used to confine covert flows between VMs in different coalitions. We implement a prototype system, evaluate its performance, and show that our mechanism is practical.
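
    The history-based conflict-of-interest check described above can be sketched roughly as follows (class and label names are hypothetical, not CFCC's actual implementation): a VM accumulates the labels of coalitions it has exchanged information with, and any flow whose combined history would mix labels from the same conflict set is refused.

    ```python
    # Hypothetical sketch of a history-based conflict-of-interest check.
    CONFLICT_SETS = [{"coalition-A", "coalition-B"}]   # labels that must not mix

    class LabeledVM:
        def __init__(self, label):
            self.label = label
            self.history = {label}          # coalition labels this VM has touched

        def can_communicate(self, other):
            combined = self.history | other.history
            # Refuse if the combined history contains more than one label
            # from any conflict set.
            return not any(len(combined & cs) > 1 for cs in CONFLICT_SETS)

        def record_flow(self, other):
            if not self.can_communicate(other):
                raise PermissionError("covert flow between conflicting coalitions")
            # Both endpoints inherit each other's history after the flow.
            self.history |= other.history
            other.history |= self.history
    ```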

  17. Virtual Mission Operations of Remote Sensors With Rapid Access To and From Space

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.; Stewart, Dave; Walke, Jon; Dikeman, Larry; Sage, Steven; Miller, Eric; Northam, James; Jackson, Chris; Taylor, John; Lynch, Scott

    2010-01-01

    This paper describes network-centric operations, where a virtual mission operations center autonomously receives sensor triggers, and schedules space and ground assets using Internet-based technologies and service-oriented architectures. For proof-of-concept purposes, sensor triggers are received from the United States Geological Survey (USGS) to determine targets for space-based sensors. The Surrey Satellite Technology Limited (SSTL) Disaster Monitoring Constellation satellite, the United Kingdom Disaster Monitoring Constellation (UK-DMC), is used as the space-based sensor. The UK-DMC's availability is determined via machine-to-machine communications using SSTL's mission planning system. Access to/from the UK-DMC for tasking and sensor data is via SSTL's and Universal Space Network's (USN) ground assets. The availability and scheduling of USN's assets can also be performed autonomously via machine-to-machine communications. All communication, both on the ground and between ground and space, uses open Internet standards.

  18. Cloud services for the Fermilab scientific stakeholders

    DOE PAGES

    Timm, S.; Garzoglio, G.; Mhashilkar, P.; ...

    2015-12-23

    As part of the Fermilab/KISTI cooperative research project, Fermilab has successfully run an experimental simulation workflow at scale on a federation of Amazon Web Services (AWS), FermiCloud, and local FermiGrid resources. We used the CernVM-FS (CVMFS) file system to deliver the application software. We established Squid caching servers in AWS as well, using the Shoal system to let each individual virtual machine find the closest squid server. We also developed an automatic virtual machine conversion system so that we could transition virtual machines made on FermiCloud to Amazon Web Services. We used this system to successfully run a cosmic ray simulation of the NOvA detector at Fermilab, making use of both AWS spot pricing and network bandwidth discounts to minimize the cost. On FermiCloud we also were able to run the workflow at the scale of 1000 virtual machines, using a private network routable inside of Fermilab. As a result, we present in detail the technological improvements that were used to make this work a reality.

  19. Cloud services for the Fermilab scientific stakeholders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timm, S.; Garzoglio, G.; Mhashilkar, P.

    As part of the Fermilab/KISTI cooperative research project, Fermilab has successfully run an experimental simulation workflow at scale on a federation of Amazon Web Services (AWS), FermiCloud, and local FermiGrid resources. We used the CernVM-FS (CVMFS) file system to deliver the application software. We established Squid caching servers in AWS as well, using the Shoal system to let each individual virtual machine find the closest squid server. We also developed an automatic virtual machine conversion system so that we could transition virtual machines made on FermiCloud to Amazon Web Services. We used this system to successfully run a cosmic ray simulation of the NOvA detector at Fermilab, making use of both AWS spot pricing and network bandwidth discounts to minimize the cost. On FermiCloud we also were able to run the workflow at the scale of 1000 virtual machines, using a private network routable inside of Fermilab. As a result, we present in detail the technological improvements that were used to make this work a reality.

  20. Virtual reality and planetary exploration

    NASA Technical Reports Server (NTRS)

    Mcgreevy, Michael W.

    1992-01-01

    Exploring planetary environments is central to NASA's missions and goals. A new computing technology called Virtual Reality has much to offer in support of planetary exploration. This technology augments and extends human presence within computer-generated and remote spatial environments. Historically, NASA has been a leader in many of the fundamental concepts and technologies that comprise Virtual Reality. Indeed, Ames Research Center has a central role in the development of this rapidly emerging approach to using computers. This ground breaking work has inspired researchers in academia, industry, and the military. Further, NASA's leadership in this technology has spun off new businesses, has caught the attention of the international business community, and has generated several years of positive international media coverage. In the future, Virtual Reality technology will enable greatly improved human-machine interactions for more productive planetary surface exploration. Perhaps more importantly, Virtual Reality technology will democratize the experience of planetary exploration and thereby broaden understanding of, and support for, this historic enterprise.

  1. Virtual reality and planetary exploration

    NASA Astrophysics Data System (ADS)

    McGreevy, Michael W.

    Exploring planetary environments is central to NASA's missions and goals. A new computing technology called Virtual Reality has much to offer in support of planetary exploration. This technology augments and extends human presence within computer-generated and remote spatial environments. Historically, NASA has been a leader in many of the fundamental concepts and technologies that comprise Virtual Reality. Indeed, Ames Research Center has a central role in the development of this rapidly emerging approach to using computers. This ground breaking work has inspired researchers in academia, industry, and the military. Further, NASA's leadership in this technology has spun off new businesses, has caught the attention of the international business community, and has generated several years of positive international media coverage. In the future, Virtual Reality technology will enable greatly improved human-machine interactions for more productive planetary surface exploration. Perhaps more importantly, Virtual Reality technology will democratize the experience of planetary exploration and thereby broaden understanding of, and support for, this historic enterprise.

  2. An Analysis Platform for Mobile Ad Hoc Network (MANET) Scenario Execution Log Data

    DTIC Science & Technology

    2016-01-01

    these technologies. 4.1 Backend Technologies • Java 1.8 • my-sql-connector-java-5.0.8.jar • Tomcat • VirtualBox • Kali MANET Virtual Machine 4.2...Frontend Technologies • LAMPP 4.3 Database • MySQL Server 5. Database The SEDAP database settings and structure are described in this section...contains all the backend java functionality including the web services, should be placed in the webapps directory inside the Tomcat installation

  3. Web Service Distributed Management Framework for Autonomic Server Virtualization

    NASA Astrophysics Data System (ADS)

    Solomon, Bogdan; Ionescu, Dan; Litoiu, Marin; Mihaescu, Mircea

    Virtualization for the x86 platform has recently established itself as a technology that can improve machine utilization in data centers and decrease the cost and energy of running a large number of servers. Like virtualization, autonomic computing, and more specifically self-optimization, aims to improve server farm usage through provisioning and deprovisioning of instances as needed by the system. Autonomic systems are able to determine the optimal number of server machines (real or virtual) to use at a given time, and add or remove servers from a cluster in order to achieve optimal usage. While provisioning and deprovisioning of servers is very important, the way the autonomic system is built is also very important, as a robust and open framework is needed. One such management framework is the Web Service Distributed Management (WSDM) system, which is an open standard of the Organization for the Advancement of Structured Information Standards (OASIS). This paper presents an open framework built on top of the WSDM specification, which aims to provide self-optimization for application servers residing on virtual machines.
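
    A minimal sketch of the self-optimization decision described here, assuming a simple utilisation-threshold policy; the thresholds and function names are illustrative, not part of the WSDM framework.

    ```python
    # Illustrative scaling decision: grow the cluster under load, shrink it
    # when idle. Thresholds and bounds are assumptions.
    def desired_instance_count(current, avg_cpu, high=0.75, low=0.30,
                               min_instances=1, max_instances=16):
        if avg_cpu > high and current < max_instances:
            return current + 1      # provision one more (virtual) server
        if avg_cpu < low and current > min_instances:
            return current - 1      # deprovision an idle server
        return current

    # e.g. desired_instance_count(4, 0.82) -> 5; desired_instance_count(4, 0.20) -> 3
    ```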

  4. Computer network defense system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urias, Vincent; Stout, William M. S.; Loverro, Caleb

    A method and apparatus for protecting virtual machines. A computer system creates a copy of a group of the virtual machines in an operating network in a deception network to form a group of cloned virtual machines in the deception network when the group of the virtual machines is accessed by an adversary. The computer system creates an emulation of components from the operating network in the deception network. The components are accessible by the group of the cloned virtual machines as if the group of the cloned virtual machines was in the operating network. The computer system moves network connections for the group of the virtual machines in the operating network used by the adversary from the group of the virtual machines in the operating network to the group of the cloned virtual machines, enabling protection of the group of the virtual machines from actions performed by the adversary.

  5. Introduction of Virtualization Technology to Multi-Process Model Checking

    NASA Technical Reports Server (NTRS)

    Leungwattanakit, Watcharin; Artho, Cyrille; Hagiya, Masami; Tanabe, Yoshinori; Yamamoto, Mitsuharu

    2009-01-01

    Model checkers find failures in software by exploring every possible execution schedule. Java PathFinder (JPF), a Java model checker, has been extended recently to cover networked applications by caching data transferred in a communication channel. A target process is executed by JPF, whereas its peer process runs on a regular virtual machine outside. However, non-deterministic target programs may produce different output data in each schedule, causing the cache to restart the peer process to handle the different set of data. Virtualization tools could help us restore previous states of peers, eliminating peer restart. This paper proposes the application of virtualization technology to networked model checking, concentrating on JPF.

  6. An Introduction to 3-D Sound

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Null, Cynthia H. (Technical Monitor)

    1997-01-01

    This talk will overview the basic technologies related to the creation of virtual acoustic images, and the potential of including spatial auditory displays in human-machine interfaces. Research into the perceptual error inherent in both natural and virtual spatial hearing is reviewed, since the development of improved technologies is tied to psychoacoustic research. This includes a discussion of Head Related Transfer Function (HRTF) measurement techniques (the HRTF provides important perceptual cues within a virtual acoustic display). Many commercial applications of virtual acoustics have so far focused on games and entertainment; in this review, other types of applications are examined, including aeronautic safety, voice communications, virtual reality, and room acoustic simulation. In particular, it is argued that realistic simulation is optimized within a virtual acoustic display when head motion and reverberation cues are included within a perceptual model.

  7. Design and fabrication of complete dentures using CAD/CAM technology

    PubMed Central

    Han, Weili; Li, Yanfeng; Zhang, Yue; lv, Yuan; Zhang, Ying; Hu, Ping; Liu, Huanyue; Ma, Zheng; Shen, Yi

    2017-01-01

    The aim of the study was to test the feasibility of using commercially available computer-aided design and computer-aided manufacturing (CAD/CAM) technology including 3Shape Dental System 2013 trial version, WIELAND V2.0.049 and WIELAND ZENOTEC T1 milling machine to design and fabricate complete dentures. The modeling process of full denture available in the trial version of 3Shape Dental System 2013 was used to design virtual complete dentures on the basis of 3-dimensional (3D) digital edentulous models generated from the physical models. The virtual complete dentures designed were exported to CAM software of WIELAND V2.0.049. A WIELAND ZENOTEC T1 milling machine controlled by the CAM software was used to fabricate physical dentitions and baseplates by milling acrylic resin composite plates. The physical dentitions were bonded to the corresponding baseplates to form the maxillary and mandibular complete dentures. Virtual complete dentures were successfully designed using the software through several steps including generation of 3D digital edentulous models, model analysis, arrangement of artificial teeth, trimming relief area, and occlusal adjustment. Physical dentitions and baseplates were successfully fabricated according to the designed virtual complete dentures using milling machine controlled by a CAM software. Bonding physical dentitions to the corresponding baseplates generated the final physical complete dentures. Our study demonstrated that complete dentures could be successfully designed and fabricated by using CAD/CAM. PMID:28072686

  8. Examining Effects of Virtual Machine Settings on Voice over Internet Protocol in a Private Cloud Environment

    ERIC Educational Resources Information Center

    Liao, Yuan

    2011-01-01

    The virtualization of computing resources, as represented by the sustained growth of cloud computing, continues to thrive. Information Technology departments are building their private clouds due to the perception of significant cost savings by managing all physical computing resources from a single point and assigning them to applications or…

  9. Multiplexing Low and High QoS Workloads in Virtual Environments

    NASA Astrophysics Data System (ADS)

    Verboven, Sam; Vanmechelen, Kurt; Broeckhove, Jan

    Virtualization technology has introduced new ways for managing IT infrastructure. The flexible deployment of applications through self-contained virtual machine images has removed the barriers for multiplexing, suspending and migrating applications with their entire execution environment, allowing for a more efficient use of the infrastructure. These developments have given rise to an important challenge regarding the optimal scheduling of virtual machine workloads. In this paper, we specifically address the VM scheduling problem in which workloads that require guaranteed levels of CPU performance are mixed with workloads that do not require such guarantees. We introduce a framework to analyze this scheduling problem and evaluate to what extent such mixed service delivery is beneficial for a provider of virtualized IT infrastructure. Traditionally providers offer IT resources under a guaranteed and fixed performance profile, which can lead to underutilization. The findings of our simulation study show that through proper tuning of a limited set of parameters, the proposed scheduling algorithm allows for a significant increase in utilization without sacrificing on performance dependability.
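
    A rough sketch of the admission decision such a scheduler has to make, assuming guaranteed VMs reserve CPU up front while best-effort VMs may overcommit the remaining headroom; the parameters are illustrative, not the paper's tuned values.

    ```python
    # Illustrative admission check for mixing guaranteed and best-effort VMs
    # on one host. All capacities and demands are in normalised CPU units.
    def admit_vm(host_capacity, guaranteed_reserved, best_effort_running,
                 vm_demand, vm_guaranteed, overcommit=1.5):
        if vm_guaranteed:
            # Guaranteed workloads must fit within the un-reserved capacity.
            return guaranteed_reserved + vm_demand <= host_capacity
        # Best-effort workloads may overcommit the leftover headroom.
        headroom = (host_capacity - guaranteed_reserved) * overcommit
        return best_effort_running + vm_demand <= headroom
    ```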

  10. Open access for ALICE analysis based on virtualization technology

    NASA Astrophysics Data System (ADS)

    Buncic, P.; Gheata, M.; Schutz, Y.

    2015-12-01

    Open access is one of the important levers for long-term data preservation in a HEP experiment. To guarantee the usability of data analysis tools beyond the experiment lifetime it is crucial that third party users from the scientific community have access to the data and associated software. The ALICE Collaboration has developed a layer of lightweight components built on top of virtualization technology to hide the complexity and details of the experiment-specific software. Users can perform basic analysis tasks within CernVM, a lightweight generic virtual machine, paired with an ALICE-specific contextualization. Once the virtual machine is launched, a graphical user interface is automatically started without any additional configuration. This interface allows downloading the base ALICE analysis software and running a set of ALICE analysis modules. Currently the available tools include fully documented tutorials for ALICE analysis, such as the measurement of strange particle production or the nuclear modification factor in Pb-Pb collisions. The interface can be easily extended to include an arbitrary number of additional analysis modules. We present the current status of the tools used by ALICE through the CERN open access portal, and the plans for future extensions of this system.

  11. Cloud Based Resource for Data Hosting, Visualization and Analysis Using UCSC Cancer Genomics Browser | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    The Cancer Analysis Virtual Machine (CAVM) project will leverage cloud technology, the UCSC Cancer Genomics Browser, and the Galaxy analysis workflow system to provide investigators with a flexible, scalable platform for hosting, visualizing and analyzing their own genomic data.

  12. Grid heterogeneity in in-silico experiments: an exploration of drug screening using DOCK on cloud environments.

    PubMed

    Yim, Wen-Wai; Chien, Shu; Kusumoto, Yasuyuki; Date, Susumu; Haga, Jason

    2010-01-01

    Large-scale in-silico screening is a necessary part of drug discovery and Grid computing is one answer to this demand. A disadvantage of using Grid computing is the heterogeneous computational environments characteristic of a Grid. In our study, we have found that for the molecular docking simulation program DOCK, different clusters within a Grid organization can yield inconsistent results. Because DOCK in-silico virtual screening (VS) is currently used to help select chemical compounds to test with in-vitro experiments, such differences have little effect on the validity of using virtual screening before subsequent steps in the drug discovery process. However, it is difficult to predict whether the accumulation of these discrepancies over sequentially repeated VS experiments will significantly alter the results if VS is used as the primary means for identifying potential drugs. Moreover, such discrepancies may be unacceptable for other applications requiring more stringent thresholds. This highlights the need for establishing a more complete solution to provide the best scientific accuracy when executing an application across Grids. One possible solution to platform heterogeneity in DOCK performance explored in our study involved the use of virtual machines as a layer of abstraction. This study investigated the feasibility and practicality of using virtual machine and recent cloud computing technologies in a biological research application. We examined the differences and variations of DOCK VS variables, across a Grid environment composed of different clusters, with and without virtualization. The uniform computer environment provided by virtual machines eliminated inconsistent DOCK VS results caused by heterogeneous clusters, however, the execution time for the DOCK VS increased. In our particular experiments, overhead costs were found to be an average of 41% and 2% in execution time for two different clusters, while the actual magnitudes of the execution time costs were minimal. Despite the increase in overhead, virtual clusters are an ideal solution for Grid heterogeneity. With greater development of virtual cluster technology in Grid environments, the problem of platform heterogeneity may be eliminated through virtualization, allowing greater usage of VS, and will benefit all Grid applications in general.

  13. Simulation Platform: a cloud-based online simulation environment.

    PubMed

    Yamazaki, Tadashi; Ikeno, Hidetoshi; Okumura, Yoshihiro; Satoh, Shunji; Kamiyama, Yoshimi; Hirata, Yutaka; Inagaki, Keiichiro; Ishihara, Akito; Kannon, Takayuki; Usui, Shiro

    2011-09-01

    For multi-scale and multi-modal neural modeling, it is necessary to handle multiple neural models described at different levels seamlessly. Database technology will become more important for these studies, specifically for downloading and handling the neural models seamlessly and effortlessly. To date, conventional neuroinformatics databases have solely been designed to archive model files, but the databases should provide a chance for users to validate the models before downloading them. In this paper, we report our on-going project to develop a cloud-based web service for online simulation called "Simulation Platform". Simulation Platform is a cloud of virtual machines running GNU/Linux. On each virtual machine, various software packages, including developer tools such as compilers and libraries, popular neural simulators such as GENESIS, NEURON and NEST, and scientific software such as Gnuplot, R and Octave, are pre-installed. When a user posts a request, a virtual machine is assigned to the user, and the simulation starts on that machine. The user remotely accesses the machine through a web browser and carries out the simulation, without the need to install any software but a web browser on the user's own computer. Therefore, Simulation Platform is expected to eliminate impediments to handling multiple neural models that require multiple software packages. Copyright © 2011 Elsevier Ltd. All rights reserved.

  14. Reprint of: Simulation Platform: a cloud-based online simulation environment.

    PubMed

    Yamazaki, Tadashi; Ikeno, Hidetoshi; Okumura, Yoshihiro; Satoh, Shunji; Kamiyama, Yoshimi; Hirata, Yutaka; Inagaki, Keiichiro; Ishihara, Akito; Kannon, Takayuki; Usui, Shiro

    2011-11-01

    For multi-scale and multi-modal neural modeling, it is necessary to handle multiple neural models described at different levels seamlessly. Database technology will become more important for these studies, specifically for downloading and handling the neural models seamlessly and effortlessly. To date, conventional neuroinformatics databases have solely been designed to archive model files, but the databases should provide a chance for users to validate the models before downloading them. In this paper, we report our on-going project to develop a cloud-based web service for online simulation called "Simulation Platform". Simulation Platform is a cloud of virtual machines running GNU/Linux. On each virtual machine, various software packages, including developer tools such as compilers and libraries, popular neural simulators such as GENESIS, NEURON and NEST, and scientific software such as Gnuplot, R and Octave, are pre-installed. When a user posts a request, a virtual machine is assigned to the user, and the simulation starts on that machine. The user remotely accesses the machine through a web browser and carries out the simulation, without the need to install any software but a web browser on the user's own computer. Therefore, Simulation Platform is expected to eliminate impediments to handling multiple neural models that require multiple software packages. Copyright © 2011 Elsevier Ltd. All rights reserved.

  15. Virtual reality in surgical training.

    PubMed

    Lange, T; Indelicato, D J; Rosen, J M

    2000-01-01

    Virtual reality in surgery and, more specifically, in surgical training, faces a number of challenges in the future. These challenges are building realistic models of the human body, creating interface tools to view, hear, touch, feel, and manipulate these human body models, and integrating virtual reality systems into medical education and treatment. A final system would encompass simulators specifically for surgery, performance machines, telemedicine, and telesurgery. Each of these areas will need significant improvement for virtual reality to impact medicine successfully in the next century. This article gives an overview of, and the challenges faced by, current systems in the fast-changing field of virtual reality technology, and provides a set of specific milestones for a truly realistic virtual human body.

  16. Effects of virtualization on a scientific application - Running a hyperspectral radiative transfer code on virtual machines.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tikotekar, Anand A; Vallee, Geoffroy R; Naughton III, Thomas J

    2008-01-01

    The topic of system-level virtualization has recently begun to receive interest for high performance computing (HPC). This is in part due to the isolation and encapsulation offered by the virtual machine. These traits enable applications to customize their environments and maintain consistent software configurations in their virtual domains. Additionally, there are mechanisms that can be used for fault tolerance like live virtual machine migration. Given these attractive benefits of virtualization, a fundamental question arises: how does this affect my scientific application? We use this as the premise for our paper and observe a real-world scientific code running on a Xen virtual machine. We studied the effects of running a radiative transfer simulation, Hydrolight, on a virtual machine. We discuss our methodology and report observations regarding the usage of virtualization with this application.

  17. Human Factors Consideration for the Design of Collaborative Machine Assistants

    NASA Astrophysics Data System (ADS)

    Park, Sung; Fisk, Arthur D.; Rogers, Wendy A.

    Recent improvements in technology have facilitated the use of robots and virtual humans not only in entertainment and engineering but also in the military (Hill et al., 2003), healthcare (Pollack et al., 2002), and education domains (Johnson, Rickel, & Lester, 2000). As active partners of humans, such machine assistants can take the form of a robot or a graphical representation and serve the role of a financial assistant, a health manager, or even a social partner. As a result, interactive technologies are becoming an integral component of people's everyday lives.

  18. Can Science Education Research Give an Answer to Questions Posed by History of Science and Technology? The Case of Steam Engine's Measurement

    ERIC Educational Resources Information Center

    Kanderakis, Nikos E.

    2009-01-01

    According to the principle of virtual velocities, if on a simple machine in equilibrium we suppose a slight virtual movement, then the ratio of weights or forces equals the inverse ratio of velocities or displacements. The product of the weight raised or force applied multiplied by the height or displacement plays a central role there. British…

  19. myChEMBL: a virtual machine implementation of open data and cheminformatics tools.

    PubMed

    Ochoa, Rodrigo; Davies, Mark; Papadatos, George; Atkinson, Francis; Overington, John P

    2014-01-15

    myChEMBL is a completely open platform, which combines public domain bioactivity data with open source database and cheminformatics technologies. myChEMBL consists of a Linux (Ubuntu) Virtual Machine featuring a PostgreSQL schema with the latest version of the ChEMBL database, as well as the latest RDKit cheminformatics libraries. In addition, a self-contained web interface is available, which can be modified and improved according to user specifications. The VM is available at: ftp://ftp.ebi.ac.uk/pub/databases/chembl/VM/myChEMBL/current. The web interface and web services code is available at: https://github.com/rochoa85/myChEMBL.
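
    As a small illustration of the kind of cheminformatics call the bundled RDKit libraries support (this snippet does not touch the ChEMBL PostgreSQL schema and is not part of myChEMBL itself):

    ```python
    # Compute a couple of simple descriptors for a molecule with RDKit.
    from rdkit import Chem
    from rdkit.Chem import Descriptors

    mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")   # aspirin, as an example
    print(Descriptors.MolWt(mol))                        # molecular weight
    print(Chem.MolToSmiles(mol))                         # canonical SMILES
    ```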

  20. A nested virtualization tool for information technology practical education.

    PubMed

    Pérez, Carlos; Orduña, Juan M; Soriano, Francisco R

    2016-01-01

    A common problem of some information technology courses is the difficulty of providing practical exercises. Although different approaches have been followed to solve this problem, it is still an open issue, especially in security and computer network courses. This paper proposes NETinVM, a tool based on nested virtualization that includes a fully functional lab, comprising several computers and networks, in a single virtual machine. It also analyzes and evaluates how it has been used in different teaching environments. The results show that this tool makes it possible to perform demos, labs and practical exercises that would otherwise be unfeasible and that are greatly appreciated by the students. Also, its portability makes it possible to reproduce classroom activities, as well as the students' autonomous work.

  1. Secure Autonomous Automated Scheduling (SAAS). Rev. 1.1

    NASA Technical Reports Server (NTRS)

    Walke, Jon G.; Dikeman, Larry; Sage, Stephen P.; Miller, Eric M.

    2010-01-01

    This report describes network-centric operations, where a virtual mission operations center autonomously receives sensor triggers, and schedules space and ground assets using Internet-based technologies and service-oriented architectures. For proof-of-concept purposes, sensor triggers are received from the United States Geological Survey (USGS) to determine targets for space-based sensors. The Surrey Satellite Technology Limited (SSTL) Disaster Monitoring Constellation satellite, the UK-DMC, is used as the space-based sensor. The UK-DMC's availability is determined via machine-to-machine communications using SSTL's mission planning system. Access to/from the UK-DMC for tasking and sensor data is via SSTL's and Universal Space Network's (USN) ground assets. The availability and scheduling of USN's assets can also be performed autonomously via machine-to-machine communications. All communication, both on the ground and between ground and space, uses open Internet standards.

  2. LHCb experience with running jobs in virtual machines

    NASA Astrophysics Data System (ADS)

    McNab, A.; Stagni, F.; Luzzi, C.

    2015-12-01

    The LHCb experiment has been running production jobs in virtual machines since 2013 as part of its DIRAC-based infrastructure. We describe the architecture of these virtual machines and the steps taken to replicate the WLCG worker node environment expected by user and production jobs. This relies on the uCernVM system for providing root images for virtual machines. We use the CernVM-FS distributed filesystem to supply the root partition files, the LHCb software stack, and the bootstrapping scripts necessary to configure the virtual machines for us. Using this approach, we have been able to minimise the amount of contextualisation which must be provided by the virtual machine managers. We explain the process by which the virtual machine is able to receive payload jobs submitted to DIRAC by users and production managers, and how this differs from payloads executed within conventional DIRAC pilot jobs on batch queue based sites. We describe our operational experiences in running production on VM based sites managed using Vcycle/OpenStack, Vac, and HTCondor Vacuum. Finally we show how our use of these resources is monitored using Ganglia and DIRAC.

  3. Software platform virtualization in chemistry research and university teaching

    PubMed Central

    2009-01-01

    Background: Modern chemistry laboratories operate with a wide range of software applications under different operating systems, such as Windows, LINUX or Mac OS X. Instead of installing software on different computers it is possible to install those applications on a single computer using Virtual Machine software. Software platform virtualization allows a single guest operating system to execute multiple other operating systems on the same computer. We apply and discuss the use of virtual machines in chemistry research and teaching laboratories. Results: Virtual machines are commonly used for cheminformatics software development and testing. Benchmarking multiple chemistry software packages we have confirmed that the computational speed penalty for using virtual machines is low and around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open source software in hands-on computer teaching labs. Conclusion: Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and development of software for different operating systems. In order to obtain maximum performance the virtualization software should be multi-core enabled and allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lead to lower maintenance and hardware costs especially in small research labs. The use of virtual machines can prevent software virus infections and security breaches when used as a sandbox system for internet access and software testing. Complex software setups can be created with virtual machines and are easily deployed later to multiple computers for hands-on teaching classes. We discuss the popularity of bioinformatics compared to cheminformatics as well as the missing cheminformatics education at universities worldwide. PMID:20150997

  4. Software platform virtualization in chemistry research and university teaching.

    PubMed

    Kind, Tobias; Leamy, Tim; Leary, Julie A; Fiehn, Oliver

    2009-11-16

    Modern chemistry laboratories operate with a wide range of software applications under different operating systems, such as Windows, LINUX or Mac OS X. Instead of installing software on different computers it is possible to install those applications on a single computer using Virtual Machine software. Software platform virtualization allows a single guest operating system to execute multiple other operating systems on the same computer. We apply and discuss the use of virtual machines in chemistry research and teaching laboratories. Virtual machines are commonly used for cheminformatics software development and testing. Benchmarking multiple chemistry software packages we have confirmed that the computational speed penalty for using virtual machines is low and around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open source software in hands-on computer teaching labs. Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and development of software for different operating systems. In order to obtain maximum performance the virtualization software should be multi-core enabled and allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lead to lower maintenance and hardware costs especially in small research labs. The use of virtual machines can prevent software virus infections and security breaches when used as a sandbox system for internet access and software testing. Complex software setups can be created with virtual machines and are easily deployed later to multiple computers for hands-on teaching classes. We discuss the popularity of bioinformatics compared to cheminformatics as well as the missing cheminformatics education at universities worldwide.

  5. Virtualization for the LHCb Online system

    NASA Astrophysics Data System (ADS)

    Bonaccorsi, Enrico; Brarda, Loic; Moine, Gary; Neufeld, Niko

    2011-12-01

    Virtualization has long been advertised by the IT industry as a way to cut down cost, optimise resource usage and manage the complexity in large data-centers. The great number and the huge heterogeneity of hardware, both industrial and custom-made, have up to now led to reluctance in the adoption of virtualization in the IT infrastructure of large experiment installations. Our experience in the LHCb experiment has shown that virtualization improves the availability and the manageability of the whole system. We have evaluated available hypervisor/virtualization solutions and find that the Microsoft HV technology provides a high level of maturity and flexibility for our purpose. We present the results of these comparison tests, describing in detail the architecture of our virtualization infrastructure, with special emphasis on security for services visible to the outside world. Security is achieved by a sophisticated combination of VLANs, firewalls and virtual routing; the costs and benefits of this solution are analysed. We have adapted our cluster management tools, notably Quattor, for the needs of virtual machines, and this allows us to smoothly migrate services on physical machines to the virtualized infrastructure. The procedures for migration will also be described. In the final part of the document we describe our recent R&D activities aimed at replacing the SAN backend for the virtualization with a cheaper iSCSI solution; this will allow all servers and related services to be moved to the virtualized infrastructure, except the ones doing hardware control via non-commodity PCI plugin cards.

  6. CloVR: a virtual machine for automated and portable sequence analysis from the desktop using cloud computing.

    PubMed

    Angiuoli, Samuel V; Matalka, Malcolm; Gussman, Aaron; Galens, Kevin; Vangala, Mahesh; Riley, David R; Arze, Cesar; White, James R; White, Owen; Fricke, W Florian

    2011-08-30

    Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines to distribute pre-packaged, pre-configured software. We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports the use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. The CloVR VM and associated architecture lowers the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high throughput data processing.

  7. Evaluating open-source cloud computing solutions for geosciences

    NASA Astrophysics Data System (ADS)

    Huang, Qunying; Yang, Chaowei; Liu, Kai; Xia, Jizhe; Xu, Chen; Li, Jing; Gui, Zhipeng; Sun, Min; Li, Zhenglong

    2013-09-01

    Many organizations are starting to adopt cloud computing to better utilize computing resources by taking advantage of its scalability, cost reduction and ease of access. Many private or community cloud computing platforms are being built using open-source cloud solutions. However, little has been done to systematically compare and evaluate the features and performance of open-source solutions in supporting the geosciences. This paper provides a comprehensive study of three open-source cloud solutions: OpenNebula, Eucalyptus, and CloudStack. We compared a variety of features, capabilities, technologies and performance measures, including: (1) general features and supported services for cloud resource creation and management, (2) advanced capabilities for networking and security, and (3) the performance of the cloud solutions in provisioning and operating cloud resources, as well as the performance of virtual machines initiated and managed by the cloud solutions in supporting selected geoscience applications. Our study found that: (1) there are no significant performance differences in the central processing unit (CPU), memory and I/O of virtual machines created and managed by the different solutions, (2) OpenNebula has the fastest internal network, while both Eucalyptus and CloudStack have better virtual machine isolation and security strategies, (3) CloudStack has the fastest operations in handling virtual machines, images, snapshots, volumes and networking, followed by OpenNebula, and (4) the selected cloud computing solutions are capable of supporting concurrent intensive web applications, computing-intensive applications, and small-scale model simulations without intensive data communication.

  8. e-Addictology: An Overview of New Technologies for Assessing and Intervening in Addictive Behaviors.

    PubMed

    Ferreri, Florian; Bourla, Alexis; Mouchabac, Stephane; Karila, Laurent

    2018-01-01

    New technologies can profoundly change the way we understand psychiatric pathologies and addictive disorders. New concepts are emerging with the development of more accurate means of collecting live data, computerized questionnaires, and the use of passive data. Digital phenotyping, a paradigmatic example, refers to the use of computerized measurement tools to capture the characteristics of different psychiatric disorders. Similarly, machine learning (a form of artificial intelligence) can improve the classification of patients based on patterns that clinicians have not always considered in the past. Remote or automated interventions (web-based or smartphone-based apps), as well as virtual reality and neurofeedback, are already available or under development. These recent changes have the potential to disrupt practices, as well as practitioners' beliefs, ethics and representations, and may even call into question their professional culture. However, the impact of new technologies on health professionals' practice in addictive disorder care has yet to be determined. In the present paper, we therefore present an overview of new technology in the field of addiction medicine. Using the keywords [e-health], [m-health], [computer], [mobile], [smartphone], [wearable], [digital], [machine learning], [ecological momentary assessment], [biofeedback] and [virtual reality], we searched the PubMed database for the most representative articles in the field of assessment and interventions in substance use disorders. We screened 595 abstracts and analyzed 92 articles, dividing them into seven categories: e-health program and web-based interventions, machine learning, computerized adaptive testing, wearable devices and digital phenotyping, ecological momentary assessment, biofeedback, and virtual reality. This overview shows that new technologies can improve assessment and interventions in the field of addictive disorders. The precise role of connected devices, artificial intelligence and remote monitoring remains to be defined. If they are to be used effectively, these tools must be explained and adapted to the different profiles of physicians and patients. The involvement of patients, caregivers and other health professionals is essential to their design and assessment.

  9. Network Hardware Virtualization for Application Provisioning in Core Networks

    DOE PAGES

    Gumaste, Ashwin; Das, Tamal; Khandwala, Kandarp; ...

    2017-02-03

    Service providers and vendors are moving toward a network-virtualized core, whereby multiple applications would be treated on their own merit in programmable hardware. Such a network would have the advantage of being customized for user requirements and would allow provisioning of next-generation services that are built specifically to meet user needs. In this article, we articulate the impact of network virtualization on networks that provide customized services and how a provider's business can grow with network virtualization. We outline a decision map that allows mapping of applications to the technology supported in network-virtualization-oriented equipment. Analogies to the world of virtual machines and generic virtualization show that hardware supporting network virtualization will facilitate new customer needs while optimizing the provider network from the cost and performance perspectives. A key conclusion of the article is that growth would yield sizable revenue when providers plan ahead in terms of supporting network-virtualization-oriented technology in their networks. To be precise, providers have to incorporate into their growth plans network elements capable of new service deployments while protecting network neutrality. Finally, a simulation study validates our NV-induced model.

  10. Network Hardware Virtualization for Application Provisioning in Core Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gumaste, Ashwin; Das, Tamal; Khandwala, Kandarp

    Service providers and vendors are moving toward a network-virtualized core, whereby multiple applications would be treated on their own merit in programmable hardware. Such a network would have the advantage of being customized for user requirements and would allow provisioning of next-generation services that are built specifically to meet user needs. In this article, we articulate the impact of network virtualization on networks that provide customized services and how a provider's business can grow with network virtualization. We outline a decision map that allows mapping of applications to the technology supported in network-virtualization-oriented equipment. Analogies to the world of virtual machines and generic virtualization show that hardware supporting network virtualization will facilitate new customer needs while optimizing the provider network from the cost and performance perspectives. A key conclusion of the article is that growth would yield sizable revenue when providers plan ahead in terms of supporting network-virtualization-oriented technology in their networks. To be precise, providers have to incorporate into their growth plans network elements capable of new service deployments while protecting network neutrality. Finally, a simulation study validates our NV-induced model.

  11. Humans and machines in space: The vision, the challenge, the payoff; AAS Goddard Memorial Symposium, 29th, Washington, DC, March 14-15, 1991

    NASA Astrophysics Data System (ADS)

    Johnson, Bradley; May, Gayle L.; Korn, Paula

    A recent symposium produced papers in the areas of solar system exploration, man-machine interfaces, cybernetics, virtual reality, telerobotics, life support systems, and scientific and technological spinoffs from the NASA space program. A number of papers also addressed the social and economic impacts of the space program. For individual titles, see A95-87468 through A95-87479.

  12. Air-condition Control System of Weaving Workshop Based on LabVIEW

    NASA Astrophysics Data System (ADS)

    Song, Jian

    A scheme for an air-conditioning measurement and control system based on LabVIEW is put forward to control the environmental targets of a weaving workshop effectively. The system is built on virtual instrument technology using the LabVIEW development platform from NI. It is composed of an upper PC, central control nodes based on the CC2530, sensor nodes, sensor modules and executive devices. A fuzzy control algorithm is employed to achieve accurate control of temperature and humidity. A user-friendly man-machine interaction interface is designed with virtual instrument technology at the core of the software. Experiments show that the measurement and control system runs stably and reliably and meets the functional requirements for controlling the weaving workshop.
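
    The paper does not publish its rule base, but a minimal sketch of the kind of fuzzy humidity loop it describes could look like the following (plain Python; the membership functions, rule outputs and the 65 %RH set point are illustrative assumptions, not values from the paper).

```python
# Minimal fuzzy controller sketch for a weaving-workshop climate loop.
# All membership functions, rule outputs and set points are illustrative
# assumptions; the paper does not publish its rule base.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_humidifier_output(humidity_error):
    """Map humidity error (target - measured, in %RH) to a 0..1 actuator duty."""
    # Fuzzify the error into three linguistic terms.
    negative = tri(humidity_error, -20.0, -10.0, 0.0)
    zero     = tri(humidity_error, -5.0, 0.0, 5.0)
    positive = tri(humidity_error, 0.0, 10.0, 20.0)
    # Rule base (singleton outputs): too humid -> off, on target -> hold, too dry -> on.
    rules = [(negative, 0.0), (zero, 0.3), (positive, 1.0)]
    total_weight = sum(w for w, _ in rules)
    if total_weight == 0.0:
        return 0.3  # fall back to the "hold" duty when no rule fires
    # Defuzzify with a weighted average of the singleton outputs.
    return sum(w * out for w, out in rules) / total_weight

if __name__ == "__main__":
    target_rh = 65.0  # assumed workshop humidity target
    for measured_rh in (50.0, 63.0, 72.0):
        duty = fuzzy_humidifier_output(target_rh - measured_rh)
        print(f"measured {measured_rh:.0f}%RH -> humidifier duty {duty:.2f}")
```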

  13. Dynamic provisioning of local and remote compute resources with OpenStack

    NASA Astrophysics Data System (ADS)

    Giffels, M.; Hauth, T.; Polgart, F.; Quast, G.

    2015-12-01

    Modern high-energy physics experiments rely on the extensive usage of computing resources, both for the reconstruction of measured events and for Monte-Carlo simulation. The Institut fur Experimentelle Kernphysik (EKP) at KIT is participating in both the CMS and Belle experiments with computing and storage resources. In the upcoming years, these requirements are expected to increase due to the growing amount of recorded data and the rising complexity of the simulated events. It is therefore essential to increase the available computing capabilities by tapping into all resource pools. At the EKP institute, powerful desktop machines are available to users. Due to the multi-core nature of modern CPUs, vast amounts of CPU time are not utilized by common desktop usage patterns. Other important providers of compute capabilities are classical HPC data centers at universities or national research centers. Due to the shared nature of these installations, the standardized software stack required by HEP applications cannot be installed. A viable way to overcome this constraint and offer a standardized software environment in a transparent manner is the use of virtualization technologies. The OpenStack project has become a widely adopted solution to virtualize hardware and offer additional services like storage and virtual machine management. This contribution will report on the incorporation of the institute's desktop machines into a private OpenStack Cloud. The additional compute resources provisioned via the virtual machines have been used for Monte-Carlo simulation and data analysis. Furthermore, a concept to integrate shared, remote HPC centers into regular HEP job workflows will be presented. In this approach, local and remote resources are merged to form a uniform, virtual compute cluster with a single point of entry for the user. Evaluations of the performance and stability of this setup and operational experiences will be discussed.
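
    As an illustration of the kind of dynamic provisioning described above, the following hedged sketch boots an additional worker VM on a private OpenStack cloud with the openstacksdk library; the cloud entry, image, flavor and network names are assumptions, not the EKP configuration.

```python
# Sketch: provisioning an extra worker VM on a private OpenStack cloud with
# the openstacksdk library. Image, flavor, network and credential names are
# hypothetical; real deployments would read them from site configuration.
import openstack

def boot_worker(name="hep-worker-01"):
    conn = openstack.connect(cloud="private-cloud")        # cloud entry in clouds.yaml (assumed)
    image = conn.compute.find_image("hep-worker-image")    # assumed image name
    flavor = conn.compute.find_flavor("m1.large")          # assumed flavor
    network = conn.network.find_network("hep-internal")    # assumed network
    server = conn.compute.create_server(
        name=name,
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    # Block until the VM is ACTIVE, then hand it over to the batch system.
    return conn.compute.wait_for_server(server)

if __name__ == "__main__":
    vm = boot_worker()
    print(vm.name, vm.status)
```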

  14. Using virtual machine monitors to overcome the challenges of monitoring and managing virtualized cloud infrastructures

    NASA Astrophysics Data System (ADS)

    Bamiah, Mervat Adib; Brohi, Sarfraz Nawaz; Chuprat, Suriayati

    2012-01-01

    Virtualization is one of the hottest research topics nowadays. Several academic researchers and developers from the IT industry are designing approaches for solving the security and manageability issues of Virtual Machines (VMs) residing on virtualized cloud infrastructures. Moving an application from a physical to a virtual platform increases efficiency and flexibility and reduces management cost as well as effort. Cloud computing adopts the paradigm of virtualization: using this technique, memory, CPU and computational power are provided to clients' VMs by utilizing the underlying physical hardware. Besides these advantages, there are a few challenges in adopting virtualization, such as management of VMs and network traffic, unexpected additional costs and resource allocation. The Virtual Machine Monitor (VMM), or hypervisor, is the tool used by cloud providers to manage the VMs on the cloud. There are several heterogeneous hypervisors provided by various vendors, including VMware, Hyper-V, Xen and the Kernel Virtual Machine (KVM). Considering the challenge of VM management, this paper describes several techniques to monitor and manage virtualized cloud infrastructures.

  15. Hardware assisted hypervisor introspection.

    PubMed

    Shi, Jiangyong; Yang, Yuexiang; Tang, Chuan

    2016-01-01

    In this paper, we introduce hypervisor introspection, an out-of-box way to monitor the execution of hypervisors. Similar to virtual machine introspection, which has been proposed to protect virtual machines in an out-of-box way over the past decade, hypervisor introspection can be used to protect hypervisors, which are the basis of cloud security. Virtual machine introspection tools are usually deployed either in the hypervisor or in privileged virtual machines, which might also be compromised. By utilizing hardware support including nested virtualization, EPT protection and #BP, we are able to monitor all hypercalls belonging to the virtual machines of one hypervisor, including those of the privileged virtual machine, even when the hypervisor is compromised. Moreover, a hypercall injection method is used to simulate hypercall-based attacks and evaluate the performance of our method. Experimental results show that our method can effectively detect hypercall-based attacks at some performance cost. Lastly, we discuss our future approaches to reducing the performance cost and preventing a compromised hypervisor from detecting the existence of our introspector, along with some new scenarios in which to apply our hypervisor introspection system.

  16. Architecture and Key Techniques of Augmented Reality Maintenance Guiding System for Civil Aircrafts

    NASA Astrophysics Data System (ADS)

    hong, Zhou; Wenhua, Lu

    2017-01-01

    Augmented reality technology is introduced into the maintenance field to enrich real-world scenarios with virtual maintenance-assistance information. This can lower the difficulty of maintenance, reduce maintenance errors, and improve the maintenance efficiency and quality of civil aviation crews. An architecture for an augmented reality virtual maintenance guiding system is proposed on the basis of the definition of augmented reality and an analysis of the characteristics of augmented reality virtual maintenance. The key techniques involved, such as standardization and organization of maintenance data, 3D registration, modeling of maintenance guidance information and virtual maintenance man-machine interaction, are elaborated, and solutions are given.

  17. Using shadow page cache to improve isolated drivers performance.

    PubMed

    Zheng, Hao; Dong, Xiaoshe; Wang, Endong; Chen, Baoke; Zhu, Zhengdong; Liu, Chengzhe

    2015-01-01

    Thanks to the reusability offered by virtualization technology, users can reuse various types and versions of existing operating systems and drivers in a virtual machine so as to customize their application environment. In order to prevent users' virtualization environments from being impacted by driver faults in a virtual machine, Chariot examines the correctness of a driver's write operations by combining capture of the driver's write operations with a driver-private access control table. However, this method needs to keep the write permission of the shadow page table as read-only, so as to capture the isolated driver's write operations through page faults, which adversely affects the performance of the driver. Based on delaying the setting of frequently used shadow pages' write permissions to read-only, this paper proposes an algorithm that uses a shadow page cache to improve the performance of isolated drivers and carefully studies the relationship between the performance of drivers and the size of the shadow page cache. Experimental results show that, through the shadow page cache, the performance of isolated drivers can be greatly improved without impacting Chariot's reliability too much.

  18. Using Shadow Page Cache to Improve Isolated Drivers Performance

    PubMed Central

    Dong, Xiaoshe; Wang, Endong; Chen, Baoke; Zhu, Zhengdong; Liu, Chengzhe

    2015-01-01

    Thanks to the reusability offered by virtualization technology, users can reuse various types and versions of existing operating systems and drivers in a virtual machine so as to customize their application environment. In order to prevent users' virtualization environments from being impacted by driver faults in a virtual machine, Chariot examines the correctness of a driver's write operations by combining capture of the driver's write operations with a driver-private access control table. However, this method needs to keep the write permission of the shadow page table as read-only, so as to capture the isolated driver's write operations through page faults, which adversely affects the performance of the driver. Based on delaying the setting of frequently used shadow pages' write permissions to read-only, this paper proposes an algorithm that uses a shadow page cache to improve the performance of isolated drivers and carefully studies the relationship between the performance of drivers and the size of the shadow page cache. Experimental results show that, through the shadow page cache, the performance of isolated drivers can be greatly improved without impacting Chariot's reliability too much. PMID:25815373

  19. CloVR: A virtual machine for automated and portable sequence analysis from the desktop using cloud computing

    PubMed Central

    2011-01-01

    Background Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines to distribute pre-packaged, pre-configured software. Results We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. Conclusion The CloVR VM and associated architecture lower the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high-throughput data processing. PMID:21878105

  20. Virtual Network Configuration Management System for Data Center Operations and Management

    NASA Astrophysics Data System (ADS)

    Okita, Hideki; Yoshizawa, Masahiro; Uehara, Keitaro; Mizuno, Kazuhiko; Tarui, Toshiaki; Naono, Ken

    Virtualization technologies are widely deployed in data centers to improve system utilization. However, they increase the workload of operators, who have to manage the structure of the virtual networks in the data centers. A virtual-network management system is provided which automates the integration of the configurations of the virtual networks. The proposed system collects configurations from server virtualization platforms and VLAN-supported switches and integrates these configurations according to a newly developed XML-based management information model for virtual-network configurations. Preliminary evaluations show that the proposed system helps operators by reducing, by about 40 percent, the time needed to acquire the configurations from devices and to correct inconsistencies in the operators' configuration management database. They also show that the proposed system has excellent scalability: it takes less than 20 minutes to acquire the virtual-network configurations from a large-scale network that includes 300 virtual machines. These results imply that the proposed system is effective in improving the configuration management process for virtual networks in data centers.
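
    The paper's XML management information model is not reproduced in the abstract; the sketch below only illustrates, with invented element and attribute names, how configurations collected from hypervisors and VLAN-capable switches might be merged into one XML document.

```python
# Sketch: merging virtual-server and switch VLAN configurations into a single
# XML document, in the spirit of the XML-based management information model
# described above. The element and attribute names are invented for
# illustration; the paper's actual schema is not reproduced here.
import xml.etree.ElementTree as ET

def build_virtual_network_model(vm_records, vlan_records):
    root = ET.Element("virtualNetworkConfiguration")
    servers = ET.SubElement(root, "virtualServers")
    for vm in vm_records:                       # e.g. collected from hypervisors
        node = ET.SubElement(servers, "virtualMachine", name=vm["name"])
        ET.SubElement(node, "vnic", mac=vm["mac"], vlan=str(vm["vlan"]))
    switches = ET.SubElement(root, "switches")
    for sw in vlan_records:                     # e.g. collected from VLAN-capable switches
        node = ET.SubElement(switches, "switch", name=sw["switch"])
        ET.SubElement(node, "port", id=sw["port"], vlan=str(sw["vlan"]))
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    vms = [{"name": "vm01", "mac": "52:54:00:aa:bb:01", "vlan": 100}]
    vlans = [{"switch": "sw-edge-1", "port": "Gi0/1", "vlan": 100}]
    print(build_virtual_network_model(vms, vlans))
```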

  1. A Lightweight Intelligent Virtual Cinematography System for Machinima Production

    DTIC Science & Technology

    2007-01-01

    portmanteau of machine and cinema, machinima refers to the innovation of leveraging video game technology to greatly ease the creation of computer...selecting camera angles to capture the action of an a priori unknown script as aesthetically appropriate cinema. There are a number of challenges therein...Proc. of the 4th International Conf. on Autonomous Agents. Young, R.M. and Riedl, M.O. 2003. Towards an Architecture for Intelligent Control of Narrative in Interactive Virtual Worlds. In Proc. of IUI 2003.

  2. Computer Associates International, CA-ACF2/VM Release 3.1

    DTIC Science & Technology

    1987-09-09

    Associates CA-ACF2/VM Bibliography International Business Machines Corporation, IBM Virtual Machine/Directory Maintenance Program Logic Manual...publication number LY20-0889 International Business Machines International Business Machines Corporation, IBM System/370 Principles of Operation...publication number GA22-7000 International Business Machines Corporation, IBM Virtual Machine/Directory Maintenance Installation and System Administrator’s

  3. An Effective Mechanism for Virtual Machine Placement using Aco in IAAS Cloud

    NASA Astrophysics Data System (ADS)

    Shenbaga Moorthy, Rajalakshmi; Fareentaj, U.; Divya, T. K.

    2017-08-01

    Cloud computing provides an effective way to dynamically provision numerous resources to meet customer demands. A major challenge for cloud providers is designing efficient mechanisms for optimal virtual machine placement (OVMP). Such mechanisms enable cloud providers to utilize their available resources effectively and obtain higher profits. In order to provide appropriate resources to clients, an optimal virtual machine placement algorithm is proposed. Virtual machine placement is an NP-hard problem, which can be tackled with heuristic algorithms. In this paper, an Ant Colony Optimization based virtual machine placement is proposed. Our proposed system focuses on minimizing the cost of each plan for hosting virtual machines in a multiple-cloud-provider environment, and the response time of each cloud provider is monitored periodically so as to minimize the delay in providing resources to users. The performance of the proposed algorithm is compared with a greedy mechanism. The proposed algorithm is simulated in the Eclipse IDE. The results clearly show that the proposed algorithm minimizes cost, response time and the number of migrations.
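
    As a rough illustration of the technique named here, the following sketch applies a plain Ant Colony Optimization loop to a toy VM placement instance; the host capacities, prices, response times and ACO parameters are assumptions, and the code is not the authors' implementation.

```python
# Simplified Ant Colony Optimization sketch for virtual machine placement.
# Cost figures, response times and ACO parameters are assumed for the example.
import random

HOSTS = [  # capacity in VMs, price per VM, monitored response time (s) -- assumed
    {"capacity": 3, "price": 1.0, "rtt": 0.20},
    {"capacity": 4, "price": 0.8, "rtt": 0.35},
    {"capacity": 2, "price": 1.4, "rtt": 0.10},
]
N_VMS, N_ANTS, N_ITER = 6, 10, 50
ALPHA, BETA, RHO, RT_WEIGHT = 1.0, 2.0, 0.1, 1.0

def placement_cost(assignment):
    return sum(HOSTS[h]["price"] + RT_WEIGHT * HOSTS[h]["rtt"] for h in assignment)

def construct(pheromone):
    """One ant builds a capacity-feasible assignment VM -> host index."""
    remaining = [h["capacity"] for h in HOSTS]
    assignment = []
    for vm in range(N_VMS):
        candidates = [h for h in range(len(HOSTS)) if remaining[h] > 0]
        weights = [
            (pheromone[vm][h] ** ALPHA)
            * ((1.0 / (HOSTS[h]["price"] + RT_WEIGHT * HOSTS[h]["rtt"])) ** BETA)
            for h in candidates
        ]
        choice = random.choices(candidates, weights=weights)[0]
        assignment.append(choice)
        remaining[choice] -= 1
    return assignment

def aco_place():
    pheromone = [[1.0] * len(HOSTS) for _ in range(N_VMS)]
    best, best_cost = None, float("inf")
    for _ in range(N_ITER):
        for assignment in (construct(pheromone) for _ in range(N_ANTS)):
            cost = placement_cost(assignment)
            if cost < best_cost:
                best, best_cost = assignment, cost
        # Evaporate, then reinforce the choices of the best solution so far.
        for vm in range(N_VMS):
            for h in range(len(HOSTS)):
                pheromone[vm][h] *= (1.0 - RHO)
            pheromone[vm][best[vm]] += 1.0 / best_cost
    return best, best_cost

if __name__ == "__main__":
    placement, cost = aco_place()
    print("placement:", placement, "cost:", round(cost, 2))
```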

  4. Efficient operating system level virtualization techniques for cloud resources

    NASA Astrophysics Data System (ADS)

    Ansu, R.; Samiksha; Anju, S.; Singh, K. John

    2017-11-01

    Cloud computing is an advancing technology which provides infrastructure, platform and software services. Virtualization and utility computing are the keys to cloud computing. The number of cloud users is increasing day by day, so it is the need of the hour to make resources available on demand to satisfy user requirements. The technique by which resources, namely storage, processing power, memory and network or I/O, are abstracted is known as virtualization. Various virtualization techniques are available for executing operating systems: full system virtualization and paravirtualization. In full virtualization, the whole hardware architecture is duplicated virtually, and no modifications are required in the guest OS, as the OS deals with the VM hypervisor directly. In paravirtualization, the guest OS must be modified to run in parallel with other OSes, and the host OS must provide a virtual machine interface for the guest OS to access the hardware. OS-level virtualization has many advantages, such as migrating applications transparently, server consolidation, online maintenance of the OS and improved security. This paper briefs both virtualization techniques and discusses the issues in OS-level virtualization.

  5. Paging memory from random access memory to backing storage in a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Inglett, Todd A; Ratterman, Joseph D; Smith, Brian E

    2013-05-21

    Paging memory from random access memory (`RAM`) to backing storage in a parallel computer that includes a plurality of compute nodes, including: executing a data processing application on a virtual machine operating system in a virtual machine on a first compute node; providing, by a second compute node, backing storage for the contents of RAM on the first compute node; and swapping, by the virtual machine operating system in the virtual machine on the first compute node, a page of memory from RAM on the first compute node to the backing storage on the second compute node.

  6. Investigation of Storage Options for Scientific Computing on Grid and Cloud Facilities

    NASA Astrophysics Data System (ADS)

    Garzoglio, Gabriele

    2012-12-01

    In recent years, several new storage technologies, such as Lustre, Hadoop, OrangeFS, and BlueArc, have emerged. While several groups have run benchmarks to characterize them under a variety of configurations, more work is needed to evaluate these technologies for the use cases of scientific computing on Grid clusters and Cloud facilities. This paper discusses our evaluation of the technologies as deployed on a test bed at FermiCloud, one of the Fermilab infrastructure-as-a-service Cloud facilities. The test bed consists of 4 server-class nodes with 40 TB of disk space and up to 50 virtual machine clients, some running on the storage server nodes themselves. With this configuration, the evaluation compares the performance of some of these technologies when deployed on virtual machines and on “bare metal” nodes. In addition to running standard benchmarks such as IOZone to check the sanity of our installation, we have run I/O intensive tests using physics-analysis applications. This paper presents how the storage solutions perform in a variety of realistic use cases of scientific computing. One interesting difference among the storage systems tested is found in a decrease in total read throughput with increasing number of client processes, which occurs in some implementations but not others.

  7. Investigation of storage options for scientific computing on Grid and Cloud facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garzoglio, Gabriele

    In recent years, several new storage technologies, such as Lustre, Hadoop, OrangeFS, and BlueArc, have emerged. While several groups have run benchmarks to characterize them under a variety of configurations, more work is needed to evaluate these technologies for the use cases of scientific computing on Grid clusters and Cloud facilities. This paper discusses our evaluation of the technologies as deployed on a test bed at FermiCloud, one of the Fermilab infrastructure-as-a-service Cloud facilities. The test bed consists of 4 server-class nodes with 40 TB of disk space and up to 50 virtual machine clients, some running on the storage server nodes themselves. With this configuration, the evaluation compares the performance of some of these technologies when deployed on virtual machines and on bare metal nodes. In addition to running standard benchmarks such as IOZone to check the sanity of our installation, we have run I/O intensive tests using physics-analysis applications. This paper presents how the storage solutions perform in a variety of realistic use cases of scientific computing. One interesting difference among the storage systems tested is found in a decrease in total read throughput with increasing number of client processes, which occurs in some implementations but not others.

  8. 1001 Ways to run AutoDock Vina for virtual screening

    NASA Astrophysics Data System (ADS)

    Jaghoori, Mohammad Mahdi; Bleijlevens, Boris; Olabarriaga, Silvia D.

    2016-03-01

    Large-scale computing technologies have enabled high-throughput virtual screening involving thousands to millions of drug candidates. It is not trivial, however, for biochemical scientists to evaluate the technical alternatives and their implications for running such large experiments. Besides experience with the molecular docking tool itself, the scientist needs to learn how to run it on high-performance computing (HPC) infrastructures, and understand the impact of the choices made. Here, we review such considerations for a specific tool, AutoDock Vina, and use experimental data to illustrate the following points: (1) an additional level of parallelization increases virtual screening throughput on a multi-core machine; (2) capturing of the random seed is not enough (though necessary) for reproducibility on heterogeneous distributed computing systems; (3) the overall time spent on the screening of a ligand library can be improved by analysis of factors affecting execution time per ligand, including number of active torsions, heavy atoms and exhaustiveness. We also illustrate differences among four common HPC infrastructures: grid, Hadoop, small cluster and multi-core (virtual machine on the cloud). Our analysis shows that these platforms are suitable for screening experiments of different sizes. These considerations can guide scientists when choosing the best computing platform and set-up for their future large virtual screening experiments.
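
    Point (1) above, adding a parallelization level on a multi-core machine, can be sketched as one single-CPU Vina process per ligand driven by a process pool, with the random seed pinned for point (2); the receptor and ligand paths below are placeholders.

```python
# Sketch of the extra parallelization level discussed above: one single-CPU
# AutoDock Vina process per ligand across the cores of a machine, with a fixed
# random seed recorded for reproducibility. File names and paths are
# placeholders; adjust exhaustiveness and seed handling for real runs.
import glob
import subprocess
from multiprocessing import Pool

RECEPTOR = "receptor.pdbqt"   # placeholder receptor file
SEED = 42                     # fixed seed, logged alongside the results

def dock_one(ligand_path):
    out_path = ligand_path.replace(".pdbqt", "_out.pdbqt")
    cmd = [
        "vina",
        "--receptor", RECEPTOR,
        "--ligand", ligand_path,
        "--out", out_path,
        "--seed", str(SEED),
        "--exhaustiveness", "8",
        "--cpu", "1",          # one core per ligand; the pool supplies parallelism
    ]
    subprocess.run(cmd, check=True)
    return out_path

if __name__ == "__main__":
    ligands = sorted(glob.glob("ligands/*.pdbqt"))
    with Pool(processes=8) as pool:      # match the number of physical cores
        results = pool.map(dock_one, ligands)
    print(f"docked {len(results)} ligands with seed {SEED}")
```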

  9. 1001 Ways to run AutoDock Vina for virtual screening.

    PubMed

    Jaghoori, Mohammad Mahdi; Bleijlevens, Boris; Olabarriaga, Silvia D

    2016-03-01

    Large-scale computing technologies have enabled high-throughput virtual screening involving thousands to millions of drug candidates. It is not trivial, however, for biochemical scientists to evaluate the technical alternatives and their implications for running such large experiments. Besides experience with the molecular docking tool itself, the scientist needs to learn how to run it on high-performance computing (HPC) infrastructures, and understand the impact of the choices made. Here, we review such considerations for a specific tool, AutoDock Vina, and use experimental data to illustrate the following points: (1) an additional level of parallelization increases virtual screening throughput on a multi-core machine; (2) capturing of the random seed is not enough (though necessary) for reproducibility on heterogeneous distributed computing systems; (3) the overall time spent on the screening of a ligand library can be improved by analysis of factors affecting execution time per ligand, including number of active torsions, heavy atoms and exhaustiveness. We also illustrate differences among four common HPC infrastructures: grid, Hadoop, small cluster and multi-core (virtual machine on the cloud). Our analysis shows that these platforms are suitable for screening experiments of different sizes. These considerations can guide scientists when choosing the best computing platform and set-up for their future large virtual screening experiments.

  10. Providing Assistive Technology Applications as a Service Through Cloud Computing.

    PubMed

    Mulfari, Davide; Celesti, Antonio; Villari, Massimo; Puliafito, Antonio

    2015-01-01

    Users with disabilities interact with Personal Computers (PCs) using Assistive Technology (AT) software solutions. Such applications run on a PC that a person with a disability commonly uses. However, the configuration of AT applications is not trivial at all, especially whenever the user needs to work on a PC that does not allow him/her to rely on his/her AT tools (e.g., at work, at university, in an Internet point). In this paper, we discuss how cloud computing provides a valid technological solution to enhance such a scenario. With the emergence of cloud computing, many applications are executed on top of virtual machines (VMs). Virtualization allows us to achieve a software implementation of a real computer able to execute a standard operating system and any kind of application. In this paper we propose to build personalized VMs running AT programs and settings. By using remote desktop technology, our solution enables users to control their customized virtual desktop environment by means of an HTML5-based web interface running on any computer equipped with a browser, wherever they are.

  11. Manufacturing data analytics using a virtual factory representation.

    PubMed

    Jain, Sanjay; Shao, Guodong; Shin, Seung-Jun

    2017-01-01

    Large manufacturers have been using simulation to support decision-making for design and production. However, with the advancement of technologies and the emergence of big data, simulation can be utilised to perform and support data analytics for associated performance gains. This requires not only significant model development expertise, but also huge data collection and analysis efforts. This paper presents an approach, within the frameworks of Design Science Research Methodology and prototyping, to address the challenge of increasing the use of modelling, simulation and data analytics in manufacturing by reducing the development effort. Manufacturing simulation models are presented both as data analytics applications in themselves and as support for other data analytics applications, serving as data generators and as a tool for validation. The virtual factory concept is presented as the vehicle for manufacturing modelling and simulation. A virtual factory goes beyond traditional simulation models of factories to include multi-resolution modelling capabilities, thus allowing analysis at varying levels of detail. A path is proposed for implementation of the virtual factory concept that builds on developments in technologies and standards. A virtual machine prototype is provided as a demonstration of the use of a virtual representation for manufacturing data analytics.

  12. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    NASA Astrophysics Data System (ADS)

    Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.

    2012-12-01

    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.
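
    A hedged sketch of one such administration task, evacuating a failing hypervisor by live-migrating its guests with the libvirt Python bindings, is shown below; the host URIs are hypothetical, and the actual INFN management scripts are not reproduced here.

```python
# Sketch: live-migrating the virtual machines of a failing hypervisor to a
# healthy one with the libvirt Python bindings. Host URIs are hypothetical.
import libvirt

SOURCE_URI = "qemu+ssh://hypervisor-01/system"   # assumed failing host
TARGET_URI = "qemu+ssh://hypervisor-02/system"   # assumed healthy host

def evacuate():
    src = libvirt.open(SOURCE_URI)
    dst = libvirt.open(TARGET_URI)
    try:
        for dom in src.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
            print(f"live-migrating {dom.name()} ...")
            # VIR_MIGRATE_LIVE keeps the guest running during the transfer;
            # with shared storage only the memory state has to move.
            dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
    finally:
        src.close()
        dst.close()

if __name__ == "__main__":
    evacuate()
```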

  13. Virtualization of open-source secure web services to support data exchange in a pediatric critical care research network

    PubMed Central

    Sward, Katherine A; Newth, Christopher JL; Khemani, Robinder G; Cryer, Martin E; Thelen, Julie L; Enriquez, Rene; Shaoyu, Su; Pollack, Murray M; Harrison, Rick E; Meert, Kathleen L; Berg, Robert A; Wessel, David L; Shanley, Thomas P; Dalton, Heidi; Carcillo, Joseph; Jenkins, Tammara L; Dean, J Michael

    2015-01-01

    Objectives To examine the feasibility of deploying a virtual web service for sharing data within a research network, and to evaluate the impact on data consistency and quality. Material and Methods Virtual machines (VMs) encapsulated an open-source, semantically and syntactically interoperable secure web service infrastructure along with a shadow database. The VMs were deployed to 8 Collaborative Pediatric Critical Care Research Network Clinical Centers. Results Virtual web services could be deployed in hours. The interoperability of the web services reduced format misalignment from 56% to 1% and demonstrated that 99% of the data consistently transferred using the data dictionary and 1% needed human curation. Conclusions Use of virtualized open-source secure web service technology could enable direct electronic abstraction of data from hospital databases for research purposes. PMID:25796596

  14. Virtual pools for interactive analysis and software development through an integrated Cloud environment

    NASA Astrophysics Data System (ADS)

    Grandi, C.; Italiano, A.; Salomoni, D.; Calabrese Melcarne, A. K.

    2011-12-01

    WNoDeS, an acronym for Worker Nodes on Demand Service, is software developed at CNAF-Tier1, the National Computing Centre of the Italian Institute for Nuclear Physics (INFN) located in Bologna. WNoDeS provides on demand, integrated access to both Grid and Cloud resources through virtualization technologies. Besides the traditional use of computing resources in batch mode, users need to have interactive and local access to a number of systems. WNoDeS can dynamically select these computers instantiating Virtual Machines, according to the requirements (computing, storage and network resources) of users through either the Open Cloud Computing Interface API, or through a web console. An interactive use is usually limited to activities in user space, i.e. where the machine configuration is not modified. In some other instances the activity concerns development and testing of services and thus implies the modification of the system configuration (and, therefore, root-access to the resource). The former use case is a simple extension of the WNoDeS approach, where the resource is provided in interactive mode. The latter implies saving the virtual image at the end of each user session so that it can be presented to the user at subsequent requests. This work describes how the LHC experiments at INFN-Bologna are testing and making use of these dynamically created ad-hoc machines via WNoDeS to support flexible, interactive analysis and software development at the INFN Tier-1 Computing Centre.

  15. 3D multiplayer virtual pets game using Google Card Board

    NASA Astrophysics Data System (ADS)

    Herumurti, Darlis; Riskahadi, Dimas; Kuswardayan, Imam

    2017-08-01

    Virtual Reality (VR) is a technology which allows a user to interact with a virtual environment generated and simulated by a computer, and it can give users the sensation of actually being in that environment. VR presents the virtual environment directly to the user rather than on a screen, but it requires an additional device, known as a Head-Mounted Display (HMD), to show the view of the virtual environment. The Oculus Rift and Microsoft HoloLens are the most famous HMD devices used in VR, and in 2014 Google Cardboard was introduced at the Google I/O developers conference. Google Cardboard is a VR platform which allows users to enjoy VR in a simple and cheap way. In this research, we explore Google Cardboard to develop a simulation game about raising a pet. Google Cardboard is used to create the view of the VR environment, while the view and controls in the VR environment are built using the Unity game engine. The simulation process is designed using a Finite State Machine (FSM), which helps to describe the process of raising a pet clearly. Raising a pet is a fun activity, but many conditions can make it difficult, e.g., environmental conditions, disease and high cost. This research aims to explore and implement Google Cardboard in a simulation of raising a pet.
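
    As a minimal illustration of the FSM-based design, the sketch below models a few pet states and events in plain Python; the states and transitions are assumptions, and the actual game is built with Unity rather than Python.

```python
# Minimal finite state machine sketch for a pet-raising simulation.
# States, events and transitions are illustrative assumptions.
TRANSITIONS = {
    ("idle",   "time_passes"): "hungry",
    ("hungry", "feed"):        "happy",
    ("hungry", "ignore"):      "sick",
    ("sick",   "medicate"):    "idle",
    ("happy",  "time_passes"): "idle",
}

class VirtualPet:
    def __init__(self):
        self.state = "idle"

    def handle(self, event):
        # Look up the (state, event) pair; unknown events leave the state unchanged.
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

if __name__ == "__main__":
    pet = VirtualPet()
    for event in ["time_passes", "ignore", "medicate", "time_passes", "feed"]:
        print(f"{event:12s} -> {pet.handle(event)}")
```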

  16. Kevlar: Transitioning Helix from Research to Practice

    DTIC Science & Technology

    2015-04-01

    protective transformations are applied to application binaries before they are deployed. Salient features of Kevlar include applying high-entropy ...variety of classes. Kevlar uses novel, fine-grained, high-entropy diversification transformations to prevent an attacker from successfully exploiting...Kevlar include applying high-entropy randomization techniques, automated program repairs, leveraging highly-optimized virtual machine technology, and in

  17. Kevlar: Transitioning Helix for Research to Practice

    DTIC Science & Technology

    2016-03-01

    entropy randomization techniques, automated program repairs leveraging highly-optimized virtual machine technology, and developing a novel framework...attacker from exploiting residual vulnerabilities in a wide variety of classes. Helix/Kevlar uses novel, fine-grained, high-entropy diversification...the Air Force, and IARPA). Salient features of Helix/Kevlar include developing high-entropy randomization techniques, automated program repairs

  18. From Floppies to Flash--Your Guide to Removable Media

    ERIC Educational Resources Information Center

    Berdinka, Matthew J.

    2005-01-01

    Technology that once involved a scary, mysterious machine the size of a small house now fits on desktops and commonly appears in offices, schools, and homes. Computers allow for processing, storing and transmitting data between two or more people virtually anywhere in the world. They also allow users to save documents, presentations, photos and…

  19. Future Cyborgs: Human-Machine Interface for Virtual Reality Applications

    DTIC Science & Technology

    2007-04-01

    FUTURE CYBORGS: HUMAN-MACHINE INTERFACE FOR VIRTUAL REALITY APPLICATIONS. Robert R. Powell, Major, USAF, April 2007, Blue Horizons...Nicholas Negroponte, Being Digital (New York: Alfred A Knopf, Inc, 1995), 123. 23 Ibid. 24 Andy Clark, Natural-Born Cyborgs (New York: Oxford

  20. Comparative analysis of machine learning methods in ligand-based virtual screening of large compound libraries.

    PubMed

    Ma, Xiao H; Jia, Jia; Zhu, Feng; Xue, Ying; Li, Ze R; Chen, Yu Z

    2009-05-01

    Machine learning methods have been explored as ligand-based virtual screening tools for facilitating drug lead discovery. These methods predict compounds with specific pharmacodynamic, pharmacokinetic or toxicological properties based on structural and physicochemical properties derived from their structures. Increasing attention has been directed at these methods because of their capability to predict compounds of diverse structures and complex structure-activity relationships without requiring knowledge of the target 3D structure. This article reviews current progress in using machine learning methods for virtual screening of pharmacodynamically active compounds from large compound libraries, and analyzes and compares the reported performance of machine learning tools with that of structure-based and other ligand-based (such as pharmacophore and clustering) virtual screening methods. The feasibility of improving the performance of machine learning methods in screening large libraries is discussed.
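
    A toy example of the ligand-based setup reviewed here, using a random forest from scikit-learn on synthetic binary fingerprints, is sketched below; real screens would use structure-derived descriptors such as molecular fingerprints rather than random bits.

```python
# Sketch of a ligand-based virtual screen with one of the machine learning
# methods reviewed above (a random forest). The binary "fingerprints" are
# randomly generated stand-ins for real structure-derived descriptors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_compounds, n_bits = 2000, 256
X = rng.integers(0, 2, size=(n_compounds, n_bits))        # mock fingerprints
# Mock activity label loosely tied to a few bits, mimicking a simple SAR signal.
y = (X[:, :8].sum(axis=1) + rng.normal(0, 1, n_compounds) > 4).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Rank the held-out "library" by predicted probability of activity.
scores = model.predict_proba(X_test)[:, 1]
print("ROC AUC:", round(roc_auc_score(y_test, scores), 3))
print("top-5 ranked compounds:", np.argsort(scores)[::-1][:5])
```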

  1. A Novel Artificial Bee Colony Approach of Live Virtual Machine Migration Policy Using Bayes Theorem

    PubMed Central

    Xu, Gaochao; Hu, Liang; Fu, Xiaodong

    2013-01-01

    Green cloud data centers have become a research hotspot of virtualized cloud computing architecture. Since live virtual machine (VM) migration technology is widely used and studied in cloud computing, we focus on the selection of VM placements during live migration for power saving. We present a novel heuristic approach called PS-ABC. Its algorithm includes two parts. The first combines the artificial bee colony (ABC) idea with uniform random initialization, binary search and a Boltzmann selection policy to obtain an improved ABC-based approach with better global exploration and local exploitation abilities. The second uses Bayes' theorem to further optimize the improved ABC-based process and reach the final optimal solution faster. As a result, the whole approach achieves a long-term, efficient optimization for power saving. The experimental results demonstrate that PS-ABC evidently reduces the total incremental power consumption and better protects the performance of running and migrating VMs compared with existing research, making live VM migration more effective and meaningful. PMID:24385877

  2. A novel artificial bee colony approach of live virtual machine migration policy using Bayes theorem.

    PubMed

    Xu, Gaochao; Ding, Yan; Zhao, Jia; Hu, Liang; Fu, Xiaodong

    2013-01-01

    Green cloud data centers have become a research hotspot of virtualized cloud computing architecture. Since live virtual machine (VM) migration technology is widely used and studied in cloud computing, we focus on the selection of VM placements during live migration for power saving. We present a novel heuristic approach called PS-ABC. Its algorithm includes two parts. The first combines the artificial bee colony (ABC) idea with uniform random initialization, binary search and a Boltzmann selection policy to obtain an improved ABC-based approach with better global exploration and local exploitation abilities. The second uses Bayes' theorem to further optimize the improved ABC-based process and reach the final optimal solution faster. As a result, the whole approach achieves a long-term, efficient optimization for power saving. The experimental results demonstrate that PS-ABC evidently reduces the total incremental power consumption and better protects the performance of running and migrating VMs compared with existing research, making live VM migration more effective and meaningful.

  3. Rapid prototyping and stereolithography in dentistry

    PubMed Central

    Nayar, Sanjna; Bhuminathan, S.; Bhat, Wasim Manzoor

    2015-01-01

    The word rapid prototyping (RP) was first used in the mechanical engineering field in the early 1980s to describe the act of producing a prototype, a unique product, the first product, or a reference model. In the past, prototypes were handmade by sculpting or casting, and their fabrication demanded a long time. Any and every prototype should undergo evaluation, correction of defects, and approval before the beginning of its mass or large scale production. Prototypes may also be used for specific or restricted purposes, in which case they are usually called a preseries model. With the development of information technology, three-dimensional models can be devised and built based on virtual prototypes. Computers can now be used to create accurately detailed projects that can be assessed from different perspectives in a process known as computer aided design (CAD). To materialize virtual objects using CAD, a computer aided manufacture (CAM) process has been developed. To transform a virtual file into a real object, CAM operates using a machine connected to a computer, similar to a printer or peripheral device. In 1987, Brix and Lambrecht used, for the first time, a prototype in health care. It was a three-dimensional model manufactured using a computer numerical control device, a type of machine that was the predecessor of RP. In 1991, human anatomy models produced with a technology called stereolithography were first used in a maxillofacial surgery clinic in Vienna. PMID:26015715

  4. Rapid prototyping and stereolithography in dentistry.

    PubMed

    Nayar, Sanjna; Bhuminathan, S; Bhat, Wasim Manzoor

    2015-04-01

    The word rapid prototyping (RP) was first used in the mechanical engineering field in the early 1980s to describe the act of producing a prototype, a unique product, the first product, or a reference model. In the past, prototypes were handmade by sculpting or casting, and their fabrication demanded a long time. Any and every prototype should undergo evaluation, correction of defects, and approval before the beginning of its mass or large scale production. Prototypes may also be used for specific or restricted purposes, in which case they are usually called a preseries model. With the development of information technology, three-dimensional models can be devised and built based on virtual prototypes. Computers can now be used to create accurately detailed projects that can be assessed from different perspectives in a process known as computer aided design (CAD). To materialize virtual objects using CAD, a computer aided manufacture (CAM) process has been developed. To transform a virtual file into a real object, CAM operates using a machine connected to a computer, similar to a printer or peripheral device. In 1987, Brix and Lambrecht used, for the first time, a prototype in health care. It was a three-dimensional model manufactured using a computer numerical control device, a type of machine that was the predecessor of RP. In 1991, human anatomy models produced with a technology called stereolithography were first used in a maxillofacial surgery clinic in Vienna.

  5. Integration of Virtual Machine Technologies into Hastily Formed Networks in Support of Humanitarian Relief and Disaster Recovery Missions

    DTIC Science & Technology

    2011-12-01

    and measures of effectiveness (MOE). New technologies that offer solid-state hard drives built into modular VDI devices known as appliances ...Joint Reconfigurable Vehicle LAN Local Area Network LOS Line of Sight LTE Long Term Evolution MB Megabyte MOP Measure of Performance MOE Measure ...re-usable measures of performance and measures of effectiveness (MOP and MOE) and evaluation procedures will be applied to this research. A

  6. Virtual Reality Simulations and Animations in a Web-Based Interactive Manufacturing Engineering Module

    ERIC Educational Resources Information Center

    Ong, S. K.; Mannan, M. A.

    2004-01-01

    This paper presents a web-based interactive teaching package that provides a comprehensive and conducive yet dynamic and interactive environment for a module on automated machine tools in the Manufacturing Division at the National University of Singapore. The use of Internet technologies in this teaching tool makes it possible to conjure…

  7. Integration of the virtual 3D model of a control system with the virtual controller

    NASA Astrophysics Data System (ADS)

    Herbuś, K.; Ociepka, P.

    2015-11-01

    Nowadays the design process includes simulation analysis of different components of a constructed object, which involves the need to integrate different virtual objects in order to simulate the whole technical system under investigation. The paper presents issues related to the integration of a virtual 3D model of a chosen control system with a virtual controller. The goal of the integration is to verify the operation of the adopted object in accordance with the established control program. The object of the simulation work is the drive system of a tunneling machine for trenchless work. In the first stage of the work, an interactive visualization of the functioning of the 3D virtual model of the tunneling machine was created using software of the VR (Virtual Reality) class. In the interactive application, procedures were created for controlling the translatory-motion drive system, the rotary-motion drive system and the drive system of the manipulator, together with a procedure for turning the output crushing head, mounted on the last element of the manipulator, on and off. Procedures were also established for receiving input data from external software on the basis of dynamic data exchange (DDE), which allow the actuators of the particular control systems of the considered machine to be controlled. In the next stage of the work, a program for the virtual controller was created in the ladder diagram (LD) language, based on the adopted work cycle of the tunneling machine. The element integrating the virtual model of the tunneling machine with the virtual controller is an application written in a high-level language (Visual Basic); it contains procedures responsible for collecting data from the virtual controller running in simulation mode and transferring them to the interactive application, in which the operation of the adopted research object is verified. The work carried out allowed for the integration of the virtual model of the control system of the tunneling machine with the virtual controller, enabling verification of its operation.

  8. Inertial Motion-Tracking Technology for Virtual 3-D

    NASA Technical Reports Server (NTRS)

    2005-01-01

    In the 1990s, NASA pioneered virtual reality research. The concept was present long before, but prior to this the technology did not exist to make a viable virtual reality system. Scientists had theories and ideas, and they knew the concept had potential, but the computers of the 1970s and 1980s were not fast enough, sensors were heavy and cumbersome, and people had difficulty blending fluidly with the machines. Scientists at Ames Research Center built upon the research of previous decades and put the necessary technology behind them, making the theories of virtual reality a reality. Virtual reality systems depend on complex motion-tracking sensors to convey information between the user and the computer to give the user the feeling that he is operating in the real world. These motion-tracking sensors measure and report an object's position and orientation as it changes. A simple example of motion tracking would be the cursor on a computer screen moving in correspondence to the shifting of the mouse. Tracking in 3-D, necessary to create virtual reality, however, is much more complex. To be successful, the perspective of the virtual image seen on the computer must be an accurate representation of what is seen in the real world. As the user's head or camera moves, turns, or tilts, the computer-generated environment must change accordingly with no noticeable lag, jitter, or distortion. Historically, the lack of smooth and rapid tracking of the user's motion has thwarted the widespread use of immersive 3-D computer graphics. NASA uses virtual reality technology for a variety of purposes, mostly training of astronauts. The actual missions are costly and dangerous, so any opportunity the crews have to practice their maneuvering in accurate situations before the mission is valuable and instructive. For that purpose, NASA has funded a great deal of virtual reality research, and benefited from the results.

  9. Hybrid polylingual object model: an efficient and seamless integration of Java and native components on the Dalvik virtual machine.

    PubMed

    Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv

    2014-01-01

    JNI in the Android platform is often observed to have low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and the complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse CAR-compliant components in Android applications in a seamless and efficient way. A metadata injection mechanism is designed to support the automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled in the HPO-Dalvik virtual machine automatically. The experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, while requiring no JNI bridging code.

  10. Virtual hospital--a computer-aided platform to evaluate the sense of direction.

    PubMed

    Jiang, Ching-Fen; Li, Yuan-Shyi

    2007-01-01

    This paper presents a computer-aided platform, named Virtual Hospital (VH), to evaluate the wayfinding ability that is found to be impaired in elderly people with early dementia. The development of the VH takes advantage of virtual reality technology to make the evaluation of the sense of direction more convenient and accurate than the conventional approach. A pilot study was carried out to test its feasibility in differentiating the sense of direction between genders. The results, with significant differences in the response time (p<0.05) and the pointing error (p<0.01) between genders, suggest the potential of the VH for clinical use. Further improvement of the human-machine interface is necessary to make it easy for geriatric people to use.

  11. The ASSERT Virtual Machine Kernel: Support for Preservation of Temporal Properties

    NASA Astrophysics Data System (ADS)

    Zamorano, J.; de la Puente, J. A.; Pulido, J. A.; Urueña

    2008-08-01

    A new approach to building embedded real-time software has been developed in the ASSERT project. One of its key elements is the concept of a virtual machine that preserves the non-functional properties of the system, and especially its real-time properties, all the way from high-level design models down to executable code. The paper describes one instance of the virtual machine concept that supports the preservation of temporal properties both at the source code level, by accepting only "legal" entities, i.e. software components with statically analysable real-time behaviour, and at run time, by monitoring the temporal behaviour of the system. The virtual machine has been validated on several pilot projects carried out by aerospace companies in the framework of the ASSERT project.
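    As a rough illustration of the run-time side of such preservation (monitoring temporal behaviour), the sketch below wraps a periodic task and flags budget and deadline overruns. This is a generic example, not the ASSERT kernel's actual mechanism; the task body and timing parameters are hypothetical.

    ```python
    import time

    def run_periodic(task, period_s, wcet_budget_s, cycles):
        """Run `task` periodically and report WCET-budget and deadline violations."""
        next_release = time.monotonic()
        for i in range(cycles):
            start = time.monotonic()
            task()
            exec_time = time.monotonic() - start
            if exec_time > wcet_budget_s:
                print(f"cycle {i}: WCET budget exceeded ({exec_time * 1e3:.1f} ms)")
            next_release += period_s
            slack = next_release - time.monotonic()
            if slack < 0:
                print(f"cycle {i}: deadline missed by {-slack * 1e3:.1f} ms")
            else:
                time.sleep(slack)

    if __name__ == "__main__":
        # Hypothetical 10 ms periodic task with a 5 ms execution-time budget.
        run_periodic(lambda: sum(range(10000)), period_s=0.01, wcet_budget_s=0.005, cycles=20)
    ```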

  12. Means and method of balancing multi-cylinder reciprocating machines

    DOEpatents

    Corey, John A.; Walsh, Michael M.

    1985-01-01

    A virtual balancing axis arrangement is described for multi-cylinder reciprocating piston machines, for effectively balancing out imbalanced forces and minimizing residual imbalance moments acting on the crankshaft of such machines without requiring the use of additional parallel-arrayed balancing shafts or complex and expensive gear arrangements. The novel virtual balancing axis arrangement is capable of being designed into multi-cylinder reciprocating piston and crankshaft machines to substantially reduce vibrations induced during operation of such machines, with only a minimal number of additional component parts. Some of the required component parts may be available from parts already required for the operation of auxiliary equipment, such as the oil and water pumps used in certain types of reciprocating piston and crankshaft machines, so that by appropriate location and dimensioning in accordance with the teachings of the invention, the virtual balancing axis arrangement can be built into the machine at little or no additional cost.

  13. Three-dimensional virtual acoustic displays

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.

    1991-01-01

    The development of an alternative medium for displaying information in complex human-machine interfaces is described. The 3-D virtual acoustic display is a means for accurately transferring information to a human operator using the auditory modality; it combines directional and semantic characteristics to form naturalistic representations of dynamic objects and events in remotely sensed or simulated environments. Although the technology can stand alone, it is envisioned as a component of a larger multisensory environment and will no doubt find its greatest utility in that context. The general philosophy in the design of the display has been that the development of advanced computer interfaces should be driven first by an understanding of human perceptual requirements, and later by technological capabilities or constraints. In expanding on this view, current and potential uses of virtual acoustic displays are addressed, such displays are characterized, recent approaches to their implementation and application are reviewed, the research project at NASA-Ames is described in detail, and finally some critical research issues for the future are outlined.

  14. Hybrid Cloud Computing Environment for EarthCube and Geoscience Community

    NASA Astrophysics Data System (ADS)

    Yang, C. P.; Qin, H.

    2016-12-01

    The NSF EarthCube Integration and Test Environment (ECITE) has built a hybrid cloud computing environment that provides cloud resources from private cloud environments, using the cloud system software OpenStack and Eucalyptus, and also manages a public cloud, Amazon Web Services, allowing resource synchronization and bursting between the private and public clouds. On the ECITE hybrid cloud platform, the EarthCube and geoscience communities can deploy and manage applications using base or customized virtual machine images, analyze big datasets using virtual clusters, and monitor virtual resource usage on the cloud in real time. Currently, a number of EarthCube projects have deployed or started migrating their projects to this platform, such as CHORDS, BCube, CINERGI, OntoSoft, and other EarthCube building blocks. To accomplish the deployment or migration, the administrator of the ECITE hybrid cloud platform prepares the specific needs of each project (e.g. images, port numbers, usable cloud capacity) in advance, based on communication between ECITE and the participating projects; the scientists or IT technicians in those projects then launch one or more virtual machines, access the virtual machine(s) to set up the computing environment if need be, and migrate their code, documents or data without having to care about the heterogeneity in structure and operations among the different cloud platforms.
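    The launch step described above can look roughly like the following when a private OpenStack cloud is driven through the openstacksdk Python library. The cloud name, image, flavor and network identifiers are placeholders; this is a sketch of the general workflow, not ECITE's actual provisioning code.

    ```python
    import openstack

    # Connect using a named cloud from clouds.yaml (the name "ecite-private" is hypothetical).
    conn = openstack.connect(cloud="ecite-private")

    # Placeholder image/flavor/network names; a real deployment would use project-specific ones.
    image = conn.compute.find_image("project-base-image")
    flavor = conn.compute.find_flavor("m1.medium")
    network = conn.network.find_network("project-net")

    server = conn.compute.create_server(
        name="earthcube-worker-01",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    # Block until the VM reaches ACTIVE state, then report its status and addresses.
    server = conn.compute.wait_for_server(server)
    print(server.name, server.status, server.addresses)
    ```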

  15. Cooperating reduction machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kluge, W.E.

    1983-11-01

    This paper presents a concept and a system architecture for the concurrent execution of program expressions of a concrete reduction language based on lambda-expressions. If formulated appropriately, these expressions are well-suited for concurrent execution, following a demand-driven model of computation. In particular, recursive program expressions with nonlinear expansion may, at run time, recursively be partitioned into a hierarchy of independent subexpressions which can be reduced by a corresponding hierarchy of virtual reduction machines. This hierarchy unfolds and collapses dynamically, with virtual machines recursively assuming the role of masters that create and eventually terminate, or synchronize with, slaves. The paper also proposes a nonhierarchically organized system of reduction machines, each featuring a stack architecture, that effectively supports the allocation of virtual machines to the real machines of the system in compliance with their hierarchical order of creation and termination. 25 references.
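    The demand-driven partitioning idea (independent subexpressions handed off to a dynamically created set of workers) can be sketched with a small expression tree evaluated by a process pool. This is a loose Python analogy, not the stack-machine architecture of the paper; the expression representation is hypothetical.

    ```python
    from concurrent.futures import ProcessPoolExecutor

    # A tiny expression form: either a number or a tuple (op, left, right).
    def reduce_expr(expr):
        """Sequentially reduce an expression tree."""
        if isinstance(expr, (int, float)):
            return expr
        op, left, right = expr
        a, b = reduce_expr(left), reduce_expr(right)
        return a + b if op == "+" else a * b

    def reduce_concurrent(expr, pool):
        """Partition the two independent operand subexpressions onto worker 'machines'."""
        if isinstance(expr, (int, float)):
            return expr
        op, left, right = expr
        f_left = pool.submit(reduce_expr, left)    # slave machine for the left operand
        f_right = pool.submit(reduce_expr, right)  # slave machine for the right operand
        a, b = f_left.result(), f_right.result()
        return a + b if op == "+" else a * b

    if __name__ == "__main__":
        expr = ("+", ("*", 3, ("+", 1, 2)), ("*", ("+", 4, 5), 6))
        with ProcessPoolExecutor() as pool:
            print(reduce_concurrent(expr, pool))   # 3*(1+2) + (4+5)*6 = 63
    ```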

  16. Complementary Machine Intelligence and Human Intelligence in Virtual Teaching Assistant for Tutoring Program Tracing

    ERIC Educational Resources Information Center

    Chou, Chih-Yueh; Huang, Bau-Hung; Lin, Chi-Jen

    2011-01-01

    This study proposes a virtual teaching assistant (VTA) to share teacher tutoring tasks in helping students practice program tracing and proposes two mechanisms of complementing machine intelligence and human intelligence to develop the VTA. The first mechanism applies machine intelligence to extend human intelligence (teacher answers) to evaluate…

  17. "Pack[superscript2]": VM Resource Scheduling for Fine-Grained Application SLAs in Highly Consolidated Environment

    ERIC Educational Resources Information Center

    Sukwong, Orathai

    2013-01-01

    Virtualization enables the ability to consolidate multiple servers on a single physical machine, increasing the infrastructure utilization. Maximizing the ratio of server virtual machines (VMs) to physical machines, namely the consolidation ratio, becomes an important goal toward infrastructure cost saving in a cloud. However, the consolidation…

  18. Generating Contextual Descriptions of Virtual Reality (VR) Spaces

    NASA Astrophysics Data System (ADS)

    Olson, D. M.; Zaman, C. H.; Sutherland, A.

    2017-12-01

    Virtual reality holds great potential for science communication, education, and research. However, interfaces for manipulating data and environments in virtual worlds are limited and idiosyncratic. Furthermore, speech and vision are the primary modalities by which humans collect information about the world, but the linking of visual and natural language domains is a relatively new pursuit in computer vision. Machine learning techniques have been shown to be effective at image and speech classification, as well as at describing images with language (Karpathy 2016), but have not yet been used to describe potential actions. We propose a technique for creating a library of possible context-specific actions associated with 3D objects in immersive virtual worlds based on a novel dataset generated natively in virtual reality containing speech, image, gaze, and acceleration data. We will discuss the design and execution of a user study in virtual reality that enabled the collection and the development of this dataset. We will also discuss the development of a hybrid machine learning algorithm linking vision data with environmental affordances in natural language. Our findings demonstrate that it is possible to develop a model which can generate interpretable verbal descriptions of possible actions associated with recognized 3D objects within immersive VR environments. This suggests promising applications for more intuitive user interfaces through voice interaction within 3D environments. It also demonstrates the potential to apply vast bodies of embodied and semantic knowledge to enrich user interaction within VR environments. This technology would allow for applications such as expert knowledge annotation of 3D environments, complex verbal data querying and object manipulation in virtual spaces, and computer-generated, dynamic 3D object affordances and functionality during simulations.

  19. Simplified Virtualization in a HEP/NP Environment with Condor

    NASA Astrophysics Data System (ADS)

    Strecker-Kellogg, W.; Caramarcu, C.; Hollowell, C.; Wong, T.

    2012-12-01

    In this work we will address the development of a simple prototype virtualized worker node cluster, using Scientific Linux 6.x as a base OS, KVM and the libvirt API for virtualization, and the Condor batch software to manage virtual machines. The discussion in this paper provides details on our experience with building, configuring, and deploying the various components from bare metal, including the base OS, creation and distribution of the virtualized OS images and the integration of batch services with the virtual machines. Our focus was on simplicity and interoperability with our existing architecture.
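    Creating and inspecting KVM guests through the libvirt API, as in the setup described, can be sketched with the libvirt Python bindings. The connection URI and the domain XML below are illustrative placeholders, not the configuration used at the site.

    ```python
    import libvirt

    # Minimal, illustrative domain definition; a real worker-node image would be far richer.
    DOMAIN_XML = """
    <domain type='kvm'>
      <name>worker-node-01</name>
      <memory unit='MiB'>2048</memory>
      <vcpu>2</vcpu>
      <os><type arch='x86_64'>hvm</type></os>
      <devices>
        <disk type='file' device='disk'>
          <source file='/var/lib/libvirt/images/worker-node-01.qcow2'/>
          <target dev='vda' bus='virtio'/>
        </disk>
      </devices>
    </domain>
    """

    conn = libvirt.open("qemu:///system")     # connect to the local hypervisor
    dom = conn.createXML(DOMAIN_XML, 0)       # define and start a transient guest
    print("started:", dom.name(), "id:", dom.ID())

    # List all currently known domains and their state codes.
    for d in conn.listAllDomains():
        print(d.name(), d.state())

    conn.close()
    ```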

  20. The effective use of virtualization for selection of data centers in a cloud computing environment

    NASA Astrophysics Data System (ADS)

    Kumar, B. Santhosh; Parthiban, Latha

    2018-04-01

    Data centers are the places which consist of networks of remote servers to store, access and process data. Cloud computing is a technology where users worldwide submit tasks and service providers direct the requests to the data centers that are responsible for executing them. The servers in the data centers need to employ virtualization so that multiple tasks can be executed simultaneously. In this paper we propose an algorithm for data center selection based on the energy of the virtual machines created on each server. The virtualization energy of each server is calculated, and the total energy of a data center is obtained by summing the individual server energies. Submitted tasks are routed to the data center with the least energy consumption, which minimizes the operational expenses of the service provider.
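    A minimal sketch of the selection rule described (sum the per-server VM energies, route to the data center with the lowest total) is shown below; the energy figures are invented for illustration.

    ```python
    def total_energy(data_center):
        """Total virtualization energy of a data center: sum over its servers,
        where each server's energy is the sum of the energies of its VMs."""
        return sum(sum(vm_energies) for vm_energies in data_center["servers"])

    def select_data_center(data_centers):
        """Route the task to the data center with the least total energy consumption."""
        return min(data_centers, key=total_energy)

    if __name__ == "__main__":
        # Hypothetical energy values (arbitrary units) per VM, grouped by server.
        data_centers = [
            {"name": "DC-1", "servers": [[2.5, 3.0], [1.0]]},        # total 6.5
            {"name": "DC-2", "servers": [[0.5, 0.7, 0.9], [2.0]]},   # total 4.1
        ]
        print(select_data_center(data_centers)["name"])              # DC-2
    ```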

  1. Elevating Virtual Machine Introspection for Fine-Grained Process Monitoring: Techniques and Applications

    ERIC Educational Resources Information Center

    Srinivasan, Deepa

    2013-01-01

    Recent rapid malware growth has exposed the limitations of traditional in-host malware-defense systems and motivated the development of secure virtualization-based solutions. By running vulnerable systems as virtual machines (VMs) and moving security software from inside VMs to the outside, the out-of-VM solutions securely isolate the anti-malware…

  2. Hybrid PolyLingual Object Model: An Efficient and Seamless Integration of Java and Native Components on the Dalvik Virtual Machine

    PubMed Central

    Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv

    2014-01-01

    JNI in the Android platform is often observed to have low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and the complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse CAR-compliant components in Android applications in a seamless and efficient way. A metadata injection mechanism is designed to support the automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled in the HPO-Dalvik virtual machine automatically. The experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, while requiring no JNI bridging code. PMID:25110745

  3. An incremental anomaly detection model for virtual machines.

    PubMed

    Zhang, Hancui; Chen, Shuyu; Liu, Jun; Zhou, Zhen; Wu, Tianshu

    2017-01-01

    The Self-Organizing Map (SOM) algorithm, an unsupervised learning method, has been applied to anomaly detection thanks to its capabilities of self-organization and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Moreover, cloud platforms with large numbers of virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing character, which leaves the algorithm with low accuracy and low scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large scale and highly dynamic features of virtual machines on a cloud platform. To demonstrate the effectiveness, experiments on the common benchmark KDD Cup dataset and on a real dataset have been performed. The results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on cloud platforms.
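    A sketch of one SOM training step using a weighted Euclidean distance is given below to make the WED idea concrete. The weights, grid size and learning parameters are illustrative; this is not the IISOM implementation.

    ```python
    import numpy as np

    def weighted_euclidean(x, w, feature_weights):
        """Weighted Euclidean distance between an input x and a neuron weight vector w."""
        return np.sqrt(np.sum(feature_weights * (x - w) ** 2))

    def som_train_step(grid, x, feature_weights, lr=0.1, sigma=1.0):
        """One online SOM update: find the best-matching unit, then pull its
        neighbourhood towards the input with a Gaussian neighbourhood function."""
        rows, cols, _ = grid.shape
        dists = np.array([[weighted_euclidean(x, grid[r, c], feature_weights)
                           for c in range(cols)] for r in range(rows)])
        bmu = np.unravel_index(np.argmin(dists), dists.shape)
        for r in range(rows):
            for c in range(cols):
                grid_dist2 = (r - bmu[0]) ** 2 + (c - bmu[1]) ** 2
                h = np.exp(-grid_dist2 / (2 * sigma ** 2))   # neighbourhood strength
                grid[r, c] += lr * h * (x - grid[r, c])
        return bmu

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        grid = rng.random((5, 5, 4))                         # 5x5 map, 4 metrics per VM sample
        feature_weights = np.array([1.0, 0.5, 2.0, 1.0])     # hypothetical per-metric weights
        for _ in range(100):
            som_train_step(grid, rng.random(4), feature_weights)
        print("trained map shape:", grid.shape)
    ```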

  4. An incremental anomaly detection model for virtual machines

    PubMed Central

    Zhang, Hancui; Chen, Shuyu; Liu, Jun; Zhou, Zhen; Wu, Tianshu

    2017-01-01

    The Self-Organizing Map (SOM) algorithm, an unsupervised learning method, has been applied to anomaly detection thanks to its capabilities of self-organization and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Moreover, cloud platforms with large numbers of virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing character, which leaves the algorithm with low accuracy and low scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large scale and highly dynamic features of virtual machines on a cloud platform. To demonstrate the effectiveness, experiments on the common benchmark KDD Cup dataset and on a real dataset have been performed. The results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on cloud platforms. PMID:29117245

  5. Analysis towards VMEM File of a Suspended Virtual Machine

    NASA Astrophysics Data System (ADS)

    Song, Zheng; Jin, Bo; Sun, Yongqing

    With the popularity of virtual machines, forensic investigators are challenged with more complicated situations, among which discovering evidence in virtualized environments is of significant importance. This paper mainly analyzes the file suffixed with .vmem in VMware Workstation, which stores all pseudo-physical memory in an image. The internal file structure of the .vmem file is studied and disclosed. Key information about the processes and threads of a suspended virtual machine is revealed. Further investigation into the Windows XP SP3 heap contents is conducted and a proof-of-concept tool is provided. Different methods of obtaining forensic memory images are introduced, with both advantages and limits analyzed. We conclude with an outlook.
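    As a simple illustration of working with a raw memory image such as a .vmem file, the sketch below scans an image for printable ASCII strings, a common first step before parsing process and thread structures. The file name is a placeholder and the routine is generic, not the paper's proof-of-concept tool.

    ```python
    import mmap
    import re
    import sys

    def extract_strings(path, min_len=6):
        """Yield printable ASCII strings of at least `min_len` bytes from a raw memory image."""
        pattern = re.compile(rb"[\x20-\x7e]{%d,}" % min_len)
        with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mem:
            for match in pattern.finditer(mem):
                yield match.group().decode("ascii")

    if __name__ == "__main__":
        image = sys.argv[1] if len(sys.argv) > 1 else "suspended_vm.vmem"  # placeholder path
        for s in extract_strings(image):
            if "exe" in s.lower():        # crude filter for process-related strings
                print(s)
    ```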

  6. Virtualization of open-source secure web services to support data exchange in a pediatric critical care research network.

    PubMed

    Frey, Lewis J; Sward, Katherine A; Newth, Christopher J L; Khemani, Robinder G; Cryer, Martin E; Thelen, Julie L; Enriquez, Rene; Shaoyu, Su; Pollack, Murray M; Harrison, Rick E; Meert, Kathleen L; Berg, Robert A; Wessel, David L; Shanley, Thomas P; Dalton, Heidi; Carcillo, Joseph; Jenkins, Tammara L; Dean, J Michael

    2015-11-01

    To examine the feasibility of deploying a virtual web service for sharing data within a research network, and to evaluate the impact on data consistency and quality. Virtual machines (VMs) encapsulated an open-source, semantically and syntactically interoperable secure web service infrastructure along with a shadow database. The VMs were deployed to 8 Collaborative Pediatric Critical Care Research Network Clinical Centers. Virtual web services could be deployed in hours. The interoperability of the web services reduced format misalignment from 56% to 1% and demonstrated that 99% of the data transferred consistently using the data dictionary, with 1% needing human curation. Use of virtualized open-source secure web service technology could enable direct electronic abstraction of data from hospital databases for research purposes. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  7. The HEPiX Virtualisation Working Group: Towards a Grid of Clouds

    NASA Astrophysics Data System (ADS)

    Cass, Tony

    2012-12-01

    The use of virtual machine images, as for example with Cloud services such as Amazon's Elastic Compute Cloud, is attractive for users as they have a guaranteed execution environment, something that cannot today be provided across sites participating in computing grids such as the Worldwide LHC Computing Grid. However, Grid sites often operate within computer security frameworks which preclude the use of remotely generated images. The HEPiX Virtualisation Working Group was set up with the objective of enabling the use of remotely generated virtual machine images at Grid sites and, to this end, has introduced the idea of trusted virtual machine images which are guaranteed to be secure and configurable by sites such that security policy commitments can be met. This paper describes the requirements and details of these trusted virtual machine images and presents a model for their use to facilitate the integration of Grid- and Cloud-based computing environments for High Energy Physics.

  8. Status and Roadmap of CernVM

    NASA Astrophysics Data System (ADS)

    Berzano, D.; Blomer, J.; Buncic, P.; Charalampidis, I.; Ganis, G.; Meusel, R.

    2015-12-01

    Cloud resources nowadays contribute an essential share of the resources for computing in high-energy physics. Such resources can be provided either by private or public IaaS clouds (e.g. OpenStack, Amazon EC2, Google Compute Engine) or by volunteer computers (e.g. LHC@Home 2.0). In any case, experiments need to prepare a virtual machine image that provides the execution environment for the physics application at hand. The CernVM virtual machine, since version 3, is a minimal and versatile virtual machine image capable of booting different operating systems. The virtual machine image is less than 20 megabytes in size; the actual operating system is delivered on demand by the CernVM File System. CernVM 3 has matured from a prototype into a production environment. It is used, for instance, to run LHC applications in the cloud, to tune event generators using a network of volunteer computers, and as a container for the historic Scientific Linux 5 and Scientific Linux 4 based software environments in the course of the long-term data preservation efforts of the ALICE, CMS, and ALEPH experiments. We present experience and lessons learned from the use of CernVM at scale, and provide an outlook on upcoming developments. These developments include adding support for Scientific Linux 7, the use of container virtualization such as provided by Docker, and the streamlining of virtual machine contextualization towards the cloud-init industry standard.

  9. Numerical Control/Computer Aided Manufacturing (NC/CAM), A Descom Study

    DTIC Science & Technology

    1979-07-01

    CAM machines operate directly from computers, but most get instructions in the form of punched tape. The applications of NC/CAM are virtually...Although most NC/CAM equipment is metal working, its applications include electronics manufacturing, glass making, food processing, materiel handling...drafting, woodworking, plastics and inspection, just to name a few. Numerical control, like most technologies, is an advancing and evolutionary process

  10. Spectrophores as one-dimensional descriptors calculated from three-dimensional atomic properties: applications ranging from scaffold hopping to multi-target virtual screening.

    PubMed

    Gladysz, Rafaela; Dos Santos, Fabio Mendes; Langenaeker, Wilfried; Thijs, Gert; Augustyns, Koen; De Winter, Hans

    2018-03-07

    Spectrophores are novel descriptors that are calculated from the three-dimensional atomic properties of molecules. In our current implementation, the atomic properties used to calculate spectrophores include atomic partial charges, atomic lipophilicity indices, atomic shape deviations and atomic softness properties. This approach can easily be widened to include additional atomic properties. Our methodology finds its roots in the experimental affinity fingerprinting technology developed in the 1990s by Terrapin Technologies. Here we have translated it into a purely virtual approach using artificial affinity cages and a simplified metric to calculate the interaction between these cages and the atomic properties. A typical spectrophore consists of a vector of 48 real numbers. This makes it highly suitable for the calculation of a wide range of similarity measures for use in virtual screening and for the investigation of quantitative structure-activity relationships in combination with advanced statistical approaches such as self-organizing maps, support vector machines and neural networks. In the present report we demonstrate the applicability of our methodology for scaffold hopping as well as virtual screening.
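    Because a spectrophore is simply a fixed-length vector of 48 real numbers, similarity screening reduces to vector comparisons. The sketch below ranks a small library against a query by cosine similarity; the vectors are random placeholders, not real spectrophores.

    ```python
    import numpy as np

    def cosine_similarity(a, b):
        """Cosine similarity between two descriptor vectors."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def rank_library(query, library):
        """Return library entries sorted by decreasing similarity to the query."""
        scored = [(name, cosine_similarity(query, vec)) for name, vec in library.items()]
        return sorted(scored, key=lambda item: item[1], reverse=True)

    if __name__ == "__main__":
        rng = np.random.default_rng(42)
        query = rng.random(48)                                   # 48-value spectrophore-like vector
        library = {f"mol_{i:03d}": rng.random(48) for i in range(5)}
        for name, score in rank_library(query, library):
            print(f"{name}  similarity={score:.3f}")
    ```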

  11. Job Scheduling with Efficient Resource Monitoring in Cloud Datacenter

    PubMed Central

    Loganathan, Shyamala; Mukherjee, Saswati

    2015-01-01

    Cloud computing is an on-demand computing model which uses virtualization technology to provide cloud resources to users in the form of virtual machines through the internet. Being an adaptable technology, cloud computing is an excellent alternative for organizations forming their own private clouds. Since resources are limited in these private clouds, maximizing resource utilization and guaranteeing service to the user are the ultimate goals. For that, efficient scheduling is needed. This research reports on an efficient data structure for resource management and a resource scheduling technique in a private cloud environment and discusses a cloud model. The proposed scheduling algorithm considers the types of jobs and the resource availability in its scheduling decisions. Finally, we conducted simulations using CloudSim and compared our algorithm with other existing methods, like the V-MCT and priority scheduling algorithms. PMID:26473166
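    A toy version of a scheduler that takes both job type and current resource availability into account is sketched below. The job types, resource model and placement rule are invented for illustration and are not the algorithm evaluated in the paper.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Host:
        name: str
        free_cpu: int
        free_mem_gb: int
        queue: list = field(default_factory=list)

    @dataclass
    class Job:
        name: str
        kind: str          # "cpu-bound" or "memory-bound" (illustrative job types)
        cpu: int
        mem_gb: int

    def schedule(job, hosts):
        """Pick a host that can fit the job, preferring headroom in the resource the job stresses."""
        candidates = [h for h in hosts if h.free_cpu >= job.cpu and h.free_mem_gb >= job.mem_gb]
        if not candidates:
            return None                                    # job must wait
        key = (lambda h: h.free_cpu) if job.kind == "cpu-bound" else (lambda h: h.free_mem_gb)
        best = max(candidates, key=key)
        best.free_cpu -= job.cpu
        best.free_mem_gb -= job.mem_gb
        best.queue.append(job.name)
        return best

    if __name__ == "__main__":
        hosts = [Host("h1", 8, 16), Host("h2", 4, 64)]
        jobs = [Job("j1", "cpu-bound", 4, 4), Job("j2", "memory-bound", 2, 32)]
        for job in jobs:
            placed = schedule(job, hosts)
            print(job.name, "->", placed.name if placed else "queued")
    ```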

  12. Job Scheduling with Efficient Resource Monitoring in Cloud Datacenter.

    PubMed

    Loganathan, Shyamala; Mukherjee, Saswati

    2015-01-01

    Cloud computing is an on-demand computing model which uses virtualization technology to provide cloud resources to users in the form of virtual machines through the internet. Being an adaptable technology, cloud computing is an excellent alternative for organizations forming their own private clouds. Since resources are limited in these private clouds, maximizing resource utilization and guaranteeing service to the user are the ultimate goals. For that, efficient scheduling is needed. This research reports on an efficient data structure for resource management and a resource scheduling technique in a private cloud environment and discusses a cloud model. The proposed scheduling algorithm considers the types of jobs and the resource availability in its scheduling decisions. Finally, we conducted simulations using CloudSim and compared our algorithm with other existing methods, like the V-MCT and priority scheduling algorithms.

  13. An adaptive process-based cloud infrastructure for space situational awareness applications

    NASA Astrophysics Data System (ADS)

    Liu, Bingwei; Chen, Yu; Shen, Dan; Chen, Genshe; Pham, Khanh; Blasch, Erik; Rubin, Bruce

    2014-06-01

    Space situational awareness (SSA) and defense space control capabilities are top priorities for groups that own or operate man-made spacecraft. Also, with the growing amount of space debris, there is an increased demand for contextual understanding that necessitates the capability to collect and process a vast amount of sensor data. Cloud computing, which features scalable and flexible storage and computing services, has been recognized as an ideal candidate to meet the large-data contextual challenges of SSA. Cloud computing consists of physical service providers and middleware virtual machines together with infrastructure, platform, and software as a service (IaaS, PaaS, SaaS) models. However, the typical virtual machine (VM) abstraction is on a per-operating-system basis, which is too low-level and limits the flexibility of a mission application architecture. In response to this technical challenge, a novel adaptive process-based cloud infrastructure for SSA applications is proposed in this paper. In addition, the design rationale is detailed and a prototype is examined. The SSA Cloud (SSAC) conceptual capability will potentially support space situation monitoring and tracking, object identification, and threat assessment. Lastly, the benefits of more granular and flexible allocation of cloud computing resources are illustrated for data processing and implementation considerations within a representative SSA system environment. We show that container-based virtualization performs better than hypervisor-based virtualization technology in an SSA scenario.

  14. Virtual C Machine and Integrated Development Environment for ATMS Controllers.

    DOT National Transportation Integrated Search

    2000-04-01

    The overall objective of this project is to develop a prototype virtual machine that fits on current Advanced Traffic Management Systems (ATMS) controllers and provides functionality for complex traffic operations. Prepared in cooperation with Utah S...

  15. New Web Server - the Java Version of Tempest - Produced

    NASA Technical Reports Server (NTRS)

    York, David W.; Ponyik, Joseph G.

    2000-01-01

    A new software design and development effort has produced a Java (Sun Microsystems, Inc.) version of the award-winning Tempest software (refs. 1 and 2). In 1999, the Embedded Web Technology (EWT) team received a prestigious R&D 100 Award for Tempest, Java Version. In this article, "Tempest" will refer to the Java version of Tempest, a World Wide Web server for desktop or embedded systems. Tempest was designed at the NASA Glenn Research Center at Lewis Field to run on any platform for which a Java Virtual Machine (JVM, Sun Microsystems, Inc.) exists. The JVM acts as a translator between the native code of the platform and the byte code of Tempest, which is compiled in Java. These byte code files are Java executables with a ".class" extension. Multiple byte code files can be zipped together as a "*.jar" file for more efficient transmission over the Internet. Today's popular browsers, such as Netscape (Netscape Communications Corporation) and Internet Explorer (Microsoft Corporation) have built-in Virtual Machines to display Java applets.

  16. An element search ant colony technique for solving virtual machine placement problem

    NASA Astrophysics Data System (ADS)

    Srija, J.; Rani John, Rose; Kanaga, Grace Mary, Dr.

    2017-09-01

    The data centres in a cloud environment play a key role in providing the infrastructure for ubiquitous computing, pervasive computing, mobile computing, etc. These computing paradigms try to utilize the available resources in order to provide services, so maintaining high resource utilization without wasting power has become a challenging task for researchers. In this paper we propose a direct-guidance ant colony system for effectively mapping virtual machines to physical machines with maximal resource utilization and minimal power consumption. The proposed algorithm has been compared with an existing ant colony approach for solving the virtual machine placement problem and proves to provide better results than the existing technique.
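    A generic ant-colony placement loop of the kind compared in the paper can be sketched as follows. The power model, CPU-only resource model and parameter values are invented for illustration; this is basic ACO, not the authors' direct-guidance variant.

    ```python
    import random

    def power(load):
        """Toy power model: idle cost plus a load-proportional term for any switched-on host."""
        return 100 + 150 * load if load > 0 else 0

    def construct_assignment(vms, hosts, pheromone, alpha=1.0, beta=2.0):
        """One ant builds a VM -> host assignment probabilistically."""
        free = [hosts[h]["cpu"] for h in range(len(hosts))]
        assignment = []
        for v, demand in enumerate(vms):
            choices, weights = [], []
            for h in range(len(hosts)):
                if free[h] >= demand:
                    heuristic = 1.0 / (1.0 + free[h] - demand)   # prefer tighter packing
                    choices.append(h)
                    weights.append((pheromone[v][h] ** alpha) * (heuristic ** beta))
            if not choices:
                return None, float("inf")
            h = random.choices(choices, weights=weights)[0]
            free[h] -= demand
            assignment.append(h)
        used = [hosts[h]["cpu"] - free[h] for h in range(len(hosts))]
        cost = sum(power(u / hosts[h]["cpu"]) for h, u in enumerate(used))
        return assignment, cost

    def aco_placement(vms, hosts, ants=20, iterations=50, rho=0.1):
        """Basic ACO loop with evaporation and best-solution reinforcement."""
        pheromone = [[1.0] * len(hosts) for _ in vms]
        best, best_cost = None, float("inf")
        for _ in range(iterations):
            for _ in range(ants):
                assignment, cost = construct_assignment(vms, hosts, pheromone)
                if assignment is not None and cost < best_cost:
                    best, best_cost = assignment, cost
            for v in range(len(vms)):
                for h in range(len(hosts)):
                    pheromone[v][h] *= (1 - rho)          # evaporation
            if best is not None:
                for v, h in enumerate(best):
                    pheromone[v][h] += 1.0 / best_cost    # reinforce the best placement so far
        return best, best_cost

    if __name__ == "__main__":
        random.seed(1)
        vms = [2, 4, 3, 1, 2]                        # CPU demand per VM (arbitrary units)
        hosts = [{"cpu": 8}, {"cpu": 8}, {"cpu": 8}]
        placement, cost = aco_placement(vms, hosts)
        print("placement:", placement, "estimated power:", round(cost, 1))
    ```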

  17. Performance Analysis of Ivshmem for High-Performance Computing in Virtual Machines

    NASA Astrophysics Data System (ADS)

    Ivanovic, Pavle; Richter, Harald

    2018-01-01

    High-performance computing (HPC) is rarely accomplished via virtual machines (VMs). In this paper, we present a remake of ivshmem which can change this. Ivshmem was a shared memory (SHM) between virtual machines on the same server, with SHM-access synchronization included, until about 5 years ago, when newer versions of Linux and its virtualization library libvirt evolved. We restored that SHM-access synchronization feature, because it is indispensable for HPC, and made ivshmem runnable with contemporary versions of Linux, libvirt, KVM, QEMU and especially MPICH, which is an implementation of MPI, the standard HPC communication library. Additionally, MPICH was transparently modified by us to include ivshmem, resulting in a three to ten times performance improvement compared to TCP/IP. Furthermore, we have transparently replaced MPI_PUT, a one-sided MPICH communication mechanism, by our own MPI_PUT wrapper. As a result, our ivshmem even surpasses non-virtualized SHM data transfers for block lengths greater than 512 KBytes, showing the benefits of virtualization. All improvements were possible without using SR-IOV.
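    One-sided MPI_PUT communication of the kind the paper wraps can be sketched with mpi4py as follows; this is generic MPI remote memory access usage, not the authors' modified MPICH or their wrapper.

    ```python
    # Run with e.g.: mpirun -n 2 python put_example.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Each rank exposes a small buffer as an RMA window.
    local = np.zeros(4, dtype="d")
    win = MPI.Win.Create(local, comm=comm)

    win.Fence()                                  # open the access epoch
    if rank == 0:
        payload = np.arange(4, dtype="d")
        win.Put([payload, MPI.DOUBLE], 1)        # one-sided write into rank 1's window
    win.Fence()                                  # close the epoch; data is visible at the target

    if rank == 1:
        print("rank 1 received:", local)

    win.Free()
    ```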

  18. Review of Enabling Technologies to Facilitate Secure Compute Customization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aderholdt, Ferrol; Caldwell, Blake A; Hicks, Susan Elaine

    High performance computing environments are often used for a wide variety of workloads ranging from simulation, data transformation and analysis, and complex workflows to name just a few. These systems may process data for a variety of users, often requiring strong separation between job allocations. There are many challenges to establishing these secure enclaves within the shared infrastructure of high-performance computing (HPC) environments. The isolation mechanisms in the system software are the basic building blocks for enabling secure compute enclaves. There are a variety of approaches and the focus of this report is to review the different virtualization technologies that facilitate the creation of secure compute enclaves. The report reviews current operating system (OS) protection mechanisms and modern virtualization technologies to better understand the performance/isolation properties. We also examine the feasibility of running "virtualized" computing resources as non-privileged users, and providing controlled administrative permissions for standard users running within a virtualized context. Our examination includes technologies such as Linux containers (LXC [32], Docker [15]) and full virtualization (KVM [26], Xen [5]). We categorize these different approaches to virtualization into two broad groups: OS-level virtualization and system-level virtualization. The OS-level virtualization uses containers to allow a single OS kernel to be partitioned to create Virtual Environments (VE), e.g., LXC. The resources within the host's kernel are only virtualized in the sense of separate namespaces. In contrast, system-level virtualization uses hypervisors to manage multiple OS kernels and virtualize the physical resources (hardware) to create Virtual Machines (VM), e.g., Xen, KVM. This terminology of VE and VM, detailed in Section 2, is used throughout the report to distinguish between the two different approaches to providing virtualized execution environments. As part of our technology review we analyzed several current virtualization solutions to assess their vulnerabilities. This included a review of common vulnerabilities and exposures (CVEs) for Xen, KVM, LXC and Docker to gauge their susceptibility to different attacks. The complete details are provided in Section 5 on page 33. Based on this review we concluded that system-level virtualization solutions have many more vulnerabilities than OS level virtualization solutions. As such, security mechanisms like sVirt (Section 3.3) should be considered when using system-level virtualization solutions in order to protect the host against exploits. The majority of vulnerabilities related to KVM, LXC, and Docker are in specific regions of the system. Therefore, future "zero day attacks" are likely to be in the same regions, which suggests that protecting these areas can simplify the protection of the host and maintain the isolation between users. The evaluations of virtualization technologies done thus far are discussed in Section 4. This includes experiments with 'user' namespaces in VEs, which provides the ability to isolate user privileges and allow a user to run with different UIDs within the container while mapping them to non-privileged UIDs in the host. We have identified Linux namespaces as a promising mechanism to isolate shared resources, while maintaining good performance.
In Section 4.1 we describe our tests with LXC as a non-root user and leveraging namespaces to control UID/GID mappings and support controlled sharing of parallel file-systems. We highlight several of these namespace capabilities in Section 6.2.3. The other evaluations that were performed during this initial phase of work provide baseline performance data for comparing VEs and VMs to purely native execution. In Section 4.2 we performed tests using the High-Performance Computing Conjugate Gradient (HPCCG) benchmark to establish baseline performance for a scientific application when run on the Native (host) machine in contrast with execution under Docker and KVM. Our tests verified prior studies showing roughly 2-4% overheads in application execution time & MFlops when running in hypervisor-based environments (VMs) as compared to near native performance with VEs. For more details, see Figures 4.5 (page 28), 4.6 (page 28), and 4.7 (page 29). Additionally, in Section 4.3 we include network measurements for TCP bandwidth performance over the 10GigE interface in our testbed. The Native and Docker based tests achieved >= ~9Gbits/sec, while the KVM configuration only achieved 2.5Gbits/sec (Table 4.6 on page 32). This may be a configuration issue with our KVM installation, and is a point for further testing as we refine the network settings in the testbed. The initial network tests were done using a bridged networking configuration. The report outline is as follows: - Section 1 introduces the report and clarifies the scope of the proj...
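    The UID mapping mentioned for user namespaces is configured through the kernel's /proc interface; a map line has the form "inside-uid outside-uid count". The sketch below shows the general idea of writing such a map for a process that has entered a new user namespace; the specific UID range is an arbitrary example and a real container runtime (LXC, Docker) performs this setup itself.

    ```python
    def write_uid_map(pid, inside_uid=0, outside_uid=100000, count=65536):
        """Map UID `inside_uid` inside the user namespace of process `pid` to an
        unprivileged host UID range starting at `outside_uid` (format: inside outside count)."""
        with open(f"/proc/{pid}/uid_map", "w") as f:
            f.write(f"{inside_uid} {outside_uid} {count}\n")

    # Example (requires appropriate privileges and a process already in a new user namespace):
    # write_uid_map(child_pid)
    ```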

  19. Lightweight scheduling of elastic analysis containers in a competitive cloud environment: a Docked Analysis Facility for ALICE

    NASA Astrophysics Data System (ADS)

    Berzano, D.; Blomer, J.; Buncic, P.; Charalampidis, I.; Ganis, G.; Meusel, R.

    2015-12-01

    During the last years, several Grid computing centres chose virtualization as a better way to manage diverse use cases with self-consistent environments on the same bare infrastructure. The maturity of control interfaces (such as OpenNebula and OpenStack) opened the possibility to easily change the amount of resources assigned to each use case by simply turning virtual machines on and off. Some of those private clouds use, in production, copies of the Virtual Analysis Facility, a fully virtualized and self-contained batch analysis cluster capable of expanding and shrinking automatically upon need: however, resource starvation occurs frequently as expansion has to compete with other virtual machines running long-lived batch jobs. Such batch nodes cannot relinquish their resources in a timely fashion: the more jobs they run, the longer it takes to drain them and shut them off, and making one-job virtual machines introduces a non-negligible virtualization overhead. By improving several components of the Virtual Analysis Facility we have realized an experimental "Docked" Analysis Facility for ALICE, which leverages containers instead of virtual machines for providing performance and security isolation. We will present the techniques we have used to address practical problems, such as software provisioning through CVMFS, as well as our considerations on the maturity of containers for High Performance Computing. As the abstraction layer is thinner, our Docked Analysis Facilities may feature a more fine-grained sizing, down to single-job node containers: we will show how this approach will positively impact automatic cluster resizing by deploying lightweight pilot containers instead of replacing central queue polls.

  20. The Virtual Climate Data Server (vCDS): An iRODS-Based Data Management Software Appliance Supporting Climate Data Services and Virtualization-as-a-Service in the NASA Center for Climate Simulation

    NASA Technical Reports Server (NTRS)

    Schnase, John L.; Tamkin, Glenn S.; Ripley, W. David III; Stong, Savannah; Gill, Roger; Duffy, Daniel Q.

    2012-01-01

    Scientific data services are becoming an important part of the NASA Center for Climate Simulation's mission. Our technological response to this expanding role is built around the concept of a Virtual Climate Data Server (vCDS), repetitive provisioning, image-based deployment and distribution, and virtualization-as-a-service. The vCDS is an iRODS-based data server specialized to the needs of a particular data-centric application. We use RPM scripts to build vCDS images in our local computing environment, our local Virtual Machine Environment, NASA's Nebula Cloud Services, and Amazon's Elastic Compute Cloud. Once provisioned into one or more of these virtualized resource classes, vCDSs can use iRODS's federation capabilities to create an integrated ecosystem of managed collections that is scalable and adaptable to changing resource requirements. This approach enables platform- or software-as-a-service deployment of vCDS and allows the NCCS to offer virtualization-as-a-service: a capacity to respond in an agile way to new customer requests for data services.

  1. Binary phase digital reflection holograms - Fabrication and potential applications

    NASA Technical Reports Server (NTRS)

    Gallagher, N. C., Jr.; Angus, J. C.; Coffield, F. E.; Edwards, R. V.; Mann, J. A., Jr.

    1977-01-01

    A novel technique for the fabrication of binary-phase computer-generated reflection holograms is described. By use of integrated circuit technology, the holographic pattern is etched into a silicon wafer and then aluminum coated to make a reflection hologram. Because these holograms reflect virtually all the incident radiation, they may find application in machining with high-power lasers. A number of possible modifications of the hologram fabrication procedure are discussed.

  2. Distributed Object Technology with CORBA and Java: Key Concepts and Implications.

    DTIC Science & Technology

    1997-06-01

    This power is not derived from the language per se, but from the architecture-neutral approach used by Java. The Java Virtual Machine...pattern that is focused on performance considerations, the PCo architecture also uses CORBA interface definition language (IDL) to model the

  3. A Combination Therapy of JO-I and Chemotherapy in Ovarian Cancer Models

    DTIC Science & Technology

    2013-10-01

    which consists of a 3PAR storage backend and is sharing data via a highly available NetApp storage gateway and 2 high throughput commodity storage...Environment is configured as self-service Enterprise cloud and currently hosts more than 700 virtual machines. The network infrastructure consists of...technology infrastructure and information system applications designed to integrate, automate, and standardize operations. These systems fuse state of

  4. Development of a Big Data Application Architecture for Navy Manpower, Personnel, Training, and Education

    DTIC Science & Technology

    2016-03-01

    science IT information technology JBOD just a bunch of disks JDBC java database connectivity JPME Joint Professional Military Education JSO...Joint Service Officer JVM java virtual machine MPP massively parallel processing MPTE Manpower, Personnel, Training, and Education NAVMAC Navy...external database, whether it is MySQL, Oracle, DB2, or SQL Server (Teller, 2015). Connectors optimize the data transfer by obtaining metadata

  5. Virtual prototyping of drop test using explicit analysis

    NASA Astrophysics Data System (ADS)

    Todorov, Georgi; Kamberov, Konstantin

    2017-12-01

    Increased requirements for reliability and safety, included in contemporary standards and norms, have a high impact on new product development. New numerical techniques based on virtual prototyping technology facilitate improving the product development cycle, resulting in reduced time and money spent at this stage as well as increased knowledge about certain failure mechanisms. The so-called "drop test" has become nearly a "must" step in the development of any human-operated product. This study aims to demonstrate the dynamic behaviour assessment of a structure under impact loads, based on virtual prototyping using a typical nonlinear analysis, explicit dynamics. An example is presented, based on a plastic container that is used as a cartridge for a dispenser machine exposed to various work conditions. Different drop orientations were analyzed, and critical load cases and design weaknesses were found. Several design modifications have been proposed, based on a detailed review of the analysis results.

  6. [Porting Radiotherapy Software of Varian to Cloud Platform].

    PubMed

    Zou, Lian; Zhang, Weisha; Liu, Xiangxiang; Xie, Zhao; Xie, Yaoqin

    2017-09-30

    To develop a low-cost private cloud platform for radiotherapy software. First, a private cloud platform based on OpenStack and virtual GPU hardware was built. Then, on the private cloud platform, all the Varian radiotherapy software modules were installed on virtual machines and the corresponding function configuration was completed. Finally, the software on the cloud could be accessed by a virtual desktop client. The function test results of the cloud workstation show that a cloud workstation is equivalent to an isolated physical workstation, and any client on the LAN can use the cloud workstation smoothly. The cloud platform port described in this study is economical and practical. The project not only improves the utilization rate of radiotherapy software, but also makes it possible for cloud computing technology to expand its applications into the field of radiation oncology.

  7. Efficient Checkpointing of Virtual Machines using Virtual Machine Introspection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aderholdt, Ferrol; Han, Fang; Scott, Stephen L

    Cloud computing environments rely heavily on system-level virtualization. This is due to the inherent benefits of virtualization, including fault tolerance through checkpoint/restart (C/R) mechanisms. Because clouds are the abstraction of large data centers and large data centers have a higher potential for failure, it is imperative that a C/R mechanism for such an environment provide minimal latency as well as a small checkpoint file size. Recently, there has been much research into C/R with respect to virtual machines (VMs), providing excellent solutions to reduce either checkpoint latency or checkpoint file size. However, these approaches do not provide both. This paper presents a method of checkpointing VMs by utilizing virtual machine introspection (VMI). Through the usage of VMI, we are able to determine which pages of memory within the guest are used or free and are better able to reduce the amount of pages written to disk during a checkpoint. We have validated this work by using various benchmarks to measure the latency along with the checkpoint size. With respect to checkpoint file size, our approach results in file sizes within 24% or less of the actual used memory within the guest. Additionally, the checkpoint latency of our approach is up to 52% faster than KVM's default method.
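    The core saving described (skip pages the guest is not actually using) can be sketched as below. The free-page information would in practice come from introspecting the guest's memory allocator via VMI; here it is just a Python set, and the page size and checkpoint layout are illustrative.

    ```python
    PAGE_SIZE = 4096

    def checkpoint_used_pages(memory_image, free_pages, out_path):
        """Write only used guest pages to the checkpoint file, prefixing each
        page with its page number so the image can be sparsely restored."""
        written = 0
        with open(out_path, "wb") as out:
            for page_no in range(len(memory_image) // PAGE_SIZE):
                if page_no in free_pages:          # page reported free by introspection: skip it
                    continue
                offset = page_no * PAGE_SIZE
                out.write(page_no.to_bytes(8, "little"))
                out.write(memory_image[offset:offset + PAGE_SIZE])
                written += 1
        return written

    if __name__ == "__main__":
        # Toy 16-page "guest memory" with half the pages marked free by a hypothetical VMI pass.
        memory = bytearray(16 * PAGE_SIZE)
        free = {1, 3, 5, 7, 9, 11, 13, 15}
        print("pages written:", checkpoint_used_pages(memory, free, "guest.ckpt"))
    ```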

  8. MISR Center Block Time Tool

    Atmospheric Science Data Center

    2013-04-01

      The misr_time tool calculates the block center times for MISR Level 1B2 files. This is ... version of the IDL package or by using the IDL Virtual Machine application. The IDL Virtual Machine is bundled with IDL and is ...

  9. System-Level Virtualization for High Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vallee, Geoffroy R; Naughton, III, Thomas J; Engelmann, Christian

    2008-01-01

    System-level virtualization has been a research topic since the 70's but regained popularity during the past few years because of the availability of efficient solutions such as Xen and the implementation of hardware support in commodity processors (e.g. Intel-VT, AMD-V). However, a majority of system-level virtualization projects is guided by the server consolidation market. As a result, current virtualization solutions appear to not be suitable for high performance computing (HPC), which is typically based on large-scale systems. On the other hand, there is significant interest in exploiting virtual machines (VMs) within HPC for a number of other reasons. By virtualizing the machine, one is able to run a variety of operating systems and environments as needed by the applications. Virtualization allows users to isolate workloads, improving security and reliability. It is also possible to support non-native environments and/or legacy operating environments through virtualization. In addition, it is possible to balance work loads, use migration techniques to relocate applications from failing machines, and isolate fault systems for repair. This document presents the challenges for the implementation of a system-level virtualization solution for HPC. It also presents a brief survey of the different approaches and techniques to address these challenges.

  10. The Machine / Job Features Mechanism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alef, M.; Cass, T.; Keijser, J. J.

    Within the HEPiX virtualization group and the Worldwide LHC Computing Grid’s Machine/Job Features Task Force, a mechanism has been developed which provides access to detailed information about the current host and the current job to the job itself. This allows user payloads to access meta information, independent of the current batch system or virtual machine model. The information can be accessed either locally via the filesystem on a worker node, or remotely via HTTP(S) from a webserver. This paper describes the final version of the specification from 2016 which was published as an HEP Software Foundation technical note, and the design of the implementations of this version for batch and virtual machine platforms. We discuss early experiences with these implementations and how they can be exploited by experiment frameworks.
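    A minimal sketch of how a payload might consume this mechanism is shown below: read each key as a small file under the directory given by an environment variable, or fetch the same key over HTTP(S) when the value points at a webserver. The key names used in the example (e.g. "wall_limit_secs", "hs06") are illustrative; the authoritative key set is defined in the HSF technical note mentioned above.

    ```python
    import os
    import urllib.request

    def read_feature(base, key):
        """Read one machine/job feature, either from a local directory or an HTTP(S) URL."""
        if base.startswith(("http://", "https://")):
            with urllib.request.urlopen(f"{base.rstrip('/')}/{key}") as resp:
                return resp.read().decode().strip()
        with open(os.path.join(base, key)) as f:
            return f.read().strip()

    if __name__ == "__main__":
        # The batch system or VM infrastructure is expected to set these variables.
        machine = os.environ.get("MACHINEFEATURES")
        job = os.environ.get("JOBFEATURES")
        if job:
            print("wall-clock limit (s):", read_feature(job, "wall_limit_secs"))
        if machine:
            print("machine benchmark value:", read_feature(machine, "hs06"))
    ```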

  11. The machine/job features mechanism

    NASA Astrophysics Data System (ADS)

    Alef, M.; Cass, T.; Keijser, J. J.; McNab, A.; Roiser, S.; Schwickerath, U.; Sfiligoi, I.

    2017-10-01

    Within the HEPiX virtualization group and the Worldwide LHC Computing Grid’s Machine/Job Features Task Force, a mechanism has been developed which provides access to detailed information about the current host and the current job to the job itself. This allows user payloads to access meta information, independent of the current batch system or virtual machine model. The information can be accessed either locally via the filesystem on a worker node, or remotely via HTTP(S) from a webserver. This paper describes the final version of the specification from 2016 which was published as an HEP Software Foundation technical note, and the design of the implementations of this version for batch and virtual machine platforms. We discuss early experiences with these implementations and how they can be exploited by experiment frameworks.

  12. A three phase optimization method for precopy based VM live migration.

    PubMed

    Sharma, Sangeeta; Chawla, Meenu

    2016-01-01

    Virtual machine live migration is a method of moving a virtual machine across hosts within a virtualized datacenter. It provides significant benefits for administrators to manage a datacenter efficiently, and it reduces service interruption by transferring the virtual machine without stopping it at the source. However, the transfer of a large number of virtual machine memory pages results in a long migration time as well as downtime, which also affects overall system performance. This situation becomes unbearable when migration takes place over a slower network or across a long distance within a cloud. In this paper, the precopy-based virtual machine live migration method is thoroughly analyzed to identify the issues responsible for its performance drops. In order to address these issues, this paper proposes a three phase optimization (TPO) method. It works in three phases as follows: (i) reduce the transfer of memory pages in the first phase, (ii) reduce the transfer of duplicate pages by classifying frequently and non-frequently updated pages, and (iii) reduce the data sent in the last iteration of migration by applying a simple RLE compression technique. As a result, the phases significantly reduce total pages transferred, total migration time and downtime, respectively. The proposed TPO method is evaluated using different representative workloads on a Xen virtualized environment. Experimental results show that the TPO method reduces total pages transferred by 71 %, total migration time by 70 % and downtime by 3 % for higher workloads, and that it does not impose significant overhead compared to the traditional precopy method. The TPO method is also compared with other methods to show its effectiveness. The TPO and precopy methods are additionally tested with different numbers of iterations; the TPO method gives better performance even with fewer iterations.
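    Phase (iii), compressing the data sent in the final stop-and-copy iteration, can be illustrated with a byte-level run-length encoder: memory pages that are mostly zero-filled compress very well under such a scheme. This is a generic RLE sketch, not the exact encoding used in the paper.

    ```python
    def rle_encode(data: bytes) -> bytes:
        """Run-length encode a byte string as (count, value) pairs, count capped at 255."""
        out = bytearray()
        i = 0
        while i < len(data):
            run_value = data[i]
            run_len = 1
            while i + run_len < len(data) and run_len < 255 and data[i + run_len] == run_value:
                run_len += 1
            out += bytes((run_len, run_value))
            i += run_len
        return bytes(out)

    def rle_decode(encoded: bytes) -> bytes:
        """Invert rle_encode."""
        out = bytearray()
        for i in range(0, len(encoded), 2):
            count, value = encoded[i], encoded[i + 1]
            out += bytes([value]) * count
        return bytes(out)

    if __name__ == "__main__":
        # A mostly-zero 4 KiB "memory page" with a little dirty data in the middle.
        page = bytearray(4096)
        page[100:110] = b"dirtydata!"
        packed = rle_encode(bytes(page))
        assert rle_decode(packed) == bytes(page)
        print(f"page: {len(page)} bytes -> RLE: {len(packed)} bytes")
    ```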

  13. Cognitive evaluation for the diagnosis of Alzheimer's disease based on Turing Test and Virtual Environments.

    PubMed

    Fernandez Montenegro, Juan Manuel; Argyriou, Vasileios

    2017-05-01

    Alzheimer's screening tests are commonly used by doctors to diagnose the patient's condition and stage as early as possible. Most of these tests are based on pen-and-paper interaction and do not embrace the advantages provided by new technologies. This paper proposes novel Alzheimer's screening tests based on virtual environments and game principles, using new immersive technologies combined with advanced Human Computer Interaction (HCI) systems. These new tests are focused on the immersion of the patient in a virtual room, in order to mislead and deceive the patient's mind. In addition, we propose two novel variations of the Turing Test, originally proposed by Alan Turing, as a method to detect dementia. As a result, four tests are introduced, demonstrating the wide range of screening mechanisms that could be designed using virtual environments and game concepts. The proposed tests are focused on the evaluation of memory loss related to common objects, recent conversations and events; the diagnosis of problems in expressing and understanding language; and the ability to recognize abnormalities and to differentiate between virtual worlds and reality, or humans and machines. The proposed screening tests were evaluated and tested using both patients and healthy adults in a comparative study with state-of-the-art Alzheimer's screening tests. The results show the capacity of the new tests to distinguish healthy people from Alzheimer's patients. Copyright © 2017. Published by Elsevier Inc.

  14. Optimizing Virtual Network Functions Placement in Virtual Data Center Infrastructure Using Machine Learning

    NASA Astrophysics Data System (ADS)

    Bolodurina, I. P.; Parfenov, D. I.

    2018-01-01

    We have elaborated a neural network model for virtual network flow identification based on the statistical properties of flows circulating in the data center network and on characteristics describing the content of packets transmitted through network objects. This enabled us to establish the optimal set of attributes for identifying virtual network functions. Using the data obtained in our research, we developed an algorithm for optimizing the placement of virtual network functions. Our approach uses a hybrid virtualization method combining virtual machines and containers, which reduces the infrastructure load and the response time in the network of the virtual data center. The algorithmic solution is based on neural networks, which allows it to scale to any number of network function copies.
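
    As a rough illustration of the flow-identification step, the sketch below trains a small feed-forward network on hypothetical per-flow statistics and predicts a class for a new flow. The feature set, class labels, and sample values are assumptions for illustration only, not the attribute set established in the paper.

    ```python
    # Toy neural-network flow classifier (illustrative features and labels).
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Each row: [mean packet size, packet rate, flow duration, bytes transferred]
    X_train = np.array([
        [1400.0, 900.0, 12.0, 1.5e7],   # bulk-transfer-style flow
        [200.0,  50.0, 300.0, 3.0e6],   # long-lived control flow
        [800.0, 400.0,   5.0, 1.6e6],   # interactive flow
    ])
    y_train = ["storage", "management", "web"]  # hypothetical VNF classes

    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    clf.fit(X_train, y_train)

    # Predicted class for a new flow (toy data, so the output is only illustrative).
    print(clf.predict([[1350.0, 850.0, 10.0, 1.2e7]]))
    ```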

  15. Consciousness and the Invention of Morel

    PubMed Central

    Perogamvros, Lampros

    2013-01-01

    A scientific study of consciousness should take into consideration both objective and subjective measures of conscious experiences. To this date, very few studies have tried to integrate third-person data, or data about the neurophysiological correlates of conscious states, with first-person data, or data about subjective experience. Inspired by Morel's invention (Casares, 1940), a literary machine capable of reproducing sensory-dependent external reality, this article suggests that combination of virtual reality techniques and brain reading technologies, that is, decoding of conscious states by brain activity alone, can offer this integration. It is also proposed that the multimodal, simulating, and integrative capacities of the dreaming brain render it an “endogenous” Morel's machine, which can potentially be used in studying consciousness, but not always in a reliable way. Both the literary machine and dreaming could contribute to a better understanding of conscious states. PMID:23467765

  16. A location selection policy of live virtual machine migration for power saving and load balancing.

    PubMed

    Zhao, Jia; Ding, Yan; Xu, Gaochao; Hu, Liang; Dong, Yushuang; Fu, Xiaodong

    2013-01-01

    The green cloud data center has become a research hotspot of virtualized cloud computing architecture, and load balancing has also been one of the most important goals in cloud data centers. Since live virtual machine (VM) migration technology is widely used and studied in cloud computing, we have focused on the location selection (migration policy) of live VM migration for power saving and load balancing. We propose a novel approach, MOGA-LS, a heuristic and self-adaptive multiobjective optimization algorithm based on an improved genetic algorithm (GA). This paper presents the specific design and implementation of MOGA-LS, including the design of the genetic operators, fitness values, and elitism. We introduce Pareto dominance theory and the simulated annealing (SA) idea into MOGA-LS and present the specific process for obtaining the final solution; thus, the whole approach achieves a long-term, efficient optimization for power saving and load balancing. The experimental results demonstrate that MOGA-LS evidently reduces the total incremental power consumption, better protects the performance of VM migration, and achieves balancing of the system load compared with existing research, making the result of live VM migration more effective and meaningful.
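
    For orientation, the sketch below runs a plain genetic search over VM-to-host placements with a power term and a load-imbalance term. It is deliberately simplified: MOGA-LS uses Pareto dominance and a simulated-annealing-inspired mechanism rather than the single weighted fitness used here, and the loads and power model are invented for the example.

    ```python
    # Simplified genetic search for a VM-to-host placement (not MOGA-LS itself).
    import random

    random.seed(0)
    VM_LOAD = [0.2, 0.4, 0.1, 0.3, 0.25]    # CPU demand of each migrating VM (assumed)
    HOST_LOAD = [0.5, 0.3, 0.6]             # current utilisation of each host (assumed)
    IDLE_POWER, PEAK_POWER = 100.0, 250.0   # simple linear power model in watts

    def power(util):
        return 0.0 if util == 0 else IDLE_POWER + (PEAK_POWER - IDLE_POWER) * util

    def evaluate(placement):
        """Return (incremental power, load imbalance) for a VM->host mapping."""
        util = HOST_LOAD[:]
        for vm, host in enumerate(placement):
            util[host] += VM_LOAD[vm]
        delta_power = sum(power(u) for u in util) - sum(power(u) for u in HOST_LOAD)
        imbalance = max(util) - min(util)
        return delta_power, imbalance

    def fitness(placement):
        dp, imb = evaluate(placement)
        return dp + 100.0 * imb             # weighted sum used here for brevity only

    def genetic_search(pop_size=30, generations=50):
        pop = [[random.randrange(len(HOST_LOAD)) for _ in VM_LOAD]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness)
            elite = pop[:pop_size // 5]                  # elitism
            children = []
            while len(elite) + len(children) < pop_size:
                a, b = random.sample(elite, 2)
                cut = random.randrange(1, len(VM_LOAD))  # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < 0.2:                # mutation
                    child[random.randrange(len(child))] = random.randrange(len(HOST_LOAD))
                children.append(child)
            pop = elite + children
        return min(pop, key=fitness)

    best = genetic_search()
    print("placement:", best, "objectives:", evaluate(best))
    ```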

  17. A Location Selection Policy of Live Virtual Machine Migration for Power Saving and Load Balancing

    PubMed Central

    Xu, Gaochao; Hu, Liang; Dong, Yushuang; Fu, Xiaodong

    2013-01-01

    The green cloud data center has become a research hotspot of virtualized cloud computing architecture, and load balancing has also been one of the most important goals in cloud data centers. Since live virtual machine (VM) migration technology is widely used and studied in cloud computing, we have focused on the location selection (migration policy) of live VM migration for power saving and load balancing. We propose a novel approach, MOGA-LS, a heuristic and self-adaptive multiobjective optimization algorithm based on an improved genetic algorithm (GA). This paper presents the specific design and implementation of MOGA-LS, including the design of the genetic operators, fitness values, and elitism. We introduce Pareto dominance theory and the simulated annealing (SA) idea into MOGA-LS and present the specific process for obtaining the final solution; thus, the whole approach achieves a long-term, efficient optimization for power saving and load balancing. The experimental results demonstrate that MOGA-LS evidently reduces the total incremental power consumption, better protects the performance of VM migration, and achieves balancing of the system load compared with existing research, making the result of live VM migration more effective and meaningful. PMID:24348165

  18. Protection of Mission-Critical Applications from Untrusted Execution Environment: Resource Efficient Replication and Migration of Virtual Machines

    DTIC Science & Technology

    2015-09-28

    the performance of log-and-replay can degrade significantly for VMs configured with multiple virtual CPUs, since the shared memory communication... whether based on checkpoint replication or log-and-replay, existing HA approaches use in-memory backups. The backup VM sits in the memory of a... efficiently. Subject terms: high-availability virtual machines, live migration, memory and traffic overheads, application suspension, Java

  19. Fast 3D NIR systems for facial measurement and lip-reading

    NASA Astrophysics Data System (ADS)

    Brahm, Anika; Ramm, Roland; Heist, Stefan; Rulff, Christian; Kühmstedt, Peter; Notni, Gunther

    2017-05-01

    Structured-light projection is a well-established optical method for the non-destructive, contactless three-dimensional (3D) measurement of object surfaces. In particular, there is a great demand for accurate and fast 3D scans of human faces or facial regions of interest in medicine, safety, face modeling, games, virtual life, and entertainment. New developments in facial expression detection and machine lip-reading can be used for communication tasks, future machine control, or human-machine interaction. In such cases, 3D data may offer more detail than 2D images, which can help to increase the power of current facial analysis algorithms. In this contribution, we present new 3D sensor technologies based on three different near-infrared projection technologies in combination with a stereo vision setup of two cameras. We explain the optical principles of an NIR GOBO projector, an array projector, and a modified multi-aperture projection method and compare their performance parameters. Further, we show experimental measurement results of applications where we realized fast, accurate, and irritation-free measurements of human faces.

  20. Logical optical line terminal technologies towards flexible and highly reliable metro- and access-integrated networks

    NASA Astrophysics Data System (ADS)

    Okamoto, Satoru; Sato, Takehiro; Yamanaka, Naoaki

    2017-01-01

    In this paper, flexible and highly reliable metro- and access-integrated networks based on network virtualization and software-defined networking technologies are presented. Logical optical line terminal (L-OLT) technologies and active optical distribution networks (ODNs) are the key to introducing flexibility and high reliability into metro- and access-integrated networks. In the Elastic Lambda Aggregation Network (EλAN) project, which was started in 2012, the concept of the programmable optical line terminal (P-OLT) has been proposed. The role of the P-OLT is to provide, from a single OLT box, multiple network services that have different protocols and quality-of-service requirements. The accommodated services are Internet access, mobile front-haul/back-haul, data-center access, and leased lines. L-OLTs are configured within the P-OLT box to support the functions required for each network service. Multiple P-OLTs and programmable optical network units (P-ONUs) are connected by the active ODN. Optical access paths with flexible capacity are set up on the ODN to provide network services from an L-OLT to logical ONUs (L-ONUs). The L-OLT to L-ONU path on the active ODN provides a logical connection, so virtualization technologies can be introduced. One example is moving an L-OLT from one P-OLT to another like a virtual machine; this movement is called L-OLT migration. L-OLT migration provides flexible and reliable network functions such as energy saving, by aggregating L-OLTs into a limited number of P-OLTs, and network-wide optical access path restoration. Other L-OLT virtualization technologies and experimental results are also discussed in the paper.

  1. Math Machines: Using Actuators in Physics Classes

    NASA Astrophysics Data System (ADS)

    Thomas, Frederick J.; Chaney, Robert A.; Gruesbeck, Marta

    2018-01-01

    Probeware (sensors combined with data-analysis software) is a well-established part of physics education. In engineering and technology, sensors are frequently paired with actuators—motors, heaters, buzzers, valves, color displays, medical dosing systems, and other devices that are activated by electrical signals to produce intentional physical change. This article describes how a 20-year project aimed at better integration of the STEM disciplines (science, technology, engineering and mathematics) uses brief actuator activities in physics instruction. Math Machines "actionware" includes software and hardware that convert virtually any free-form, time-dependent algebraic function into the dynamic actions of a stepper motor, servo motor, or RGB (red, green, blue) color mixer. With wheels and a platform, the stepper motor becomes LACI, a programmable vehicle. Adding a low-power laser module turns the servo motor into a programmable Pointer. Adding a gear and platform can transform the Pointer into an earthquake simulator.
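
    The core trick of driving a stepper motor from an arbitrary time-dependent function can be sketched as follows; the drive function, step resolution, and control period are illustrative choices, not the Math Machines actionware itself.

    ```python
    # Sample a time-dependent angle function and emit stepper-motor step counts.
    import math

    STEPS_PER_REV = 200          # a typical 1.8-degree stepper motor (assumed)
    DT = 0.05                    # control period in seconds (assumed)

    def angle_deg(t):
        """Example free-form drive function: a decaying oscillation."""
        return 90.0 * math.exp(-0.2 * t) * math.sin(2.0 * math.pi * 0.5 * t)

    def step_schedule(duration):
        """Yield (time, steps to issue) pairs approximating angle_deg(t)."""
        last_steps = 0
        t = 0.0
        while t <= duration:
            target = round(angle_deg(t) / 360.0 * STEPS_PER_REV)
            yield t, target - last_steps   # positive = one direction, negative = the other
            last_steps = target
            t += DT

    for t, steps in step_schedule(2.0):
        if steps:
            print(f"t={t:4.2f}s  issue {steps:+d} steps")
    ```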

  2. RHE: A JVM Courseware

    ERIC Educational Resources Information Center

    Liu, S.; Tang, J.; Deng, C.; Li, X.-F.; Gaudiot, J.-L.

    2011-01-01

    Java Virtual Machine (JVM) education has become essential in training embedded software engineers as well as virtual machine researchers and practitioners. However, due to the lack of suitable instructional tools, it is difficult for students to obtain any kind of hands-on experience and to attain any deep understanding of JVM design. To address…

  3. Avatars and virtual agents – relationship interfaces for the elderly

    PubMed Central

    2017-01-01

    In the Digital Era, the authors witness a change in the relationship between the patient and the caregiver or the Health Maintenance Organizations providing health services. Another trend is the use of various technologies to increase the effectiveness and quality of health services across all primary and secondary users. These technologies range from telemedicine systems, decision-making tools, and online self-service applications to virtual agents, all providing information and assistance. The common thread among all these digital implementations is that they require human-machine interfaces. These interfaces must be interactive, user-friendly, and inviting, to create user involvement and incentives for cooperation. The challenge is to design interfaces that best fit the target users and enable smooth interaction, especially for elderly users. Avatars and virtual agents are one of the interfaces used for both home care monitoring and companionship. They are also inherently multimodal in nature and allow an intimate relationship between the elderly user and the avatar. This study discusses the need for and nature of these relationship models and the challenges of designing for the elderly. The study proposes key features for the design and evaluation of assistive applications using avatars and virtual agents for elderly users. PMID:28706725

  4. A comparative analysis of dynamic grids vs. virtual grids using the A3pviGrid framework.

    PubMed

    Shankaranarayanan, Avinas; Amaldas, Christine

    2010-11-01

    With the proliferation of quad-/multi-core microprocessors in mainstream platforms such as desktops and workstations, a large number of unused CPU cycles can be utilized for running virtual machines (VMs) as dynamic nodes in distributed environments. Grid services and their service-oriented business broker, now termed cloud computing, can deploy image-based virtualization platforms enabling agent-based resource management and dynamic fault management. In this paper we present an efficient way of utilizing heterogeneous virtual machines on idle desktops as an environment for the consumption of high-performance grid services. Spurious and exponential increases in the size of datasets are constant concerns in the medical and pharmaceutical industries due to the constant discovery and publication of large sequence databases. Traditional algorithms are not designed to handle large data sizes under sudden and dynamic changes in the execution environment, as previously discussed. This research was undertaken to compare our previous results with running the same test dataset on a virtual grid platform using virtual machines (virtualization). The implemented architecture, A3pviGrid, utilizes game-theoretic optimization and agent-based team formation (coalition) algorithms to improve scalability with respect to team formation. Due to the dynamic nature of distributed systems (as discussed in our previous work), all interactions were made local within a team, transparently. This paper is a proof of concept of an experimental mini-grid test-bed compared to running the platform on local virtual machines on a local test cluster. This was done to give every agent its own execution platform, enabling anonymity and better control of the dynamic environmental parameters. We also analyze the performance and scalability of BLAST in a multiple-virtual-node setup and present our findings. This paper is an extension of our previous research on improving the BLAST application framework using dynamic grids on virtualization platforms such as VirtualBox.

  5. Impact of Machine Virtualization on Timing Precision for Performance-critical Tasks

    NASA Astrophysics Data System (ADS)

    Karpov, Kirill; Fedotova, Irina; Siemens, Eduard

    2017-07-01

    In this paper we present a measurement study to characterize the impact of hardware virtualization on basic software timing, as well as on precise sleep operations of an operating system. We investigated how timer hardware is shared among heavily CPU-, I/O- and Network-bound tasks on a virtual machine as well as on the host machine. VMware ESXi and QEMU/KVM have been chosen as commonly used examples of hypervisor- and host-based models. Based on statistical parameters of retrieved distributions, our results provide a very good estimation of timing behavior. It is essential for real-time and performance-critical applications such as image processing or real-time control.
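
    A measurement of this kind can be approximated with a short script that repeatedly requests a fixed sleep and records how far the observed interval deviates from the request, run once on the host and once inside a VM to compare the distributions. The interval and sample count below are illustrative, not the paper's exact methodology.

    ```python
    # Measure sleep-precision jitter (illustrative parameters).
    import statistics
    import time

    REQUESTED_NS = 1_000_000   # request a 1 ms sleep
    SAMPLES = 1000

    def measure_sleep_jitter():
        deviations = []
        for _ in range(SAMPLES):
            start = time.perf_counter_ns()
            time.sleep(REQUESTED_NS / 1e9)
            elapsed = time.perf_counter_ns() - start
            deviations.append(elapsed - REQUESTED_NS)
        return deviations

    dev = measure_sleep_jitter()
    print(f"mean overshoot: {statistics.mean(dev) / 1e3:.1f} us, "
          f"stdev: {statistics.stdev(dev) / 1e3:.1f} us, "
          f"max: {max(dev) / 1e3:.1f} us")
    ```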

  6. Using blackmail, bribery, and guilt to address the tragedy of the virtual intellectual commons

    NASA Astrophysics Data System (ADS)

    Griffith, P. C.; Cook, R. B.; Wilson, B. E.; Gentry, M. J.; Horta, L. M.; McGroddy, M.; Morrell, A. L.; Wilcox, L. E.

    2008-12-01

    One goal of the NSF's vision for 21st Century Cyberinfrastructure is to create a virtual intellectual commons for the scientific community where advanced technologies perpetuate transformation of this community's productivity and capabilities. The metadata describing scientific observations, like the first paragraph of a news story, should answer the questions who? what? why? where? when? and how?, making them discoverable, comprehensible, contextualized, exchangeable, and machine-readable. Investigators who create good scientific metadata increase the scientific value of their observations within such a virtual intellectual commons. But the tragedy of this commons arises when investigators wish to receive without giving in return. The authors of this talk will describe how they have used combinations of blackmail, bribery, and guilt to motivate good behavior by investigators participating in two major scientific programs (NASA's component of the Large-scale Biosphere-Atmosphere Experiment in Amazonia; and the US Climate Change Science Program's North American Carbon Program).

  7. Design of Xen Hybrid Multiple Police Model

    NASA Astrophysics Data System (ADS)

    Sun, Lei; Lin, Renhao; Zhu, Xianwei

    2017-10-01

    Virtualization technology has attracted more and more attention. As a popular open-source virtualization tool, Xen is used more and more frequently, and XSM, the Xen security model, has also received widespread attention. XSM does not establish a safety status classification, and it uses the virtual machine as the managed object, making Dom0 a unique administrative domain that does not satisfy the principle of least privilege. To address these issues, we design a hybrid multiple policy model named SV_HMPMD that organically integrates multiple single security policy models, including DTE, RBAC, and BLP. It can fulfill the confidentiality and integrity requirements of a security model and apply different granularities to different domains. In order to improve BLP's practicability, the model introduces multi-level security labels. In order to divide privileges in detail, we combine DTE with RBAC. In order to avoid oversized privileges, we limit the privileges of Dom0.

  8. Hands-free human-machine interaction with voice

    NASA Astrophysics Data System (ADS)

    Juang, B. H.

    2004-05-01

    Voice is a natural communication interface between a human and a machine. The machine, when placed in today's communication networks, may be configured to provide automation to save substantial operating cost, as demonstrated in AT&T's VRCP (Voice Recognition Call Processing), or to facilitate intelligent services, such as virtual personal assistants, to enhance individual productivity. These intelligent services often need to be accessible anytime, anywhere (e.g., in cars when the user is in a hands-busy-eyes-busy situation, or during meetings where constantly talking to a microphone is either undesirable or impossible), and thus call for advanced signal processing and automatic speech recognition techniques which support what we call ``hands-free'' human-machine communication. These techniques entail a broad spectrum of technical ideas, ranging from the use of directional microphones and acoustic echo cancellation to robust speech recognition. In this talk, we highlight a number of key techniques that were developed for hands-free human-machine communication in the mid-1990s after Bell Labs became a unit of Lucent Technologies. A video clip will be played to demonstrate the accomplishment.

  9. Review on CNC-Rapid Prototyping

    NASA Astrophysics Data System (ADS)

    Z, M. Nafis O.; Y, Nafrizuan M.; A, Munira M.; J, Kartina

    2012-09-01

    This article reviews developments of Computerized Numerical Control (CNC) technology in the rapid prototyping process. Rapid prototyping (RP) can be classified into three major groups: subtractive, additive, and virtual. CNC rapid prototyping falls under the subtractive category, which involves material removal from a workpiece that is larger than the final part. Richard Wysk established the use of CNC machines for rapid prototyping using sets of 2½-D tool paths from various orientations about a rotary axis to machine parts without refixturing. Since then, there have been a few developments of this process, mainly aimed at optimizing the operation and increasing the process capabilities so that it stands on equal footing with common additive types of RP. These developments include the integration of machining and deposition processes (hybrid RP), the adaptation of RP to conventional machines, and the optimization of the CNC rapid prototyping process based on controlled parameters. The article concludes that the CNC rapid prototyping research area has vast space for improvement, as in conventional machining processes. Further developments and findings will enhance the usage of this method and minimize the limitations of the current approach to building a prototype.

  10. Software architecture standard for simulation virtual machine, version 2.0

    NASA Technical Reports Server (NTRS)

    Sturtevant, Robert; Wessale, William

    1994-01-01

    The Simulation Virtual Machine (SVM) is an Ada architecture which eases the effort involved in real-time software maintenance and sustaining engineering. The Software Architecture Standard defines the infrastructure from which all the simulation models are built. SVM was developed for and used in the Space Station Verification and Training Facility.

  11. CernVM WebAPI - Controlling Virtual Machines from the Web

    NASA Astrophysics Data System (ADS)

    Charalampidis, I.; Berzano, D.; Blomer, J.; Buncic, P.; Ganis, G.; Meusel, R.; Segal, B.

    2015-12-01

    Lately, there has been a trend in scientific projects to look for computing resources in the volunteering community. In addition, to reduce the development effort required to port the scientific software stack to all known platforms, the use of Virtual Machines (VMs) is becoming increasingly popular. Unfortunately, their use further complicates software installation and operation, restricting the volunteer audience to sufficiently expert people. CernVM WebAPI is a software solution addressing this specific case in a way that opens up wide new application opportunities. It offers a very simple API for setting up, controlling and interfacing with a VM instance on the user's computer, while at the same time relieving the user of all the burden of downloading, installing and configuring the hypervisor. WebAPI comes with a lightweight JavaScript library that guides the user through the application installation process. Malicious usage is prevented by offering a per-domain PKI validation mechanism. In this contribution we will overview this new technology, discuss its security features and examine some test cases where it is already in use.

  12. Faster, Less Expensive Dies Using RSP Tooling

    NASA Astrophysics Data System (ADS)

    Knirsch, James R.

    2007-08-01

    RSP Tooling is an indirect spray form additive process that can produce production tooling for virtually any forming process and from virtually any metal. In the past 24 months a significant amount of research and development has been performed. This resulted in an increase in the basic metallurgical understanding of what transpires during the rapid solidification of the metal, significant improvements in the production machine up time, ceramic developments that have improved finish, process changes that have resulted in a shorter lead time for tool delivery, and the testing of many new alloys. RSP stands for Rapid Solidification Process and is the key to the superior metallurgical properties that result from the technology. Most metals that are sprayed in the process leave the machine with the same physical properties as the same metal normally achieves through heat treatment and in some cases the properties are superior. Many new applications are being pursued including INVAR tools for aerospace composite materials, and bimetallic tools made from tool steel and beryllium copper for die casting and plastic injection molding. Recent feasibility studies have been performed with tremendous success.

  13. Runtime Performance and Virtual Network Control Alternatives in VM-Based High-Fidelity Network Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B; Perumalla, Kalyan S; Henz, Brian J

    2012-01-01

    In prior work (Yoginath and Perumalla, 2011; Yoginath, Perumalla and Henz, 2012), the motivation, challenges and issues were articulated in favor of virtual time ordering of Virtual Machines (VMs) in network simulations hosted on multi-core machines. Two major components in the overall virtualization challenge are (1) virtual timeline establishment and scheduling of VMs, and (2) virtualization of inter-VM communication. Here, we extend prior work by presenting scaling results for the first component, with experiment results on up to 128 VMs scheduled in virtual time order on a single 12-core host. We also explore the solution space of design alternatives for the second component, and present performance results from a multi-threaded, multi-queue implementation of inter-VM network control for synchronized execution with VM scheduling, incorporated in our NetWarp simulation system.

  14. minimega

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David Fritz, John Floren

    2013-08-27

    Minimega is a simple emulytics platform for creating testbeds of networked devices. The platform consists of easily deployable tools to facilitate bringing up large networks of virtual machines including Windows, Linux, and Android. Minimega attempts to allow experiments to be brought up quickly with nearly no configuration. Minimega also includes tools for simple cluster management, as well as tools for creating Linux based virtual machine images.

  15. The use of physical and virtual manipulatives in an undergraduate mechanical engineering (Dynamics) course

    NASA Astrophysics Data System (ADS)

    Pan, Edward A.

    Science, technology, engineering, and mathematics (STEM) education is a national focus. Engineering education, as part of STEM education, needs to adapt to meet the needs of the nation in a rapidly changing world. Using computer-based visualization tools and corresponding 3D printed physical objects may help nontraditional students succeed in engineering classes. This dissertation investigated how adding physical or virtual learning objects (called manipulatives) to courses that require mental visualization of mechanical systems can aid student performance. Dynamics is one such course, and tends to be taught using lecture and textbooks with static diagrams of moving systems. Students often fail to solve the problems correctly and an inability to mentally visualize the system can contribute to student difficulties. This study found no differences between treatment groups on quantitative measures of spatial ability and conceptual knowledge. There were differences between treatments on measures of mechanical reasoning ability, in favor of the use of physical and virtual manipulatives over static diagrams alone. There were no major differences in student performance between the use of physical and virtual manipulatives. Students used the physical and virtual manipulatives to test their theories about how the machines worked, however their actual time handling the manipulatives was extremely limited relative to the amount of time they spent working on the problems. Students used the physical and virtual manipulatives as visual aids when communicating about the problem with their partners, and this behavior was also seen with Traditional group students who had to use the static diagrams and gesture instead. The explanations students gave for how the machines worked provided evidence of mental simulation; however, their causal chain analyses were often flawed, probably due to attempts to decrease cognitive load. Student opinions about the static diagrams and dynamic models varied by type of model (static, physical, virtual), but were generally favorable. The Traditional group students, however, indicated that the lack of adequate representation of motion in the static diagrams was a problem, and wished they had access to the physical and virtual models.

  16. Virtual collaborative environments: programming and controlling robotic devices remotely

    NASA Astrophysics Data System (ADS)

    Davies, Brady R.; McDonald, Michael J., Jr.; Harrigan, Raymond W.

    1995-12-01

    This paper describes a technology for remote sharing of intelligent electro-mechanical devices. An architecture and actual system have been developed and tested, based on the proposed National Information Infrastructure (NII) or Information Highway, to facilitate programming and control of intelligent programmable machines (like robots, machine tools, etc.). Using appropriate geometric models, integrated sensors, video systems, and computing hardware; computer controlled resources owned and operated by different (in a geographic sense as well as legal sense) entities can be individually or simultaneously programmed and controlled from one or more remote locations. Remote programming and control of intelligent machines will create significant opportunities for sharing of expensive capital equipment. Using the technology described in this paper, university researchers, manufacturing entities, automation consultants, design entities, and others can directly access robotic and machining facilities located across the country. Disparate electro-mechanical resources will be shared in a manner similar to the way supercomputers are accessed by multiple users. Using this technology, it will be possible for researchers developing new robot control algorithms to validate models and algorithms right from their university labs without ever owning a robot. Manufacturers will be able to model, simulate, and measure the performance of prospective robots before selecting robot hardware optimally suited for their intended application. Designers will be able to access CNC machining centers across the country to fabricate prototypic parts during product design validation. An existing prototype architecture and system has been developed and proven. Programming and control of a large gantry robot located at Sandia National Laboratories in Albuquerque, New Mexico, was demonstrated from such remote locations as Washington D.C., Washington State, and Southern California.

  17. The autonomous sciencecraft constellations

    NASA Technical Reports Server (NTRS)

    Sherwood, R. L.; Chien, S.; Castano, R.; Rabideau, G.

    2003-01-01

    The Autonomous Sciencecraft Experiment (ASE) will fly onboard the Air Force TechSat 21 constellation of three spacecraft scheduled for launch in 2006. ASE uses onboard continuous planning, robust task and goal-based execution, model-based mode identification and reconfiguration, and onboard machine learning and pattern recognition to radically increase science return by enabling intelligent downlink selection and autonomous retargeting. In this paper we discuss how these AI technologies are synergistically integrated in a hybrid multi-layer control architecture to enable a virtual spacecraft science agent. Demonstration of these capabilities in a flight environment will open up tremendous new opportunities in planetary science, space physics, and earth science that would be unreachable without this technology.

  18. Fabrication of an infrared Shack-Hartmann sensor by combining high-speed single-point diamond milling and precision compression molding processes.

    PubMed

    Zhang, Lin; Zhou, Wenchen; Naples, Neil J; Yi, Allen Y

    2018-05-01

    A novel fabrication method combining high-speed single-point diamond milling and precision compression molding for the fabrication of discontinuous freeform microlens arrays was proposed. Compared with slow tool servo diamond broaching, high-speed single-point diamond milling was selected for its flexibility in the fabrication of true 3D optical surfaces with discontinuous features. The advantage of single-point diamond milling is that the surface features can be constructed sequentially by spacing the axes of a virtual spindle at arbitrary positions, based on the combination of rotational and translational motions of both the high-speed spindle and the linear slides. By employing this method, each micro-lenslet was regarded as a microstructure cell, with the axis of the virtual spindle passing through the vertex of each cell. An optimization algorithm based on minimum-area fabrication was introduced into the machining process to further increase machining efficiency. After the mold insert was machined, it was employed to replicate the microlens array onto chalcogenide glass. In the ensuing optical measurement, the self-built Shack-Hartmann wavefront sensor was proven, by both experiments and numerical simulation, to be accurate in detecting an infrared wavefront. The combined results showed that precision compression molding of chalcogenide glasses could be an economical and precise optical fabrication technology for high-volume production of infrared optics.

  19. Research on elastic resource management for multi-queue under cloud computing environment

    NASA Astrophysics Data System (ADS)

    CHENG, Zhenjing; LI, Haibo; HUANG, Qiulan; Cheng, Yaodong; CHEN, Gang

    2017-10-01

    As a new approach to managing computing resources, virtualization technology is more and more widely applied in the high-energy physics field. A virtual computing cluster based on OpenStack was built at IHEP, using HTCondor as the job queue management system. In a traditional static cluster, a fixed number of virtual machines are pre-allocated to the job queues of different experiments. However, this method cannot adapt well to the volatility of computing resource requirements. To solve this problem, an elastic computing resource management system under a cloud computing environment has been designed. This system performs unified management of virtual computing nodes on the basis of the HTCondor job queues, using dual resource thresholds as well as a quota service. A two-stage pool is designed to improve the efficiency of resource pool expansion. This paper presents several use cases of the elastic resource management system in IHEPCloud. Practical runs show that virtual computing resources dynamically expand or shrink as computing requirements change. Additionally, the CPU utilization ratio of the computing resources increased significantly when compared with traditional resource management. The system also performs well when there are multiple Condor schedulers and multiple job queues.
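
    The dual-threshold decision at the heart of such a system can be sketched as follows; the threshold values, quota handling, and function signature are assumptions for illustration, not the IHEPCloud implementation.

    ```python
    # Per-queue elastic scaling decision with dual thresholds (illustrative).
    UPPER, LOWER = 0.85, 0.30      # assumed upper and lower resource thresholds

    def elastic_decision(queued_jobs, running_jobs, pool_size, quota):
        """Return how many virtual machines to add (positive) or remove (negative)."""
        if pool_size == 0:
            return min(queued_jobs, quota) if queued_jobs else 0
        utilisation = (running_jobs + queued_jobs) / pool_size
        if utilisation > UPPER and pool_size < quota:
            # Grow toward the demand, but never beyond the experiment's quota.
            return min(queued_jobs, quota - pool_size)
        if utilisation < LOWER and running_jobs < pool_size:
            # Release idle virtual machines back to the shared cloud.
            return -(pool_size - max(running_jobs, int(pool_size * LOWER)))
        return 0

    # A busy queue grows the pool; an idle one shrinks it.
    print(elastic_decision(queued_jobs=40, running_jobs=90, pool_size=100, quota=120))  # +20
    print(elastic_decision(queued_jobs=0, running_jobs=10, pool_size=100, quota=120))   # -70
    ```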

  20. Virtual reality applied to teletesting

    NASA Astrophysics Data System (ADS)

    van den Berg, Thomas J.; Smeenk, Roland J. M.; Mazy, Alain; Jacques, Patrick; Arguello, Luis; Mills, Simon

    2003-05-01

    The activity "Virtual Reality applied to Teletesting" is related to a wider European Space Agency (ESA) initiative of cost reduction, in particular the reduction of test costs. Cost reductions for space-related projects have to be achieved in both test centre operating costs and customer company costs. This can be accomplished by increasing the automation and remote-testing ("teletesting") capabilities of the test centre. The main problems related to teletesting are a lack of situational awareness and the separation of control over the test environment. The objective of the activity is to evaluate the use of distributed computing and Virtual Reality technology to support the teletesting of a payload under vacuum conditions, and to provide a unified man-machine interface for the monitoring and control of the payload, vacuum chamber and robotics equipment. The activity includes the development and testing of a "Virtual Reality Teletesting System" (VRTS). The VRTS is deployed at one of the ESA certified test centres to perform an evaluation and test campaign using a real payload. The VRTS is entirely written in the Java programming language, using the J2EE application model. The graphical user interface runs as an applet in a Web browser, enabling easy access from virtually any place.

  1. Virtual Machine Language Controls Remote Devices

    NASA Technical Reports Server (NTRS)

    2014-01-01

    Kennedy Space Center worked with Blue Sun Enterprises, based in Boulder, Colorado, to enhance the company's virtual machine language (VML) to control the instruments on the Regolith and Environment Science and Oxygen and Lunar Volatiles Extraction mission. Now the NASA-improved VML is available for crewed and uncrewed spacecraft, and has potential applications on remote systems such as weather balloons, unmanned aerial vehicles, and submarines.

  2. A Virtual Astronomical Research Machine in No Time (VARMiNT)

    NASA Astrophysics Data System (ADS)

    Beaver, John

    2012-05-01

    We present early results of using virtual machine software to help make astronomical research computing accessible to a wider range of individuals. Our Virtual Astronomical Research Machine in No Time (VARMiNT) is an Ubuntu Linux virtual machine with free, open-source software already installed and configured (and in many cases documented). The purpose of VARMiNT is to provide a ready-to-go astronomical research computing environment that can be freely shared between researchers, or between amateur and professional, teacher and student, etc., and to circumvent the often-difficult task of configuring a suitable computing environment from scratch. Thus we hope that VARMiNT will make it easier for individuals to engage in research computing even if they have no ready access to the facilities of a research institution. We describe our current version of VARMiNT and some of the ways it is being used at the University of Wisconsin - Fox Valley, a two-year teaching campus of the University of Wisconsin System, as a means to enhance student independent study research projects and to facilitate collaborations with researchers at other locations. We also outline some future plans and prospects.

  3. Human-machine interface for a VR-based medical imaging environment

    NASA Astrophysics Data System (ADS)

    Krapichler, Christian; Haubner, Michael; Loesch, Andreas; Lang, Manfred K.; Englmeier, Karl-Hans

    1997-05-01

    Modern 3D scanning techniques like magnetic resonance imaging (MRI) or computed tomography (CT) produce high-quality images of the human anatomy. Virtual environments open new ways to display and analyze those tomograms. Compared with today's inspection of 2D image sequences, physicians are empowered to recognize spatial coherencies and examine pathological regions more easily, and diagnosis and therapy planning can be accelerated. For that purpose a powerful human-machine interface is required, which offers a variety of tools and features to enable both exploration and manipulation of the 3D data. Man-machine communication has to be intuitive and efficacious to avoid long familiarization times and to enhance familiarity with and acceptance of the interface. Hence, interaction capabilities in virtual worlds should be comparable to those in the real world to allow utilization of our natural experiences. In this paper the integration of hand gestures and visual focus, two important aspects of modern human-computer interaction, into a medical imaging environment is shown. With the presented human-machine interface, including virtual reality displaying and interaction techniques, radiologists can be supported in their work. Further, virtual environments can even alleviate communication between specialists from different fields or in educational and training applications.

  4. Dynamically allocated virtual clustering management system

    NASA Astrophysics Data System (ADS)

    Marcus, Kelvin; Cannata, Jess

    2013-05-01

    The U.S Army Research Laboratory (ARL) has built a "Wireless Emulation Lab" to support research in wireless mobile networks. In our current experimentation environment, our researchers need the capability to run clusters of heterogeneous nodes to model emulated wireless tactical networks where each node could contain a different operating system, application set, and physical hardware. To complicate matters, most experiments require the researcher to have root privileges. Our previous solution of using a single shared cluster of statically deployed virtual machines did not sufficiently separate each user's experiment due to undesirable network crosstalk, thus only one experiment could be run at a time. In addition, the cluster did not make efficient use of our servers and physical networks. To address these concerns, we created the Dynamically Allocated Virtual Clustering management system (DAVC). This system leverages existing open-source software to create private clusters of nodes that are either virtual or physical machines. These clusters can be utilized for software development, experimentation, and integration with existing hardware and software. The system uses the Grid Engine job scheduler to efficiently allocate virtual machines to idle systems and networks. The system deploys stateless nodes via network booting. The system uses 802.1Q Virtual LANs (VLANs) to prevent experimentation crosstalk and to allow for complex, private networks eliminating the need to map each virtual machine to a specific switch port. The system monitors the health of the clusters and the underlying physical servers and it maintains cluster usage statistics for historical trends. Users can start private clusters of heterogeneous nodes with root privileges for the duration of the experiment. Users also control when to shutdown their clusters.

  5. Open multi-agent control architecture to support virtual-reality-based man-machine interfaces

    NASA Astrophysics Data System (ADS)

    Freund, Eckhard; Rossmann, Juergen; Brasch, Marcel

    2001-10-01

    Projective Virtual Reality is a new and promising approach to intuitively operable man machine interfaces for the commanding and supervision of complex automation systems. The user interface part of Projective Virtual Reality heavily builds on latest Virtual Reality techniques, a task deduction component and automatic action planning capabilities. In order to realize man machine interfaces for complex applications, not only the Virtual Reality part has to be considered but also the capabilities of the underlying robot and automation controller are of great importance. This paper presents a control architecture that has proved to be an ideal basis for the realization of complex robotic and automation systems that are controlled by Virtual Reality based man machine interfaces. The architecture does not just provide a well suited framework for the real-time control of a multi robot system but also supports Virtual Reality metaphors and augmentations which facilitate the user's job to command and supervise a complex system. The developed control architecture has already been used for a number of applications. Its capability to integrate sensor information from sensors of different levels of abstraction in real-time helps to make the realized automation system very responsive to real world changes. In this paper, the architecture will be described comprehensively, its main building blocks will be discussed and one realization that is built based on an open source real-time operating system will be presented. The software design and the features of the architecture which make it generally applicable to the distributed control of automation agents in real world applications will be explained. Furthermore its application to the commanding and control of experiments in the Columbus space laboratory, the European contribution to the International Space Station (ISS), is only one example which will be described.

  6. Staghorn: An Automated Large-Scale Distributed System Analysis Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gabert, Kasimir; Burns, Ian; Elliott, Steven

    2016-09-01

    Conducting experiments on large-scale distributed computing systems is becoming significantly easier with the assistance of emulation. Researchers can now create a model of a distributed computing environment and then generate a virtual, laboratory copy of the entire system composed of potentially thousands of virtual machines, switches, and software. The use of real software, running at clock rate in full virtual machines, allows experiments to produce meaningful results without necessitating a full understanding of all model components. However, the ability to inspect and modify elements within these models is bound by the limitation that such modifications must compete with the model, either running in or alongside it. This inhibits entire classes of analyses from being conducted upon these models. We developed a mechanism to snapshot an entire emulation-based model as it is running. This allows us to "freeze time" and subsequently fork execution, replay execution, modify arbitrary parts of the model, or deeply explore the model. This snapshot includes capturing packets in transit and other input/output state along with the running virtual machines. We were able to build this system in Linux using Open vSwitch and Kernel Virtual Machines on top of Sandia's emulation platform Firewheel. This primitive opens the door to numerous subsequent analyses on models, including state space exploration, debugging distributed systems, performance optimizations, improved training environments, and improved experiment repeatability.

  7. Dynamic Data-Driven Prognostics and Condition Monitoring of On-board Electronics

    DTIC Science & Technology

    2012-08-27

    of functionality and accessibility; it is an open language, unlike Java or Visual, meaning that it is also free. It is also one of the most popular... and C# are able to run without the use of a virtual machine like Java. 4.2.1.5 Implementation: For building an OSA-CBM system, the primer... documentation [7] recommends the following steps: 1. Choose a middleware technology (DCOM, CORBA, Web Services, Java RMI, etc.). 2. Transform OSA-CBM UML

  8. Dynamic VMs placement for energy efficiency by PSO in cloud computing

    NASA Astrophysics Data System (ADS)

    Dashti, Seyed Ebrahim; Rahmani, Amir Masoud

    2016-03-01

    Recently, cloud computing has been growing fast and helping to realise other high technologies. In this paper, we propose a hierarchical architecture to satisfy both providers' and consumers' requirements for these technologies. We design a new service in the PaaS layer for scheduling consumer tasks. From the providers' perspective, the mismatch between the specifications of physical machines and user requests in the cloud leads to problems such as an energy-performance trade-off and large power consumption, which decrease profits. To guarantee the quality of service of users' tasks and reduce energy consumption, we propose a modified Particle Swarm Optimisation to reallocate the virtual machines migrated from an overloaded host. We also dynamically consolidate under-loaded hosts, which provides power savings. Simulation results in CloudSim demonstrate that, when the simulation conditions are close to a real environment, our method is able to save as much as 14% more energy, and the number of migrations and the simulation time are significantly reduced compared with previous works.
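
    As a rough sketch of the reallocation step, the following particle swarm search maps VMs evicted from an overloaded host onto candidate hosts while penalising overload and the number of active hosts. The encoding, loads, and coefficients are illustrative assumptions; the paper's modified PSO differs in its details.

    ```python
    # Minimal PSO over VM-to-host mappings (illustrative, not the paper's method).
    import random

    random.seed(1)
    VM_LOAD = [0.30, 0.25, 0.20]          # VMs evicted from the overloaded host (assumed)
    HOST_FREE = [0.35, 0.60, 0.90]        # spare capacity on candidate hosts (assumed)
    N_HOSTS, N_PART, ITERS = len(HOST_FREE), 20, 60
    W, C1, C2 = 0.6, 1.4, 1.4             # inertia and acceleration coefficients

    def decode(pos):
        """Round continuous particle positions to host indices."""
        return [min(N_HOSTS - 1, max(0, round(p))) for p in pos]

    def fitness(pos):
        placement, used = decode(pos), [0.0] * N_HOSTS
        for vm, h in enumerate(placement):
            used[h] += VM_LOAD[vm]
        overload = sum(max(0.0, used[h] - HOST_FREE[h]) for h in range(N_HOSTS))
        active = sum(1 for u in used if u > 0)   # crude proxy for extra power drawn
        return 10.0 * overload + active

    particles = [[random.uniform(0, N_HOSTS - 1) for _ in VM_LOAD] for _ in range(N_PART)]
    velocity = [[0.0] * len(VM_LOAD) for _ in range(N_PART)]
    pbest = [p[:] for p in particles]
    gbest = min(pbest, key=fitness)

    for _ in range(ITERS):
        for i, p in enumerate(particles):
            for d in range(len(VM_LOAD)):
                r1, r2 = random.random(), random.random()
                velocity[i][d] = (W * velocity[i][d]
                                  + C1 * r1 * (pbest[i][d] - p[d])
                                  + C2 * r2 * (gbest[d] - p[d]))
                p[d] = min(N_HOSTS - 1, max(0.0, p[d] + velocity[i][d]))
            if fitness(p) < fitness(pbest[i]):
                pbest[i] = p[:]
                if fitness(p) < fitness(gbest):
                    gbest = p[:]

    print("placement:", decode(gbest), "fitness:", round(fitness(gbest), 3))
    ```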

  9. Integration of an intelligent systems behavior simulator and a scalable soldier-machine interface

    NASA Astrophysics Data System (ADS)

    Johnson, Tony; Manteuffel, Chris; Brewster, Benjamin; Tierney, Terry

    2007-04-01

    As the Army's Future Combat Systems (FCS) introduce emerging technologies and new force structures to the battlefield, soldiers will increasingly face new challenges in workload management. The next generation warfighter will be responsible for effectively managing robotic assets in addition to performing other missions. Studies of future battlefield operational scenarios involving the use of automation, including the specification of existing and proposed technologies, will provide significant insight into potential problem areas regarding soldier workload. The US Army Tank Automotive Research, Development, and Engineering Center (TARDEC) is currently executing an Army technology objective program to analyze and evaluate the effect of automated technologies and their associated control devices with respect to soldier workload. The Human-Robotic Interface (HRI) Intelligent Systems Behavior Simulator (ISBS) is a human performance measurement simulation system that allows modelers to develop constructive simulations of military scenarios with various deployments of interface technologies in order to evaluate operator effectiveness. One such interface is TARDEC's Scalable Soldier-Machine Interface (SMI). The scalable SMI provides a configurable machine interface application that is capable of adapting to several hardware platforms by recognizing the physical space limitations of the display device. This paper describes the integration of the ISBS and Scalable SMI applications, which will ultimately benefit both systems. The ISBS will be able to use the Scalable SMI to visualize the behaviors of virtual soldiers performing HRI tasks, such as route planning, and the scalable SMI will benefit from stimuli provided by the ISBS simulation environment. The paper describes the background of each system and details of the system integration approach.

  10. Ergonomics Perspective in Agricultural Research: A User-Centred Approach Using CAD and Digital Human Modeling (DHM) Technologies

    NASA Astrophysics Data System (ADS)

    Patel, Thaneswer; Sanjog, J.; Karmakar, Sougata

    2016-09-01

    Computer-aided Design (CAD) and Digital Human Modeling (DHM) (specialized CAD software for virtual human representation) technologies offer unique opportunities to incorporate human factors proactively in design development. The challenges of enhancing agricultural productivity through the improvement of agricultural tools/machinery and better human-machine compatibility can be addressed by the adoption of these modern technologies. The objectives of the present work are to provide a detailed picture of CAD and DHM applications in the agricultural sector, and to find means for the wide adoption of these technologies for the design and development of cost-effective, user-friendly, efficient and safe agricultural tools/equipment and operator workplaces. An extensive literature review has been conducted to systematically organize and present the available information and to draw inferences. Although the application of various CAD software packages has momentum in agricultural research, particularly for the design and manufacturing of agricultural equipment/machinery, the use of DHM is still in its infancy in this sector. The current review discusses the reasons for the limited adoption of these technologies in the agricultural sector and the steps to be taken for their wide adoption. It also suggests possible future research directions for better ergonomic design strategies for the improvement of agricultural equipment/machines and workstations through the application of CAD and DHM.

  11. The Implications of Virtual Machine Introspection for Digital Forensics on Nonquiescent Virtual Machines

    DTIC Science & Technology

    2011-06-01

    Sample guest process listing (excerpt): gnome-keyring-d, gnome-settings-, xrdb, metacity, gnome-panel, nautilus, bonobo-activati, gnome-vfs-daemo, eggcups, gnome-volume-ma, with their process IDs.

  12. Slot Machines: Pursuing Responsible Gaming Practices for Virtual Reels and Near Misses

    ERIC Educational Resources Information Center

    Harrigan, Kevin A.

    2009-01-01

    Since 1983, slot machines in North America have used a computer and virtual reels to determine the odds. Since at least 1988, a technique called clustering has been used to create a high number of near misses, failures that are close to wins. The result is that what the player sees does not represent the underlying probabilities and randomness,…

  13. Virtual screening by a new Clustering-based Weighted Similarity Extreme Learning Machine approach

    PubMed Central

    Kudisthalert, Wasu

    2018-01-01

    Machine learning techniques are becoming popular in virtual screening tasks. One of the powerful machine learning algorithms is Extreme Learning Machine (ELM) which has been applied to many applications and has recently been applied to virtual screening. We propose the Weighted Similarity ELM (WS-ELM) which is based on a single layer feed-forward neural network in a conjunction of 16 different similarity coefficients as activation function in the hidden layer. It is known that the performance of conventional ELM is not robust due to random weight selection in the hidden layer. Thus, we propose a Clustering-based WS-ELM (CWS-ELM) that deterministically assigns weights by utilising clustering algorithms i.e. k-means clustering and support vector clustering. The experiments were conducted on one of the most challenging datasets–Maximum Unbiased Validation Dataset–which contains 17 activity classes carefully selected from PubChem. The proposed algorithms were then compared with other machine learning techniques such as support vector machine, random forest, and similarity searching. The results show that CWS-ELM in conjunction with support vector clustering yields the best performance when utilised together with Sokal/Sneath(1) coefficient. Furthermore, ECFP_6 fingerprint presents the best results in our framework compared to the other types of fingerprints, namely ECFP_4, FCFP_4, and FCFP_6. PMID:29652912
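
    The ELM-with-similarity idea can be sketched in a few lines: the hidden layer scores each query fingerprint against a set of reference fingerprints with a similarity coefficient, and the output weights are obtained in closed form with a pseudo-inverse. The Tanimoto coefficient, toy fingerprints, and random reference selection below are assumptions for illustration; CWS-ELM selects its references by clustering and applies its own weighting scheme.

    ```python
    # Toy similarity-activated extreme learning machine for screening (illustrative).
    import numpy as np

    rng = np.random.default_rng(0)

    def tanimoto(a, b):
        """Tanimoto/Jaccard similarity between two binary fingerprints."""
        both = np.sum(a & b)
        either = np.sum(a | b)
        return both / either if either else 0.0

    # Toy binary fingerprints with synthetic activity labels (1 = active).
    X = rng.integers(0, 2, size=(40, 64))
    y = (X[:, :8].sum(axis=1) > 4).astype(float)

    # Hidden "neurons" are reference fingerprints; random training rows here,
    # whereas CWS-ELM would take cluster centres instead.
    refs = X[rng.choice(len(X), size=10, replace=False)]

    def hidden(queries):
        return np.array([[tanimoto(q, r) for r in refs] for q in queries])

    H = hidden(X)
    beta = np.linalg.pinv(H) @ y        # one-shot least-squares output weights

    X_test = rng.integers(0, 2, size=(5, 64))
    scores = hidden(X_test) @ beta      # ranking scores for virtual screening
    print(np.round(scores, 3))
    ```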

  14. sRNAtoolboxVM: Small RNA Analysis in a Virtual Machine.

    PubMed

    Gómez-Martín, Cristina; Lebrón, Ricardo; Rueda, Antonio; Oliver, José L; Hackenberg, Michael

    2017-01-01

    High-throughput sequencing (HTS) data for small RNAs (noncoding RNA molecules that are 20-250 nucleotides in length) can now be routinely generated by minimally equipped wet laboratories; however, the bottleneck in HTS-based research has now shifted to the analysis of such huge amounts of data. One of the reasons is that many analysis types require a Linux environment, but computers, system administrators, and bioinformaticians entail additional costs that often cannot be afforded by small to mid-sized groups or laboratories. Web servers are an alternative that can be used if the data are not subject to privacy issues (which is very often an important concern with medical data). However, in any case they are less flexible than stand-alone programs, limiting the number of workflows and analysis types that can be carried out. We show in this protocol how virtual machines can be used to overcome these problems and limitations. sRNAtoolboxVM is a virtual machine that can be executed on all common operating systems through virtualization programs like VirtualBox or VMware, providing the user with a large number of preinstalled programs like sRNAbench for small RNA analysis, without the need to maintain additional servers and/or operating systems.

  15. Dots and dashes: art, virtual reality, and the telegraph

    NASA Astrophysics Data System (ADS)

    Ruzanka, Silvia; Chang, Ben

    2009-02-01

    Dots and Dashes is a virtual reality artwork that explores online romance over the telegraph, based on Ella Cheever Thayer's novel Wired Love - a Romance in Dots and Dashes (an Old Story Told in a New Way)1. The uncanny similarities between this story and the world of today's virtual environments provides the springboard for an exploration of a wealth of anxieties and dreams, including the construction of identities in an electronically mediated environment, the shifting boundaries between the natural and machine worlds, and the spiritual dimensions of science and technology. In this paper we examine the parallels between the telegraph networks and our current conceptions of cyberspace, as well as unique social and cultural impacts specific to the telegraph. These include the new opportunities and roles available to women in the telegraph industry and the connection between the telegraph and the Spiritualist movement. We discuss the development of the artwork, its structure and aesthetics, and the technical development of the work.

  16. Freeform object design and simultaneous manufacturing

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Zhang, Weihan; Lin, Heng; Leu, Ming C.

    2003-04-01

    Today's product design, especially consumer product design, focuses more and more on individuation, originality, and time to market. One way to meet these challenges is to use interactive and creative product design methods together with rapid prototyping/rapid tooling. This paper presents a novel Freeform Object Design and Simultaneous Manufacturing (FODSM) method that combines a natural interaction feature in the design phase with a simultaneous manufacturing feature in the prototyping phase. The natural interactive three-dimensional design environment is achieved by adopting virtual reality technology. The geometry of the designed object is defined through a process of "virtual sculpting", during which the designer can touch and visualize the designed object and can hear the virtual manufacturing environment noise. During the design process, the computer records the sculpting trajectories and automatically translates them into NC codes so as to simultaneously machine the designed part. The paper introduces the principle, implementation process, and key techniques of the new method, and compares it with other popular rapid prototyping methods.

  17. Towards a real-time interface between a biomimetic model of sensorimotor cortex and a robotic arm

    PubMed Central

    Dura-Bernal, Salvador; Chadderdon, George L; Neymotin, Samuel A; Francis, Joseph T; Lytton, William W

    2015-01-01

    Brain-machine interfaces can greatly improve the performance of prosthetics. Utilizing biomimetic neuronal modeling in brain machine interfaces (BMI) offers the possibility of providing naturalistic motor-control algorithms for control of a robotic limb. This will allow finer control of a robot, while also giving us new tools to better understand the brain’s use of electrical signals. However, the biomimetic approach presents challenges in integrating technologies across multiple hardware and software platforms, so that the different components can communicate in real-time. We present the first steps in an ongoing effort to integrate a biomimetic spiking neuronal model of motor learning with a robotic arm. The biomimetic model (BMM) was used to drive a simple kinematic two-joint virtual arm in a motor task requiring trial-and-error convergence on a single target. We utilized the output of this model in real time to drive mirroring motion of a Barrett Technology WAM robotic arm through a user datagram protocol (UDP) interface. The robotic arm sent back information on its joint positions, which was then used by a visualization tool on the remote computer to display a realistic 3D virtual model of the moving robotic arm in real time. This work paves the way towards a full closed-loop biomimetic brain-effector system that can be incorporated in a neural decoder for prosthetic control, to be used as a platform for developing biomimetic learning algorithms for controlling real-time devices. PMID:26709323
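
    The abstract mentions a UDP link between the model and the robotic arm but does not specify the message format. A minimal, hedged sketch of that kind of link is shown below, assuming a made-up packet layout (two double-precision joint angles) and a hypothetical robot-arm address; none of these details come from the paper.

    ```python
    import socket
    import struct
    import time

    ARM_ADDR = ("192.168.1.50", 5005)   # hypothetical UDP endpoint of the robotic arm

    def send_joint_angles(sock, shoulder, elbow):
        """Pack two joint angles (radians) as doubles and send them to the arm."""
        sock.sendto(struct.pack("!dd", shoulder, elbow), ARM_ADDR)

    def read_joint_feedback(sock):
        """Block until the arm reports back its current joint positions."""
        data, _ = sock.recvfrom(1024)
        return struct.unpack("!dd", data)

    if __name__ == "__main__":
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        # stream a toy trajectory from the simulated two-joint arm at ~50 Hz
        for step in range(100):
            send_joint_angles(sock, 0.01 * step, -0.02 * step)
            time.sleep(0.02)
    ```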

  18. Active tactile exploration using a brain-machine-brain interface.

    PubMed

    O'Doherty, Joseph E; Lebedev, Mikhail A; Ifft, Peter J; Zhuang, Katie Z; Shokur, Solaiman; Bleuler, Hannes; Nicolelis, Miguel A L

    2011-10-05

    Brain-machine interfaces use neuronal activity recorded from the brain to establish direct communication with external actuators, such as prosthetic arms. It is hoped that brain-machine interfaces can be used to restore the normal sensorimotor functions of the limbs, but so far they have lacked tactile sensation. Here we report the operation of a brain-machine-brain interface (BMBI) that both controls the exploratory reaching movements of an actuator and allows signalling of artificial tactile feedback through intracortical microstimulation (ICMS) of the primary somatosensory cortex. Monkeys performed an active exploration task in which an actuator (a computer cursor or a virtual-reality arm) was moved using a BMBI that derived motor commands from neuronal ensemble activity recorded in the primary motor cortex. ICMS feedback occurred whenever the actuator touched virtual objects. Temporal patterns of ICMS encoded the artificial tactile properties of each object. Neuronal recordings and ICMS epochs were temporally multiplexed to avoid interference. Two monkeys operated this BMBI to search for and distinguish one of three visually identical objects, using the virtual-reality arm to identify the unique artificial texture associated with each. These results suggest that clinical motor neuroprostheses might benefit from the addition of ICMS feedback to generate artificial somatic perceptions associated with mechanical, robotic or even virtual prostheses.

  19. Grid Application Meta-Repository System: Repository Interconnectivity and Cross-domain Application Usage in Distributed Computing Environments

    NASA Astrophysics Data System (ADS)

    Tudose, Alexandru; Terstyansky, Gabor; Kacsuk, Peter; Winter, Stephen

    Grid Application Repositories vary greatly in terms of access interface, security system, implementation technology, communication protocols and repository model. This diversity has become a significant limitation in terms of interoperability and inter-repository access. This paper presents the Grid Application Meta-Repository System (GAMRS) as a solution that offers better options for the management of Grid applications. GAMRS proposes a generic repository architecture, which allows any Grid Application Repository (GAR) to be connected to the system independent of their underlying technology. It also presents applications in a uniform manner and makes applications from all connected repositories visible to web search engines, OGSI/WSRF Grid Services and other OAI (Open Archive Initiative)-compliant repositories. GAMRS can also function as a repository in its own right and can store applications under a new repository model. With the help of this model, applications can be presented as embedded in virtual machines (VM) and therefore they can be run in their native environments and can easily be deployed on virtualized infrastructures allowing interoperability with new generation technologies such as cloud computing, application-on-demand, automatic service/application deployments and automatic VM generation.

  20. Optimization of armored fighting vehicle crew performance in a net-centric battlefield

    NASA Astrophysics Data System (ADS)

    McKeen, William P.; Espenant, Mark

    2002-08-01

    Traditional display, control and situational awareness technologies may not allow the fighting vehicle commander to take full advantage of the rich data environment made available in the net-centric battlefield of the future. Indeed, the sheer complexity and volume of available data, if not properly managed, may actually reduce crew performance by overloading or confusing the commander with irrelevant information. New techniques must be explored to understand how to present battlefield information and provide the commander with continuous high quality situational awareness without significant cognitive overhead. Control of the vehicle's many complex systems must also be addressed; the entire Soldier Machine Interface must be optimized if we are to realize the potential performance improvements. Defence Research and Development Canada (DRDC) and General Dynamics Canada Ltd. have embarked on a joint program called the Future Armoured Fighting Vehicle Systems Technology Demonstrator to explore these issues. The project is based on man-in-the-loop experimentation using virtual reality technology on a six degree-of-freedom motion platform that simulates the motion, sights and sounds inside a future armoured vehicle. The vehicle commander is provided with a virtual reality vision system to view a simulated 360 degree multi-spectrum representation of the battlespace, thus providing enhanced situational awareness. Graphic overlays with decision aid information will be added to reduce cognitive loading. Experiments will be conducted to evaluate the effectiveness of virtual control systems. The simulations are carried out in a virtual battlefield created by linking our simulation system with other simulation centers to provide a net-centric battlespace where enemy forces can be engaged in fire fights. Survivability and lethality will be measured in successive test sequences using real armoured fighting vehicle crews to optimize overall system effectiveness.

  1. Development of a 3-D Rehabilitation System for Upper Limbs Using ER Actuators in a Nedo Project

    NASA Astrophysics Data System (ADS)

    Furusho, Junji; Koyanagi, Ken'ichi; Nakanishi, Kazuhiko; Ryu, Ushio; Takenaka, Shigekazu; Inoue, Akio; Domen, Kazuhisa; Miyakoshi, Koichi

    New training methods and exercises for upper limbs rehabilitation are made possible by application of robotics and virtual reality technology. The technologies can also make quantitative evaluations and enhance the qualitative effect of training. We have joined a project managed by NEDO (New Energy and Industrial Technology Development Organization as a semi-governmental organization under the Ministry of Economy, Trade and Industry of Japan) 5-year Project, "Rehabilitation System for the Upper Limbs and Lower Limbs", and developed a 3-DOF exercise machine for upper limbs (EMUL) using ER actuators. In this paper, we also present the development of software for motion exercise trainings and some results of clinical evaluation. Moreover, it is discussed how ER actuators ensure the mechanical safety.

  2. Applying Spatial Audio to Human Interfaces: 25 Years of NASA Experience

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Wenzel, Elizabeth M.; Godfrey, Martine; Miller, Joel D.; Anderson, Mark R.

    2010-01-01

    From the perspective of human factors engineering, the inclusion of spatial audio within a human-machine interface is advantageous from several perspectives. Demonstrated benefits include the ability to monitor multiple streams of speech and non-speech warning tones using a cocktail party advantage, and for aurally-guided visual search. Other potential benefits include the spatial coordination and interaction of multimodal events, and evaluation of new communication technologies and alerting systems using virtual simulation. Many of these technologies were developed at NASA Ames Research Center, beginning in 1985. This paper reviews examples and describes the advantages of spatial sound in NASA-related technologies, including space operations, aeronautics, and search and rescue. The work has involved hardware and software development as well as basic and applied research.

  3. A Virtual Sensor for Online Fault Detection of Multitooth-Tools

    PubMed Central

    Bustillo, Andres; Correa, Maritza; Reñones, Anibal

    2011-01-01

    The installation of suitable sensors close to the tool tip on milling centres is not possible in industrial environments. It is therefore necessary to design virtual sensors for these machines to perform online fault detection in many industrial tasks. This paper presents a virtual sensor for online fault detection of multitooth tools based on a Bayesian classifier. The device that performs this task applies mathematical models that function in conjunction with physical sensors. Only two experimental variables are collected from the milling centre that performs the machining operations: the electrical power consumption of the feed drive and the time required for machining each workpiece. The task of achieving reliable signals from a milling process is especially complex when multitooth tools are used, because each kind of cutting insert in the milling centre only works on each workpiece during a certain time window. Great effort has gone into designing a robust virtual sensor that can avoid re-calibration due to, e.g., maintenance operations. The virtual sensor developed as a result of this research is successfully validated under real conditions on a milling centre used for the mass production of automobile engine crankshafts. Recognition accuracy, calculated with k-fold cross-validation, averaged 0.957 for true positives and 0.986 for true negatives. Moreover, measured accuracy was 98%, which suggests that the virtual sensor correctly identifies new cases. PMID:22163766

  4. A virtual sensor for online fault detection of multitooth-tools.

    PubMed

    Bustillo, Andres; Correa, Maritza; Reñones, Anibal

    2011-01-01

    The installation of suitable sensors close to the tool tip on milling centres is not possible in industrial environments. It is therefore necessary to design virtual sensors for these machines to perform online fault detection in many industrial tasks. This paper presents a virtual sensor for online fault detection of multitooth tools based on a Bayesian classifier. The device that performs this task applies mathematical models that function in conjunction with physical sensors. Only two experimental variables are collected from the milling centre that performs the machining operations: the electrical power consumption of the feed drive and the time required for machining each workpiece. The task of achieving reliable signals from a milling process is especially complex when multitooth tools are used, because each kind of cutting insert in the milling centre only works on each workpiece during a certain time window. Great effort has gone into designing a robust virtual sensor that can avoid re-calibration due to, e.g., maintenance operations. The virtual sensor developed as a result of this research is successfully validated under real conditions on a milling centre used for the mass production of automobile engine crankshafts. Recognition accuracy, calculated with k-fold cross-validation, averaged 0.957 for true positives and 0.986 for true negatives. Moreover, measured accuracy was 98%, which suggests that the virtual sensor correctly identifies new cases.
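
    The two abstracts above describe the approach only at a high level. As a rough, hedged illustration of a Bayesian virtual sensor built from the same two monitored variables, the sketch below trains a Gaussian naive Bayes classifier on synthetic power and cycle-time data and scores it with k-fold cross-validation; the distributions, thresholds, and labels are invented for the example and are not the paper's data.

    ```python
    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import cross_val_score

    # synthetic stand-ins for the two monitored variables:
    # feed-drive power consumption (kW) and machining time per workpiece (s)
    rng = np.random.default_rng(1)
    healthy = np.column_stack([rng.normal(5.0, 0.3, 200), rng.normal(60.0, 2.0, 200)])
    faulty  = np.column_stack([rng.normal(6.2, 0.4, 200), rng.normal(66.0, 2.5, 200)])
    X = np.vstack([healthy, faulty])
    y = np.array([0] * 200 + [1] * 200)           # 0 = tool OK, 1 = tool fault

    clf = GaussianNB()
    scores = cross_val_score(clf, X, y, cv=10)    # k-fold cross-validation, as in the paper
    print("mean accuracy:", scores.mean())
    ```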

  5. A Comprehensive Availability Modeling and Analysis of a Virtualized Servers System Using Stochastic Reward Nets

    PubMed Central

    Kim, Dong Seong; Park, Jong Sou

    2014-01-01

    It is important to assess availability of virtualized systems in IT business infrastructures. Previous work on availability modeling and analysis of the virtualized systems used a simplified configuration and assumption in which only one virtual machine (VM) runs on a virtual machine monitor (VMM) hosted on a physical server. In this paper, we show a comprehensive availability model using stochastic reward nets (SRN). The model takes into account (i) the detailed failures and recovery behaviors of multiple VMs, (ii) various other failure modes and corresponding recovery behaviors (e.g., hardware faults, failure and recovery due to Mandelbugs and aging-related bugs), and (iii) dependency between different subcomponents (e.g., between physical host failure and VMM, etc.) in a virtualized servers system. We also show numerical analysis on steady state availability, downtime in hours per year, transaction loss, and sensitivity analysis. This model provides a new finding on how to increase system availability by combining both software rejuvenations at VM and VMM in a wise manner. PMID:25165732

  6. Cloud-based opportunities in scientific computing: insights from processing Suomi National Polar-Orbiting Partnership (S-NPP) Direct Broadcast data

    NASA Astrophysics Data System (ADS)

    Evans, J. D.; Hao, W.; Chettri, S.

    2013-12-01

    The cloud is proving to be a uniquely promising platform for scientific computing. Our experience with processing satellite data using Amazon Web Services highlights several opportunities for enhanced performance, flexibility, and cost effectiveness in the cloud relative to traditional computing, for example:
    - Direct readout from a polar-orbiting satellite such as the Suomi National Polar-Orbiting Partnership (S-NPP) requires bursts of processing a few times a day, separated by quiet periods when the satellite is out of receiving range. In the cloud, by starting and stopping virtual machines in minutes, we can marshal significant computing resources quickly when needed, but not pay for them when not needed. To take advantage of this capability, we are automating a data-driven approach to the management of cloud computing resources, in which new data availability triggers the creation of new virtual machines (of variable size and processing power) which last only until the processing workflow is complete.
    - 'Spot instances' are virtual machines that run as long as one's asking price is higher than the provider's variable spot price. Spot instances can greatly reduce the cost of computing for software systems that are engineered to withstand unpredictable interruptions in service (as occurs when a spot price exceeds the asking price). We are implementing an approach to workflow management that allows data processing workflows to resume with minimal delays after temporary spot price spikes. This will allow systems to take full advantage of variably-priced 'utility computing.'
    - Thanks to virtual machine images, we can easily launch multiple, identical machines differentiated only by 'user data' containing individualized instructions (e.g., to fetch particular datasets or to perform certain workflows or algorithms). This is particularly useful when (as is the case with S-NPP data) we need to launch many very similar machines to process an unpredictable number of data files concurrently. Our experience shows the viability and flexibility of this approach to workflow management for scientific data processing.
    - Finally, cloud computing is a promising platform for distributed volunteer ('interstitial') computing, via mechanisms such as the Berkeley Open Infrastructure for Network Computing (BOINC) popularized with the SETI@Home project and others such as ClimatePrediction.net and NASA's Climate@Home. Interstitial computing faces significant challenges as commodity computing shifts from (always on) desktop computers towards smartphones and tablets (untethered and running on scarce battery power); but cloud computing offers significant slack capacity. This capacity includes virtual machines with unused RAM or underused CPUs; virtual storage volumes allocated (& paid for) but not full; and virtual machines that are paid up for the current hour but whose work is complete. We are devising ways to facilitate the reuse of these resources (i.e., cloud-based interstitial computing) for satellite data processing and related analyses.
    We will present our findings and research directions on these and related topics.
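
    A hedged sketch of the data-driven, spot-priced launch pattern described above, using boto3 against EC2; the AMI ID, instance type, and processing script are placeholders, not the authors' actual configuration.

    ```python
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    def launch_processing_vm(s3_uri):
        """Launch a spot-priced worker for one newly arrived data file, passing the
        file location through instance 'user data'; the VM shuts itself down when done."""
        user_data = "#!/bin/bash\nprocess_snpp.sh {}\nshutdown -h now\n".format(s3_uri)
        resp = ec2.run_instances(
            ImageId="ami-0123456789abcdef0",                # placeholder processing image
            InstanceType="c5.2xlarge",                      # placeholder instance size
            MinCount=1, MaxCount=1,
            UserData=user_data,
            InstanceMarketOptions={"MarketType": "spot"},   # bid on the spot market
        )
        return resp["Instances"][0]["InstanceId"]
    ```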

  7. A discrete Fourier transform for virtual memory machines

    NASA Technical Reports Server (NTRS)

    Galant, David C.

    1992-01-01

    An algebraic theory of the Discrete Fourier Transform is developed in great detail. Examination of the details of the theory leads to a computationally efficient fast Fourier transform for use on computers with virtual memory. Such an algorithm is of great use on modern desktop machines. A FORTRAN-coded version of the algorithm is given for the case when the length of the sequence of numbers to be transformed is a power of two.
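
    The paper's algorithm is a FORTRAN FFT organized for good virtual-memory locality; as a generic reference point only (without any such memory blocking), a textbook radix-2 Cooley-Tukey FFT looks like this:

    ```python
    import cmath

    def fft(x):
        """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
        n = len(x)
        if n == 1:
            return list(x)
        even, odd = fft(x[0::2]), fft(x[1::2])
        out = [0j] * n
        for k in range(n // 2):
            twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
            out[k] = even[k] + twiddle
            out[k + n // 2] = even[k] - twiddle
        return out

    print(fft([1, 2, 3, 4, 5, 6, 7, 8]))
    ```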

  8. Productive High Performance Parallel Programming with Auto-tuned Domain-Specific Embedded Languages

    DTIC Science & Technology

    2013-01-02


  9. Robust Airborne Networking Extensions (RANGE)

    DTIC Science & Technology

    2008-02-01

    The IMUNES project provides entire network stack virtualization and topology control inside a single FreeBSD machine. ... A computer with an Ethernet connection, or a Linux virtual machine on some other (e.g., Windows) operating system, should work.

  10. Lifelong personal health data and application software via virtual machines in the cloud.

    PubMed

    Van Gorp, Pieter; Comuzzi, Marco

    2014-01-01

    Personal Health Records (PHRs) should remain the lifelong property of patients, who should be able to show them conveniently and securely to selected caregivers and institutions. In this paper, we present MyPHRMachines, a cloud-based PHR system taking a radically new architectural solution to health record portability. In MyPHRMachines, health-related data and the application software to view and/or analyze it are separately deployed in the PHR system. After uploading their medical data to MyPHRMachines, patients can access them again from remote virtual machines that contain the right software to visualize and analyze them without any need for conversion. Patients can share their remote virtual machine session with selected caregivers, who will need only a Web browser to access the pre-loaded fragments of their lifelong PHR. We discuss a prototype of MyPHRMachines applied to two use cases, i.e., radiology image sharing and personalized medicine.

  11. Hypermedia = hypercommunication

    NASA Technical Reports Server (NTRS)

    Laff, Mark R.

    1990-01-01

    New hardware and software technology gave application designers the freedom to use new realism in human computer interaction. High-quality images, motion video, stereo sound and music, speech, touch, gesture provide richer data channels between the person and the machine. Ultimately, this will lead to richer communication between people with the computer as an intermediary. The whole point of hyper-books, hyper-newspapers, virtual worlds, is to transfer the concept and relationships, the 'data structure', from the mind of creator to that of user. Some of the characteristics of this rich information channel are discussed, and some examples are presented.

  12. An assembler for the MOS Technology 6502 microprocessor as implemented in jolt (TM) and KIM-1 (TM)

    NASA Technical Reports Server (NTRS)

    Lilley, R. W.

    1976-01-01

    The design of low-cost, microcomputer-based navigation receivers and the assembler are described. The development of computer software for microprocessors is materially aided by an assembler program using mnemonic variable names. The flexibility of the environment provided by IBM's Virtual Machine Facility and the Conversational Monitor System makes convenient access to the assembler possible. The implementation of the assembler for the microprocessor chip serves part of the present need and forms a model for support of other microprocessors.

  13. BESIII physical offline data analysis on virtualization platform

    NASA Astrophysics Data System (ADS)

    Huang, Q.; Li, H.; Kan, B.; Shi, J.; Lei, X.

    2015-12-01

    In this contribution, we present ongoing work which aims to benefit the BESIII computing system with higher resource utilization and more efficient job operations brought by cloud and virtualization technology with OpenStack and KVM. We begin with the architecture of the BESIII offline software to understand how it works. We mainly report the KVM performance evaluation and optimization across various hardware and kernel factors. Experimental results show that the CPU performance penalty of KVM can be reduced to approximately 3%. In addition, a performance comparison between KVM and physical machines in terms of CPU, disk I/O and network I/O is also presented. Finally, we present our development work, an adaptive cloud scheduler, which allocates and reclaims VMs dynamically according to the status of the TORQUE queue and the size of the resource pool to improve resource utilization and job processing efficiency.
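
    The adaptive scheduler is described only in outline; a toy sketch of the allocate-and-reclaim loop, with stand-in objects in place of the real TORQUE and OpenStack interfaces (all names and limits here are hypothetical), might look as follows.

    ```python
    MAX_POOL = 200    # upper bound on the virtual resource pool (hypothetical)

    class FakeQueue:
        """Stand-in for a TORQUE queue query (hypothetical interface)."""
        def __init__(self, jobs):
            self.jobs = jobs
        def count_queued(self):
            return self.jobs

    class FakeCloud:
        """Stand-in for an OpenStack pool manager (hypothetical interface)."""
        def __init__(self):
            self.vms = 0
        def count_active(self):
            return self.vms
        def boot_vms(self, n):
            self.vms += n
        def idle_vms(self, longer_than):
            return []
        def delete_vm(self, vm):
            self.vms -= 1

    def schedule_once(queue, cloud):
        """One pass of the adaptive loop: grow the pool while jobs wait, reclaim idle VMs."""
        queued, running = queue.count_queued(), cloud.count_active()
        if queued > 0 and running < MAX_POOL:
            cloud.boot_vms(min(queued, MAX_POOL - running))
        for vm in cloud.idle_vms(longer_than=600):
            cloud.delete_vm(vm)

    schedule_once(FakeQueue(jobs=35), FakeCloud())
    ```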

  14. Bioinformatics Pipelines for Targeted Resequencing and Whole-Exome Sequencing of Human and Mouse Genomes: A Virtual Appliance Approach for Instant Deployment

    PubMed Central

    Saeed, Isaam; Wong, Stephen Q.; Mar, Victoria; Goode, David L.; Caramia, Franco; Doig, Ken; Ryland, Georgina L.; Thompson, Ella R.; Hunter, Sally M.; Halgamuge, Saman K.; Ellul, Jason; Dobrovic, Alexander; Campbell, Ian G.; Papenfuss, Anthony T.; McArthur, Grant A.; Tothill, Richard W.

    2014-01-01

    Targeted resequencing by massively parallel sequencing has become an effective and affordable way to survey small to large portions of the genome for genetic variation. Despite the rapid development in open source software for analysis of such data, the practical implementation of these tools through construction of sequencing analysis pipelines still remains a challenging and laborious activity, and a major hurdle for many small research and clinical laboratories. We developed TREVA (Targeted REsequencing Virtual Appliance), making pre-built pipelines immediately available as a virtual appliance. Based on virtual machine technologies, TREVA is a solution for rapid and efficient deployment of complex bioinformatics pipelines to laboratories of all sizes, enabling reproducible results. The analyses that are supported in TREVA include: somatic and germline single-nucleotide and insertion/deletion variant calling, copy number analysis, and cohort-based analyses such as pathway and significantly mutated genes analyses. TREVA is flexible and easy to use, and can be customised by Linux-based extensions if required. TREVA can also be deployed on the cloud (cloud computing), enabling instant access without investment overheads for additional hardware. TREVA is available at http://bioinformatics.petermac.org/treva/. PMID:24752294

  15. Sensor supervision and multiagent commanding by means of projective virtual reality

    NASA Astrophysics Data System (ADS)

    Rossmann, Juergen

    1998-10-01

    When autonomous systems with multiple agents are considered, conventional control- and supervision technologies are often inadequate because the amount of information available is often presented in a way that the user is effectively overwhelmed by the displayed data. New virtual reality (VR) techniques can help to cope with this problem, because VR offers the chance to convey information in an intuitive manner and can combine supervision capabilities and new, intuitive approaches to the control of autonomous systems. In the approach taken, control and supervision issues were equally stressed and finally led to the new ideas and the general framework for Projective Virtual Reality. The key idea of this new approach for an intuitively operable man machine interface for decentrally controlled multi-agent systems is to let the user act in the virtual world, detect the changes and have an action planning component automatically generate task descriptions for the agents involved to project actions that have been carried out by users in the virtual world into the physical world, e.g. with the help of robots. Thus the Projective Virtual Reality approach is to split the job between the task deduction in the VR and the task `projection' onto the physical automation components by the automatic action planning component. Besides describing the realized projective virtual reality system, the paper will also describe in detail the metaphors and visualization aids used to present different types of (e.g. sensor-) information in an intuitively comprehensible manner.

  16. Virtualization for Cost-Effective Teaching of Assembly Language Programming

    ERIC Educational Resources Information Center

    Cadenas, José O.; Sherratt, R. Simon; Howlett, Des; Guy, Chris G.; Lundqvist, Karsten O.

    2015-01-01

    This paper describes a virtual system that emulates an ARM-based processor machine, created to replace a traditional hardware-based system for teaching assembly language. The virtual system proposed here integrates, in a single environment, all the development tools necessary to deliver introductory or advanced courses on modern assembly language…

  17. Quantitative 3-D imaging topogrammetry for telemedicine applications

    NASA Technical Reports Server (NTRS)

    Altschuler, Bruce R.

    1994-01-01

    The technology to reliably transmit high-resolution visual imagery over short to medium distances in real time has led to serious consideration of the use of telemedicine, telepresence, and telerobotics in the delivery of health care. These concepts may involve, and evolve toward: consultation from remote expert teaching centers; diagnosis; triage; real-time remote advice to the surgeon; and real-time remote surgical instrument manipulation (telerobotics with virtual reality). Further extrapolation leads to teledesign and telereplication of spare surgical parts through quantitative teleimaging of 3-D surfaces tied to CAD/CAM devices and an artificially intelligent archival data base of 'normal' shapes. The ability to generate 'topograms' or 3-D surface numerical tables of coordinate values capable of creating computer-generated virtual holographic-like displays, machine part replication, and statistical diagnostic shape assessment is critical to the progression of telemedicine. Any virtual reality simulation will remain in the 'video-game' realm until realistic dimensional and spatial relational inputs from real measurements in vivo during surgeries are added to an ever-growing statistical data archive. The challenges of managing and interpreting this 3-D data base, which would include radiographic and surface quantitative data, are considerable. As technology drives toward dynamic and continuous 3-D surface measurements, presenting millions of X, Y, Z data points per second of flexing, stretching, moving human organs, the knowledge base and interpretive capabilities of 'brilliant robots' to work as a surgeon's tireless assistants becomes imaginable. The brilliant robot would 'see' what the surgeon sees--and more, for the robot could quantify its 3-D sensing and would 'see' in a wider spectral range than humans, and could zoom its 'eyes' from the macro world to long-distance microscopy. Unerring robot hands could rapidly perform machine-aided suturing with precision micro-sewing machines, splice neural connections with laser welds, micro-bore through constricted vessels, and computer combine ultrasound, microradiography, and 3-D mini-borescopes to quickly assess and trace vascular problems in situ. The spatial relationships between organs, robotic arms, and end-effector diagnostic, manipulative, and surgical instruments would be constantly monitored by the robot 'brain' using inputs from its multiple 3-D quantitative 'eyes', remote sensing, as well as by contact and proximity force measuring devices. Methods to create accurate and quantitative 3-D topograms at continuous video data rates are described.

  18. Comparative study of state-of-the-art myoelectric controllers for multigrasp prosthetic hands.

    PubMed

    Segil, Jacob L; Controzzi, Marco; Weir, Richard F ff; Cipriani, Christian

    2014-01-01

    A myoelectric controller should provide an intuitive and effective human-machine interface that deciphers user intent in real-time and is robust enough to operate in daily life. Many myoelectric control architectures have been developed, including pattern recognition systems, finite state machines, and more recently, postural control schemes. Here, we present a comparative study of two types of finite state machines and a postural control scheme using both virtual and physical assessment procedures with seven nondisabled subjects. The Southampton Hand Assessment Procedure (SHAP) was used in order to compare the effectiveness of the controllers during activities of daily living using a multigrasp artificial hand. Also, a virtual hand posture matching task was used to compare the controllers when reproducing six target postures. The performance when using the postural control scheme was significantly better (p < 0.05) than the finite state machines during the physical assessment when comparing within-subject averages using the SHAP percent difference metric. The virtual assessment results described significantly greater completion rates (97% and 99%) for the finite state machines, but the movement time tended to be faster (2.7 s) for the postural control scheme. Our results substantiate that postural control schemes rival other state-of-the-art myoelectric controllers.

  19. A Prototype SSVEP Based Real Time BCI Gaming System

    PubMed Central

    Martišius, Ignas

    2016-01-01

    Although brain-computer interface technology is mainly designed with disabled people in mind, it can also be beneficial to healthy subjects, for example, in gaming or virtual reality systems. In this paper we discuss the typical architecture, paradigms, requirements, and limitations of electroencephalogram-based gaming systems. We have developed a prototype three-class brain-computer interface system, based on the steady state visually evoked potentials paradigm and the Emotiv EPOC headset. An online target shooting game, implemented in the OpenViBE environment, has been used for user feedback. The system utilizes wave atom transform for feature extraction, achieving an average accuracy of 78.2% using linear discriminant analysis classifier, 79.3% using support vector machine classifier with a linear kernel, and 80.5% using a support vector machine classifier with a radial basis function kernel. PMID:27051414

  20. A Prototype SSVEP Based Real Time BCI Gaming System.

    PubMed

    Martišius, Ignas; Damaševičius, Robertas

    2016-01-01

    Although brain-computer interface technology is mainly designed with disabled people in mind, it can also be beneficial to healthy subjects, for example, in gaming or virtual reality systems. In this paper we discuss the typical architecture, paradigms, requirements, and limitations of electroencephalogram-based gaming systems. We have developed a prototype three-class brain-computer interface system, based on the steady state visually evoked potentials paradigm and the Emotiv EPOC headset. An online target shooting game, implemented in the OpenViBE environment, has been used for user feedback. The system utilizes wave atom transform for feature extraction, achieving an average accuracy of 78.2% using linear discriminant analysis classifier, 79.3% using support vector machine classifier with a linear kernel, and 80.5% using a support vector machine classifier with a radial basis function kernel.
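
    As a generic, hedged illustration of the final classification stage only (not of the wave atom transform or the EPOC acquisition), the snippet below cross-validates the same three classifier types named in the abstracts on synthetic three-class feature vectors; the data and dimensions are invented for the example.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    # synthetic stand-in for wave-atom feature vectors of three SSVEP classes
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(loc=c, scale=1.0, size=(60, 40)) for c in (0.0, 0.5, 1.0)])
    y = np.repeat([0, 1, 2], 60)

    for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                      ("SVM, linear kernel", SVC(kernel="linear")),
                      ("SVM, RBF kernel", SVC(kernel="rbf"))]:
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(name, round(acc, 3))
    ```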

  1. Prospects for Evidence -Based Software Assurance: Models and Analysis

    DTIC Science & Technology

    2015-09-01

    A virtual machine is much lighter than the workstation. The virtual machine doesn't need to run anti-virus, firewalls, or intrusion prevention systems...

  2. Con-forming bodies: the interplay of machines and bodies and the implications of agency in medical imaging.

    PubMed

    Wood, Lisa A

    2016-06-01

    Attending to the material discursive constructions of the patient body within cone beam computed tomography (CBCT) imaging in radiotherapy treatments, in this paper I describe how bodies and machines co-create images. Using an analytical framework inspired by Science and Technology Studies and Feminist Technoscience, I describe the interplay between machines and bodies and the implications of materialities and agency. I argue that patients' bodies play a part in producing scans within acceptable limits of machines as set out through organisational arrangements. In doing so I argue that bodies are fabricated into the order of work prescribed and embedded within and around the CBCT system, becoming, not only the subject of resulting images, but part of that image. The scan is not therefore a representation of a passive subject (a body) but co-produced by the work of practitioners and patients who actively control (and contort) and discipline their body according to protocols and instructions and the CBCT system. In this way I suggest they are 'con-forming' the CBCT image. A Virtual Abstract of this paper can be found at: https://youtu.be/qysCcBGuNSM. © 2015 Foundation for the Sociology of Health & Illness.

  3. Final Report: Enabling Exascale Hardware and Software Design through Scalable System Virtualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bridges, Patrick G.

    2015-02-01

    In this grant, we enhanced the Palacios virtual machine monitor to increase its scalability and suitability for addressing exascale system software design issues. This included a wide range of research on core Palacios features, large-scale system emulation, fault injection, performance monitoring, and VMM extensibility. This research resulted in a large number of high-impact publications in well-known venues, the support of a number of students, and the graduation of two Ph.D. students and one M.S. student. In addition, our enhanced version of the Palacios virtual machine monitor has been adopted as a core element of the Hobbes operating system under active DOE-funded research and development.

  4. minimega v. 3.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crussell, Jonathan; Erickson, Jeremy; Fritz, David

    minimega is an emulytics platform for creating testbeds of networked devices. The platform consists of easily deployable tools to facilitate bringing up large networks of virtual machines including Windows, Linux, and Android. minimega allows experiments to be brought up quickly with almost no configuration. minimega also includes tools for simple cluster management, as well as tools for creating Linux-based virtual machines. This release of minimega includes new emulated sensors for Android devices to improve the fidelity of testbeds that include mobile devices. Emulated sensors include GPS and

  5. Phenomenology tools on cloud infrastructures using OpenStack

    NASA Astrophysics Data System (ADS)

    Campos, I.; Fernández-del-Castillo, E.; Heinemeyer, S.; Lopez-Garcia, A.; Pahlen, F.; Borges, G.

    2013-04-01

    We present a new environment for computations in particle physics phenomenology employing recent developments in cloud computing. On this environment users can create and manage "virtual" machines on which the phenomenology codes/tools can be deployed easily in an automated way. We analyze the performance of this environment based on "virtual" machines versus the utilization of physical hardware. In this way we provide a qualitative result for the influence of the host operating system on the performance of a representative set of applications for phenomenology calculations.
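
    Creating and managing such "virtual" machines in an automated way can be done, for example, with the OpenStack SDK; the cloud entry, image, and flavor names below are placeholders rather than the configuration used in the paper.

    ```python
    import openstack

    # connection details come from a clouds.yaml entry; the name here is a placeholder
    conn = openstack.connect(cloud="phenogrid")

    def boot_pheno_vm(name):
        """Boot a worker VM from a pre-built image carrying the phenomenology codes."""
        image = conn.compute.find_image("pheno-tools-2013")   # placeholder image name
        flavor = conn.compute.find_flavor("m1.large")         # placeholder flavor
        server = conn.compute.create_server(name=name, image_id=image.id,
                                            flavor_id=flavor.id)
        return conn.compute.wait_for_server(server)

    vm = boot_pheno_vm("pheno-worker-01")
    ```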

  6. Enhanced networked server management with random remote backups

    NASA Astrophysics Data System (ADS)

    Kim, Song-Kyoo

    2003-08-01

    In this paper, the model is focused on available server management in network environments. The (remote) backup servers are hooked up by VPN (Virtual Private Network) and replace broken main servers immediately. A virtual private network (VPN) is a way to use a public network infrastructure to hook up long-distance servers within a single network infrastructure. The servers can be represented as "machines", and the system then deals with unreliable main machines and random auxiliary spare (remote backup) machines. When the system performs mandatory routine maintenance, auxiliary machines are used for backups during idle periods. Unlike other existing models, the availability of auxiliary machines changes with each activation in this enhanced model. Analytically tractable results are obtained by using several mathematical techniques, and the results are demonstrated in the framework of optimized networked server allocation problems.

  7. Interfacing HTCondor-CE with OpenStack

    NASA Astrophysics Data System (ADS)

    Bockelman, B.; Caballero Bejar, J.; Hover, J.

    2017-10-01

    Over the past few years, Grid Computing technologies have reached a high level of maturity. One key aspect of this success has been the development and adoption of newer Compute Elements to interface the external Grid users with local batch systems. These new Compute Elements allow for better handling of jobs requirements and a more precise management of diverse local resources. However, despite this level of maturity, the Grid Computing world is lacking diversity in local execution platforms. As Grid Computing technologies have historically been driven by the needs of the High Energy Physics community, most resource providers run the platform (operating system version and architecture) that best suits the needs of their particular users. In parallel, the development of virtualization and cloud technologies has accelerated recently, making available a variety of solutions, both commercial and academic, proprietary and open source. Virtualization facilitates performing computational tasks on platforms not available at most computing sites. This work attempts to join the technologies, allowing users to interact with computing sites through one of the standard Computing Elements, HTCondor-CE, but running their jobs within VMs on a local cloud platform, OpenStack, when needed. The system will re-route, in a transparent way, end user jobs into dynamically-launched VM worker nodes when they have requirements that cannot be satisfied by the static local batch system nodes. Also, once the automated mechanisms are in place, it becomes straightforward to allow an end user to invoke a custom Virtual Machine at the site. This will allow cloud resources to be used without requiring the user to establish a separate account. Both scenarios are described in this work.

  8. A Cooperative Approach to Virtual Machine Based Fault Injection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naughton III, Thomas J; Engelmann, Christian; Vallee, Geoffroy R

    Resilience investigations often employ fault injection (FI) tools to study the effects of simulated errors on a target system. It is important to keep the target system under test (SUT) isolated from the controlling environment in order to maintain control of the experiment. Virtual machines (VMs) have been used to aid these investigations due to the strong isolation properties of system-level virtualization. A key challenge in fault injection tools is to gain proper insight and context about the SUT. In VM-based FI tools, this challenge of target context is increased due to the separation between host and guest (VM). We discuss an approach to VM-based FI that leverages virtual machine introspection (VMI) methods to gain insight into the target's context running within the VM. The key to this environment is the ability to provide basic information to the FI system that can be used to create a map of the target environment. We describe a proof-of-concept implementation and a demonstration of its use to introduce simulated soft errors into an iterative solver benchmark running in user-space of a guest VM.
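
    The Palacios/VMI mechanism itself is not reproduced here; as a purely conceptual stand-in, the sketch below injects a single simulated soft error (a flipped mantissa bit) into the state of a small Jacobi iterative solver, which is the kind of target workload the abstract mentions. All values and the injection point are arbitrary.

    ```python
    import struct
    import numpy as np

    def flip_mantissa_bit(value, bit):
        """Flip one mantissa bit in the IEEE-754 encoding of a double (a simulated soft error)."""
        (bits,) = struct.unpack("<Q", struct.pack("<d", value))
        (corrupted,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
        return corrupted

    def jacobi_with_injection(A, b, iters=200, inject_at=100):
        """Jacobi iterations for Ax=b with one bit flip injected midway through."""
        x = np.zeros_like(b)
        D = np.diag(A)
        R = A - np.diagflat(D)
        rng = np.random.default_rng(7)
        for k in range(iters):
            if k == inject_at:
                i = int(rng.integers(len(x)))
                x[i] = flip_mantissa_bit(x[i], int(rng.integers(0, 52)))  # corrupt one entry
            x = (b - R @ x) / D
        return x

    A = np.array([[4.0, 1.0], [2.0, 5.0]])
    b = np.array([1.0, 2.0])
    print(jacobi_with_injection(A, b))
    ```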

  9. Nonvolatile Memory Materials for Neuromorphic Intelligent Machines.

    PubMed

    Jeong, Doo Seok; Hwang, Cheol Seong

    2018-04-18

    Recent progress in deep learning extends the capability of artificial intelligence to various practical tasks, making the deep neural network (DNN) an extremely versatile hypothesis. While such DNN is virtually built on contemporary data centers of the von Neumann architecture, physical (in part) DNN of non-von Neumann architecture, also known as neuromorphic computing, can remarkably improve learning and inference efficiency. Particularly, resistance-based nonvolatile random access memory (NVRAM) highlights its handy and efficient application to the multiply-accumulate (MAC) operation in an analog manner. Here, an overview is given of the available types of resistance-based NVRAMs and their technological maturity from the material- and device-points of view. Examples within the strategy are subsequently addressed in comparison with their benchmarks (virtual DNN in deep learning). A spiking neural network (SNN) is another type of neural network that is more biologically plausible than the DNN. The successful incorporation of resistance-based NVRAM in SNN-based neuromorphic computing offers an efficient solution to the MAC operation and spike timing-based learning in nature. This strategy is exemplified from a material perspective. Intelligent machines are categorized according to their architecture and learning type. Also, the functionality and usefulness of NVRAM-based neuromorphic computing are addressed. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
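
    The analog multiply-accumulate mentioned above can be pictured numerically: with input voltages applied to the rows of a resistive crossbar and programmed conductances at each cross-point, the column currents are exactly a vector-matrix product. The values below are arbitrary illustrative numbers, not data from the review.

    ```python
    import numpy as np

    # each column of the crossbar stores one weight vector as cell conductances (mS);
    # applying input voltages to the rows yields, per column, a current equal to the
    # dot product of voltages and conductances (Ohm's law plus Kirchhoff's current law)
    voltages = np.array([0.2, 0.0, 0.3, 0.1])           # row inputs (V)
    conductances = np.array([[1.0, 0.5, 0.0],
                             [0.2, 0.9, 0.4],
                             [0.0, 0.3, 1.1],
                             [0.7, 0.0, 0.6]])          # programmed cells (mS)
    column_currents = voltages @ conductances           # I_j = sum_i V_i * G_ij  (mA)
    print(column_currents)
    ```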

  10. A Web-based cost-effective training tool with possible application to brain injury rehabilitation.

    PubMed

    Wang, Peijun; Kreutzer, Ina Anna; Bjärnemo, Robert; Davies, Roy C

    2004-06-01

    Virtual reality (VR) has provoked enormous interest in the medical community. In particular, VR offers therapists new approaches for improving rehabilitation effects. However, most of these VR assistant tools are not very portable, extensible or economical. Due to the vast amount of 3D data, they are not suitable for Internet transfer. Furthermore, in order to run these VR systems smoothly, special hardware devices are needed. As a result, existing VR assistant tools tend to be available in hospitals but not in patients' homes. To overcome these disadvantages, as a case study, this paper proposes a Web-based Virtual Ticket Machine, called WBVTM, using VRML [VRML Consortium, The Virtual Reality Modeling Language: International Standard ISO/IEC DIS 14772-1, 1997, available at ], Java and EAI (External Authoring Interface) [Silicon Graphics, Inc., The External Authoring Interface (EAI), available at ], to help people with acquired brain injury (ABI) to relearn basic living skills at home at a low cost. As these technologies are open standard and feature usability on the Internet, WBVTM achieves the goals of portability, easy accessibility and cost-effectiveness.

  11. Virtual reality for intelligent and interactive operating, training, and visualization systems

    NASA Astrophysics Data System (ADS)

    Freund, Eckhard; Rossmann, Juergen; Schluse, Michael

    2000-10-01

    Virtual reality methods allow a new and intuitive way of communication between man and machine. The basic idea of virtual reality (VR) is the generation of artificial, computer-simulated worlds which the user can not only look at but also actively interact with, using a data glove and data helmet. The main emphasis for the use of such techniques at the IRF is the development of a new generation of operator interfaces for the control of robots and other automation components and for intelligent training systems for complex tasks. The basic idea of the methods developed at the IRF for the realization of Projective Virtual Reality is to let the user work in the virtual world as he would act in reality. The user's actions are recognized by the virtual reality system and, by means of new and intelligent control software, projected onto the automation components, such as robots, which afterwards perform the necessary actions in reality to execute the user's task. In this operation mode the user no longer has to be a robot expert to generate tasks for robots or to program them, because intelligent control software recognizes the user's intention and automatically generates the commands for nearly every automation component. Virtual reality methods are thus ideally suited for universal man-machine interfaces for the control and supervision of a large class of automation components, and for interactive training and visualization systems. The virtual reality system of the IRF, COSIMIR/VR, forms the basis for different projects, starting with the control of space automation systems in the projects CIROS, VITAL and GETEX, the realization of a comprehensive development tool for the International Space Station, and, last but not least, the realistic simulation of fire extinguishing, forest machines and excavators, which will be presented in the final paper in addition to the key ideas of this virtual reality system.

  12. A multi-group and preemptable scheduling of cloud resource based on HTCondor

    NASA Astrophysics Data System (ADS)

    Jiang, Xiaowei; Zou, Jiaheng; Cheng, Yaodong; Shi, Jingyan

    2017-10-01

    Due to the features of virtual machines (flexibility, easy control, and varied system environments), more and more fields utilize virtualization technology to construct distributed systems from virtual resources, including high energy physics. This paper introduces a method used in high energy physics that supports multiple resource groups and preemptable cloud resource scheduling, combining virtual machines with HTCondor (a batch system). It makes resource control more flexible and more efficient and makes resource scheduling independent of job scheduling. Firstly, the resources belong to different experiment-groups, and the mapping of user-groups to resource-groups (the same as experiment-groups) is one-to-one or many-to-one. To keep this group structure simple to manage, we designed a permission controlling component to ensure that the different resource-groups get the suitable jobs. Secondly, in order to elastically allocate resources to the appropriate resource-group, it is necessary to schedule resources in the same way as jobs; this paper therefore designs a cloud resource scheduler that maintains a resource queue and allocates an appropriate amount of virtual resources to the requesting resource-group. Thirdly, in some situations resources that have been occupied for a long time need to be preempted. This paper adds a preemption function to the resource scheduling that implements resource preemption based on group priority. Additionally, the preemption is soft: when virtual resources are preempted, jobs are not killed but held and rematched later. It is implemented with the help of HTCondor, storing the held job information in the scheduler, releasing the job to idle status and performing a second match. At IHEP (Institute of High Energy Physics), we have built a batch system based on HTCondor with a virtual resource pool based on OpenStack, and this paper shows some cases from the JUNO and LHAASO experiments. The results indicate that multi-group and preemptable resource scheduling efficiently supports multiple groups and soft preemption. Additionally, the permission controlling component has been used in the local computing cluster, supporting the JUNO, CMS and LHAASO experiments, and its use will be expanded to more experiments in the first half of the year, including DYW, BES and others. This is evidence that the permission controlling is efficient.
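
    A toy sketch of the group-priority preemption decision described above (not the production IHEP code): VMs owned by lower-priority groups are selected for soft preemption, on the understanding that their jobs are held and rematched rather than killed. Group names and priorities are illustrative.

    ```python
    from dataclasses import dataclass

    @dataclass
    class VM:
        name: str
        group: str
        priority: int    # higher value = higher-priority owner group

    def pick_vms_to_preempt(pool, requesting_priority, needed):
        """Choose up to `needed` running VMs owned by lower-priority groups; jobs on
        them would be held and rematched later rather than killed (soft preemption)."""
        candidates = sorted((vm for vm in pool if vm.priority < requesting_priority),
                            key=lambda vm: vm.priority)
        return candidates[:needed]

    pool = [VM("vm01", "JUNO", 2), VM("vm02", "LHAASO", 1), VM("vm03", "CMS", 3)]
    print(pick_vms_to_preempt(pool, requesting_priority=3, needed=1))
    ```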

  13. New Thermal Taste Actuation Technology for Future Multisensory Virtual Reality and Internet.

    PubMed

    Karunanayaka, Kasun; Johari, Nurafiqah; Hariri, Surina; Camelia, Hanis; Bielawski, Kevin Stanley; Cheok, Adrian David

    2018-04-01

    Today's virtual reality (VR) applications such as gaming, multisensory entertainment, remote dining, and online shopping are mainly based on audio, visual, and touch interactions between humans and virtual worlds. Integrating the sense of taste into VR is difficult since humans are dependent on chemical-based taste delivery systems. This paper presents the 'Thermal Taste Machine', a new digital taste actuation technology that can effectively produce and modify thermal taste sensations on the tongue. It modifies the temperature of the surface of the tongue within a short period of time (from 25°C to 40 °C while heating, and from 25°C to 10 °C while cooling). We tested this device on human subjects and described the experience of thermal taste using 20 known (taste and non-taste) sensations. Our results suggested that rapidly heating the tongue produces sweetness, fatty/oiliness, electric taste, warmness, and reduces the sensibility for metallic taste. Similarly, cooling the tongue produced mint taste, pleasantness, and coldness. By conducting another user study on the perceived sweetness of sucrose solutions after the thermal stimulation, we found that heating the tongue significantly enhances the intensity of sweetness for both thermal tasters and non-thermal tasters. Also, we found that faster temperature rises on the tongue produce more intense sweet sensations for thermal tasters. This technology will be useful in two ways: First, it can produce taste sensations without using chemicals for the individuals who are sensitive to thermal taste. Second, the temperature rise of the device can be used as a way to enhance the intensity of sweetness. We believe that this technology can be used to digitally produce and enhance taste sensations in future virtual reality applications. The key novelties of this paper are as follows: 1. Development of a thermal taste actuation technology for stimulating the human taste receptors, 2. Characterization of the thermal taste produced by the device using taste-related sensations and non-taste related sensations, 3. Research on enhancing the intensity for sucrose solutions using thermal stimulation, 4. Research on how different speeds of heating affect the intensity of sweetness produced by thermal stimulation.

  14. VR for Mars Pathfinder

    NASA Technical Reports Server (NTRS)

    Blackmon, Theodore

    1998-01-01

    Virtual reality (VR) technology has played an integral role in Mars Pathfinder mission operations. Using an automated machine vision algorithm, the 3D topography of the Martian surface was rapidly recovered from the stereo images captured by the lander camera to produce photo-realistic 3D models. An advanced interface was developed for visualization of and interaction with the virtual environment of the Pathfinder landing site for mission scientists at the Space Flight Operations Facility of the Jet Propulsion Laboratory. The VR aspect of the display allowed mission scientists to navigate on Mars while remaining here on Earth, thus improving their spatial awareness of the rock field that surrounds the lander. Measurements of positions, distances and angles could be easily extracted from the topographic models, providing valuable information for science analysis and mission planning. Moreover, the VR map of Mars has also been used to assist with the archiving and planning of activities for the Sojourner rover.

  15. A collaborative interaction and visualization multi-modal environment for surgical planning.

    PubMed

    Foo, Jung Leng; Martinez-Escobar, Marisol; Peloquin, Catherine; Lobe, Thom; Winer, Eliot

    2009-01-01

    The proliferation of virtual reality visualization and interaction technologies has changed the way medical image data is analyzed and processed. This paper presents a multi-modal environment that combines a virtual reality application with a desktop application for collaborative surgical planning. Both visualization applications can function independently but can also be synced over a network connection for collaborative work. Any change to either application is immediately synced and updated in the other. This is an efficient collaboration tool that allows multiple teams of doctors with only an internet connection to visualize and interact with the same patient data simultaneously. With this multi-modal environment framework, one team working in the VR environment and another team in a remote location working on a desktop machine can both collaborate in the examination and discussion of procedures such as diagnosis, surgical planning, teaching and tele-mentoring.

  16. A training platform for many-dimensional prosthetic devices using a virtual reality environment

    PubMed Central

    Putrino, David; Wong, Yan T.; Weiss, Adam; Pesaran, Bijan

    2014-01-01

    Brain machine interfaces (BMIs) have the potential to assist in the rehabilitation of millions of patients worldwide. Despite recent advancements in BMI technology for the restoration of lost motor function, a training environment to restore full control of the anatomical segments of an upper limb extremity has not yet been presented. Here, we develop a virtual upper limb prosthesis with 27 independent dimensions, the anatomical dimensions of the human arm and hand, and deploy the virtual prosthesis as an avatar in a virtual reality environment (VRE) that can be controlled in real-time. The prosthesis avatar accepts kinematic control inputs that can be captured from movements of the arm and hand as well as neural control inputs derived from processed neural signals. We characterize the system performance under kinematic control using a commercially available motion capture system. We also present the performance under kinematic control achieved by two non-human primates (Macaca Mulatta) trained to use the prosthetic avatar to perform reaching and grasping tasks. This is the first virtual prosthetic device that is capable of emulating all the anatomical movements of a healthy upper limb in real-time. Since the system accepts both neural and kinematic inputs for a variety of many-dimensional skeletons, we propose it provides a customizable training platform for the acquisition of many-dimensional neural prosthetic control. PMID:24726625

  17. Modularity in robotic systems

    NASA Technical Reports Server (NTRS)

    Tesar, Delbert; Butler, Michael S.

    1989-01-01

    Most robotic systems today are designed one at a time, at a high cost of time and money. This wasteful approach has been necessary because the industry has not established a foundation for the continued evolution of intelligent machines. The next generation of robots will have to be generic, versatile machines capable of absorbing new technology rapidly and economically. This approach is demonstrated in the success of the personal computer, which can be upgraded or expanded with new software and hardware at virtually every level. Modularity is perceived as a major opportunity to reduce the 6 to 7 year design cycle time now required for new robotic manipulators, greatly increasing the breadth and speed of diffusion of robotic systems in manufacturing. Modularity and its crucial role in the next generation of intelligent machines are the focus of interest. The main advantages that modularity provides are examined; types of modules needed to create a generic robot are discussed. Structural modules designed by the robotics group at the University of Texas at Austin are examined to demonstrate the advantages of modular design.

  18. Charliecloud

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Priedhorsky, Reid; Randles, Tim

    Charliecloud is a set of scripts to let users run a virtual cluster of virtual machines (VMs) on a desktop or supercomputer. Key functions include: 1. Creating (typically by installing an operating system from vendor media) and updating VM images; 2. Running a single VM; 3. Running multiple VMs in a virtual cluster. The virtual machines can talk to one another over the network and (in some cases) the outside world. This is accomplished by calling external programs such as QEMU and the Virtual Distributed Ethernet (VDE) suite. The goal is to let users have a virtual cluster containing nodes where they have privileged access, while isolating that privilege within the virtual cluster so it cannot affect the physical compute resources. Host configuration enforces security; this is not included in Charliecloud, though security guidelines are included in its documentation and Charliecloud is designed to facilitate such configuration. Charliecloud manages passing information from host computers into and out of the virtual machines, such as parameters of the virtual cluster, input data specified by the user, output data from virtual compute jobs, VM console display, and network connections (e.g., SSH or X11). Parameters for the virtual cluster (number of VMs, RAM and disk per VM, etc.) are specified by the user or gathered from the environment (e.g., SLURM environment variables). Example job scripts are included. These include computation examples (such as a "hello world" MPI job) as well as performance tests. They also include a security test script to verify that the virtual cluster is appropriately sandboxed. Tests include: 1. Pinging hosts inside and outside the virtual cluster to explore connectivity; 2. Port scans (again inside and outside) to see what services are available; 3. Sniffing tests to see what traffic is visible to running VMs; 4. IP address spoofing to test network functionality in this case; 5. File access tests to make sure host access permissions are enforced. This test script is not a comprehensive scanner and does not test for specific vulnerabilities. Importantly, no information about physical hosts or network topology is included in this script (or any of Charliecloud); while part of a sensible test, such information is specified by the user when the test is run. That is, one cannot learn anything about the LANL network or computing infrastructure by examining Charliecloud code.
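
    As an illustration of the parameter-gathering step described above, the following Python sketch (hypothetical helper names and defaults; not Charliecloud's actual scripts) reads virtual-cluster settings from SLURM environment variables when they are present, falls back to user-supplied defaults otherwise, and assembles a QEMU command line for each guest.

      # Hypothetical sketch, not part of Charliecloud: derive per-VM QEMU
      # invocations from SLURM environment variables with user fallbacks.
      import os
      import shlex

      def cluster_params(defaults=None):
          """Collect VM count, RAM and CPU settings from the environment."""
          defaults = defaults or {"vms": 2, "ram_mb": 2048, "cpus": 1}
          return {
              "vms": int(os.environ.get("SLURM_NNODES", defaults["vms"])),
              "ram_mb": int(os.environ.get("SLURM_MEM_PER_NODE", defaults["ram_mb"])),
              "cpus": int(os.environ.get("SLURM_CPUS_ON_NODE", defaults["cpus"])),
          }

      def qemu_command(image, params):
          """Assemble (but do not launch) one QEMU invocation for a single guest."""
          return (f"qemu-system-x86_64 -m {params['ram_mb']} "
                  f"-smp {params['cpus']} -hda {shlex.quote(image)} -nographic")

      if __name__ == "__main__":
          p = cluster_params()
          for i in range(p["vms"]):
              print(qemu_command(f"node{i}.qcow2", p))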

  19. Learning Rationales and Virtual Reality Technology in Education.

    ERIC Educational Resources Information Center

    Chiou, Guey-Fa

    1995-01-01

    Defines and describes virtual reality technology and differentiates between virtual learning environment, learning material, and learning tools. Links learning rationales to virtual reality technology to pave conceptual foundations for the application of virtual reality technology in education. Constructivism, case-based learning, problem-based learning,…

  20. Using PVM to host CLIPS in distributed environments

    NASA Technical Reports Server (NTRS)

    Myers, Leonard; Pohl, Kym

    1994-01-01

    It is relatively easy to enhance CLIPS (C Language Integrated Production System) to support multiple expert systems running in a distributed environment with heterogeneous machines. The task is minimized by using the PVM (Parallel Virtual Machine) code from Oak Ridge Labs to provide the distributed utility. PVM is a library of C and FORTRAN subprograms that supports distributive computing on many different UNIX platforms. A PVM daemon is easily installed on each CPU that enters the virtual machine environment. Any user with rsh or rexec access to a machine can use the one PVM daemon to obtain a generous set of distributed facilities. The ready availability of both CLIPS and PVM makes the combination of software particularly attractive for budget-conscious experimentation with heterogeneous distributive computing with multiple CLIPS executables. This paper presents a design that is sufficient to provide essential message passing functions in CLIPS and enable the full range of PVM facilities.

  1. A Concept for Optimizing Behavioural Effectiveness & Efficiency

    NASA Astrophysics Data System (ADS)

    Barca, Jan Carlo; Rumantir, Grace; Li, Raymond

    Both humans and machines exhibit strengths and weaknesses, and merging the two entities can enhance both. This research aims to provide a broader understanding of how closer interactions between these two entities can facilitate more optimal goal-directed performance through the use of artificial extensions of the human body. Such extensions may assist us in adapting to and manipulating our environments in a more effective way than any system known today. To demonstrate this concept, we have developed a simulation where a semi-interactive virtual spider can be navigated through an environment consisting of several obstacles and a virtual predator capable of killing the spider. The virtual spider can be navigated through the use of three different control systems that can be used to assist in optimising overall goal-directed performance. The first two control systems use an onscreen button interface and a touch sensor, respectively, to facilitate human navigation of the spider. The third control system is an autonomous navigation system based on machine intelligence embedded in the spider. This system enables the spider to navigate and react to changes in its local environment. The results of this study indicate that machines should be allowed to override human control in order to maximise the benefits of collaboration between man and machine. This research further indicates that the development of strong machine intelligence, sensor systems that engage all human senses, extra sensory input systems, physical remote manipulators, multiple intelligent extensions of the human body, as well as a tighter symbiosis between man and machine, can support an upgrade of the human form.

  2. The influence of negative training set size on machine learning-based virtual screening.

    PubMed

    Kurczab, Rafał; Smusz, Sabina; Bojarski, Andrzej J

    2014-01-01

    The paper presents a thorough analysis of the influence of the number of negative training examples on the performance of machine learning methods. The impact of this rather neglected aspect of machine learning methods application was examined for sets containing a fixed number of positive and a varying number of negative examples randomly selected from the ZINC database. An increase in the ratio of positive to negative training instances was found to greatly influence most of the investigated evaluating parameters of ML methods in simulated virtual screening experiments. In a majority of cases, substantial increases in precision and MCC were observed in conjunction with some decreases in hit recall. The analysis of the dynamics of those variations let us recommend an optimal composition of training data. The study was performed on several protein targets, 5 machine learning algorithms (SMO, Naïve Bayes, Ibk, J48 and Random Forest) and 2 types of molecular fingerprints (MACCS and CDK FP). The most effective classification was provided by the combination of CDK FP with SMO or Random Forest algorithms. The Naïve Bayes models appeared to be hardly sensitive to changes in the number of negative instances in the training set. In conclusion, the ratio of positive to negative training instances should be taken into account during the preparation of machine learning experiments, as it might significantly influence the performance of a particular classifier. What is more, the optimization of negative training set size can be applied as a boosting-like approach in machine learning-based virtual screening.
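
    A minimal sketch of the experimental idea, using synthetic fingerprints and scikit-learn rather than the paper's datasets and Weka implementations: hold the positive set fixed, grow the negative set, and watch precision, recall and MCC respond.

      # Toy illustration only: random binary "fingerprints" stand in for real descriptors.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import matthews_corrcoef, precision_score, recall_score

      rng = np.random.default_rng(0)

      def fingerprints(n, n_bits=166, p_on=0.30):
          return (rng.random((n, n_bits)) < p_on).astype(int)

      n_pos = 200
      X_pos = fingerprints(n_pos, p_on=0.35)          # "actives" with a different bit density
      X_test = np.vstack([fingerprints(100, p_on=0.35), fingerprints(1000, p_on=0.30)])
      y_test = np.array([1] * 100 + [0] * 1000)

      for ratio in (1, 5, 10, 50):                    # negatives per positive
          X_neg = fingerprints(n_pos * ratio, p_on=0.30)
          X = np.vstack([X_pos, X_neg])
          y = np.array([1] * n_pos + [0] * len(X_neg))
          pred = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y).predict(X_test)
          print(f"1:{ratio:<3} precision={precision_score(y_test, pred, zero_division=0):.2f} "
                f"recall={recall_score(y_test, pred, zero_division=0):.2f} "
                f"MCC={matthews_corrcoef(y_test, pred):.2f}")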

  3. The influence of negative training set size on machine learning-based virtual screening

    PubMed Central

    2014-01-01

    Background The paper presents a thorough analysis of the influence of the number of negative training examples on the performance of machine learning methods. Results The impact of this rather neglected aspect of machine learning methods application was examined for sets containing a fixed number of positive and a varying number of negative examples randomly selected from the ZINC database. An increase in the ratio of positive to negative training instances was found to greatly influence most of the investigated evaluating parameters of ML methods in simulated virtual screening experiments. In a majority of cases, substantial increases in precision and MCC were observed in conjunction with some decreases in hit recall. The analysis of the dynamics of those variations let us recommend an optimal composition of training data. The study was performed on several protein targets, 5 machine learning algorithms (SMO, Naïve Bayes, Ibk, J48 and Random Forest) and 2 types of molecular fingerprints (MACCS and CDK FP). The most effective classification was provided by the combination of CDK FP with SMO or Random Forest algorithms. The Naïve Bayes models appeared to be hardly sensitive to changes in the number of negative instances in the training set. Conclusions In conclusion, the ratio of positive to negative training instances should be taken into account during the preparation of machine learning experiments, as it might significantly influence the performance of a particular classifier. What is more, the optimization of negative training set size can be applied as a boosting-like approach in machine learning-based virtual screening. PMID:24976867

  4. Enterprise Cloud Architecture for Chinese Ministry of Railway

    NASA Astrophysics Data System (ADS)

    Shan, Xumei; Liu, Hefeng

    Enterprises like the PRC Ministry of Railways (MOR) face various challenges, ranging from a highly distributed computing environment to low legacy system utilization, and cloud computing is increasingly regarded as one workable solution to address them. This article describes a full-scale cloud solution with Intel Tashi as the virtual machine infrastructure layer, Hadoop HDFS as the computing platform, and a self-developed SaaS interface, gluing the virtual machines and HDFS together with the Xen hypervisor. As a result, on-demand application and deployment of computing tasks are addressed for MOR's real working scenarios at the end of the article.

  5. Design and implementation of spatial knowledge grid for integrated spatial analysis

    NASA Astrophysics Data System (ADS)

    Liu, Xiangnan; Guan, Li; Wang, Ping

    2006-10-01

    Supported by the spatial information grid (SIG), the spatial knowledge grid (SKG) for integrated spatial analysis utilizes middleware technology in constructing the spatial information grid computation environment and spatial information service system, develops spatial-entity-oriented spatial data organization technology, and carries out profound computation of spatial structure and spatial process patterns on the basis of the Grid GIS infrastructure, the spatial data grid and the spatial information grid (specialized definition). At the same time, it realizes complex spatial pattern expression and spatial function process simulation by taking the spatial intelligent agent as the core to establish spatial initiative computation. Moreover, through the establishment of a virtual geographical environment with man-machine interactivity and blending, complex spatial modeling, network cooperative work and knowledge-driven spatial community decision-making are achieved. The framework of SKG is discussed systematically in this paper. Its implementation flow and the key technology, with examples of overlay analysis, are presented as well.

  6. Virtual Reality Enhanced Instructional Learning

    ERIC Educational Resources Information Center

    Nachimuthu, K.; Vijayakumari, G.

    2009-01-01

    Virtual Reality (VR) is the creation of a virtual 3D world in which one can feel and sense the world as if it were real. It allows engineers to design machines and educationists to design AV [audiovisual] equipment in real time, but in a 3-dimensional hologram, as if the actual material were being made and worked upon. VR allows a least-cost (energy…

  7. Virtual manufacturing work cell for engineering

    NASA Astrophysics Data System (ADS)

    Watanabe, Hideo; Ohashi, Kazushi; Takahashi, Nobuyuki; Kato, Kiyotaka; Fujita, Satoru

    1997-12-01

    The life cycles of products have been getting shorter. To meet this rapid turnover, manufacturing systems must be frequently changed as well. In engineering to develop manufacturing systems, there are several tasks such as process planning, layout design, programming, and final testing using actual machines. This development of manufacturing systems takes a long time and is expensive. To aid the above engineering process, we have developed the virtual manufacturing workcell (VMW). This paper describes the concept of the VMW and a design method for computer-aided manufacturing engineering using the VMW (CAME-VMW) related to the above engineering tasks. The VMW holds all design data and reproduces the behavior of equipment and devices using a simulator. The simulator has logical and physical functionality: the former simulates sequence control, and the latter simulates motion control and shape movement in 3D space. The simulator can execute the same control software made for the actual machines, so we can verify the behavior precisely before the manufacturing workcell is constructed. The VMW creates an engineering work space for several engineers and offers debugging tools such as virtual equipment and virtual controllers. We applied the VMW to the development of a transfer workcell for a vaporization machine in an actual manufacturing system producing plasma display panels (PDP) and confirmed its effectiveness.

  8. Identification of Program Signatures from Cloud Computing System Telemetry Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nichols, Nicole M.; Greaves, Mark T.; Smith, William P.

    Malicious cloud computing activity can take many forms, including running unauthorized programs in a virtual environment. Detection of these malicious activities while preserving the privacy of the user is an important research challenge. Prior work has shown the potential viability of using cloud service billing metrics as a mechanism for proxy identification of malicious programs. Previously this novel detection method has been evaluated in a synthetic and isolated computational environment. In this paper we demonstrate the ability of billing metrics to identify programs in an active cloud computing environment, including multiple virtual machines running on the same hypervisor. The open source cloud computing platform OpenStack is used for private cloud management at Pacific Northwest National Laboratory. OpenStack provides a billing tool (Ceilometer) to collect system telemetry measurements. We identify four different programs running on four virtual machines under the same cloud user account. Programs were identified with up to 95% accuracy. This accuracy is dependent on the distinctiveness of telemetry measurements for the specific programs we tested. Future work will examine the scalability of this approach for a larger selection of programs to better understand the uniqueness needed to identify a program. Additionally, future work should address the separation of signatures when multiple programs are running on the same virtual machine.
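
    A hedged sketch of the general approach (synthetic numbers, not real Ceilometer output or the paper's feature set): treat per-VM telemetry samples as feature vectors and train a classifier to recognize which program produced them.

      # Toy data: each program has a characteristic CPU / I/O profile.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      profiles = {"prog_a": (0.8, 5.0), "prog_b": (0.3, 50.0),
                  "prog_c": (0.6, 20.0), "prog_d": (0.1, 2.0)}

      X, y = [], []
      for label, (cpu_mean, io_mean) in profiles.items():
          for _ in range(200):                            # 200 telemetry samples per program
              X.append([rng.normal(cpu_mean, 0.1),        # CPU utilisation
                        rng.normal(io_mean, 5.0),         # disk I/O rate
                        rng.normal(io_mean / 2, 3.0)])    # network bytes (correlated here)
              y.append(label)

      clf = RandomForestClassifier(n_estimators=100, random_state=0)
      print("cross-validated accuracy:", cross_val_score(clf, np.array(X), np.array(y), cv=5).mean())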

  9. Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system

    NASA Astrophysics Data System (ADS)

    Meier, Konrad; Fleig, Georg; Hauth, Thomas; Janczyk, Michael; Quast, Günter; von Suchodoletz, Dirk; Wiebelt, Bernd

    2016-10-01

    Experiments in high-energy physics (HEP) rely on elaborate hardware, software and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and the Super-KEKB accelerators, respectively. These detectors share the requirement that enormous amounts of measurement data must be processed and analyzed and a comparable amount of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves to be challenging for HEP groups, due to their specialized software setup which includes a custom OS (often Scientific Linux), libraries and applications. To overcome this hurdle, the EKP and data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare-metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, in this hybrid setup, no static partitioning of the cluster into a physical and virtualized segment is required. As a unique feature, the placement of the virtual machine on the cluster nodes is scheduled by Moab and the job lifetime is coupled to the lifetime of the virtual machine. This allows for a seamless integration with the jobs sent by other user groups and honors the fairshare policies of the cluster. The developed thin integration layer between OpenStack and Moab can be adapted to other batch servers and virtualization systems, making the concept also applicable for other cluster operators. This contribution will report on the concept and implementation of an OpenStack-virtualized cluster used for HEP workflows. While the full cluster will be installed in spring 2016, a test-bed setup with 800 cores has been used to study the overall system performance and dedicated HEP jobs were run in a virtualized environment over many weeks. Furthermore, the dynamic integration of the virtualized worker nodes, depending on the workload at the institute's computing system, will be described.

  10. Evolution of Cardiac Biomodels from Computational to Therapeutics.

    PubMed

    Rathinam, Alwin Kumar; Mokhtar, Raja Amin Raja

    2016-08-23

    Biomodeling the human anatomy in exact structure and size is an exciting field of medical science. Utilizing medical data from various medical imaging tomography, the data of an anatomical structure can be extracted and converted into a three-dimensional virtual biomodel; thereafter a physical biomodel can be generated utilizing rapid prototyping machines. Here, we have reviewed the utilization of this technology and have provided some guidelines to develop biomodels of cardiac structures. Cardiac biomodels provide insights for cardiothoracic surgeons, cardiologists, and patients alike. Additionally, the technology may have future usability for tissue engineering, robotic surgery, or routine hospital usage as a diagnostic and therapeutic tool for cardiovascular diseases (CVD). Given the broad areas of application of cardiac biomodels, attention should be given to further research and development of their potential.

  11. Windmills: Ancestors of the wind power generation

    NASA Astrophysics Data System (ADS)

    Rossi, Cesare; Russo, Flavio; Savino, Sergio

    2017-09-01

    A brief description of windmills from the second millennium BC to the Renaissance is presented. This survey is part of several studies conducted by the authors on technology in the ancient world. Windmills were the first motors other than human muscles and are the ancestors of the modern wind turbines. Some of the authors' virtual reconstructions of old windmills are also presented. The paper shows that the operating principle of many modern machines had already been conceived in ancient times using a technology that was more advanced than expected, but with two main differences: similar tasks were accomplished using much less energy, and the environmental impact was nil or very low. Modern designers should sometimes consider simplicity rather than the use of a large amount of energy.

  12. The Virtual Xenbase: transitioning an online bioinformatics resource to a private cloud

    PubMed Central

    Karimi, Kamran; Vize, Peter D.

    2014-01-01

    As a model organism database, Xenbase has been providing informatics and genomic data on Xenopus (Silurana) tropicalis and Xenopus laevis frogs for more than a decade. The Xenbase database contains curated, as well as community-contributed and automatically harvested literature, gene and genomic data. A GBrowse genome browser, a BLAST+ server and stock center support are available on the site. When this resource was first built, all software services and components in Xenbase ran on a single physical server, with inherent reliability, scalability and inter-dependence issues. Recent advances in networking and virtualization techniques allowed us to move Xenbase to a virtual environment, and more specifically to a private cloud. To do so we decoupled the different software services and components, such that each would run on a different virtual machine. In the process, we also upgraded many of the components. The resulting system is faster and more reliable. System maintenance is easier, as individual virtual machines can now be updated, backed up and changed independently. We are also experiencing more effective resource allocation and utilization. Database URL: www.xenbase.org PMID:25380782

  13. Naval Applications of Virtual Reality,

    DTIC Science & Technology

    1993-01-01

    [OCR fragment from the DTIC record; recoverable details: Expert Virtual Reality Special Report, pp. 67-72; subject terms: man-machine interface, virtual reality, decision support, collective and individual performance; authors: Mark Gembicki and David Rousseau.]

  14. BioImg.org: A Catalog of Virtual Machine Images for the Life Sciences

    PubMed Central

    Dahlö, Martin; Haziza, Frédéric; Kallio, Aleksi; Korpelainen, Eija; Bongcam-Rudloff, Erik; Spjuth, Ola

    2015-01-01

    Virtualization is becoming increasingly important in bioscience, enabling assembly and provisioning of complete computer setups, including operating system, data, software, and services packaged as virtual machine images (VMIs). We present an open catalog of VMIs for the life sciences, where scientists can share information about images and optionally upload them to a server equipped with a large file system and fast Internet connection. Other scientists can then search for and download images that can be run on the local computer or in a cloud computing environment, providing easy access to bioinformatics environments. We also describe applications where VMIs aid life science research, including distributing tools and data, supporting reproducible analysis, and facilitating education. BioImg.org is freely available at: https://bioimg.org. PMID:26401099

  15. BioImg.org: A Catalog of Virtual Machine Images for the Life Sciences.

    PubMed

    Dahlö, Martin; Haziza, Frédéric; Kallio, Aleksi; Korpelainen, Eija; Bongcam-Rudloff, Erik; Spjuth, Ola

    2015-01-01

    Virtualization is becoming increasingly important in bioscience, enabling assembly and provisioning of complete computer setups, including operating system, data, software, and services packaged as virtual machine images (VMIs). We present an open catalog of VMIs for the life sciences, where scientists can share information about images and optionally upload them to a server equipped with a large file system and fast Internet connection. Other scientists can then search for and download images that can be run on the local computer or in a cloud computing environment, providing easy access to bioinformatics environments. We also describe applications where VMIs aid life science research, including distributing tools and data, supporting reproducible analysis, and facilitating education. BioImg.org is freely available at: https://bioimg.org.

  16. Access, Equity, and Opportunity. Women in Machining: A Model Program.

    ERIC Educational Resources Information Center

    Warner, Heather

    The Women in Machining (WIM) program is a Machine Action Project (MAP) initiative that was developed in response to a local skilled metalworking labor shortage, despite a virtual absence of women and people of color from area shops. The project identified post-war stereotypes and other barriers that must be addressed if women are to have an equal…

  17. A genetic algorithm for a bi-objective mathematical model for dynamic virtual cell formation problem

    NASA Astrophysics Data System (ADS)

    Moradgholi, Mostafa; Paydar, Mohammad Mahdi; Mahdavi, Iraj; Jouzdani, Javid

    2016-09-01

    Nowadays, with the increasing pressure of the competitive business environment and demand for diverse products, manufacturers are forced to seek solutions that reduce production costs and raise product quality. The cellular manufacturing system (CMS), as a means to this end, has been a point of attraction to both researchers and practitioners. Limitations of the cell formation problem (CFP), one of the important topics in CMS, have led to the introduction of the virtual CMS (VCMS). This research addresses a bi-objective dynamic virtual cell formation problem (DVCFP) with the objective of finding the optimal formation of cells, considering the material handling costs, fixed machine installation costs and variable production costs of machines and workforce. Furthermore, we consider different skills on different machines in workforce assignment in a multi-period planning horizon. The bi-objective model is transformed into a single-objective fuzzy goal programming model and, to show its performance, numerical examples are solved using the LINGO software. In addition, a genetic algorithm (GA) is customized to tackle large-scale instances of the problem to show the performance of the solution method.

  18. Integration of virtualized worker nodes in standard batch systems

    NASA Astrophysics Data System (ADS)

    Büge, Volker; Hessling, Hermann; Kemp, Yves; Kunze, Marcel; Oberst, Oliver; Quast, Günter; Scheurer, Armin; Synge, Owen

    2010-04-01

    Current experiments in HEP only use a limited number of operating system flavours. Their software might only be validated on one single OS platform. Resource providers might have other operating systems of choice for the installation of the batch infrastructure. This is especially the case if a cluster is shared with other communities, or communities that have stricter security requirements. One solution would be to statically divide the cluster into separated sub-clusters. In such a scenario, no opportunistic distribution of the load can be achieved, resulting in a poor overall utilization efficiency. Another approach is to make the batch system aware of virtualization, and to provide each community with its favoured operating system in a virtual machine. Here, the scheduler has full flexibility, resulting in a better overall efficiency of the resources. In our contribution, we present a lightweight concept for the integration of virtual worker nodes into standard batch systems. The virtual machines are started on the worker nodes just before jobs are executed there. No meta-scheduling is introduced. We demonstrate two prototype implementations, one based on the Sun Grid Engine (SGE), the other using Maui/Torque as a batch system. Both solutions support local job as well as Grid job submission. The hypervisors currently used are Xen and KVM, a port to another system is easily envisageable. To better handle different virtual machines on the physical host, the management solution VmImageManager is developed. We will present first experience from running the two prototype implementations. In a last part, we will show the potential future use of this lightweight concept when integrated into high-level (i.e. Grid) work-flows.

  19. Smart manufacturing systems for Industry 4.0: Conceptual framework, scenarios, and future perspectives

    NASA Astrophysics Data System (ADS)

    Zheng, Pai; Wang, Honghui; Sang, Zhiqian; Zhong, Ray Y.; Liu, Yongkui; Liu, Chao; Mubarok, Khamdi; Yu, Shiqiang; Xu, Xun

    2018-06-01

    Information and communication technology is undergoing rapid development, and many disruptive technologies, such as cloud computing, Internet of Things, big data, and artificial intelligence, have emerged. These technologies are permeating the manufacturing industry and enable the fusion of physical and virtual worlds through cyber-physical systems (CPS), which mark the advent of the fourth stage of industrial production (i.e., Industry 4.0). The widespread application of CPS in manufacturing environments renders manufacturing systems increasingly smart. To advance research on the implementation of Industry 4.0, this study examines smart manufacturing systems for Industry 4.0. First, a conceptual framework of smart manufacturing systems for Industry 4.0 is presented. Second, demonstrative scenarios that pertain to smart design, smart machining, smart control, smart monitoring, and smart scheduling, are presented. Key technologies and their possible applications to Industry 4.0 smart manufacturing systems are reviewed based on these demonstrative scenarios. Finally, challenges and future perspectives are identified and discussed.

  20. An Argument for Partial Admissibility of Polygraph Results in Trials by Courts-Martial

    DTIC Science & Technology

    1990-04-01

    Contents excerpt: Fundamentals of the Polygraph Technique; A. The Polygraph Machine; 1. The Cardiosphygmograph; 2. The Pneumograph; 3. The Galvanometer; 4. The... Anyone observing a polygraph machine for the first time could easily conclude it is a survivor of the Spanish Inquisition. The lengths of wire and coils get the immediate attention of the subject. However, the various polygraph machines in use today cause virtually no discomfort. Several

  1. Virtual reality: Avatars in human spaceflight training

    NASA Astrophysics Data System (ADS)

    Osterlund, Jeffrey; Lawrence, Brad

    2012-02-01

    With the advancements in high spatial and temporal resolution graphics, along with advancements in 3D display capabilities to model, simulate, and analyze human-to-machine interfaces and interactions, the world of virtual environments is being used to develop everything from gaming, movie special effects and animations to the design of automobiles. The use of multiple object motion capture technology and digital human tools in aerospace has been demonstrated to be a more cost-effective alternative to physical prototypes, provides a more efficient, flexible and responsive environment for changes in design and training, and provides early human factors considerations concerning the operation of a complex launch vehicle or spacecraft. United Space Alliance (USA) has deployed this technique and tool under Research and Development (R&D) activities on both spacecraft assembly and ground processing operations design and training on the Orion Crew Module. USA utilizes specialized products that were chosen based on functionality, including software and fixed based hardware (e.g., infrared and visible red cameras), along with cyber gloves to ensure fine motor dexterity of the hands. The key findings of the R&D were: mock-ups should be built to not obstruct cameras from markers being tracked; a mock-up toolkit should be assembled to facilitate dynamic design changes; markers should be placed in accurate positions on humans and flight hardware to help with tracking; 3D models used in the virtual environment should be stripped of non-essential data; workstations with high computational capability are required to handle the large model data sets; and Technology Interchange Meetings with vendors and other industries also utilizing virtual reality applications need to occur on a continual basis, enabling USA to maintain its leading edge within this technology. Parameters of interest and benefit in human spaceflight simulation training that utilizes virtual reality technologies include familiarizing users with and assessing operational processes, the ability to train virtually, experimenting with "what if" scenarios, and expediting immediate changes to validate the design implementation. Training benefits encompass providing 3D animation for post-training assessment, placement of avatars within 3D replicated work environments in assembling or processing hardware, offering various viewpoints of the processes being viewed and assessed, giving the evaluators the ability to assess task feasibility and identify potential support equipment needs, and providing human factors determinations, such as reach, visibility, and accessibility. Multiple object motion capture technology provides an effective tool to train and assess ergonomic risks, simulations for determination of negative interactions between technicians and their proposed workspaces, and evaluation of spaceflight systems prior to, and as part of, the design process to contain costs and reduce schedule delays.

  2. Application Analysis of BIM Technology in Metro Rail Transit

    NASA Astrophysics Data System (ADS)

    Liu, Bei; Sun, Xianbin

    2018-03-01

    With the rapid development of urban roads, the construction of subway rail transit has become an effective way to alleviate urban traffic congestion. Such projects face engineering problems including limited site space, complex resource allocation, tight schedules, and complex underground pipelines. BIM technology, with its three-dimensional visualization, parameterization, virtual simulation and many other advantages, can effectively solve these technical problems. Based on the Shenzhen Metro Line 9 project, BIM technology is researched innovatively throughout the project lifecycle, in a metro rail transit context where it is rarely used at this stage. The model information file is imported into Navisworks for four-dimensional animation simulation to determine the optimum construction scheme for the shield machine. A subway construction management application platform based on BIM and private cloud technology uses cameras and sensors to achieve electronic integration and dynamic monitoring of the operation and maintenance of underground facilities. The many advantages of BIM technology are fully used to improve the engineering quality and construction efficiency of the subway rail transit project and to support its operation and maintenance.

  3. Teaching Cybersecurity Using the Cloud

    ERIC Educational Resources Information Center

    Salah, Khaled; Hammoud, Mohammad; Zeadally, Sherali

    2015-01-01

    Cloud computing platforms can be highly attractive to conduct course assignments and empower students with valuable and indispensable hands-on experience. In particular, the cloud can offer teaching staff and students (whether local or remote) on-demand, elastic, dedicated, isolated, (virtually) unlimited, and easily configurable virtual machines.…

  4. Ant-Based Cyber Defense (also known as

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glenn Fink, PNNL

    2015-09-29

    ABCD is a four-level hierarchy with human supervisors at the top, a top-level agent called a Sergeant controlling each enclave, Sentinel agents located at each monitored host, and mobile Sensor agents that swarm through the enclaves to detect cyber malice and misconfigurations. The code comprises four parts: (1) the core agent framework, (2) the user interface and visualization, (3) test-range software to create a network of virtual machines including a simulated Internet and user and host activity emulation scripts, and (4) a test harness to allow the safe running of adversarial code within the framework of monitored virtual machines.

  5. Prototyping Faithful Execution in a Java virtual machine.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tarman, Thomas David; Campbell, Philip LaRoche; Pierson, Lyndon George

    2003-09-01

    This report presents the implementation of a stateless scheme for Faithful Execution, the design for which is presented in a companion report, "Principles of Faithful Execution in the Implementation of Trusted Objects" (SAND 2003-2328). We added a simple cryptographic capability to an already simplified class loader and its associated Java Virtual Machine (JVM) to provide a byte-level implementation of Faithful Execution. The extended class loader and JVM we refer to collectively as the Sandia Faithfully Executing Java architecture (or JavaFE for short). This prototype is intended to enable exploration of more sophisticated techniques which we intend to implement in hardware.

  6. Perceptual Fidelity vs. Engineering Compromises In Virtual Acoustic Displays

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Ahumada, Albert (Technical Monitor)

    1997-01-01

    Immersive, three-dimensional displays are increasingly becoming a goal of advanced human-machine interfaces. While the technology for achieving truly useful multisensory environments is still being developed, techniques for generating three-dimensional sound are now both sophisticated and practical enough to be applied to acoustic displays. The ultimate goal of virtual acoustics is to simulate the complex acoustic field experienced by a listener freely moving around within an environment. Of course, such complexity, freedom of movement and interactivity is not always possible in a "true" virtual environment, much less in lower-fidelity multimedia systems. However, many of the perceptual and engineering constraints (and frustrations) that researchers, engineers and listeners have experienced in virtual audio are relevant to multimedia. In fact, some of the problems that have been studied will be even more of an issue for lower fidelity systems that are attempting to address the requirements of a huge, diverse and ultimately unknown audience. Examples include individual differences in head-related transfer functions, a lack of real interactivity (head-tracking) in many multimedia displays, and perceptual degradation due to low sampling rates and/or low-bit compression. This paper discusses some of the engineering constraints faced during implementation of virtual acoustic environments and the perceptual consequences of these constraints. Specific examples are given for NASA applications such as telerobotic control, aeronautical displays, and shuttle launch communications. An attempt will also be made to relate these issues to low-fidelity implementations such as the internet.

  7. Perceptual Fidelity Versus Engineering Compromises in Virtual Acoustic Displays

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Ellis, Stephen R. (Technical Monitor); Frey, Mary Anne (Technical Monitor); Schneider, Victor S. (Technical Monitor)

    1997-01-01

    Immersive, three-dimensional displays are increasingly becoming a goal of advanced human-machine interfaces. While the technology for achieving truly useful multisensory environments is still being developed, techniques for generating three-dimensional sound are now both sophisticated and practical enough to be applied to acoustic displays. The ultimate goal of virtual acoustics is to simulate the complex acoustic field experienced by a listener freely moving around within an environment. Of course, such complexity, freedom of movement and interactivity is not always possible in a 'true' virtual environment, much less in lower-fidelity multimedia systems. However, many of the perceptual and engineering constraints (and frustrations) that researchers, engineers and listeners have experienced in virtual audio are relevant to multimedia. In fact, some of the problems that have been studied will be even more of an issue for lower fidelity systems that are attempting to address the requirements of a huge, diverse and ultimately unknown audience. Examples include individual differences in head-related transfer functions, a lack of real interactivity (head-tracking) in many multimedia displays, and perceptual degradation due to low sampling rates and/or low-bit compression. This paper discusses some of the engineering constraints faced during implementation of virtual acoustic environments and the perceptual consequences of these constraints. Specific examples are given for NASA applications such as telerobotic control, aeronautical displays, and shuttle launch communications. An attempt will also be made to relate these issues to low-fidelity implementations such as the internet.

  8. Build and Execute Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guan, Qiang

    At exascale, the challenge becomes to develop applications that run at scale and use exascale platforms reliably, efficiently, and flexibly. Workflows become much more complex because they must seamlessly integrate simulation and data analytics. They must include down-sampling, post-processing, feature extraction, and visualization. Power and data transfer limitations require these analysis tasks to be run in-situ or in-transit. We expect successful workflows will comprise multiple linked simulations along with tens of analysis routines. Users will have limited development time at scale and, therefore, must have rich tools to develop, debug, test, and deploy applications. At this scale, successful workflows will compose linked computations from an assortment of reliable, well-defined computation elements, ones that can come and go as required, based on the needs of the workflow over time. We propose a novel framework that utilizes both virtual machines (VMs) and software containers to create a workflow system that establishes a uniform build and execution environment (BEE) beyond the capabilities of current systems. In this environment, applications will run reliably and repeatably across heterogeneous hardware and software. Containers, both commercial (Docker and Rocket) and open-source (LXC and LXD), define a runtime that isolates all software dependencies from the machine operating system. Workflows may contain multiple containers that run different operating systems, different software, and even different versions of the same software. We will run containers in open-source virtual machines (KVM) and emulators (QEMU) so that workflows run on any machine entirely in user-space. On this platform of containers and virtual machines, we will deliver workflow software that provides services, including repeatable execution, provenance, checkpointing, and future proofing. We will capture provenance about how containers were launched and how they interact to annotate workflows for repeatable and partial re-execution. We will coordinate the physical snapshots of virtual machines with parallel programming constructs, such as barriers, to automate checkpoint and restart. We will also integrate with HPC-specific container runtimes to gain access to accelerators and other specialized hardware to preserve native performance. Containers will link development to continuous integration. When application developers check code in, it will automatically be tested on a suite of different software and hardware architectures.

  9. The distributed agent-based approach in the e-manufacturing environment

    NASA Astrophysics Data System (ADS)

    Sękala, A.; Kost, G.; Dobrzańska-Danikiewicz, A.; Banaś, W.; Foit, K.

    2015-11-01

    The deficiency of a coherent flow of information from a production department causes unplanned downtime and failures of machines and their equipment, which in turn results in a production planning process based on incorrect and out-of-date information. All of these factors entail, as a consequence, additional difficulties associated with the process of decision-making. These concern, among other things, the coordination of components of a distributed system and providing access to the required information, thereby generating unnecessary costs. The use of agent technology significantly speeds up the flow of information within the virtual enterprise. This paper includes a proposal of a multi-agent approach for the integration of processes within the virtual enterprise concept. The presented concept was elaborated to investigate possible solutions for the transmission of information in the production system, taking into account the self-organization of constituent components. This implied linking the concept of a multi-agent system with the system for managing production information, based on the idea of e-manufacturing. The paper presents the resulting scheme, which should be the basis for elaborating an informatics model of the target virtual system. The computer system itself is intended to be developed next.

  10. SRM 2460/2461 Standard Bullets and Casings Project

    PubMed Central

    Song, J.; Whitenton, E.; Kelley, D.; Clary, R.; Ma, L.; Ballou, S.; Ols, M.

    2004-01-01

    The National Institute of Standards and Technology Standard Reference Material (SRM) 2460/2461 standard bullets and casings project will provide support to firearms examiners and to the National Integrated Ballistics Information Network (NIBIN) in the United States. The SRM bullet is designed as both a virtual and a physical bullet profile signature standard. The virtual standard is a set of six digitized bullet profile signatures originally traced from six master bullets fired at the Bureau of Alcohol, Tobacco and Firearms (ATF) and the Federal Bureau of Investigation (FBI). By using the virtual signature standard to control the tool path on a numerically controlled diamond turning machine, 40 SRM bullets were produced. A profile signature measurement system was established for the SRM bullets. The profile signature differences are quantified by the maximum of the cross correlation function and by the signature difference between pairs of compared profile signatures measured on different SRM bullets. Initial measurement results showed high reproducibility for both the measurement system and production process of the SRM bullets. A traceability scheme has been proposed to establish the measurement traceability for nationwide bullet signature measurements to NIST, ATF and FBI. Prototype SRM casings have also been developed. PMID:27366632
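
    As a rough sketch of the comparison metric only (assuming each profile signature is a 1-D array of surface heights; this is not NIST's production code), the maximum of the normalized cross-correlation function (CCFmax) between two signatures can be computed as follows.

      import numpy as np

      def ccf_max(sig_a, sig_b):
          """Peak of the normalized cross-correlation between two profile signatures."""
          a = (sig_a - sig_a.mean()) / sig_a.std()
          b = (sig_b - sig_b.mean()) / sig_b.std()
          return (np.correlate(a, b, mode="full") / len(a)).max()

      rng = np.random.default_rng(42)
      master = rng.normal(size=1024)                                 # stand-in for a master profile
      replica = np.roll(master, 3) + 0.05 * rng.normal(size=1024)    # shifted, slightly noisy copy
      unrelated = rng.normal(size=1024)

      print("replica   vs master:", round(ccf_max(master, replica), 3))    # close to 1
      print("unrelated vs master:", round(ccf_max(master, unrelated), 3))  # near 0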

  11. Prediction based proactive thermal virtual machine scheduling in green clouds.

    PubMed

    Kinger, Supriya; Kumar, Rajesh; Sharma, Anju

    2014-01-01

    Cloud computing has rapidly emerged as a widely accepted computing paradigm, but the research on Cloud computing is still at an early stage. Cloud computing provides many advanced features but it still has some shortcomings, such as relatively high operating cost and environmental hazards like increasing carbon footprints. These hazards can be reduced to some extent by efficient scheduling of Cloud resources. The working temperature at which a machine is currently running can be taken as a criterion for Virtual Machine (VM) scheduling. This paper proposes a new proactive technique that considers the current and maximum threshold temperature of Server Machines (SMs) before making scheduling decisions, with the help of a temperature predictor, so that the maximum temperature is never reached. Different workload scenarios have been taken into consideration. The results obtained show that the proposed system is better than existing VM scheduling systems, which do not consider the current temperature of nodes before making scheduling decisions. Thus, a reduction in the need for cooling systems in a Cloud environment has been obtained and validated.
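
    The scheduling idea can be sketched as follows (names and the linear temperature predictor are assumptions for illustration, not the paper's model): place a VM only on a server whose predicted temperature stays below its maximum threshold, preferring the server with the most thermal headroom.

      from dataclasses import dataclass

      @dataclass
      class Server:
          name: str
          current_temp: float   # degrees C
          max_temp: float       # threshold that must never be reached
          temp_per_vm: float    # assumed temperature rise per additional VM

      def predicted_temp(server: Server, extra_vms: int = 1) -> float:
          # Toy predictor: linear rise with added load; the paper uses a trained
          # temperature predictor instead.
          return server.current_temp + extra_vms * server.temp_per_vm

      def schedule_vm(servers):
          """Return the server with the largest predicted thermal headroom, or None."""
          eligible = [s for s in servers if predicted_temp(s) < s.max_temp]
          if not eligible:
              return None       # no placement keeps every machine under its threshold
          return max(eligible, key=lambda s: s.max_temp - predicted_temp(s))

      servers = [Server("sm1", 62.0, 70.0, 1.5),
                 Server("sm2", 55.0, 70.0, 2.0),
                 Server("sm3", 68.5, 70.0, 1.0)]
      print(schedule_vm(servers).name)   # "sm2": most headroom after the predicted rise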

  12. LAVA web-based remote simulation: enhancements for education and technology innovation

    NASA Astrophysics Data System (ADS)

    Lee, Sang Il; Ng, Ka Chun; Orimoto, Takashi; Pittenger, Jason; Horie, Toshi; Adam, Konstantinos; Cheng, Mosong; Croffie, Ebo H.; Deng, Yunfei; Gennari, Frank E.; Pistor, Thomas V.; Robins, Garth; Williamson, Mike V.; Wu, Bo; Yuan, Lei; Neureuther, Andrew R.

    2001-09-01

    The Lithography Analysis using Virtual Access (LAVA) web site at http://cuervo.eecs.berkeley.edu/Volcano/ has been enhanced with new optical and deposition applets, graphical infrastructure and linkage to parallel execution on networks of workstations. More than ten new graphical user interface applets have been designed to support education, illustrate novel concepts from research, and explore usage of parallel machines. These applets have been improved through feedback and classroom use. Over the last year LAVA provided industry and other academic communities 1,300 sessions and 700 rigorous simulations per month among the SPLAT, SAMPLE2D, SAMPLE3D, TEMPEST, STORM, and BEBS simulators.

  13. Research on Visualization of Ground Laser Radar Data Based on Osg

    NASA Astrophysics Data System (ADS)

    Huang, H.; Hu, C.; Zhang, F.; Xue, H.

    2018-04-01

    Three-dimensional (3D) laser scanning is a new advanced technology integrating optical, mechanical, electrical, and computer technologies. It can scan the whole shape and form of spatial objects in 3D with high precision. With this technology, one can directly collect the point cloud data of a ground object and create its structure for rendering. An excellent 3D rendering engine is used to optimize and display the 3D model in order to meet the higher requirements of real-time realistic rendering and scene complexity. OpenSceneGraph (OSG) is an open source 3D graphics engine. Compared with the current mainstream 3D rendering engines, OSG is practical, economical, and easy to extend. Therefore, OSG is widely used in the fields of virtual simulation, virtual reality, and science and engineering visualization. In this paper, a dynamic and interactive ground LiDAR data visualization platform is constructed based on OSG and the cross-platform C++ application development framework Qt. For point cloud data in .txt format and triangulation network data files in .obj format, the functions of displaying 3D laser point clouds and triangulated networks are realized. Experiments show that the platform is of strong practical value, as it is easy to operate and provides good interaction.

  14. Virtual Environment Training: Auxiliary Machinery Room (AMR) Watchstation Trainer.

    ERIC Educational Resources Information Center

    Hriber, Dennis C.; And Others

    1993-01-01

    Describes a project implemented at Newport News Shipbuilding that used Virtual Environment Training to improve the performance of submarine crewmen. Highlights include development of the Auxiliary Machine Room (AMR) Watchstation Trainer; Digital Video Interactive (DVI); screen layout; test design and evaluation; user reactions; authoring language;…

  15. The virtual machine (VM) scaler: an infrastructure manager supporting environmental modeling on IaaS clouds

    USDA-ARS?s Scientific Manuscript database

    Infrastructure-as-a-service (IaaS) clouds provide a new medium for deployment of environmental modeling applications. Harnessing advancements in virtualization, IaaS clouds can provide dynamic scalable infrastructure to better support scientific modeling computational demands. Providing scientific m...

  16. Global Detection of Live Virtual Machine Migration Based on Cellular Neural Networks

    PubMed Central

    Xie, Kang; Yang, Yixian; Zhang, Ling; Jing, Maohua; Xin, Yang; Li, Zhongxian

    2014-01-01

    In order to meet the demands of operational monitoring of large-scale, autoscaling, and heterogeneous virtual resources in existing cloud computing, a new live virtual machine (VM) migration detection algorithm based on cellular neural networks (CNNs) is presented. By analyzing the detection process, the parameter relationship of the CNN is mapped to an optimization problem, in which an improved particle swarm optimization algorithm based on bubble sort is used to solve the problem. Experimental results demonstrate that the proposed method can display the VM migration process intuitively. Compared with the best-fit heuristic algorithm, this approach reduces the processing time, and emerging evidence has indicated that this new approach is affordable to parallelism and analog very large scale integration (VLSI) implementation, allowing the VM migration detection to be performed better. PMID:24959631

  17. Global detection of live virtual machine migration based on cellular neural networks.

    PubMed

    Xie, Kang; Yang, Yixian; Zhang, Ling; Jing, Maohua; Xin, Yang; Li, Zhongxian

    2014-01-01

    In order to meet the demands of operational monitoring of large-scale, autoscaling, and heterogeneous virtual resources in existing cloud computing, a new live virtual machine (VM) migration detection algorithm based on cellular neural networks (CNNs) is presented. By analyzing the detection process, the parameter relationship of the CNN is mapped to an optimization problem, in which an improved particle swarm optimization algorithm based on bubble sort is used to solve the problem. Experimental results demonstrate that the proposed method can display the VM migration process intuitively. Compared with the best-fit heuristic algorithm, this approach reduces the processing time, and emerging evidence has indicated that this new approach is affordable to parallelism and analog very large scale integration (VLSI) implementation, allowing the VM migration detection to be performed better.

  18. Workflow4Metabolomics: a collaborative research infrastructure for computational metabolomics

    PubMed Central

    Giacomoni, Franck; Le Corguillé, Gildas; Monsoor, Misharl; Landi, Marion; Pericard, Pierre; Pétéra, Mélanie; Duperier, Christophe; Tremblay-Franco, Marie; Martin, Jean-François; Jacob, Daniel; Goulitquer, Sophie; Thévenot, Etienne A.; Caron, Christophe

    2015-01-01

    Summary: The complex, rapidly evolving field of computational metabolomics calls for collaborative infrastructures where the large volume of new algorithms for data pre-processing, statistical analysis and annotation can be readily integrated whatever the language, evaluated on reference datasets and chained to build ad hoc workflows for users. We have developed Workflow4Metabolomics (W4M), the first fully open-source and collaborative online platform for computational metabolomics. W4M is a virtual research environment built upon the Galaxy web-based platform technology. It enables ergonomic integration, exchange and running of individual modules and workflows. Alternatively, the whole W4M framework and computational tools can be downloaded as a virtual machine for local installation. Availability and implementation: http://workflow4metabolomics.org homepage enables users to open a private account and access the infrastructure. W4M is developed and maintained by the French Bioinformatics Institute (IFB) and the French Metabolomics and Fluxomics Infrastructure (MetaboHUB). Contact: contact@workflow4metabolomics.org PMID:25527831

  19. Workflow4Metabolomics: a collaborative research infrastructure for computational metabolomics.

    PubMed

    Giacomoni, Franck; Le Corguillé, Gildas; Monsoor, Misharl; Landi, Marion; Pericard, Pierre; Pétéra, Mélanie; Duperier, Christophe; Tremblay-Franco, Marie; Martin, Jean-François; Jacob, Daniel; Goulitquer, Sophie; Thévenot, Etienne A; Caron, Christophe

    2015-05-01

    The complex, rapidly evolving field of computational metabolomics calls for collaborative infrastructures where the large volume of new algorithms for data pre-processing, statistical analysis and annotation can be readily integrated whatever the language, evaluated on reference datasets and chained to build ad hoc workflows for users. We have developed Workflow4Metabolomics (W4M), the first fully open-source and collaborative online platform for computational metabolomics. W4M is a virtual research environment built upon the Galaxy web-based platform technology. It enables ergonomic integration, exchange and running of individual modules and workflows. Alternatively, the whole W4M framework and computational tools can be downloaded as a virtual machine for local installation. http://workflow4metabolomics.org homepage enables users to open a private account and access the infrastructure. W4M is developed and maintained by the French Bioinformatics Institute (IFB) and the French Metabolomics and Fluxomics Infrastructure (MetaboHUB). contact@workflow4metabolomics.org. © The Author 2014. Published by Oxford University Press.

  20. A Novel Topology Link-Controlling Approach for Active Defense of a Node in a Network.

    PubMed

    Li, Jun; Hu, HanPing; Ke, Qiao; Xiong, Naixue

    2017-03-09

    With the rapid development of virtual machine technology and cloud computing, distributed denial of service (DDoS) attacks, or simply peak traffic, pose a great threat to network security. In this paper, a novel topology link-control technique for mitigating attacks in real-time environments is proposed. Firstly, a non-invasive method of deploying virtual sensors in the nodes is built, which uses the resource manager of each monitored node as a sensor. Secondly, a general topology-controlling approach for resisting tolerant invasion is proposed. In this approach, a prediction model is constructed using copula functions to predict the peak of one resource from another resource. The prediction result determines whether or not to initiate the active defense. Finally, a minority game with an incomplete strategy is employed to suppress attack flows and improve the permeability of normal flows. The simulation results show that the proposed approach is very effective in protecting nodes.
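
    To make the prediction-then-defend step above concrete, here is a minimal Python sketch that replaces the paper's copula model with a bivariate Gaussian stand-in: it predicts the peak of one monitored resource from another and triggers the defense only when the predicted value crosses an assumed capacity threshold. The data, threshold, and variable names are illustrative, not taken from the paper.

        import numpy as np

        # Hypothetical samples from one node's "sensor" (its resource manager):
        # CPU load and a dependent resource (connection count) observed over time.
        rng = np.random.default_rng(1)
        cpu = rng.normal(0.4, 0.1, 500)
        conn = 0.8 * cpu + rng.normal(0.0, 0.05, 500)

        # Fit a bivariate Gaussian model as a stand-in for the copula-based predictor.
        mu = np.array([cpu.mean(), conn.mean()])
        cov = np.cov(np.vstack([cpu, conn]))

        def predict_conn(cpu_now):
            # Conditional mean of conn given cpu under the bivariate normal model.
            return mu[1] + cov[0, 1] / cov[0, 0] * (cpu_now - mu[0])

        PEAK_THRESHOLD = 0.9   # assumed capacity limit for the dependent resource

        def should_defend(cpu_now):
            # Initiate active defense (topology link control) when the predicted
            # peak of the dependent resource exceeds the assumed threshold.
            return predict_conn(cpu_now) > PEAK_THRESHOLD

        print(should_defend(0.5), should_defend(1.2))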

  1. A Novel Topology Link-Controlling Approach for Active Defense of Nodes in Networks

    PubMed Central

    Li, Jun; Hu, HanPing; Ke, Qiao; Xiong, Naixue

    2017-01-01

    With the rapid development of virtual machine technology and cloud computing, distributed denial of service (DDoS) attacks, or simply peak traffic, pose a great threat to network security. In this paper, a novel topology link-control technique for mitigating attacks in real-time environments is proposed. Firstly, a non-invasive method of deploying virtual sensors in the nodes is built, which uses the resource manager of each monitored node as a sensor. Secondly, a general topology-controlling approach for resisting tolerant invasion is proposed. In this approach, a prediction model is constructed using copula functions to predict the peak of one resource from another resource. The prediction result determines whether or not to initiate the active defense. Finally, a minority game with an incomplete strategy is employed to suppress attack flows and improve the permeability of normal flows. The simulation results show that the proposed approach is very effective in protecting nodes. PMID:28282962

  2. The Virtual Xenbase: transitioning an online bioinformatics resource to a private cloud.

    PubMed

    Karimi, Kamran; Vize, Peter D

    2014-01-01

    As a model organism database, Xenbase has been providing informatics and genomic data on Xenopus (Silurana) tropicalis and Xenopus laevis frogs for more than a decade. The Xenbase database contains curated, as well as community-contributed and automatically harvested literature, gene and genomic data. A GBrowse genome browser, a BLAST+ server and stock center support are available on the site. When this resource was first built, all software services and components in Xenbase ran on a single physical server, with inherent reliability, scalability and inter-dependence issues. Recent advances in networking and virtualization techniques allowed us to move Xenbase to a virtual environment, and more specifically to a private cloud. To do so we decoupled the different software services and components, such that each would run on a different virtual machine. In the process, we also upgraded many of the components. The resulting system is faster and more reliable. System maintenance is easier, as individual virtual machines can now be updated, backed up and changed independently. We are also experiencing more effective resource allocation and utilization. Database URL: www.xenbase.org. © The Author(s) 2014. Published by Oxford University Press.

  3. Virtual imaging in sports broadcasting: an overview

    NASA Astrophysics Data System (ADS)

    Tan, Yi

    2003-04-01

    Virtual imaging technology is being used to augment television broadcasts -- virtual objects are seamlessly inserted into the video stream to appear as real entities to TV audiences. Virtual advertisements, the main application of this technology, are providing opportunities to improve the commercial value of television programming while enhancing the contents and the entertainment aspect of these programs. State-of-the-art technologies, such as image recognition, motion tracking and chroma keying, are central to a virtual imaging system. This paper reviews the general framework, the key techniques, and the sports broadcasting applications of virtual imaging technology.

  4. Virtual Planning, Control, and Machining for a Modular-Based Automated Factory Operation in an Augmented Reality Environment

    PubMed Central

    Pai, Yun Suen; Yap, Hwa Jen; Md Dawal, Siti Zawiah; Ramesh, S.; Phoon, Sin Ye

    2016-01-01

    This study presents a modular-based implementation of augmented reality to provide an immersive experience in learning or teaching the planning phase, control system, and machining parameters of a fully automated work cell. The architecture of the system consists of three code modules that can operate independently or combined to create a complete system that is able to guide engineers from the layout planning phase to the prototyping of the final product. The layout planning module determines the best possible arrangement in a layout for the placement of various machines, in this case a conveyor belt for transportation, a robot arm for pick-and-place operations, and a computer numerical control milling machine to generate the final prototype. The robotic arm module simulates the pick-and-place operation offline from the conveyor belt to a computer numerical control (CNC) machine utilising collision detection and inverse kinematics. Finally, the CNC module performs virtual machining based on the Uniform Space Decomposition method and axis aligned bounding box collision detection. The conducted case study revealed that given the situation, a semi-circle shaped arrangement is desirable, whereas the pick-and-place system and the final generated G-code produced the highest deviation of 3.83 mm and 5.8 mm respectively. PMID:27271840
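
    One of the building blocks named above, axis-aligned bounding box (AABB) collision detection, can be sketched in a few lines. The Python example below is a generic separating-interval test between two boxes, not the authors' implementation, and the tool and fixture coordinates are made up for illustration.

        from dataclasses import dataclass

        @dataclass
        class AABB:
            # Axis-aligned bounding box given by its minimum and maximum corners.
            min_pt: tuple
            max_pt: tuple

        def aabb_overlap(a: AABB, b: AABB) -> bool:
            # Two AABBs collide iff their extents overlap on every axis.
            return all(a.min_pt[i] <= b.max_pt[i] and b.min_pt[i] <= a.max_pt[i]
                       for i in range(3))

        # Hypothetical check between a tool bounding box and a fixture bounding box.
        tool = AABB((0.0, 0.0, 0.0), (10.0, 10.0, 50.0))
        fixture = AABB((8.0, 5.0, 40.0), (20.0, 15.0, 60.0))
        print(aabb_overlap(tool, fixture))   # True -> potential collision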

  5. Virtual Planning, Control, and Machining for a Modular-Based Automated Factory Operation in an Augmented Reality Environment.

    PubMed

    Pai, Yun Suen; Yap, Hwa Jen; Md Dawal, Siti Zawiah; Ramesh, S; Phoon, Sin Ye

    2016-06-07

    This study presents a modular-based implementation of augmented reality to provide an immersive experience in learning or teaching the planning phase, control system, and machining parameters of a fully automated work cell. The architecture of the system consists of three code modules that can operate independently or combined to create a complete system that is able to guide engineers from the layout planning phase to the prototyping of the final product. The layout planning module determines the best possible arrangement in a layout for the placement of various machines, in this case a conveyor belt for transportation, a robot arm for pick-and-place operations, and a computer numerical control milling machine to generate the final prototype. The robotic arm module simulates the pick-and-place operation offline from the conveyor belt to a computer numerical control (CNC) machine utilising collision detection and inverse kinematics. Finally, the CNC module performs virtual machining based on the Uniform Space Decomposition method and axis aligned bounding box collision detection. The conducted case study revealed that given the situation, a semi-circle shaped arrangement is desirable, whereas the pick-and-place system and the final generated G-code produced the highest deviation of 3.83 mm and 5.8 mm respectively.

  6. Virtual Planning, Control, and Machining for a Modular-Based Automated Factory Operation in an Augmented Reality Environment

    NASA Astrophysics Data System (ADS)

    Pai, Yun Suen; Yap, Hwa Jen; Md Dawal, Siti Zawiah; Ramesh, S.; Phoon, Sin Ye

    2016-06-01

    This study presents a modular-based implementation of augmented reality to provide an immersive experience in learning or teaching the planning phase, control system, and machining parameters of a fully automated work cell. The architecture of the system consists of three code modules that can operate independently or combined to create a complete system that is able to guide engineers from the layout planning phase to the prototyping of the final product. The layout planning module determines the best possible arrangement in a layout for the placement of various machines, in this case a conveyor belt for transportation, a robot arm for pick-and-place operations, and a computer numerical control milling machine to generate the final prototype. The robotic arm module simulates the pick-and-place operation offline from the conveyor belt to a computer numerical control (CNC) machine utilising collision detection and inverse kinematics. Finally, the CNC module performs virtual machining based on the Uniform Space Decomposition method and axis aligned bounding box collision detection. The conducted case study revealed that given the situation, a semi-circle shaped arrangement is desirable, whereas the pick-and-place system and the final generated G-code produced the highest deviation of 3.83 mm and 5.8 mm respectively.

  7. The eXperience Induction Machine: A New Paradigm for Mixed-Reality Interaction Design and Psychological Experimentation

    NASA Astrophysics Data System (ADS)

    Bernardet, Ulysses; Bermúdez I Badia, Sergi; Duff, Armin; Inderbitzin, Martin; Le Groux, Sylvain; Manzolli, Jônatas; Mathews, Zenon; Mura, Anna; Väljamäe, Aleksander; Verschure, Paul F. M. J.

    The eXperience Induction Machine (XIM) is one of the most advanced mixed-reality spaces available today. XIM is an immersive space that consists of physical sensors and effectors and which is conceptualized as a general-purpose infrastructure for research in the field of psychology and human-artifact interaction. In this chapter, we set out the epistemological rationale behind XIM by putting the installation in the context of psychological research. The design and implementation of XIM are based on principles and technologies of neuromorphic control. We give a detailed description of the hardware infrastructure and software architecture, including the logic of the overall behavioral control. To illustrate the approach toward psychological experimentation, we discuss a number of practical applications of XIM. These include the so-called persistent virtual community, research on the relationship between human experience and multi-modal stimulation, and an investigation of a mixed-reality social interaction paradigm.

  8. AstroCloud, a Cyber-Infrastructure for Astronomy Research: Data Access and Interoperability

    NASA Astrophysics Data System (ADS)

    Fan, D.; He, B.; Xiao, J.; Li, S.; Li, C.; Cui, C.; Yu, C.; Hong, Z.; Yin, S.; Wang, C.; Cao, Z.; Fan, Y.; Mi, L.; Wan, W.; Wang, J.

    2015-09-01

    The data access and interoperability module connects the observation proposals, data, virtual machines, and software. Using the unique identifier of the PI (principal investigator), an email address, or an internal ID, data can be collected through the PI's proposals or through the search interfaces, e.g. cone search. Files associated with the search results can easily be transferred to cloud storage, including the storage attached to virtual machines, or to several commercial platforms such as Dropbox. Thanks to the standards of the IVOA (International Virtual Observatory Alliance), VOTable-formatted search results can be sent to various kinds of VO software. Later work will integrate more data and connect archives and other astronomical resources.

  9. An efficient approach for improving virtual machine placement in cloud computing environment

    NASA Astrophysics Data System (ADS)

    Ghobaei-Arani, Mostafa; Shamsi, Mahboubeh; Rahmanian, Ali A.

    2017-11-01

    The ever-increasing demand for cloud services requires more data centres. Power consumption in data centres is a challenging problem for cloud computing that has not been properly addressed by data centre developers. Large data centres in particular struggle with power costs and greenhouse gas production, so power-efficient mechanisms are necessary to mitigate these effects. Virtual machine (VM) placement can be used as an effective method to reduce power consumption in data centres. In this paper, by grouping both virtual and physical machines and taking the maximum absolute deviation into account during VM placement, both the power consumption and the service level agreement (SLA) violation in data centres are reduced. To this end, the best-fit decreasing algorithm is used in the simulation, reducing power consumption by about 5% compared to the modified best-fit decreasing algorithm while improving the SLA violation by 6%. Finally, learning automata are used to trade off power consumption reduction against the SLA violation percentage.
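
    For readers unfamiliar with the baseline mentioned above, the following Python sketch implements plain best-fit decreasing bin packing for VM placement on identical hosts. It illustrates only the baseline idea; the paper's grouping of machines and its maximum-absolute-deviation criterion are not modeled, and the demands and capacity are made-up numbers.

        def best_fit_decreasing(vm_demands, host_capacity):
            # Place VMs (by CPU demand) onto hosts, opening a new host only when no
            # existing host fits the VM; among feasible hosts pick the tightest fit.
            hosts = []                                   # remaining capacity per host
            placement = {}
            for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
                candidates = [i for i, free in enumerate(hosts) if free >= demand]
                if candidates:
                    i = min(candidates, key=lambda i: hosts[i] - demand)  # tightest fit
                else:
                    hosts.append(host_capacity)
                    i = len(hosts) - 1
                hosts[i] -= demand
                placement[vm] = i
            return placement, hosts

        # Hypothetical normalised CPU demands and a host capacity of 1.0.
        placement, free = best_fit_decreasing(
            {"vm1": 0.6, "vm2": 0.5, "vm3": 0.3, "vm4": 0.4, "vm5": 0.2}, 1.0)
        print(placement, free)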

  10. European Pharmacy Students' Experience With Virtual Patient Technology

    PubMed Central

    Madeira, Filipe

    2012-01-01

    Objective. To describe how virtual patients are being used to simulate real-life clinical scenarios in undergraduate pharmacy education in Europe. Methods. One hundred ninety-four participants at the 2011 Congress of the European Pharmaceutical Students Association (EPSA) completed an exploratory cross-sectional survey instrument. Results. Of the 46 universities and 23 countries represented at the EPSA Congress, only 12 students from 6 universities in 6 different countries reported having experience with virtual patient technology. The students were satisfied with the virtual patient technology and considered it more useful as a teaching and learning tool than an assessment tool. Respondents who had not used virtual patient technology expressed support regarding its potential benefits in pharmacy education. French and Dutch students were significantly less interested in virtual patient technology than were their counterparts from other European countries. Conclusion. The limited use of virtual patients in pharmacy education in Europe suggests the need for initiatives to increase the use of virtual patient technology and the benefits of computer-assisted learning in pharmacy education. PMID:22919082

  11. European pharmacy students' experience with virtual patient technology.

    PubMed

    Cavaco, Afonso Miguel; Madeira, Filipe

    2012-08-10

    To describe how virtual patients are being used to simulate real-life clinical scenarios in undergraduate pharmacy education in Europe. One hundred ninety-four participants at the 2011 Congress of the European Pharmaceutical Students Association (EPSA) completed an exploratory cross-sectional survey instrument. Of the 46 universities and 23 countries represented at the EPSA Congress, only 12 students from 6 universities in 6 different countries reported having experience with virtual patient technology. The students were satisfied with the virtual patient technology and considered it more useful as a teaching and learning tool than an assessment tool. Respondents who had not used virtual patient technology expressed support regarding its potential benefits in pharmacy education. French and Dutch students were significantly less interested in virtual patient technology than were their counterparts from other European countries. The limited use of virtual patients in pharmacy education in Europe suggests the need for initiatives to increase the use of virtual patient technology and the benefits of computer-assisted learning in pharmacy education.

  12. Actualities and Development of Heavy-Duty CNC Machine Tool Thermal Error Monitoring Technology

    NASA Astrophysics Data System (ADS)

    Zhou, Zu-De; Gui, Lin; Tan, Yue-Gang; Liu, Ming-Yao; Liu, Yi; Li, Rui-Ya

    2017-09-01

    Thermal error monitoring technology is the key technological support for solving the thermal error problem of heavy-duty CNC (computer numerical control) machine tools. Many review articles introduce thermal error research on CNC machine tools, but they mainly focus on thermal issues in small and medium-sized CNC machine tools and seldom cover thermal error monitoring technologies. This paper gives an overview of research on the thermal error of CNC machine tools and emphasizes the study of thermal error in heavy-duty CNC machine tools in three areas: the causes of thermal error in heavy-duty CNC machine tools, temperature monitoring technology, and thermal deformation monitoring technology. A new optical measurement technology for heavy-duty CNC machine tools, fiber Bragg grating (FBG) distributed sensing, is introduced in detail. This technology forms an intelligent sensing and monitoring system for heavy-duty CNC machine tools. The paper fills a gap in this kind of review article, guiding the development of this industrial field and opening up new areas of research on heavy-duty CNC machine tool thermal error.

  13. Virtual Education: Guidelines for Using Games Technology

    ERIC Educational Resources Information Center

    Schofield, Damian

    2014-01-01

    Advanced three-dimensional virtual environment technology, similar to that used by the film and computer games industry, can allow educational developers to rapidly create realistic online virtual environments. This technology has been used to generate a range of interactive Virtual Reality (VR) learning environments across a spectrum of…

  14. The Deflector Selector: A Machine Learning Framework for Prioritizing Deflection Technology Development

    NASA Astrophysics Data System (ADS)

    Van Heerden, Elmarie; Erasmus, Nicolas; Greenberg, Adam; Nesvold, Erika; Galache, Jose Luis; Dahlstrom, Eric; Marchis, Franck

    2016-10-01

    On 15 February 2013, a ~15 m diameter asteroid entered the Earth's atmosphere over Russia. The resulting shockwave injured nearly 1500 people and incurred ~33 million (USD) in infrastructure damage. The Chelyabinsk meteor served as a forceful demonstration of the threat posed to Earth by the hundreds of potentially hazardous objects (PHOs) that pass near the Earth every year. Although no objects have yet been discovered on an impact course for Earth, an impact is, statistically, virtually guaranteed at some point in the future. While many impactor deflection technologies have been proposed, humanity has yet to demonstrate the ability to divert an impactor when one is found. Developing and testing any single proposed technology will require significant research time and funding. This leaves open an obvious question: towards which technologies should funding and research be directed in order to maximize our preparedness for when an impactor is eventually found? To help answer this question, we have created a detailed framework for analyzing various deflection technologies and their effectiveness. Using an n-body integrator (REBOUND), we have simulated the attempted deflections of a population of Earth-impacting objects with a variety of velocity perturbations (∂Vs) and measured the effects that these perturbations had on impact probability. We then mapped the ∂Vs applied in the orbital simulations to the technologies capable of achieving those perturbations, and analyzed which set of technologies would be most effective at preventing a PHO from impacting the Earth. As a final step, we used the results of these simulations to train a machine learning algorithm. This algorithm, combined with a simulated PHO population, can predict which technologies are most likely to be needed. The algorithm can also reveal which impactor observables (mass, spin, orbit, etc.) have the greatest effect on the choice of deflection technology. These results can be used as a tool to inform funding decisions for both deflection technology development and PHO characterization missions.
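
    The final step described above, training a classifier that maps impactor observables to a preferred deflection technology, can be illustrated with scikit-learn. In the sketch below the training data and the labeling rule are entirely synthetic placeholders for the simulation outputs described in the record; only the overall train-and-inspect-feature-importance pattern is the point.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 2000
        # Hypothetical impactor observables: mass (kg), relative velocity (km/s),
        # warning time (years). Labels stand in for the "most effective technology"
        # the orbital simulations would assign (0: kinetic impactor, 1: gravity
        # tractor, 2: nuclear); the rule below is purely illustrative.
        X = np.column_stack([10 ** rng.uniform(6, 12, n),
                             rng.uniform(5, 30, n),
                             rng.uniform(1, 30, n)])
        y = np.where(X[:, 0] > 1e10, 2, np.where(X[:, 2] > 15, 1, 0))

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))
        print("feature importances:", clf.feature_importances_)  # which observables matter most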

  15. Self-replicating machines in continuous space with virtual physics.

    PubMed

    Smith, Arnold; Turney, Peter; Ewaschuk, Robert

    2003-01-01

    JohnnyVon is an implementation of self-replicating machines in continuous two-dimensional space. Two types of particles drift about in a virtual liquid. The particles are automata with discrete internal states but continuous external relationships. Their internal states are governed by finite state machines, but their external relationships are governed by a simulated physics that includes Brownian motion, viscosity, and springlike attractive and repulsive forces. The particles can be assembled into patterns that can encode arbitrary strings of bits. We demonstrate that, if an arbitrary seed pattern is put in a soup of separate individual particles, the pattern will replicate by assembling the individual particles into copies of itself. We also show that, given sufficient time, a soup of separate individual particles will eventually spontaneously form self-replicating patterns. We discuss the implications of JohnnyVon for research in nanotechnology, theoretical biology, and artificial life.

  16. Noise and Vibration Risk Prevention Virtual Web for Ubiquitous Training

    ERIC Educational Resources Information Center

    Redel-Macías, María Dolores; Cubero-Atienza, Antonio J.; Martínez-Valle, José Miguel; Pedrós-Pérez, Gerardo; del Pilar Martínez-Jiménez, María

    2015-01-01

    This paper describes a new Web portal offering experimental labs for ubiquitous training of university engineering students in work-related risk prevention. The Web-accessible computer program simulates the noise and machine vibrations met in the work environment, in a series of virtual laboratories that mimic an actual laboratory and provide the…

  17. Realistic terrain visualization based on 3D virtual world technology

    NASA Astrophysics Data System (ADS)

    Huang, Fengru; Lin, Hui; Chen, Bin; Xiao, Cai

    2009-09-01

    The rapid advances in information technologies, e.g., networking, graphics processing, and virtual worlds, have provided challenges and opportunities for new capabilities in information systems, Internet applications, and virtual geographic environments, especially geographic visualization and collaboration. In order to achieve meaningful geographic capabilities, we need to explore and understand how these technologies can be used to construct virtual geographic environments that help to support geographic research. The generation of three-dimensional (3D) terrain plays an important part in geographical visualization, computer simulation, and virtual geographic environment applications. The paper introduces the concepts and technologies of virtual worlds and virtual geographic environments, and explores the integration of realistic terrain with other geographic objects and phenomena of the natural geographic environment based on SL/OpenSim virtual world technologies. Realistic 3D terrain visualization is a foundation for constructing a mirror world or a sandbox model of the Earth's landscape and geographic environment. The capabilities for interaction and collaboration on geographic information are discussed as well. Further virtual geographic applications can be developed on this foundation of realistic terrain visualization in virtual environments.

  18. Realistic terrain visualization based on 3D virtual world technology

    NASA Astrophysics Data System (ADS)

    Huang, Fengru; Lin, Hui; Chen, Bin; Xiao, Cai

    2010-11-01

    The rapid advances in information technologies, e.g., networking, graphics processing, and virtual worlds, have provided challenges and opportunities for new capabilities in information systems, Internet applications, and virtual geographic environments, especially geographic visualization and collaboration. In order to achieve meaningful geographic capabilities, we need to explore and understand how these technologies can be used to construct virtual geographic environments that help to support geographic research. The generation of three-dimensional (3D) terrain plays an important part in geographical visualization, computer simulation, and virtual geographic environment applications. The paper introduces the concepts and technologies of virtual worlds and virtual geographic environments, and explores the integration of realistic terrain with other geographic objects and phenomena of the natural geographic environment based on SL/OpenSim virtual world technologies. Realistic 3D terrain visualization is a foundation for constructing a mirror world or a sandbox model of the Earth's landscape and geographic environment. The capabilities for interaction and collaboration on geographic information are discussed as well. Further virtual geographic applications can be developed on this foundation of realistic terrain visualization in virtual environments.

  19. The Informal Workplace Learning Experiences of Virtual Team Members: A Look at the Role of Collaborative Technologies

    ERIC Educational Resources Information Center

    Jones, Frankie S.

    2007-01-01

    This qualitative study explored how collaborative technologies influence the informal learning experiences of virtual team members. Inputs revealed as critical to virtual informal learning were integrated, collaborative technological systems; positive relationships and trust; and organizational support and virtual team management. These inputs…

  20. A Virtual Education: Guidelines for Using Games Technology

    ERIC Educational Resources Information Center

    Schofield, Damian

    2014-01-01

    Advanced three-dimensional virtual environment technology, similar to that used by the film and computer games industry, can allow educational developers to rapidly create realistic online virtual environments. This technology has been used to generate a range of interactive Virtual Reality (VR) learning environments across a spectrum of…

  1. An Intelligent and Interactive Simulation and Tutoring Environment for Exploring and Learning Simple Machines

    NASA Astrophysics Data System (ADS)

    Myneni, Lakshman Sundeep

    Students in middle school science classes have difficulty mastering physics concepts such as energy and work, taught in the context of simple machines. Moreover, students' naive conceptions of physics often remain unchanged after completing a science class. To address this problem, I developed an intelligent tutoring system, called the Virtual Physics System (ViPS), which coaches students through problem solving with one class of simple machines, pulley systems. The tutor uses a unique cognitive based approach to teaching simple machines, and includes innovations in three areas. (1) It employs a teaching strategy that focuses on highlighting links among concepts of the domain that are essential for conceptual understanding yet are seldom learned by students. (2) Concepts are taught through a combination of effective human tutoring techniques (e.g., hinting) and simulations. (3) For each student, the system identifies which misconceptions he or she has, from a common set of student misconceptions gathered from domain experts, and tailors tutoring to match the correct line of scientific reasoning regarding the misconceptions. ViPS was implemented as a platform on which students can design and simulate pulley system experiments, integrated with a constraint-based tutor that intervenes when students make errors during problem solving to teach them and to help them. ViPS has a web-based client-server architecture, and has been implemented using Java technologies. ViPS is different from existing physics simulations and tutoring systems due to several original features. (1). It is the first system to integrate a simulation based virtual experimentation platform with an intelligent tutoring component. (2) It uses a novel approach, based on Bayesian networks, to help students construct correct pulley systems for experimental simulation. (3) It identifies student misconceptions based on a novel decision tree applied to student pretest scores, and tailors tutoring to individual students based on detected misconceptions. ViPS has been evaluated through usability and usefulness experiments with undergraduate engineering students taking their first college-level engineering physics course and undergraduate pre-service teachers taking their first college-level physics course. These experiments demonstrated that ViPS is highly usable and effective. Students using ViPS reduced their misconceptions, and students conducting virtual experiments in ViPS learned more than students who conducted experiments with physical pulley systems. Interestingly, it was also found that college students exhibited many of the same misconceptions that have been identified in middle school students.

  2. VirtualSpace: A vision of a machine-learned virtual space environment

    NASA Astrophysics Data System (ADS)

    Bortnik, J.; Sarno-Smith, L. K.; Chu, X.; Li, W.; Ma, Q.; Angelopoulos, V.; Thorne, R. M.

    2017-12-01

    Space borne instrumentation tends to come and go. A typical instrument will go through a phase of design and construction, be deployed on a spacecraft for several years while it collects data, and then be decommissioned and fade into obscurity. The data collected from that instrument will typically receive much attention while it is being collected, perhaps in the form of event studies, conjunctions with other instruments, or a few statistical surveys, but once the instrument or spacecraft is decommissioned, the data will be archived and receive progressively less attention with every passing year. This is the fate of all historical data, and will be the fate of data being collected by instruments even at the present time. But what if those instruments could come alive, and all be simultaneously present at any and every point in time and space? Imagine the scientific insights, and societal gains that could be achieved with a grand (virtual) heliophysical observatory that consists of every current and historical mission ever deployed? We propose that this is not just fantasy but is imminently doable with the data currently available, with the present computational resources, and with currently available algorithms. This project revitalizes existing data resources and lays the groundwork for incorporating data from every future mission to expand the scope and refine the resolution of the virtual observatory. We call this project VirtualSpace: a machine-learned virtual space environment.

  3. Electrical Machines Laminations Magnetic Properties: A Virtual Instrument Laboratory

    ERIC Educational Resources Information Center

    Martinez-Roman, Javier; Perez-Cruz, Juan; Pineda-Sanchez, Manuel; Puche-Panadero, Ruben; Roger-Folch, Jose; Riera-Guasp, Martin; Sapena-Baño, Angel

    2015-01-01

    Undergraduate courses in electrical machines often include an introduction to their magnetic circuits and to the various magnetic materials used in their construction and their properties. The students must learn to be able to recognize and compare the permeability, saturation, and losses of these magnetic materials, relate each material to its…

  4. MLViS: A Web Tool for Machine Learning-Based Virtual Screening in Early-Phase of Drug Discovery and Development

    PubMed Central

    Korkmaz, Selcuk; Zararsiz, Gokmen; Goksuluk, Dincer

    2015-01-01

    Virtual screening is an important step in the early phase of the drug discovery process. Since there are thousands of compounds, this step should be both fast and effective in order to distinguish drug-like from nondrug-like molecules. Statistical machine learning methods are widely used in drug discovery studies for classification purposes. Here, we aim to develop a new tool that can classify molecules as drug-like or nondrug-like based on various machine learning methods, including discriminant, tree-based, kernel-based, ensemble, and other algorithms. To construct this tool, the performances of twenty-three different machine learning algorithms were first compared by ten different measures; then, the ten best-performing algorithms were selected based on principal component and hierarchical cluster analysis results. Besides classification, this application also has the ability to create heat maps and dendrograms for visual inspection of the molecules through hierarchical cluster analysis. Moreover, users can connect to the PubChem database to download molecular information and to create two-dimensional structures of compounds. This application is freely available through www.biosoft.hacettepe.edu.tr/MLViS/. PMID:25928885
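
    As a hedged illustration of the compare-many-classifiers step described above, the Python sketch below cross-validates a handful of scikit-learn models on synthetic stand-in data and ranks them by accuracy; the actual tool compares twenty-three algorithms with ten measures on molecular descriptors, none of which are reproduced here.

        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.svm import SVC
        from sklearn.ensemble import RandomForestClassifier

        # Synthetic stand-in for molecular descriptors labelled drug-like / non-drug-like.
        X, y = make_classification(n_samples=500, n_features=20, n_informative=8,
                                   random_state=0)

        candidates = {
            "LDA": LinearDiscriminantAnalysis(),
            "decision tree": DecisionTreeClassifier(random_state=0),
            "RBF-SVM": SVC(),
            "random forest": RandomForestClassifier(random_state=0),
        }

        # Accuracy is one of the measures a comparison like this could use;
        # rank the candidates by their 10-fold cross-validated score.
        scores = {name: cross_val_score(model, X, y, cv=10).mean()
                  for name, model in candidates.items()}
        for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
            print(f"{name}: {score:.3f}")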

  5. Prediction Based Proactive Thermal Virtual Machine Scheduling in Green Clouds

    PubMed Central

    Kinger, Supriya; Kumar, Rajesh; Sharma, Anju

    2014-01-01

    Cloud computing has rapidly emerged as a widely accepted computing paradigm, but research on Cloud computing is still at an early stage. Cloud computing provides many advanced features, but it still has some shortcomings, such as relatively high operating costs and environmental hazards like an increasing carbon footprint. These hazards can be reduced to some extent by efficient scheduling of Cloud resources. The working temperature of a machine can be taken as a criterion for Virtual Machine (VM) scheduling. This paper proposes a new proactive technique that considers the current and maximum threshold temperatures of Server Machines (SMs) before making scheduling decisions, with the help of a temperature predictor, so that the maximum temperature is never reached. Different workload scenarios have been taken into consideration. The results obtained show that the proposed system is better than existing VM scheduling systems, which do not consider the current temperature of nodes before making scheduling decisions. Thus, a reduction in the need for cooling systems in a Cloud environment has been obtained and validated. PMID:24737962
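
    A minimal Python sketch of the temperature-aware placement decision described above is given below, assuming a made-up linear temperature predictor and fictional server data; the paper's actual predictor and workload model are not reproduced.

        def pick_host(hosts, predictor, vm_heat):
            # Return a host whose predicted temperature after accepting the VM stays
            # below its threshold; prefer the coolest predicted outcome.
            feasible = []
            for host in hosts:
                predicted = predictor(host["current_temp"], vm_heat)
                if predicted < host["max_temp"]:
                    feasible.append((predicted, host["name"]))
            return min(feasible)[1] if feasible else None   # None -> defer the VM

        # Hypothetical linear predictor: each unit of VM load raises temperature by 2 °C.
        predictor = lambda current, load: current + 2.0 * load

        hosts = [
            {"name": "sm1", "current_temp": 62.0, "max_temp": 70.0},
            {"name": "sm2", "current_temp": 55.0, "max_temp": 70.0},
        ]
        print(pick_host(hosts, predictor, vm_heat=3.0))   # -> sm2, the cooler outcome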

  6. Fast mental states decoding in mixed reality.

    PubMed

    De Massari, Daniele; Pacheco, Daniel; Malekshahi, Rahim; Betella, Alberto; Verschure, Paul F M J; Birbaumer, Niels; Caria, Andrea

    2014-01-01

    The combination of Brain-Computer Interface (BCI) technology, allowing online monitoring and decoding of brain activity, with virtual and mixed reality (MR) systems may help to shape and guide implicit and explicit learning using ecological scenarios. Real-time information of ongoing brain states acquired through BCI might be exploited for controlling data presentation in virtual environments. Brain states discrimination during mixed reality experience is thus critical for adapting specific data features to contingent brain activity. In this study we recorded electroencephalographic (EEG) data while participants experienced MR scenarios implemented through the eXperience Induction Machine (XIM). The XIM is a novel framework modeling the integration of a sensing system that evaluates and measures physiological and psychological states with a number of actuators and effectors that coherently reacts to the user's actions. We then assessed continuous EEG-based discrimination of spatial navigation, reading and calculation performed in MR, using linear discriminant analysis (LDA) and support vector machine (SVM) classifiers. Dynamic single trial classification showed high accuracy of LDA and SVM classifiers in detecting multiple brain states as well as in differentiating between high and low mental workload, using a 5 s time-window shifting every 200 ms. Our results indicate overall better performance of LDA with respect to SVM and suggest applicability of our approach in a BCI-controlled MR scenario. Ultimately, successful prediction of brain states might be used to drive adaptation of data representation in order to boost information processing in MR.
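
    The sliding-window decoding scheme described above (5 s windows shifted every 200 ms, classified with LDA) can be sketched as follows with scikit-learn on a synthetic two-channel signal; the sampling rate, the log-variance features, and the simulated states are assumptions standing in for the real EEG pipeline.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        FS = 250                             # assumed sampling rate (Hz)
        WIN, STEP = 5 * FS, int(0.2 * FS)    # 5 s windows shifted every 200 ms

        def windows(signal, labels):
            # Cut a multichannel signal into overlapping windows; label each window
            # by the state at its end and use per-channel log power as features.
            X, y = [], []
            for start in range(0, signal.shape[1] - WIN, STEP):
                seg = signal[:, start:start + WIN]
                X.append(np.log(np.var(seg, axis=1)))   # crude power feature per channel
                y.append(labels[start + WIN - 1])
            return np.array(X), np.array(y)

        # Synthetic two-channel recording with two alternating "mental states".
        rng = np.random.default_rng(0)
        labels = np.repeat([0, 1, 0, 1], 30 * FS)                # 30 s blocks
        signal = rng.normal(0, 1 + 0.5 * labels, (2, labels.size))

        X, y = windows(signal, labels)
        print(cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean())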

  7. Fast mental states decoding in mixed reality

    PubMed Central

    De Massari, Daniele; Pacheco, Daniel; Malekshahi, Rahim; Betella, Alberto; Verschure, Paul F. M. J.; Birbaumer, Niels; Caria, Andrea

    2014-01-01

    The combination of Brain-Computer Interface (BCI) technology, allowing online monitoring and decoding of brain activity, with virtual and mixed reality (MR) systems may help to shape and guide implicit and explicit learning using ecological scenarios. Real-time information of ongoing brain states acquired through BCI might be exploited for controlling data presentation in virtual environments. Brain states discrimination during mixed reality experience is thus critical for adapting specific data features to contingent brain activity. In this study we recorded electroencephalographic (EEG) data while participants experienced MR scenarios implemented through the eXperience Induction Machine (XIM). The XIM is a novel framework modeling the integration of a sensing system that evaluates and measures physiological and psychological states with a number of actuators and effectors that coherently reacts to the user's actions. We then assessed continuous EEG-based discrimination of spatial navigation, reading and calculation performed in MR, using linear discriminant analysis (LDA) and support vector machine (SVM) classifiers. Dynamic single trial classification showed high accuracy of LDA and SVM classifiers in detecting multiple brain states as well as in differentiating between high and low mental workload, using a 5 s time-window shifting every 200 ms. Our results indicate overall better performance of LDA with respect to SVM and suggest applicability of our approach in a BCI-controlled MR scenario. Ultimately, successful prediction of brain states might be used to drive adaptation of data representation in order to boost information processing in MR. PMID:25505878

  8. Kinematics and Dynamics of Motion Control Based on Acceleration Control

    NASA Astrophysics Data System (ADS)

    Ohishi, Kiyoshi; Ohba, Yuzuru; Katsura, Seiichiro

    The first IEEE International Workshop on Advanced Motion Control, held in 1990, pointed out the importance of physical interpretation of motion control. Software servoing technology is now common in machine tools, robotics, and mechatronics, and has been intensively developed for numerical control (NC) machines. Motion control in unknown environments is becoming more and more important. Conventional motion control is not always suitable because it lacks the ability to adapt to the environment, and a more sophisticated ability in motion control is necessary for compliant contact with the environment. Acceleration control is the key technology of motion control in unknown environments: it can make a motion system a zero-control-stiffness system without losing robustness. Furthermore, a realization of multi-degree-of-freedom motion is necessary for future human assistance, and human-assistance motion will require various control stiffnesses corresponding to the task. This review paper focuses on the modal coordinate system used to integrate the various control stiffnesses in the virtual axes. Bilateral teleoperation is a good candidate for considering future human-assistance motion and the integration of decentralized systems. Thus the paper reviews and discusses bilateral teleoperation from the control stiffness and modal control design points of view.

  9. The Potential of Using Virtual Reality Technology in Physical Activity Settings

    ERIC Educational Resources Information Center

    Pasco, Denis

    2013-01-01

    In recent years, virtual reality technology has been successfully used for learning purposes. The purposes of the article are to examine current research on the role of virtual reality in physical activity settings and discuss potential application of using virtual reality technology to enhance learning in physical education. The article starts…

  10. Automated inspection of gaps on the free-form shape parts by laser scanning technologies

    NASA Astrophysics Data System (ADS)

    Zhou, Sen; Xu, Jian; Tao, Lei; An, Lu; Yu, Yan

    2018-01-01

    In industrial manufacturing processes, the dimensional inspection of gaps on free-form parts is critical and challenging, and is directly associated with subsequent assembly and final product quality. In this paper, a fast measuring method for automated gap inspection based on laser scanning technologies is presented. The proposed measuring method consists of three steps: firstly, the relative position is determined according to the geometric features of the measured gap, taking into account the constraints of a laser scanning operation. Secondly, in order to acquire a complete gap profile, a fast and effective scanning path is designed. Finally, the dimensions of the gaps on the free-form parts, including width, depth, and flush, are described in a virtual environment. In the future, a machine based on the proposed method will be developed for the on-line dimensional inspection of gaps on automobile or aerospace production lines.

  11. A preliminary study of MR sickness evaluation using visual motion aftereffect for advanced driver assistance systems.

    PubMed

    Nakajima, Sawako; Ino, Shuichi; Ifukube, Tohru

    2007-01-01

    Mixed Reality (MR) technologies have recently been explored in many areas of Human-Machine Interface (HMI) research, such as medicine, manufacturing, entertainment, and education. However, MR sickness, a kind of motion sickness, is caused by sensory conflicts between the real world and the virtual world. The purpose of this paper is to find a new evaluation method for motion and MR sickness. The paper investigates the relationship between whole-body vibration related to MR technologies and the motion aftereffect (MAE) phenomenon in the human visual system. The MR environment is modeled after advanced driver assistance systems in near-future vehicles. Seated subjects in the MR simulator were shaken in the pitch direction at frequencies ranging from 0.1 to 2.0 Hz. The results show that MAE is useful for evaluating MR sickness incidence. In addition, a method to reduce MR sickness using auditory stimulation is proposed.

  12. Virtual reality hardware and graphic display options for brain-machine interfaces

    PubMed Central

    Marathe, Amar R.; Carey, Holle L.; Taylor, Dawn M.

    2009-01-01

    Virtual reality hardware and graphic displays are reviewed here as a development environment for brain-machine interfaces (BMIs). Two desktop stereoscopic monitors and one 2D monitor were compared in a visual depth discrimination task and in a 3D target-matching task where able-bodied individuals used actual hand movements to match a virtual hand to different target hands. Three graphic representations of the hand were compared: a plain sphere, a sphere attached to the fingertip of a realistic hand and arm, and a stylized pacman-like hand. Several subjects had great difficulty using either stereo monitor for depth perception when perspective size cues were removed. A mismatch in stereo and size cues generated inappropriate depth illusions. This phenomenon has implications for choosing target and virtual hand sizes in BMI experiments. Target matching accuracy was about as good with the 2D monitor as with either 3D monitor. However, users achieved this accuracy by exploring the boundaries of the hand in the target with carefully controlled movements. This method of determining relative depth may not be possible in BMI experiments if movement control is more limited. Intuitive depth cues, such as including a virtual arm, can significantly improve depth perception accuracy with or without stereo viewing. PMID:18006069

  13. General-Purpose Front End for Real-Time Data Processing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    FRONTIER is a computer program that functions as a front end for any of a variety of other software of both the artificial intelligence (AI) and conventional data-processing types. As used here, front end signifies interface software needed for acquiring and preprocessing data and making the data available for analysis by the other software. FRONTIER is reusable in that it can be rapidly tailored to any such other software with minimum effort. Each component of FRONTIER is programmable, is executed in an embedded virtual machine, and can be reconfigured during execution. The virtual-machine implementation makes FRONTIER independent of the type of computing hardware on which it is executed.

  14. New Virtual Field Trips. Revised Edition.

    ERIC Educational Resources Information Center

    Cooper, Gail; Cooper, Garry

    This book is an annotated guidebook, arranged by subject matter, of World Wide Web sites for K-12 students. The following chapters are included: (1) Virtual Time Machine (i.e., sites that cover topics in world history); (2) Tour the World (i.e., sites that include information about countries); (3) Outer Space; (4) The Great Outdoors; (5) Aquatic…

  15. Virtual Factory Framework for Supporting Production Planning and Control.

    PubMed

    Kibira, Deogratias; Shao, Guodong

    2017-01-01

    Developing optimal production plans for smart manufacturing systems is challenging because shop floor events change dynamically. A virtual factory incorporating engineering tools, simulation, and optimization generates and communicates performance data to guide wise decision making for different control levels. This paper describes such a platform specifically for production planning. We also discuss verification and validation of the constituent models. A case study of a machine shop is used to demonstrate data generation for production planning in a virtual factory.

  16. Aerospace applications of virtual environment technology.

    PubMed

    Loftin, R B

    1996-11-01

    The uses of virtual environment technology in the space program are examined with emphasis on training for the Hubble Space Telescope Repair and Maintenance Mission in 1993. Project ScienceSpace at the Virtual Environment Technology Lab is discussed.

  17. Application of computer virtual simulation technology in 3D animation production

    NASA Astrophysics Data System (ADS)

    Mo, Can

    2017-11-01

    With the continuous development of computer technology, the application of virtual simulation technology has been further optimized and improved. It has been widely used in various fields of social development, such as city construction, interior design, industrial simulation, and tourism teaching. This paper mainly introduces the virtual simulation technology used in 3D animation. Based on an analysis of the characteristics of virtual simulation technology, the ways and means of applying this technology in 3D animation are studied. The purpose is to provide a reference for improving 3D animation effects in the future.

  18. [Virtual reality in medical education].

    PubMed

    Edvardsen, O; Steensrud, T

    1998-02-28

    Virtual reality technology has found new applications in industry over the last few years. Medical literature has for several years predicted a break-through in this technology for medical education. Although there is a great potential for this technology in medical education, there seems to be a wide gap between expectations and actual possibilities at present. State of the technology was explored by participation at the conference "Medicine meets virtual reality V" (San Diego Jan. 22-25 1997) and a visit to one of the leading laboratories on virtual reality in medical education. In this paper we introduce some of the basic terminology and technology, review some of the topics covered by the conference, and describe projects running in one of the leading laboratories on virtual reality technology for medical education. With this information in mind, we discuss potential applications of the current technology in medical education. Current virtual reality systems are judged to be too costly and their usefulness in education too limited for routine use in medical education.

  19. Exploring Infiniband Hardware Virtualization in OpenNebula towards Efficient High-Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pais Pitta de Lacerda Ruivo, Tiago; Bernabeu Altayo, Gerard; Garzoglio, Gabriele

    2014-11-11

    It has been widely accepted that software virtualization has a big negative impact on high-performance computing (HPC) application performance. This work explores the potential use of Infiniband hardware virtualization in an OpenNebula cloud towards the efficient support of MPI-based workloads. We have implemented, deployed, and tested an Infiniband network on the FermiCloud private Infrastructure-as-a-Service (IaaS) cloud. To avoid software virtualization and minimize the virtualization overhead, we employed a technique called Single Root Input/Output Virtualization (SRIOV). Our solution spanned modifications to the Linux hypervisor as well as to the OpenNebula manager. We evaluated the performance of the hardware virtualization on up to 56 virtual machines connected by up to 8 DDR Infiniband network links, with micro-benchmarks (latency and bandwidth) as well as with an MPI-intensive application (the HPL Linpack benchmark).

  20. iRODS-Based Climate Data Services and Virtualization-as-a-Service in the NASA Center for Climate Simulation

    NASA Astrophysics Data System (ADS)

    Schnase, J. L.; Duffy, D. Q.; Tamkin, G. S.; Strong, S.; Ripley, D.; Gill, R.; Sinno, S. S.; Shen, Y.; Carriere, L. E.; Brieger, L.; Moore, R.; Rajasekar, A.; Schroeder, W.; Wan, M.

    2011-12-01

    Scientific data services are becoming an important part of the NASA Center for Climate Simulation's mission. Our technological response to this expanding role is built around the concept of specialized virtual climate data servers, repetitive cloud provisioning, image-based deployment and distribution, and virtualization-as-a-service. A virtual climate data server is an OAIS-compliant, iRODS-based data server designed to support a particular type of scientific data collection. iRODS is data grid middleware that provides policy-based control over collection-building, managing, querying, accessing, and preserving large scientific data sets. We have developed prototype vCDSs to manage NetCDF, HDF, and GeoTIF data products. We use RPM scripts to build vCDS images in our local computing environment, our local Virtual Machine Environment, NASA's Nebula Cloud Services, and Amazon's Elastic Compute Cloud. Once provisioned into these virtualized resources, multiple vCDSs can use iRODS's federation and realized object capabilities to create an integrated ecosystem of data servers that can scale and adapt to changing requirements. This approach enables platform- or software-as-a-service deployment of the vCDSs and allows the NCCS to offer virtualization-as-a-service, a capacity to respond in an agile way to new customer requests for data services, and a path for migrating existing services into the cloud. We have registered MODIS Atmosphere data products in a vCDS that contains 54 million registered files, 630TB of data, and over 300 million metadata values. We are now assembling IPCC AR5 data into a production vCDS that will provide the platform upon which NCCS's Earth System Grid (ESG) node publishes to the extended science community. In this talk, we describe our approach, experiences, lessons learned, and plans for the future.

  1. Virtual Employment Test Bed Operational Research and Systems Analysis to Test Armaments Designs Early in the Life Cycle

    DTIC Science & Technology

    2014-06-01

    Motion capture data used to determine position and orientation of a Soldier's head, turret and the M2 machine gun
    • Controlling and acquiring user/weapon... data from the M2 simulation machine gun
    • Controlling paintball guns used to fire at the GPK during an experimental run
    • Sending and receiving TCP... Mounted, Armor/Cavalry, Combat Engineers, Field Artillery Cannon Crewmember, or MP duty assignment – Currently M2 .50 Caliber Machine Gun qualified

  2. A new chapter in environmental sensing: The Open-Source Published Environmental Sensing (OPENS) laboratory

    NASA Astrophysics Data System (ADS)

    Selker, J. S.; Roques, C.; Higgins, C. W.; Good, S. P.; Hut, R.; Selker, A.

    2015-12-01

    The confluence of 3-dimensional printing, low-cost solid-state sensors, low-cost, low-power digital controllers (e.g., Arduinos), and open-source publishing (e.g., GitHub) is poised to transform environmental sensing. The Open-Source Published Environmental Sensing (OPENS) laboratory has launched and is available for all to use. OPENS combines cutting-edge technologies and makes them available to the global environmental sensing community. OPENS includes a Maker lab space where people may collaborate in person or virtually via an on-line forum for the publication and discussion of environmental sensing technology (Corvallis, Oregon, USA; please feel free to request a free reservation for space and equipment use). The physical lab houses a test bed for sensors, as well as a complete classical machine shop, 3-D printers, electronics development benches, and workstations for code development. OPENS will provide a web-based formal publishing framework wherein students and scientists worldwide can publish, with peer review and a DOI, novel and evolutionary advancements in environmental sensor systems. This curated and peer-reviewed digital collection will include complete sets of "printable" parts and operating computer code for sensing systems, and the physical lab will include all of the machines required to produce these sensing systems. These tools can be used in person or virtually, creating a truly global venue for advancing the monitoring of the Earth's environment and agricultural systems. In this talk we will present an example of the process of designing and publishing the design and data for the OPENS-Permeameter. The publication includes 3-D printing code, Arduino (or other control/logging platform) operational code, sample data sets, and a full discussion of the design set in the scientific context of previous related devices. Editors for the peer-review process are currently sought - contact John.Selker@Oregonstate.edu or Clement.Roques@Oregonstate.edu.

  3. Virtual Teaming and Collaboration Technology: A Study of Influences on Virtual Project Outcomes

    ERIC Educational Resources Information Center

    Broils, Gary C.

    2014-01-01

    The purpose of this quantitative correlational study was to explore the relationships between the independent variables, contextual factors for virtual teams and collaboration technology, and the dependent variable, virtual project outcomes. The problem leading to the need for the study is a lower success rate for virtual projects compared to…

  4. Technology of machine tools. Volume 4. Machine tool controls

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1980-10-01

    The Machine Tool Task Force (MTTF) was formed to characterize the state of the art of machine tool technology and to identify promising future directions of this technology. This volume is one of a five-volume series that presents the MTTF findings; reports on various areas of the technology were contributed by experts in those areas.

  5. Technology of machine tools. Volume 3. Machine tool mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tlusty, J.

    1980-10-01

    The Machine Tool Task Force (MTTF) was formed to characterize the state of the art of machine tool technology and to identify promising future directions of this technology. This volume is one of a five-volume series that presents the MTTF findings; reports on various areas of the technology were contributed by experts in those areas.

  6. Technology of machine tools. Volume 5. Machine tool accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hocken, R.J.

    1980-10-01

    The Machine Tool Task Force (MTTF) was formed to characterize the state of the art of machine tool technology and to identify promising future directions of this technology. This volume is one of a five-volume series that presents the MTTF findings; reports on various areas of the technology were contributed by experts in those areas.

  7. Efficiently Scheduling Multi-core Guest Virtual Machines on Multi-core Hosts in Network Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B; Perumalla, Kalyan S

    2011-01-01

    Virtual machine (VM)-based simulation is a method used by network simulators to incorporate realistic application behaviors by executing actual VMs as high-fidelity surrogates for simulated end-hosts. A critical requirement in such a method is the simulation time-ordered scheduling and execution of the VMs. Prior approaches such as time dilation are less efficient due to the high degree of multiplexing possible when multiple multi-core VMs are simulated on multi-core host systems. We present a new simulation time-ordered scheduler to efficiently schedule multi-core VMs on multi-core real hosts, with a virtual clock realized on each virtual core. The distinguishing features of our approach are: (1) customizable granularity of the VM scheduling time unit on the simulation time axis, (2) ability to take arbitrary leaps in virtual time by VMs to maximize the utilization of host (real) cores when guest virtual cores idle, and (3) empirically determinable optimality in the tradeoff between total execution (real) time and time-ordering accuracy levels. Experiments show that it is possible to get nearly perfect time-ordered execution, with a slight cost in total run time, relative to optimized non-simulation VM schedulers. Interestingly, with our time-ordered scheduler, it is also possible to reduce the time-ordering error from over 50% with a non-simulation scheduler to less than 1% with our scheduler, with almost the same run time efficiency as that of the highly efficient non-simulation VM schedulers.
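
    The record does not include code; as a rough, generic illustration of simulation time-ordered scheduling (not the authors' implementation), the sketch below keeps one virtual clock per virtual core and always dispatches the core with the smallest virtual time, letting idle cores leap forward. The quantum and the toy core callables are hypothetical.

      import heapq

      def run_time_ordered(vcores, quantum=1.0, horizon=100.0):
          """Dispatch virtual cores strictly in virtual-time order.

          vcores: dict mapping core id -> callable(quantum) that runs the core for up to
                  `quantum` units of virtual time and returns the virtual time consumed
                  (an idle core may return a larger leap to its next event).
          """
          # Priority queue ordered by each core's virtual clock
          heap = [(0.0, core_id) for core_id in vcores]
          heapq.heapify(heap)
          while heap:
              vclock, core_id = heapq.heappop(heap)
              if vclock >= horizon:
                  continue  # this core has reached the end of the simulated interval
              advanced = vcores[core_id](quantum)   # execute the guest core for one slice
              heapq.heappush(heap, (vclock + advanced, core_id))

      # Hypothetical guest cores: a busy core advances by the quantum, an idle one leaps ahead
      cores = {"vm0-core0": lambda q: q, "vm0-core1": lambda q: 5 * q}
      run_time_ordered(cores, quantum=0.5, horizon=10.0)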

  8. Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community.

    PubMed

    Krampis, Konstantinos; Booth, Tim; Chapman, Brad; Tiwari, Bela; Bicak, Mesude; Field, Dawn; Nelson, Karen E

    2012-03-19

    A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and VirtualBox Appliance is also publicly available for download and use by researchers with access to private clouds. Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. An automated and configurable process builds Virtual Machines, allowing the development of highly customized versions from a shared code base. This shared community toolkit enables application specific analysis platforms on the cloud by minimizing the effort required to prepare and maintain them.
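
    For illustration only, the following minimal boto3 sketch provisions an on-demand instance from a pre-built machine image such as a Cloud BioLinux AMI; the AMI ID, key pair name, region, and instance type are placeholders, not values taken from the record.

      import boto3

      ec2 = boto3.client("ec2", region_name="us-east-1")

      # Launch one on-demand instance from a pre-configured image (AMI ID is a placeholder)
      response = ec2.run_instances(
          ImageId="ami-0123456789abcdef0",   # hypothetical Cloud BioLinux image
          InstanceType="m5.xlarge",
          KeyName="my-keypair",              # hypothetical key pair for remote access
          MinCount=1,
          MaxCount=1,
      )
      instance_id = response["Instances"][0]["InstanceId"]
      print("Started bioinformatics VM:", instance_id)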

  9. Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community

    PubMed Central

    2012-01-01

    Background A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Results Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and VirtualBox Appliance is also publicly available for download and use by researchers with access to private clouds. Conclusions Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. An automated and configurable process builds Virtual Machines, allowing the development of highly customized versions from a shared code base. This shared community toolkit enables application specific analysis platforms on the cloud by minimizing the effort required to prepare and maintain them. PMID:22429538

  10. Biological ageing and clinical consequences of modern technology.

    PubMed

    Kyriazis, Marios

    2017-08-01

    The pace of technology is steadily increasing, and this has a widespread effect on all areas of health and society. When we interact with this technological environment we are exposed to a wide variety of new stimuli and challenges, which may modulate the stress response and thus change the way we respond and adapt. In this Opinion paper I will examine certain aspects of the human-computer interaction with regards to health and ageing. There are practical, everyday effects which also include social and cultural elements. I will discuss how human evolution may be affected by this new environmental change (the hormetic immersion in a virtual/technological environment). Finally, I will also explore certain biological aspects which have direct relevance to the ageing human. By embracing new technologies and engaging with a techno-social ecosystem (which is no longer formed by several interacting species, but by just two main elements: humans and machines), we may be subjected to beneficial hormetic effects, which upregulate the stress response and modulate adaptation. This is likely to improve overall health as we age and, as I speculate here, may also result in the reduction of age-related dysfunction.

  11. Enhanced emotional responses during social coordination with a virtual partner

    PubMed Central

    Dumas, Guillaume; Kelso, J.A. Scott; Tognoli, Emmanuelle

    2016-01-01

    Emotion and motion, though seldom studied in tandem, are complementary aspects of social experience. This study investigates variations in emotional responses during movement coordination between a human and a Virtual Partner (VP), an agent whose virtual finger movements are driven by the Haken-Kelso-Bunz (HKB) equations of Coordination Dynamics. Twenty-one subjects were instructed to coordinate finger movements with the VP in either inphase or antiphase patterns. By adjusting model parameters, we manipulated the ‘intention’ of the VP as cooperative or competitive with the human's instructed goal. Skin potential responses (SPR) were recorded to quantify the intensity of emotional response. At the end of each trial, subjects rated the VP's intention and whether they thought their partner was another human being or a machine. We found greater emotional responses when subjects reported that their partner was human and when coordination was stable. The finding that emotional responses are strongly influenced by dynamic features of the VP's behavior has implications for mental health, brain disorders and the design of socially cooperative machines. PMID:27094374

  12. Achieving High Resolution Timer Events in Virtualized Environment.

    PubMed

    Adamczyk, Blazej; Chydzinski, Andrzej

    2015-01-01

    Virtual Machine Monitors (VMM) have become popular in different application areas. Some applications may require generating timer events with high resolution and precision. This, however, may be challenging due to the complexity of VMMs. In this paper we focus on the timer functionality provided by five different VMMs: Xen, KVM, Qemu, VirtualBox and VMWare. Firstly, we evaluate the resolutions and precisions of their timer events. It turns out that the provided resolutions and precisions are far too low for some applications (e.g. networking applications with quality of service). Then, using Xen virtualization, we demonstrate an improved timer design that greatly enhances both the resolution and precision of the achieved timer events.
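
    As a rough, generic illustration (not the authors' benchmark), the sketch below estimates the overshoot and jitter of short sleep-based timer events as observed from inside a guest; the requested intervals and sample count are arbitrary.

      import time
      import statistics

      def measure_timer(interval_s, samples=200):
          """Request `interval_s` sleeps and report the observed mean overshoot and jitter."""
          errors = []
          for _ in range(samples):
              start = time.perf_counter()
              time.sleep(interval_s)
              observed = time.perf_counter() - start
              errors.append(observed - interval_s)
          return statistics.mean(errors), statistics.stdev(errors)

      for interval in (1e-3, 1e-4, 1e-5):          # 1 ms, 100 us, 10 us requests
          mean_err, jitter = measure_timer(interval)
          print(f"requested {interval*1e6:8.1f} us: mean overshoot {mean_err*1e6:8.1f} us, "
                f"jitter {jitter*1e6:8.1f} us")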

  13. Technology of machine tools. Volume 2. Machine tool systems management and utilization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomson, A.R.

    1980-10-01

    The Machine Tool Task Force (MTTF) was formed to characterize the state of the art of machine tool technology and to identify promising future directions of this technology. This volume is one of a five-volume series that presents the MTTF findings; reports on various areas of the technology were contributed by experts in those areas.

  14. Virtual Education in Universities: A Technological Imperative.

    ERIC Educational Resources Information Center

    O'Donoghue, John; Singh, Gurmak; Dorward, Lisa

    2001-01-01

    Discusses virtual universities and virtual classrooms, exploring both the benefits and the disadvantages of technology-based delivery systems. Highlights include competitive pressures to use technology; impacts on students; the need for flexibility to meet unique student needs and learning styles; learning environments; impact on society; and…

  15. A video, text, and speech-driven realistic 3-d virtual head for human-machine interface.

    PubMed

    Yu, Jun; Wang, Zeng-Fu

    2015-05-01

    A multiple-inputs-driven realistic facial animation system based on a 3-D virtual head for human-machine interface is proposed. The system can be driven independently by video, text, and speech, and thus can interact with humans through diverse interfaces. The combination of a parameterized model and a muscular model is used to obtain a tradeoff between computational efficiency and high realism of 3-D facial animation. The online appearance model is used to track 3-D facial motion from video in the framework of particle filtering, and multiple measurements, i.e., the pixel color values of the input image and the Gabor wavelet coefficients of the illumination ratio image, are fused to reduce the influence of lighting and person dependence on the construction of the online appearance model. The tri-phone model is used to reduce the computational cost of visual co-articulation in speech-synchronized viseme synthesis without sacrificing performance. The objective and subjective experiments show that the system is suitable for human-machine interaction.

  16. Strength computation of forged parts taking into account strain hardening and damage

    NASA Astrophysics Data System (ADS)

    Cristescu, Michel L.

    2004-06-01

    Modern non-linear simulation software, such as FORGE 3 (registered trademark of TRANSVALOR), is able to compute the residual stresses, the strain hardening and the damage during the forging process. A thermally dependent elasto-visco-plastic law is used to simulate the behavior of the material of the hot forged piece. A modified Lemaitre law coupled with elasticity, plasticity and thermal effects is used to simulate the damage. After the simulation of the different steps of the forging process, the part is cooled and then virtually machined in order to obtain the finished part. An elastic computation is then performed to equilibrate the residual stresses, so that we obtain the true geometry of the finished part after machining. The response of the part to the loadings it will sustain during its life is then computed, taking into account the residual stresses, the strain hardening and the damage that occur during forging. This process is illustrated by the forging, virtual machining and stress analysis of an aluminium wheel hub.

  17. GRIDVIEW: Recent Improvements in Research and Education Software for Exploring Mars Topography

    NASA Technical Reports Server (NTRS)

    Roark, J. H.; Masuoka, C. M.; Frey, H. V.

    2004-01-01

    GRIDVIEW is being developed by the GEODYNAMICS Branch at NASA's Goddard Space Flight Center and can be downloaded on the web at http://geodynamics.gsfc.nasa.gov/gridview/. The program is very mature and has been successfully used for more than four years, but is still under development as we add new features for data analysis and visualization. The software can run on any computer supported by the IDL virtual machine application supplied by RSI. The virtual machine application is currently available for recent versions of MS Windows, MacOS X, Red Hat Linux and UNIX. The minimum system memory requirement is 32 MB; however, loading large data sets may require larger amounts of RAM to function adequately.

  18. Dynamically programmable cache

    NASA Astrophysics Data System (ADS)

    Nakkar, Mouna; Harding, John A.; Schwartz, David A.; Franzon, Paul D.; Conte, Thomas

    1998-10-01

    Reconfigurable machines have recently been used as co-processors to accelerate the execution of certain algorithms or program subroutines. The problems with the above approach include high reconfiguration time and limited partial reconfiguration. By far the most critical problems are: (1) the small on-chip memory, which results in slower execution time, and (2) small FPGA areas that cannot implement large subroutines. Dynamically Programmable Cache (DPC) is a novel architecture for embedded processors which offers solutions to the above problems. To solve memory access problems, DPC processors merge reconfigurable arrays with the data cache at various cache levels to create a multi-level reconfigurable machine. As a result, DPC machines have both higher data accessibility and higher FPGA memory bandwidth. To solve the limited FPGA resource problem, DPC processors implement the multi-context switching (virtualization) concept. Virtualization allows implementation of large subroutines with fewer FPGA cells. Additionally, DPC processors can parallelize the execution of several operations, resulting in faster execution time. In this paper, the speedup of DPC machines is shown to be 5X over an Altera FLEX10K FPGA chip and 2X over a Sun Ultra1 SPARC station for two different algorithms (convolution and motion estimation).

  19. Agreements in Virtual Organizations

    NASA Astrophysics Data System (ADS)

    Pankowska, Malgorzata

    This chapter is an attempt to explain the important impact that contract theory delivers with respect to the concept of the virtual organization. The author believes that not enough research has been conducted to transfer theoretical foundations for networking to the phenomena of virtual organizations and open autonomic computing environments so as to ensure their controllability and management. The main research problem of this chapter is to explain the significance of agreements for virtual organization governance. The first part of this chapter explains the differences between virtual machines and virtual organizations, as a basis for describing the significance of the former to the development of the latter. Next, virtual organization development tendencies are presented and problems of IT governance in highly distributed organizational environments are discussed. The last part of this chapter covers an analysis of contract and agreement management for governance in open computing environments.

  20. Virtual Environments in Scientific Visualization

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Lisinski, T. A. (Technical Monitor)

    1994-01-01

    Virtual environment technology is a new way of approaching the interface between computers and humans. Emphasizing display and user control that conform to the user's natural ways of perceiving and thinking about space, virtual environment technologies enhance the ability to perceive and interact with computer generated graphic information. This enhancement potentially has a major effect on the field of scientific visualization. Current examples of this technology include the Virtual Windtunnel being developed at NASA Ames Research Center. Other major institutions such as the National Center for Supercomputing Applications and SRI International are also exploring this technology. This talk will describe several implementations of virtual environments for use in scientific visualization. Examples include the visualization of unsteady fluid flows (the virtual windtunnel), the visualization of geodesics in curved spacetime, surface manipulation, and examples developed at various laboratories.

  1. Performance analysis of sliding window filtering of two dimensional signals based on stream data processing systems

    NASA Astrophysics Data System (ADS)

    Kazanskiy, Nikolay; Protsenko, Vladimir; Serafimovich, Pavel

    2016-03-01

    This research article presents an experiment implementing an image filtering task in the Apache Storm and IBM InfoSphere Streams stream data processing systems. The aim of the presented research is to show that these new technologies can be used effectively for sliding window filtering of image sequences. The analysis of execution was focused on two parameters: throughput and memory consumption. Profiling was performed on CentOS operating systems running on two virtual machines for each system. The experiment results showed that IBM InfoSphere Streams has about a 1.5 to 13.5 times lower memory footprint than Apache Storm, but can be about 2.0 to 2.5 times slower on real hardware.
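
    The record compares Storm and InfoSphere Streams; purely as a platform-neutral illustration of the underlying operation, the sketch below applies a 3x3 sliding-window mean filter to a stream of frames with NumPy and SciPy (window size and the random frame source are arbitrary stand-ins).

      import numpy as np
      from scipy.ndimage import uniform_filter

      def frame_stream(n_frames=10, shape=(480, 640)):
          """Stand-in for an incoming image stream (random frames for illustration)."""
          rng = np.random.default_rng(0)
          for _ in range(n_frames):
              yield rng.random(shape, dtype=np.float64)

      def filter_stream(frames, window=3):
          """Apply a sliding-window mean filter to each frame as it arrives."""
          for frame in frames:
              yield uniform_filter(frame, size=window)

      for i, filtered in enumerate(filter_stream(frame_stream())):
          print(f"frame {i}: filtered mean = {filtered.mean():.4f}")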

  2. A Review of Extra-Terrestrial Mining Robot Concepts

    NASA Technical Reports Server (NTRS)

    Mueller, Robert P.; Van Susante, Paul J.

    2011-01-01

    Outer space contains a vast amount of resources that offer virtually unlimited wealth to the humans that can access and use them for commercial purposes. One of the key technologies for harvesting these resources is robotic mining of regolith, minerals, ices and metals. The harsh environment and vast distances create challenges that are handled best by robotic machines working in collaboration with human explorers. Humans will benefit from the resources that will be mined by robots. They will visit outposts and mining camps as required for exploration, commerce and scientific research, but a continuous presence is most likely to be provided by robotic mining machines that are remotely controlled by humans. There have been a variety of extra-terrestrial robotic mining concepts proposed over the last 100 years and this paper will attempt to summarize and review concepts in the public domain (government, industry and academia) to serve as an informational resource for future mining robot developers and operators. The challenges associated with these concepts will be discussed and feasibility will be assessed. Future needs associated with commercial efforts will also be investigated.

  3. A Review of Extra-Terrestrial Mining Concepts

    NASA Technical Reports Server (NTRS)

    Mueller, R. P.; van Susante, P. J.

    2012-01-01

    Outer space contains a vast amount of resources that offer virtually unlimited wealth to the humans that can access and use them for commercial purposes. One of the key technologies for harvesting these resources is robotic mining of regolith, minerals, ices and metals. The harsh environment and vast distances create challenges that are handled best by robotic machines working in collaboration with human explorers. Humans will benefit from the resources that will be mined by robots. They will visit outposts and mining camps as required for exploration, commerce and scientific research, but a continuous presence is most likely to be provided by robotic mining machines that are remotely controlled by humans. There have been a variety of extra-terrestrial robotic mining concepts proposed over the last 40 years and this paper will attempt to summarize and review concepts in the public domain (government, industry and academia) to serve as an informational resource for future mining robot developers and operators. The challenges associated with these concepts will be discussed and feasibility will be assessed. Future needs associated with commercial efforts will also be investigated.

  4. Semantic Agent-Based Service Middleware and Simulation for Smart Cities

    PubMed Central

    Liu, Ming; Xu, Yang; Hu, Haixiao; Mohammed, Abdul-Wahid

    2016-01-01

    With the development of Machine-to-Machine (M2M) technology, a variety of embedded and mobile devices are integrated to interact via the platform of the Internet of Things, especially in the domain of smart cities. One of the primary challenges is that selecting the appropriate services or service combination for upper layer applications is hard, owing to the absence of a unified semantic service description pattern as well as a service selection mechanism. In this paper, we define a semantic service representation model from four key properties: Capability (C), Deployment (D), Resource (R) and IOData (IO). Based on this model, an agent-based middleware is built to support semantic service enablement. In this middleware, we present an efficient semantic service discovery and matching approach for a service combination process, which calculates the semantic similarity between services, and a heuristic algorithm to search the service candidates for a specific service request. Based on this design, we propose a simulation of virtual urban fire fighting, and the experimental results demonstrate the feasibility and efficiency of our design. PMID:28009818
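
    The record's C/D/R/IO model is described only in prose. Purely as an illustrative sketch (not the authors' algorithm), the code below scores candidate services against a request with a weighted set-overlap on those four property sets and returns the best match; the weights and service descriptions are invented.

      def jaccard(a, b):
          """Set overlap used here as a stand-in for a semantic similarity measure."""
          a, b = set(a), set(b)
          return len(a & b) / len(a | b) if a | b else 1.0

      # Weights for Capability, Deployment, Resource and IOData (invented values)
      WEIGHTS = {"C": 0.4, "D": 0.1, "R": 0.2, "IO": 0.3}

      def service_similarity(request, service):
          return sum(w * jaccard(request[k], service[k]) for k, w in WEIGHTS.items())

      def best_match(request, services):
          return max(services, key=lambda s: service_similarity(request, s["desc"]))

      # Hypothetical request and candidate services for a fire-alarm scenario
      request = {"C": {"detect-smoke"}, "D": {"district-3"}, "R": {"sensor"}, "IO": {"alarm-event"}}
      candidates = [
          {"name": "smoke-sensor-net", "desc": {"C": {"detect-smoke"}, "D": {"district-3"},
                                                "R": {"sensor"}, "IO": {"alarm-event"}}},
          {"name": "traffic-camera",   "desc": {"C": {"count-cars"}, "D": {"district-3"},
                                                "R": {"camera"}, "IO": {"video"}}},
      ]
      print(best_match(request, candidates)["name"])   # -> smoke-sensor-net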

  5. Semantic Agent-Based Service Middleware and Simulation for Smart Cities.

    PubMed

    Liu, Ming; Xu, Yang; Hu, Haixiao; Mohammed, Abdul-Wahid

    2016-12-21

    With the development of Machine-to-Machine (M2M) technology, a variety of embedded and mobile devices are integrated to interact via the platform of the Internet of Things, especially in the domain of smart cities. One of the primary challenges is that selecting the appropriate services or service combination for upper layer applications is hard, owing to the absence of a unified semantic service description pattern as well as a service selection mechanism. In this paper, we define a semantic service representation model from four key properties: Capability (C), Deployment (D), Resource (R) and IOData (IO). Based on this model, an agent-based middleware is built to support semantic service enablement. In this middleware, we present an efficient semantic service discovery and matching approach for a service combination process, which calculates the semantic similarity between services, and a heuristic algorithm to search the service candidates for a specific service request. Based on this design, we propose a simulation of virtual urban fire fighting, and the experimental results demonstrate the feasibility and efficiency of our design.

  6. Research and Development of Web-Based Virtual Online Classroom

    ERIC Educational Resources Information Center

    Yang, Zongkai; Liu, Qingtang

    2007-01-01

    To build a web-based virtual learning environment depends on information technologies, concerns technology supporting learning methods and theories. A web-based virtual online classroom is designed and developed based on learning theories and streaming media technologies. And it is composed of two parts: instructional communicating environment…

  7. Quantum Virtual Machine (QVM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCaskey, Alexander J.

    There is a lack of state-of-the-art HPC simulation tools for simulating general quantum computing. Furthermore, there are no real software tools that integrate current quantum computers into existing classical HPC workflows. This product, the Quantum Virtual Machine (QVM), solves this problem by providing an extensible framework for pluggable virtual, or physical, quantum processing units (QPUs). It enables the execution of low level quantum assembly codes and returns the results of such executions.
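
    The QVM itself is not included in the record; purely as a toy illustration of a pluggable-QPU design under those stated goals, the sketch below defines a backend interface and a one-qubit simulator that executes a tiny instruction list (class names, gates, and the interface are invented, not the QVM's actual API).

      import numpy as np

      class QPU:
          """Pluggable backend interface: execute a low-level instruction list, return counts."""
          def execute(self, program, shots=100):
              raise NotImplementedError

      class OneQubitSimulator(QPU):
          """Toy virtual QPU: a single qubit with X and H gates plus terminal measurement."""
          GATES = {
              "X": np.array([[0, 1], [1, 0]], dtype=complex),
              "H": np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2),
          }
          def execute(self, program, shots=100):
              state = np.array([1, 0], dtype=complex)          # start in |0>
              for instr in program:
                  if instr in self.GATES:
                      state = self.GATES[instr] @ state
              p1 = abs(state[1]) ** 2                          # probability of measuring 1
              ones = np.random.binomial(shots, p1)
              return {"0": shots - ones, "1": ones}

      backend: QPU = OneQubitSimulator()                       # a physical QPU could be plugged in instead
      print(backend.execute(["H"], shots=1000))                # roughly 50/50 counts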

  8. The Fluke Security Project

    DTIC Science & Technology

    2000-04-01

    be an extension of Utah’s nascent Quarks system, oriented to closely coupled cluster environments. However, the grant did not actually begin until... Intel x86, implemented ten virtual machine monitors and servers, including a virtual memory manager, a checkpointer, a process manager, a file server... Fluke, we developed a novel hierarchical processor scheduling framework called CPU inheritance scheduling [5]. This is a framework for scheduling

  9. An innovative multimodal virtual platform for communication with devices in a natural way

    NASA Astrophysics Data System (ADS)

    Kinkar, Chhayarani R.; Golash, Richa; Upadhyay, Akhilesh R.

    2012-03-01

    As technology evolves, people are increasingly interested in communicating with machines and computers in natural ways. Doing so makes machines more compact and portable by avoiding remotes, keyboards, etc., and also helps users live in an environment freer from electromagnetic waves. This has made the recognition of natural modalities in human-computer interaction a most appealing and promising research field. At the same time, it has been observed that using a single mode of interaction limits the full utilization of commands as well as data flow. In this paper, a multimodal platform is proposed in which, out of many natural modalities such as eye gaze, speech, voice and face, human gestures are combined with human voice so as to minimize the mean square error. This loosens the strict environment needed for accurate and robust interaction when using a single mode. Gestures complement speech: gestures are ideal for direct object manipulation, while natural language is used for descriptive tasks. Human-computer interaction basically requires two broad steps, recognition and interpretation. Recognition and interpretation of natural modalities in complex binary instructions is a tough task, as it integrates the real world with the virtual environment. The main idea of the paper is to develop an efficient model for fusing data coming from heterogeneous sensors, a camera and a microphone. Through this paper we have analyzed that efficiency is increased if heterogeneous data (image and voice) are combined at the feature level using artificial intelligence. The long-term goal of this paper is to design a robust system for users who are physically disabled or have little technical knowledge.

  10. Virtual Neurorobotics (VNR) to Accelerate Development of Plausible Neuromorphic Brain Architectures.

    PubMed

    Goodman, Philip H; Buntha, Sermsak; Zou, Quan; Dascalu, Sergiu-Mihai

    2007-01-01

    Traditional research in artificial intelligence and machine learning has viewed the brain as a specially adapted information-processing system. More recently the field of social robotics has been advanced to capture the important dynamics of human cognition and interaction. An overarching societal goal of this research is to incorporate the resultant knowledge about intelligence into technology for prosthetic, assistive, security, and decision support applications. However, despite many decades of investment in learning and classification systems, this paradigm has yet to yield truly "intelligent" systems. For this reason, many investigators are now attempting to incorporate more realistic neuromorphic properties into machine learning systems, encouraged by over two decades of neuroscience research that has provided parameters that characterize the brain's interdependent genomic, proteomic, metabolomic, anatomic, and electrophysiological networks. Given the complexity of neural systems, developing tenable models to capture the essence of natural intelligence for real-time application requires that we discriminate features underlying information processing and intrinsic motivation from those reflecting biological constraints (such as maintaining structural integrity and transporting metabolic products). We propose herein a conceptual framework and an iterative method of virtual neurorobotics (VNR) intended to rapidly forward-engineer and test progressively more complex putative neuromorphic brain prototypes for their ability to support intrinsically intelligent, intentional interaction with humans. The VNR system is based on the viewpoint that a truly intelligent system must be driven by emotion rather than programmed tasking, incorporating intrinsic motivation and intentionality. We report pilot results of a closed-loop, real-time interactive VNR system with a spiking neural brain, and provide a video demonstration as online supplemental material.

  11. The virtual environment display system

    NASA Technical Reports Server (NTRS)

    Mcgreevy, Michael W.

    1991-01-01

    Virtual environment technology is a display and control technology that can surround a person in an interactive computer generated or computer mediated virtual environment. It has evolved at NASA-Ames since 1984 to serve NASA's missions and goals. The exciting potential of this technology, sometimes called Virtual Reality, Artificial Reality, or Cyberspace, has been recognized recently by the popular media, industry, academia, and government organizations. Much research and development will be necessary to bring it to fruition.

  12. The Integration of CloudStack and OCCI/OpenNebula with DIRAC

    NASA Astrophysics Data System (ADS)

    Méndez Muñoz, Víctor; Fernández Albor, Víctor; Graciani Diaz, Ricardo; Casajús Ramo, Adriàn; Fernández Pena, Tomás; Merino Arévalo, Gonzalo; José Saborido Silva, Juan

    2012-12-01

    The increasing availability of Cloud resources is arising as a realistic alternative to the Grid as a paradigm for enabling scientific communities to access large distributed computing resources. The DIRAC framework for distributed computing is an easy way to efficiently access resources from both systems. This paper explains the integration of DIRAC with two open-source Cloud Managers: OpenNebula (taking advantage of the OCCI standard) and CloudStack. These are computing tools to manage the complexity and heterogeneity of distributed data center infrastructures, allowing the creation of virtual clusters on demand, including public, private and hybrid clouds. This approach required developing an extension to the previous DIRAC Virtual Machine engine, originally written for Amazon EC2, to allow connection with these new cloud managers. In the OpenNebula case, the development has been based on the CernVM Virtual Software Appliance with appropriate contextualization, while in the case of CloudStack, the infrastructure has been kept more general, which permits other Virtual Machine sources and operating systems to be used. In both cases, the CernVM File System has been used to facilitate software distribution to the computing nodes. With the resulting infrastructure, the cloud resources are transparent to the users through a friendly interface, like the DIRAC Web Portal. The main purpose of this integration is to get a system that can manage cloud and grid resources at the same time. This particular feature pushes DIRAC to a new conceptual denomination as interware, integrating different middleware. Users from different communities do not need to care about the installation of the standard software that is available at the nodes, nor about the operating system of the host machine, which is transparent to the user. This paper presents an analysis of the overhead of the virtual layer, with tests comparing the proposed approach to the existing Grid solution.

  13. The perception of spatial layout in real and virtual worlds.

    PubMed

    Arthur, E J; Hancock, P A; Chrysler, S T

    1997-01-01

    As human-machine interfaces grow more immersive and graphically oriented, virtual environment systems become more prominent as the medium for human-machine communication. Often, virtual environments (VE) are built to provide exact metrical representations of existing or proposed physical spaces. However, it is not known how individuals develop representational models of the spaces in which they are immersed and how those models may be distorted with respect to both the virtual and real-world equivalents. To evaluate the process of model development, the present experiment examined participants' ability to reproduce a complex spatial layout of objects having experienced them previously under different viewing conditions. The layout consisted of nine common objects arranged on a flat plane. These objects could be viewed in a free binocular virtual condition, a free binocular real-world condition, and a static monocular view of the real world. The first two allowed active exploration of the environment while the latter condition allowed the participant only a passive opportunity to observe from a single viewpoint. Viewing conditions were a between-subject variable with 10 participants randomly assigned to each condition. Performance was assessed using mapping accuracy and triadic comparisons of relative inter-object distances. Mapping results showed a significant effect of viewing condition where, interestingly, the static monocular condition was superior to both the active virtual and real binocular conditions. Results for the triadic comparisons showed a significant interaction of gender by viewing condition in which males were more accurate than females. These results suggest that the situation model resulting from interaction with a virtual environment was indistinguishable from that resulting from interaction with real objects, at least within the constraints of the present procedure.

  14. A Systematic Framework of Virtual Laboratories Using Mobile Agent and Design Pattern Technologies

    ERIC Educational Resources Information Center

    Li, Yi-Hsung; Dow, Chyi-Ren; Lin, Cheng-Min; Chen, Sheng-Chang; Hsu, Fu-Wei

    2009-01-01

    Innovations in network and information technology have transformed traditional classroom lectures into new approaches that have given universities the opportunity to create a virtual laboratory. However, there is no systematic framework in existing approaches for the development of virtual laboratories. Further, developing a virtual laboratory…

  15. New directions in virtual environments and gaming to address obesity and diabetes: industry perspective.

    PubMed

    Ruppert, Barb

    2011-03-01

    Virtual reality is increasingly used for education and treatment in the fields of health and medicine. What is the health potential of virtual reality technology from the software development industry perspective? This article presents interviews with Ben Sawyer of Games for Health, Dr. Walter Greenleaf of InWorld Solutions, and Dr. Ernie Medina of MedPlay Technologies. Games for Health brings together researchers, medical professionals, and game developers to share information on the impact that game technologies can have on health, health care, and policy. InWorld is an Internet-based virtual environment designed specifically for behavioral health care. MedPlay Technologies develops wellness training programs that include exergaming technology. The interviewees share their views on software development and other issues that must be addressed to advance the field of virtual reality for health applications. © 2011 Diabetes Technology Society.

  16. Designing Virtual Museum Using Web3D Technology

    NASA Astrophysics Data System (ADS)

    Zhao, Jianghai

    Virtual reality technology (VRT) has the inherent potential to construct an effective learning environment thanks to its 3I characteristics: Interaction, Immersion and Imagination. As VRT develops, it is being applied to education in ever more profound ways, and the Virtual Museum is one such application. The Virtual Museum is based on Web3D technology, and extensibility is the most important design factor. Considering the advantages and disadvantages of each Web3D technology, the VRML, Cult3D and Viewpoint technologies were chosen. A web chatroom based on Flash and ASP technology has also been created in order to make the Virtual Museum an interactive learning environment.

  17. Summer Students in Virtual Reality: A Pilot Study on Educational Applications of Virtual Reality Technology.

    ERIC Educational Resources Information Center

    Bricken, Meredith; Byrne, Chris M.

    The goal of this study was to take a first step in evaluating the potential of virtual reality (VR) as a learning environment. The context of the study was The Technology Academy, a technology-oriented summer day camp for students ages 5-18, where student activities center around hands-on exploration of new technology (e.g., robotics, MIDI digital…

  18. Performance of machine-learning scoring functions in structure-based virtual screening.

    PubMed

    Wójcikowski, Maciej; Ballester, Pedro J; Siedlecki, Pawel

    2017-04-25

    Classical scoring functions have reached a plateau in their performance in virtual screening and binding affinity prediction. Recently, machine-learning scoring functions trained on protein-ligand complexes have shown great promise in small tailored studies. They have also raised controversy, specifically concerning model overfitting and applicability to novel targets. Here we provide a new ready-to-use scoring function (RF-Score-VS) trained on 15 426 active and 893 897 inactive molecules docked to a set of 102 targets. We use the full DUD-E data sets along with three docking tools, five classical and three machine-learning scoring functions for model building and performance assessment. Our results show RF-Score-VS can substantially improve virtual screening performance: at the top 1% of the ranked library, RF-Score-VS provides a 55.6% hit rate, whereas Vina provides only 16.2% (at smaller percentages the difference is even more encouraging: at the top 0.1%, RF-Score-VS achieves an 88.6% hit rate versus 27.5% for Vina). In addition, RF-Score-VS provides much better prediction of measured binding affinity than Vina (Pearson correlation of 0.56 and -0.18, respectively). Lastly, we tested RF-Score-VS on an independent test set from the DEKOIS benchmark and observed comparable results. We provide full data sets to facilitate further research in this area (http://github.com/oddt/rfscorevs) as well as the ready-to-use RF-Score-VS (http://github.com/oddt/rfscorevs_binary).
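
    RF-Score-VS itself is available at the repositories cited in the record. As a much-simplified, generic illustration of training a random-forest scoring function on docked poses and measuring an enrichment-style metric, the sketch below uses scikit-learn on a synthetic feature matrix; all data and the 36-feature layout here are placeholders, not the DUD-E sets.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(42)

      # Synthetic stand-in for protein-ligand interaction features (actives are rare)
      n_samples, n_features = 20_000, 36
      X = rng.normal(size=(n_samples, n_features))
      y = (rng.random(n_samples) < 0.02).astype(int)        # ~2% actives
      X[y == 1] += 0.5                                      # give actives a weak signal

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

      model = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
      model.fit(X_tr, y_tr)

      # Rank the test "library" by predicted score and compute the top-1% hit rate
      scores = model.predict_proba(X_te)[:, 1]
      top = np.argsort(scores)[::-1][: max(1, len(scores) // 100)]
      print("top-1% hit rate:", y_te[top].mean())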

  19. Extreme Mechanics in Soft Pneumatic Robots and Soft Microfluidic Electronics and Sensors

    NASA Astrophysics Data System (ADS)

    Majidi, Carmel

    2012-02-01

    In the near future, machines and robots will be completely soft, stretchable, impact resistant, and capable of adapting their shape and functionality to changes in mission and environment. Similar to biological tissue and soft-body organisms, these next-generation technologies will contain no rigid parts and instead be composed entirely of soft elastomers, gels, fluids, and other non-rigid matter. Using a combination of rapid prototyping tools, microfabrication methods, and emerging techniques in so-called "soft lithography," scientists and engineers are currently introducing exciting new families of soft pneumatic robots, soft microfluidic sensors, and hyperelastic electronics that can be stretched to as much as 10x their natural length. Progress has been guided by an interdisciplinary collection of insights from chemistry, life sciences, robotics, microelectronics, and solid mechanics. In virtually every technology and application domain, mechanics and elasticity have a central role in governing functionality and design. Moreover, in contrast to conventional machines and electronics, soft pneumatic systems and microfluidics typically operate in the finite deformation regime, with materials stretching to several times their natural length. In this talk, I will review emerging paradigms in soft pneumatic robotics and soft microfluidic electronics and highlight modeling and design challenges that arise from the extreme mechanics of inflation, locomotion, sensor operation, and human interaction. I will also discuss perceived challenges and opportunities in a broad range of potential applications, from medicine to wearable computing.

  20. 75 FR 41894 - Wapakoneta Machine Company, Currently Known as EF Industrial Technologies, Inc., Wapakoneta, OH...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-19

    ... Company, Currently Known as EF Industrial Technologies, Inc., Wapakoneta, OH; Amended Certification... of early 2010, Wapakoneta Machine Company is currently known as EF Industrial Technologies, Inc. Some... Wapakoneta Machine Company, currently known as EF Industrial Technologies, Inc., Wapakoneta, Ohio became...

  1. Creative Technology and Rap

    ERIC Educational Resources Information Center

    Ch'ien, Evelyn

    2011-01-01

    This paper describes how a linguistic form, rap, can evolve in tandem with technological advances and manifest human-machine creativity. Rather than assuming that the interplay between machines and technology makes humans robotic or machine-like, the paper explores how the pressure of executing artistic visions using technology can drive…

  2. Bridging Realty to Virtual Reality: Investigating Gender Effect and Student Engagement on Learning through Video Game Play in an Elementary School Classroom

    ERIC Educational Resources Information Center

    Annetta, Leonard; Mangrum, Jennifer; Holmes, Shawn; Collazo, Kimberly; Cheng, Meng-Tzu

    2009-01-01

    The purpose of this study was to examine students' learning of simple machines, a fifth-grade (ages 10-11) forces and motion unit, and student engagement using a teacher-created Multiplayer Educational Gaming Application. This mixed-method study collected pre-test/post-test results to determine student knowledge about simple machines. A survey…

  3. Innovation Education Enabled through a Collaborative Virtual Reality Learning Environment

    ERIC Educational Resources Information Center

    Thorsteinsson, Gisli; Page, Tom; Lehtonen, Miika; Ha, Joong Gyu

    2006-01-01

    This article provides a descriptive account of the development of an approach to the support of design and technology education with 3D Virtual Reality (VR) technologies on an open and distance learning basis. This work promotes an understanding of the implications and possibilities of advanced virtual learning technologies in education for…

  4. Research on 3D virtual campus scene modeling based on 3ds Max and VRML

    NASA Astrophysics Data System (ADS)

    Kang, Chuanli; Zhou, Yanliu; Liang, Xianyue

    2015-12-01

    With the rapid development of modern technology, digital information management and virtual reality simulation technology have become research hotspots. A 3D virtual campus model can not only represent real-world objects naturally, realistically and vividly, but can also expand the campus in its time and space dimensions, combining the school environment with information. This paper mainly uses 3ds Max to create three-dimensional models of campus buildings, special land parcels, and other objects. Dynamic interactive functions are then realized by programming the object models built in 3ds Max with VRML. This research focuses on virtual campus scene modeling technology and VRML scene design, and on a variety of real-time processing and optimization strategies used in the scene design process. The approach preserves texture map image quality while improving the running speed of image texture mapping. Based on the features and architecture of Guilin University of Technology, 3ds Max, AutoCAD and VRML were used to model the different objects of the virtual campus. Finally, the resulting virtual campus scene is summarized.

  5. Virtual Application of Darul Arif Palace from Serdang Sultanate using Virtual Reality

    NASA Astrophysics Data System (ADS)

    Syahputra, M. F.; Annisa, T.; Rahmat, R. F.; Muchtar, M. A.

    2017-01-01

    The Serdang Sultanate was one of the Malay sultanates in Sumatera Utara. By the 18th century, many Malay aristocracies had developed in Sumatera Utara. A social revolution took place in 1946, in which many sultanates were overthrown and members of the PKI (Communist Party of Indonesia) carried out mass killings of members of the sultanate families. As a result of this incident, much cultural and historical heritage was destroyed. The integration of heritage preservation and digital technology has become a recent trend. Digital technology is not only able to record and preserve detailed documents and information about heritage completely, but can also effectively bring added value. In this research, polygonal modelling techniques from 3D modelling technology are used to reconstruct the Darul Arif Palace of the Serdang Sultanate. After modelling, the palace is combined with virtual reality technology to allow users to explore the palace and its surrounding environment. Virtual reality technology is the simulation of real objects in a virtual world. The result of this research is a virtual reality application that runs on a Head-Mounted Display.

  6. Achieving High Resolution Timer Events in Virtualized Environment

    PubMed Central

    Adamczyk, Blazej; Chydzinski, Andrzej

    2015-01-01

    Virtual Machine Monitors (VMM) have become popular in different application areas. Some applications may require generating timer events with high resolution and precision. This, however, may be challenging due to the complexity of VMMs. In this paper we focus on the timer functionality provided by five different VMMs: Xen, KVM, Qemu, VirtualBox and VMWare. Firstly, we evaluate the resolutions and precisions of their timer events. It turns out that the provided resolutions and precisions are far too low for some applications (e.g. networking applications with quality of service). Then, using Xen virtualization, we demonstrate an improved timer design that greatly enhances both the resolution and precision of the achieved timer events. PMID:26177366

  7. A computer-based training system combining virtual reality and multimedia

    NASA Technical Reports Server (NTRS)

    Stansfield, Sharon A.

    1993-01-01

    Training new users of complex machines is often an expensive and time-consuming process. This is particularly true for special purpose systems, such as those frequently encountered in DOE applications. This paper presents a computer-based training system intended as a partial solution to this problem. The system extends the basic virtual reality (VR) training paradigm by adding a multimedia component which may be accessed during interaction with the virtual environment. The 3D model used to create the virtual reality is also used as the primary navigation tool through the associated multimedia. This method exploits the natural mapping between a virtual world and the real world that it represents to provide a more intuitive way for the student to interact with all forms of information about the system.

  8. DIRAC universal pilots

    NASA Astrophysics Data System (ADS)

    Stagni, F.; McNab, A.; Luzzi, C.; Krzemien, W.; Consortium, DIRAC

    2017-10-01

    In the last few years, new types of computing models, such as IaaS (Infrastructure as a Service) and IaaC (Infrastructure as a Client), have gained popularity. New resources may come as part of pledged resources, while others come in the form of opportunistic ones. Most, but not all, of these new infrastructures are based on virtualization techniques. In addition, some of them present opportunities for multi-processor computing slots to the users. Virtual Organizations are therefore facing heterogeneity of the available resources, and the use of interware software like DIRAC to provide a transparent, uniform interface has become essential. The transparent access to the underlying resources is realized by implementing the pilot model. DIRAC’s newest generation of generic pilots (the so-called Pilots 2.0) are the “pilots for all the skies”, and were successfully released to production more than a year ago. They use a plugin mechanism that makes them easily adaptable. Pilots 2.0 have been used for fetching and running jobs on every type of resource, be it a Worker Node (WN) behind a CREAM/ARC/HTCondor/DIRAC Computing Element, a Virtual Machine running on IaaC infrastructures like Vac or BOINC, IaaS cloud resources managed by Vcycle, the LHCb High Level Trigger farm nodes, or any other type of opportunistic computing resource. Make a machine a “Pilot Machine”, and all differences between them disappear. This contribution describes how pilots are made suitable for different resources, and the recent steps taken towards a fully unified framework, including monitoring. The case of multi-processor computing slots, on either real or virtual machines, with the whole node or a partition of it, is also discussed.
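
    The record describes the plugin mechanism in prose only. The sketch below is a generic illustration of such a design, in which a pilot runs an ordered list of command plugins chosen per resource type; the class names and command sets are invented for illustration and are not DIRAC's actual API.

      class PilotCommand:
          """Base class for pilot command plugins."""
          def execute(self, context):
              raise NotImplementedError

      class CheckEnvironment(PilotCommand):
          def execute(self, context):
              context["cores"] = context.get("cores", 1)       # e.g. probe machine/job features
              print("environment OK,", context["cores"], "core(s) available")

      class FetchAndRunJob(PilotCommand):
          def execute(self, context):
              print("matching and running a job on", context["resource"])

      # Per-resource-type command lists let the same pilot run on grid WNs, cloud VMs, etc.
      COMMAND_SETS = {
          "grid-ce":  [CheckEnvironment, FetchAndRunJob],
          "cloud-vm": [CheckEnvironment, FetchAndRunJob],
      }

      def run_pilot(resource_type):
          context = {"resource": resource_type}
          for command_cls in COMMAND_SETS[resource_type]:
              command_cls().execute(context)

      run_pilot("cloud-vm")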

  9. A virtual simulator designed for collision prevention in proton therapy.

    PubMed

    Jung, Hyunuk; Kum, Oyeon; Han, Youngyih; Park, Hee Chul; Kim, Jin Sung; Choi, Doo Ho

    2015-10-01

    In proton therapy, collisions between the patient and nozzle potentially occur because of the large nozzle structure and efforts to minimize the air gap. Thus, software was developed to predict such collisions between the nozzle and patient using virtual treatment simulation. Three-dimensional (3D) modeling of a gantry inner-floor, nozzle, and robotic couch was performed using SolidWorks based on the manufacturer's machine data. To obtain patient body information, a 3D scanner was utilized right before CT scanning. Using the acquired images, a 3D image of the patient's body contour was reconstructed. The accuracy of the image was confirmed against the CT image of a humanoid phantom. The machine components and the virtual patient were combined on the treatment-room coordinate system, resulting in a virtual simulator. The simulator simulated the motion of its components, such as rotation and translation of the gantry, nozzle, and couch, at real scale. Collisions, if any, were examined in both static and dynamic modes. The static mode assessed collisions only at fixed positions of the machine's components, while the dynamic mode operated any time a component was in motion. A collision was identified if any voxels of two components, e.g., the nozzle and the patient or couch, overlapped when calculating volume locations. The event and collision point were visualized, and collision volumes were reported. All components were successfully assembled, and the motions were accurately controlled. The 3D shape of the phantom agreed with the CT images within a deviation of 2 mm. Collision situations were simulated within minutes, and the results were displayed and reported. The developed software will be useful in improving patient safety and clinical efficiency of proton therapy.
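
    The record describes the collision test only in words. As a simplified illustration of the voxel-overlap idea (not the authors' software), the sketch below rasterises two components as axis-aligned boxes onto a shared treatment-room grid and reports whether, and by how much, they intersect; the grid size, voxel size, and box geometry are arbitrary.

      import numpy as np

      VOXEL_MM = 5.0                      # edge length of one voxel on the room grid
      GRID = (200, 200, 200)              # shared treatment-room grid (arbitrary size)

      def box_occupancy(center_mm, size_mm):
          """Mark the voxels covered by an axis-aligned box (a crude stand-in for a component mesh)."""
          occ = np.zeros(GRID, dtype=bool)
          lo = np.floor((np.array(center_mm) - np.array(size_mm) / 2) / VOXEL_MM).astype(int)
          hi = np.ceil((np.array(center_mm) + np.array(size_mm) / 2) / VOXEL_MM).astype(int)
          lo = np.clip(lo, 0, np.array(GRID) - 1)
          hi = np.clip(hi, 1, GRID)
          occ[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = True
          return occ

      nozzle  = box_occupancy(center_mm=(500, 500, 620), size_mm=(300, 300, 200))
      patient = box_occupancy(center_mm=(500, 500, 480), size_mm=(400, 250, 300))

      overlap = nozzle & patient
      if overlap.any():
          print("collision:", overlap.sum() * VOXEL_MM**3 / 1000.0, "cm^3 of overlap")
      else:
          print("no collision at this gantry/couch position")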

  10. Virtual Technologies to Develop Visual-Spatial Ability in Engineering Students

    ERIC Educational Resources Information Center

    Roca-González, Cristina; Martin-Gutierrez, Jorge; García-Dominguez, Melchor; Carrodeguas, Mª del Carmen Mato

    2017-01-01

    The present study assessed a short training experiment to improve spatial abilities using two tools based on virtual technologies: one focused on manipulation of specific geometric virtual pieces, and the other consisting of virtual orienteering game. The two tools can help improve spatial abilities required for many engineering problem-solving…

  11. Understanding the Adaptive Use of Virtual World Technology Capabilities and Trust in Virtual Teams

    ERIC Educational Resources Information Center

    Owens, Dawn

    2012-01-01

    In an environment of global competition and constant technological change, the use of virtual teams has become commonplace for many organizations. Virtual team members are geographically and temporally dispersed, experience cultural diversity, and lack shared social context and face-to-face encounters considered as irreplaceable for building and…

  12. The Impact of Virtual Collaboration and Collaboration Technologies on Knowledge Transfer and Team Performance in Distributed Organizations

    ERIC Educational Resources Information Center

    Ngoma, Ngoma Sylvestre

    2013-01-01

    Virtual teams are increasingly viewed as a powerful determinant of competitive advantage in geographically distributed organizations. This study was designed to provide insights into the interdependencies between virtual collaboration, collaboration technologies, knowledge transfer, and virtual team performance in an effort to understand whether…

  13. The Study on Virtual Medical Instrument based on LabVIEW.

    PubMed

    Chengwei, Li; Limei, Zhang; Xiaoming, Hu

    2005-01-01

    With the increasing performance of computers, virtual instrument technology has advanced greatly over the years, and virtual medical instrument technology has in turn become available. This paper presents the virtual medical instrument concept and then, as an example, gives an application of a signal acquisition, processing and analysis system built with LabVIEW.

  14. How can machine-learning methods assist in virtual screening for hyperuricemia? A healthcare machine-learning approach.

    PubMed

    Ichikawa, Daisuke; Saito, Toki; Ujita, Waka; Oyama, Hiroshi

    2016-12-01

    Our purpose was to develop a new machine-learning approach (a virtual health check-up) toward identification of those at high risk of hyperuricemia. Applying the system to general health check-ups is expected to reduce medical costs compared with administering an additional test. Data were collected during annual health check-ups performed in Japan between 2011 and 2013 (inclusive). We prepared training and test datasets from the health check-up data to build prediction models; these were composed of 43,524 and 17,789 persons, respectively. Gradient-boosting decision tree (GBDT), random forest (RF), and logistic regression (LR) approaches were trained using the training dataset and were then used to predict hyperuricemia in the test dataset. Undersampling was applied to build the prediction models to deal with the imbalanced class dataset. The results showed that the RF and GBDT approaches afforded the best performances in terms of sensitivity and specificity, respectively. The area under the curve (AUC) values of the models, which reflected the total discriminative ability of the classification, were 0.796 [95% confidence interval (CI): 0.766-0.825] for the GBDT, 0.784 [95% CI: 0.752-0.815] for the RF, and 0.785 [95% CI: 0.752-0.819] for the LR approaches. No significant differences were observed between pairs of each approach. Small changes occurred in the AUCs after applying undersampling to build the models. We developed a virtual health check-up that predicted the development of hyperuricemia using machine-learning methods. The GBDT, RF, and LR methods had similar predictive capability. Undersampling did not remarkably improve predictive power. Copyright © 2016 Elsevier Inc. All rights reserved.
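
    A hedged sketch of the modeling recipe summarized above (undersampling the majority class, then comparing GBDT, RF, and LR by test AUC) using scikit-learn; synthetic data stands in for the check-up records, and all sizes and hyperparameters are illustrative.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      # Synthetic, imbalanced stand-in for the annual check-up features and hyperuricemia label.
      X, y = make_classification(n_samples=20000, n_features=20, weights=[0.9, 0.1], random_state=0)
      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

      def undersample(X, y, seed=0):
          """Randomly drop majority-class rows so both classes are equally represented."""
          rng = np.random.default_rng(seed)
          pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
          keep = np.concatenate([pos, rng.choice(neg, size=len(pos), replace=False)])
          return X[keep], y[keep]

      X_bal, y_bal = undersample(X_train, y_train)

      for name, model in [("GBDT", GradientBoostingClassifier()),
                          ("RF", RandomForestClassifier(n_estimators=500)),
                          ("LR", LogisticRegression(max_iter=1000))]:
          model.fit(X_bal, y_bal)
          auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
          print(f"{name}: test AUC = {auc:.3f}")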

  15. The HARNESS Workbench: Unified and Adaptive Access to Diverse HPC Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sunderam, Vaidy S.

    2012-03-20

    The primary goal of the Harness WorkBench (HWB) project is to investigate innovative software environments that will help enhance the overall productivity of applications science on diverse HPC platforms. Two complementary frameworks were designed: one, a virtualized command toolkit for application building, deployment, and execution, that provides a common view across diverse HPC systems, in particular the DOE leadership computing platforms (Cray, IBM, SGI, and clusters); and two, a unified runtime environment that consolidates access to runtime services via an adaptive framework for execution-time and post processing activities. A prototype of the first was developed based on the concept of a 'system-call virtual machine' (SCVM), to enhance portability of the HPC application deployment process across heterogeneous high-end machines. The SCVM approach to portable builds is based on the insertion of toolkit-interpretable directives into original application build scripts. Modifications resulting from these directives preserve the semantics of the original build instruction flow. The execution of the build script is controlled by our toolkit that intercepts build script commands in a manner transparent to the end-user. We have applied this approach to a scientific production code (Gamess-US) on the Cray-XT5 machine. The second facet, termed Unibus, aims to facilitate provisioning and aggregation of multifaceted resources from resource providers and end-users perspectives. To achieve that, Unibus proposes a Capability Model and mediators (resource drivers) to virtualize access to diverse resources, and soft and successive conditioning to enable automatic and user-transparent resource provisioning. A proof of concept implementation has demonstrated the viability of this approach on high end machines, grid systems and computing clouds.

  16. Design of efficient molecular organic light-emitting diodes by a high-throughput virtual screening and experimental approach.

    PubMed

    Gómez-Bombarelli, Rafael; Aguilera-Iparraguirre, Jorge; Hirzel, Timothy D; Duvenaud, David; Maclaurin, Dougal; Blood-Forsythe, Martin A; Chae, Hyun Sik; Einzinger, Markus; Ha, Dong-Gwang; Wu, Tony; Markopoulos, Georgios; Jeon, Soonok; Kang, Hosuk; Miyazaki, Hiroshi; Numata, Masaki; Kim, Sunghan; Huang, Wenliang; Hong, Seong Ik; Baldo, Marc; Adams, Ryan P; Aspuru-Guzik, Alán

    2016-10-01

    Virtual screening is becoming a ground-breaking tool for molecular discovery due to the exponential growth of available computer time and constant improvement of simulation and machine learning techniques. We report an integrated organic functional material design process that incorporates theoretical insight, quantum chemistry, cheminformatics, machine learning, industrial expertise, organic synthesis, molecular characterization, device fabrication and optoelectronic testing. After exploring a search space of 1.6 million molecules and screening over 400,000 of them using time-dependent density functional theory, we identified thousands of promising novel organic light-emitting diode molecules across the visible spectrum. Our team collaboratively selected the best candidates from this set. The experimentally determined external quantum efficiencies for these synthesized candidates were as large as 22%.

  17. Using a Virtual Tablet Machine to Improve Student Understanding of the Complex Processes Involved in Tablet Manufacturing.

    PubMed

    Mattsson, Sofia; Sjöström, Hans-Erik; Englund, Claire

    2016-06-25

    Objective. To develop and implement a virtual tablet machine simulation to aid distance students' understanding of the processes involved in tablet production. Design. A tablet simulation was created enabling students to study the effects different parameters have on the properties of the tablet. Once results were generated, students interpreted and explained them on the basis of current theory. Assessment. The simulation was evaluated using written questionnaires and focus group interviews. Students appreciated the exercise and considered it to be motivational. Students commented that they found the simulation, together with the online seminar and the writing of the report, was beneficial for their learning process. Conclusion. According to students' perceptions, the use of the tablet simulation contributed to their understanding of the compaction process.

  18. Using a Virtual Tablet Machine to Improve Student Understanding of the Complex Processes Involved in Tablet Manufacturing

    PubMed Central

    Sjöström, Hans-Erik; Englund, Claire

    2016-01-01

    Objective. To develop and implement a virtual tablet machine simulation to aid distance students’ understanding of the processes involved in tablet production. Design. A tablet simulation was created enabling students to study the effects different parameters have on the properties of the tablet. Once results were generated, students interpreted and explained them on the basis of current theory. Assessment. The simulation was evaluated using written questionnaires and focus group interviews. Students appreciated the exercise and considered it to be motivational. Students commented that they found the simulation, together with the online seminar and the writing of the report, was beneficial for their learning process. Conclusion. According to students’ perceptions, the use of the tablet simulation contributed to their understanding of the compaction process. PMID:27402990

  19. Design of efficient molecular organic light-emitting diodes by a high-throughput virtual screening and experimental approach

    NASA Astrophysics Data System (ADS)

    Gómez-Bombarelli, Rafael; Aguilera-Iparraguirre, Jorge; Hirzel, Timothy D.; Duvenaud, David; MacLaurin, Dougal; Blood-Forsythe, Martin A.; Chae, Hyun Sik; Einzinger, Markus; Ha, Dong-Gwang; Wu, Tony; Markopoulos, Georgios; Jeon, Soonok; Kang, Hosuk; Miyazaki, Hiroshi; Numata, Masaki; Kim, Sunghan; Huang, Wenliang; Hong, Seong Ik; Baldo, Marc; Adams, Ryan P.; Aspuru-Guzik, Alán

    2016-10-01

    Virtual screening is becoming a ground-breaking tool for molecular discovery due to the exponential growth of available computer time and constant improvement of simulation and machine learning techniques. We report an integrated organic functional material design process that incorporates theoretical insight, quantum chemistry, cheminformatics, machine learning, industrial expertise, organic synthesis, molecular characterization, device fabrication and optoelectronic testing. After exploring a search space of 1.6 million molecules and screening over 400,000 of them using time-dependent density functional theory, we identified thousands of promising novel organic light-emitting diode molecules across the visible spectrum. Our team collaboratively selected the best candidates from this set. The experimentally determined external quantum efficiencies for these synthesized candidates were as large as 22%.

  20. Making extreme computations possible with virtual machines

    NASA Astrophysics Data System (ADS)

    Reuter, J.; Chokoufe Nejad, B.; Ohl, T.

    2016-10-01

    State-of-the-art algorithms generate scattering amplitudes for high-energy physics at leading order for high-multiplicity processes as compiled code (in Fortran, C or C++). For complicated processes the size of these libraries can become tremendous (many GiB). We show that amplitudes can be translated to byte-code instructions, which even reduce the size by one order of magnitude. The byte-code is interpreted by a Virtual Machine with runtimes comparable to compiled code and a better scaling with additional legs. We study the properties of this algorithm, as an extension of the Optimizing Matrix Element Generator (O'Mega). The bytecode matrix elements are available as alternative input for the event generator WHIZARD. The bytecode interpreter can be implemented very compactly, which will help with a future implementation on massively parallel GPUs.
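
    The following toy interpreter is only meant to illustrate the general idea of evaluating an expression from byte-code rather than compiled object code; the opcode set and encoding are invented for this example and are not the O'Mega/WHIZARD byte-code format.

      import math

      LOAD, MUL, ADD, SIN = range(4)  # toy opcode set

      def run(bytecode, inputs):
          """Interpret byte-code instructions against an input vector; returns the value left on the stack."""
          stack = []
          for op, arg in bytecode:
              if op == LOAD:
                  stack.append(inputs[arg])
              elif op == MUL:
                  b, a = stack.pop(), stack.pop()
                  stack.append(a * b)
              elif op == ADD:
                  b, a = stack.pop(), stack.pop()
                  stack.append(a + b)
              elif op == SIN:
                  stack.append(math.sin(stack.pop()))
          return stack.pop()

      # Encodes sin(p0 * p1) + p2. The interpreter stays the same size no matter how long the
      # instruction stream grows, which is the point of replacing compiled amplitude code.
      program = [(LOAD, 0), (LOAD, 1), (MUL, None), (SIN, None), (LOAD, 2), (ADD, None)]
      print(run(program, [0.5, 2.0, 1.0]))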

  1. Machine vision 1992-1996: technology program to promote research and its utilization in industry

    NASA Astrophysics Data System (ADS)

    Soini, Antti J.

    1994-10-01

    Machine vision technology has attracted strong interest in Finnish research organizations, resulting in many innovative products for industry. Despite this, end users were very skeptical about machine vision and its robustness in harsh industrial environments. The Technology Development Centre (TEKES), which funds technology-related research and development projects in universities and individual companies, therefore decided to start a national technology program, Machine Vision 1992 - 1996. Led by industry, the program boosts research in machine vision technology and seeks to put the research results to work in practical industrial applications. The emphasis is on nationally important, demanding applications. The program will create new industry and business for machine vision producers and encourage the process and manufacturing industries to take advantage of this new technology. So far 60 companies and all major universities and research centers are working on forty different projects. The key themes are process control, robot vision, and quality control.

  2. Machine tool task force

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sutton, G.P.

    1980-10-22

    The Machine Tool Task Force (MTTF) is a multidisciplined team of international experts whose mission was to investigate the state of the art of machine tool technology, to identify promising future directions of that technology for both the US government and private industry, and to disseminate the findings of its research in a conference and through the publication of a final report. MTTF was a two-and-one-half-year effort that involved the participation of 122 experts in the specialized technologies of machine tools and in the management of machine tool operations. The scope of the MTTF was limited to cutting-type or material-removal-type machine tools, because they represent the major share and value of all machine tools now installed or being built. The activities of the MTTF and the technical, commercial, and economic significance of recommended activities for improving machine tool technology are discussed. (LCL)

  3. A One-Year Case Study: Understanding the Rich Potential of Project-Based Learning in a Virtual Reality Class for High School Students

    ERIC Educational Resources Information Center

    Morales, Teresa M.; Bang, EunJin; Andre, Thomas

    2013-01-01

    This paper presents a qualitative case analysis of a new and unique, high school, student-directed, project-based learning (PBL), virtual reality (VR) class. In order to create projects, students learned, on an independent basis, how to program an industrial-level VR machine. A constraint was that students were required to produce at least one…

  4. An Analysis of Learners' Intentions toward Virtual Reality Learning Based on Constructivist and Technology Acceptance Approaches

    ERIC Educational Resources Information Center

    Huang, Hsiu-Mei; Liaw, Shu-Sheng

    2018-01-01

    Within a constructivist paradigm, the virtual reality technology focuses on the learner's actively interactive learning processes and attempts to reduce the gap between the learner's knowledge and a real-life experience. Recently, virtual reality technologies have been developed for a wide range of applications in education, but further research…

  5. Methods For Self-Organizing Software

    DOEpatents

    Bouchard, Ann M.; Osbourn, Gordon C.

    2005-10-18

    A method for dynamically self-assembling and executing software is provided, containing machines that self-assemble execution sequences and data structures. In addition to ordered function calls (found commonly in other software methods), mutual selective bonding between bonding sites of machines actuates one or more of the bonding machines. Two or more machines can be virtually isolated by a construct, called an encapsulant, containing a population of machines and potentially other encapsulants that can only bond with each other. A hierarchical software structure can be created using nested encapsulants. Multi-threading is implemented by populations of machines in different encapsulants that are interacting concurrently. Machines and encapsulants can move in and out of other encapsulants, thereby changing the functionality. Bonding between machines' sites can be deterministic or stochastic, with bonding triggering a sequence of actions that can be implemented by each machine. A self-assembled execution sequence occurs as a sequence of stochastic binding between machines followed by their deterministic actuation. It is the sequence of bonding of machines that determines the execution sequence, so that the sequence of instructions need not be contiguous in memory.
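
    A loose sketch of the loop described above (stochastic bonding between matching sites followed by deterministic actuation, restricted to machines inside the same encapsulant); the class names and toy actions are illustrative, not the patented implementation.

      import random

      class Machine:
          def __init__(self, name, site, action):
              self.name, self.site, self.action = name, site, action

      class Encapsulant:
          """A population of machines that may only bond with each other."""
          def __init__(self, machines):
              self.machines = list(machines)

          def step(self, rng):
              """One stochastic bonding step: pick two machines with matching sites and actuate."""
              candidates = [(a, b) for a in self.machines for b in self.machines
                            if a is not b and a.site == b.site]
              if candidates:
                  a, b = rng.choice(candidates)
                  a.action(b)    # deterministic actuation follows the stochastic bond

      log = []
      adder = Machine("adder", "num", lambda other: log.append(f"adder bonded to {other.name}"))
      value = Machine("value", "num", lambda other: None)
      world = Encapsulant([adder, value])
      rng = random.Random(0)
      for _ in range(3):
          world.step(rng)
      print(log)   # the order of entries is the self-assembled execution sequence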

  6. Modulation of visually evoked movement responses in moving virtual environments.

    PubMed

    Reed-Jones, Rebecca J; Vallis, Lori Ann

    2009-01-01

    Virtual-reality technology is being increasingly used to understand how humans perceive and act in the moving world around them. What is currently not clear is how virtual reality technology is perceived by human participants and what virtual scenes are effective in evoking movement responses to visual stimuli. We investigated the effect of virtual-scene context on human responses to a virtual visual perturbation. We hypothesised that exposure to a natural scene that matched the visual expectancies of the natural world would create a perceptual set towards presence, and thus visual guidance of body movement in a subsequently presented virtual scene. Results supported this hypothesis; responses to a virtual visual perturbation presented in an ambiguous virtual scene were increased when participants first viewed a scene that consisted of natural landmarks which provided 'real-world' visual motion cues. Further research in this area will provide a basis of knowledge for the effective use of this technology in the study of human movement responses.

  7. Virtual Reality as Innovative Approach to the Interior Designing

    NASA Astrophysics Data System (ADS)

    Kaleja, Pavol; Kozlovská, Mária

    2017-06-01

    The development of software and hardware virtual reality tools reveals significant potential for information and communication technologies (ICT) in the interior design field. ICT tools offer a realistic perception of a proposal at its initial stage (the study). Real-time visualization, supported by hardware tools such as the Oculus Rift and HTC Vive, provides free walkthrough and movement in a virtual interior with the possibility of designing within it. As ICT software tools for designing in virtual reality improve, the virtual environment becomes still more realistic. This contribution presents an innovative approach to interior designing in virtual reality, using the latest ICT software and hardware virtual reality technologies.

  8. Factors and Traits Attributed to the Success of Virtual Managers: A Delphi Study

    ERIC Educational Resources Information Center

    Garrett, Leslie A.

    2012-01-01

    This study explored the factors and traits impacting the success of virtual managers. It can be argued that given technology's role in working virtually, one would deem technology as the most important factor impacting one's work in a virtual environment, however, there are other factors "including support from the organization and one's personal…

  9. Comparative analysis of video processing and 3D rendering for cloud video games using different virtualization technologies

    NASA Astrophysics Data System (ADS)

    Bada, Adedayo; Alcaraz-Calero, Jose M.; Wang, Qi; Grecos, Christos

    2014-05-01

    This paper describes a comprehensive empirical performance evaluation of 3D video processing employing the physical/virtual architecture implemented in a cloud environment. Different virtualization technologies, virtual video cards and various 3D benchmarks tools have been utilized in order to analyse the optimal performance in the context of 3D online gaming applications. This study highlights 3D video rendering performance under each type of hypervisors, and other factors including network I/O, disk I/O and memory usage. Comparisons of these factors under well-known virtual display technologies such as VNC, Spice and Virtual 3D adaptors reveal the strengths and weaknesses of the various hypervisors with respect to 3D video rendering and streaming.

  10. Human Machine Interfaces for Teleoperators and Virtual Environments Conference

    NASA Technical Reports Server (NTRS)

    1990-01-01

    In a teleoperator system the human operator senses, moves within, and operates upon a remote or hazardous environment by means of a slave mechanism (a mechanism often referred to as a teleoperator). In a virtual environment system the interactive human machine interface is retained but the slave mechanism and its environment are replaced by a computer simulation. Video is replaced by computer graphics. The auditory and force sensations imparted to the human operator are similarly computer generated. In contrast to a teleoperator system, where the purpose is to extend the operator's sensorimotor system in a manner that facilitates exploration and manipulation of the physical environment, in a virtual environment system, the purpose is to train, inform, alter, or study the human operator to modify the state of the computer and the information environment. A major application in which the human operator is the target is that of flight simulation. Although flight simulators have been around for more than a decade, they had little impact outside aviation presumably because the application was so specialized and so expensive.

  11. Mathematical defense method of networked servers with controlled remote backups

    NASA Astrophysics Data System (ADS)

    Kim, Song-Kyoo

    2006-05-01

    The networked server defense model focuses on reliability and availability from a security perspective. The (remote) backup servers are connected by VPN (Virtual Private Network) over a high-speed optical network and replace broken main servers immediately. The networked servers can be represented as "machines", so the system deals with an unreliable main machine, a spare machine, and auxiliary spare machines. During vacation periods, when the system performs mandatory routine maintenance, the auxiliary machines are used for backups, so information on the system is naturally delayed. An analog of the N-policy restricts the usage of auxiliary machines to a reasonable quantity. The results are demonstrated in the network architecture using stochastic optimization techniques.

  12. Innovative technologies in course Electrical engineering and electronics

    NASA Astrophysics Data System (ADS)

    Kuznetsov, E. V.; Kiselev, V. I.; Kulikova, E. A.

    2017-11-01

    The Department of Electrical Engineering and Nondestructive Testing at NRU “MPEI” has been developing Electronic Learning Resources (ELRs) for the course Electrical Engineering and Electronics for several years. This work has focused on intensifying education and increasing its effectiveness while training bachelors in nonelectrical specializations, including students from the Thermal and Atomic Power Engineering Institute. The developed ELRs are united in a tutorial module consisting of three parts (Electrical Circuits, Electrical Machines, Basics of Electronics) and comprising: an electronic textbook and workbook (ETW); virtual laboratory sessions (VLS); training sessions (ETS); personal tasks (PT); and a testing system that contains electronic tests in all course subjects and built-in verification of a student’s work results in the ETW, VLS, ETS, and PT. The report presents samples of different ELRs in html format and in MathCAD and MatLAB Simulink applications, as well as copyrighted programs in Java2, Delphi, VB6, and C++. The report also describes the experience gained and the advantages and disadvantages of the new technologies. It is noted that ELRs provide new opportunities in studying the course.

  13. Occupant detection using support vector machines with a polynomial kernel function

    NASA Astrophysics Data System (ADS)

    Destefanis, Eduardo A.; Kienzle, Eberhard; Canali, Luis R.

    2000-10-01

    When a passenger or baby seat is in a bad position in a car seat, airbag inflation during an accident can injure or kill the occupant. A proposed solution is the use of range sensors to detect risky passenger and baby seat positions, so that airbag inflation can be controlled. This work is concerned with the application of different classification schemes to a real-world problem and the optimization of a sensor as a function of the classification performance. The sensor is constructed using a new technology called the Photo-Mixer-Device (PMD). A systematic analysis of the occupant detection problem was made using real and virtual environments. The challenge is to find the best sensor geometry and to adapt a classification scheme under the current technological constraints. Passenger head position detection is also desirable, and a couple of classifiers have been combined in a simple configuration to reach this goal. Experiences and results are described.
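
    For the classification stage only, a brief scikit-learn sketch of an SVM with a polynomial kernel of the kind named in the title; synthetic features stand in for the PMD range data, and the feature dimensionality and class count are assumptions.

      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      # Synthetic stand-in for per-frame range-sensor features with three occupant classes.
      X, y = make_classification(n_samples=2000, n_features=64, n_classes=3,
                                 n_informative=10, random_state=0)
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      # Standardize features, then fit an SVM with a degree-3 polynomial kernel.
      clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3, C=1.0))
      clf.fit(X_train, y_train)
      print("test accuracy:", clf.score(X_test, y_test))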

  14. Overview of the Machine-Tool Task Force

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sutton, G.P.

    1981-06-08

    The Machine Tool Task Force (MTTF) surveyed the state of the art of machine tool technology for material removal over two and one-half years. This overview gives a brief summary of the approach, the specific subjects covered, the principal conclusions, and some of the key recommendations aimed at improving the technology and advancing the productivity of machine tools. The Task Force consisted of 123 experts from the US and other countries. Their findings are documented in a five-volume report, Technology of Machine Tools.

  15. Research on the tool holder mode in high speed machining

    NASA Astrophysics Data System (ADS)

    Zhenyu, Zhao; Yongquan, Zhou; Houming, Zhou; Xiaomei, Xu; Haibin, Xiao

    2018-03-01

    High-speed machining technology can improve processing efficiency and precision while also reducing processing cost, and is therefore widely valued in industry. With the extensive application of high-speed machining, high-speed tool systems place ever higher requirements on the tool chuck. At present, several new kinds of tool holders are used in high-speed precision machining, including the heat-shrinkage tool holder, the high-precision spring chuck, the hydraulic tool holder, and the three-rib deformation chuck. Among these, the heat-shrinkage tool holder has the advantages of high precision, high clamping force, high bending rigidity, and good dynamic balance, and is widely used. It is therefore of great significance to study the new requirements placed on the machining tool system. To meet the requirements of high-speed precision machining, this paper reviews the common tool holder technologies for high-precision machining, proposes how to correctly select a tool clamping system in practice, and analyzes the characteristics and existing problems of tool clamping systems.

  16. What Children Should Know about Technology and the Virtual World

    ERIC Educational Resources Information Center

    Zhao, Yong

    2010-01-01

    The dominant view of technology so far has been that it is a tool to help improve the teaching of traditional subjects--knowledge mostly about the local and physical world. But technology has created a new realm: the virtual world. It may not be physical or tangible, but the virtual world is indisputable and has a significant economy. If one…

  17. On Why It Is Impossible to Prove that the BDX90 Dispatcher Implements a Time-sharing System

    NASA Technical Reports Server (NTRS)

    Boyer, R. S.; Moore, J. S.

    1983-01-01

    The Software Implemented Fault Tolerance (SIFT) system is written in PASCAL except for about a page of machine code. The SIFT system implements a small time-sharing system in which PASCAL programs for separate application tasks are executed according to a schedule with real-time constraints. The PASCAL language has no provision for handling the notion of an interrupt such as the B930 clock interrupt. The PASCAL language also lacks the notion of running a PASCAL subroutine for a given amount of time, suspending it, saving away the suspension, and later activating the suspension. Machine code was used to overcome these inadequacies of PASCAL. Code which handles clock interrupts and suspends processes is called a dispatcher. The time-sharing/virtual machine idea is completely destroyed by the reconfiguration task. After termination of the reconfiguration task, the tasks run by the dispatcher have no relation to those run before reconfiguration. It is impossible to view the dispatcher as a time-sharing system implementing virtual BDX930s running concurrently when one process can wipe out the others.

  18. A study of factors affecting the adoption of server virtualization technology

    NASA Astrophysics Data System (ADS)

    Lu, Hsin-Ke; Lin, Peng-Chun; Chiang, Chang-Heng; Cho, Chien-An

    2018-04-01

    It has become a trend for enterprises and organizations worldwide to apply new technologies to improve their operations; because constructing and managing traditional servers is costly and inflexible, the current mainstream approach is to use server virtualization technology. However, organizations will not necessarily obtain the expected benefits from this new technology, because each has its own level of organizational complexity and capacity to accept change. The researcher investigated the key factors affecting the adoption of virtualization technology in two phases. In phase I, the researcher reviewed the literature and applied the dimensions of the Information Systems Success Model (ISSM) to generalize the factors affecting the adoption of virtualization technology into a preliminary theoretical framework and a questionnaire. In phase II, a three-round Delphi method was used to integrate and gradually converge the opinions of experts from related fields, yielding a stable and objective set of key factors. These results are expected to serve as references for organizations adopting server virtualization technology and for future studies.

  19. Technology of machine tools. Volume 1. Executive summary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sutton, G.P.

    1980-10-01

    The Machine Tool Task Force (MTTF) was formed to characterize the state of the art of machine tool technology and to identify promising future directions of this technology. This volume is one of a five-volume series that presents the MTTF findings; reports on various areas of the technology were contributed by experts in those areas.

  20. Exploring Learner Acceptance of the Use of Virtual Reality in Medical Education: A Case Study of Desktop and Projection-Based Display Systems

    ERIC Educational Resources Information Center

    Huang, Hsiu-Mei; Liaw, Shu-Sheng; Lai, Chung-Min

    2016-01-01

    Advanced technologies have been widely applied in medical education, including human-patient simulators, immersive virtual reality Cave Automatic Virtual Environment systems, and video conferencing. Evaluating learner acceptance of such virtual reality (VR) learning environments is a critical issue for ensuring that such technologies are used to…

  1. Organizational Learning Goes Virtual?: A Study of Employees' Learning Achievement in Stereoscopic 3D Virtual Reality

    ERIC Educational Resources Information Center

    Lau, Kung Wong

    2015-01-01

    Purpose: This study aims to deepen understanding of the use of stereoscopic 3D technology (stereo3D) in facilitating organizational learning. The emergence of advanced virtual technologies, in particular to the stereo3D virtual reality, has fundamentally changed the ways in which organizations train their employees. However, in academic or…

  2. Virtual Team Meetings: Reflections on a Class Exercise Exploring Technology Choice

    ERIC Educational Resources Information Center

    Bull Schaefer, Rebecca A.; Erskine, Laura

    2012-01-01

    Students find that choosing the appropriate technology for a virtual team meeting is not as simple as it first appears. The authors describe a class exercise used to demonstrate the benefits and drawbacks of using virtual team meetings by requiring students to replace a face-to-face meeting with a virtual meeting. The exercise challenged students'…

  3. 75 FR 60141 - International Business Machines (IBM), Global Technology Services Delivery Division, Including On...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-29

    ... 25, 2010, applicable to workers of International Business Machines (IBM), Global Technology Services... hereby issued as follows: All workers of International Business Machines (IBM), Global Technology... DEPARTMENT OF LABOR Employment and Training Administration [TA-W-74,164] International Business...

  4. First Steps Toward Incorporating Image Based Diagnostics Into Particle Accelerator Control Systems Using Convolutional Neural Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edelen, A. L.; Biedron, S. G.; Milton, S. V.

    At present, a variety of image-based diagnostics are used in particle accelerator systems. Oftentimes, these are viewed by a human operator who then makes appropriate adjustments to the machine. Given recent advances in using convolutional neural networks (CNNs) for image processing, it should be possible to use image diagnostics directly in control routines (NN-based or otherwise). This is especially appealing for non-intercepting diagnostics that could run continuously during beam operation. Here, we show results of a first step toward implementing such a controller: our trained CNN can predict multiple simulated downstream beam parameters at the Fermilab Accelerator Science and Technology (FAST) facility's low energy beamline using simulated virtual cathode laser images, gun phases, and solenoid strengths.
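
    A minimal PyTorch sketch of the kind of model described: a small CNN that regresses several downstream beam parameters from a virtual-cathode image together with two scalar settings (gun phase, solenoid strength). Layer sizes, output count, and tensor shapes are illustrative, not those used at FAST.

      import torch
      import torch.nn as nn

      class BeamParamCNN(nn.Module):
          def __init__(self, n_outputs=4):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Flatten())
              self.head = nn.Sequential(
                  nn.Linear(16 * 16 * 16 + 2, 64), nn.ReLU(),  # +2 for gun phase and solenoid strength
                  nn.Linear(64, n_outputs))

          def forward(self, image, settings):
              return self.head(torch.cat([self.features(image), settings], dim=1))

      model = BeamParamCNN()
      image = torch.randn(8, 1, 64, 64)      # batch of simulated laser images (random here)
      settings = torch.randn(8, 2)           # gun phase and solenoid strength per sample
      pred = model(image, settings)          # predicted beam parameters, shape (8, 4)
      loss = nn.functional.mse_loss(pred, torch.randn(8, 4))   # regression training loss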

  5. Improving the interactivity and functionality of Web-based radiology teaching files with the Java programming language.

    PubMed

    Eng, J

    1997-01-01

    Java is a programming language that runs on a "virtual machine" built into World Wide Web (WWW)-browsing programs on multiple hardware platforms. Web pages were developed with Java to enable Web-browsing programs to overlay transparent graphics and text on displayed images so that the user could control the display of labels and annotations on the images, a key feature not available with standard Web pages. This feature was extended to include the presentation of normal radiologic anatomy. Java programming was also used to make Web browsers compatible with the Digital Imaging and Communications in Medicine (DICOM) file format. By enhancing the functionality of Web pages, Java technology should provide greater incentive for using a Web-based approach in the development of radiology teaching material.

  6. Use of display technologies for augmented reality enhancement

    NASA Astrophysics Data System (ADS)

    Harding, Kevin

    2016-06-01

    Augmented reality (AR) is seen as an important tool for the future of user interfaces as well as training applications. An important application area for AR is expected to be in the digitization of training and worker instructions used in the Brilliant Factory environment. The transition of work instruction methods from printed pages in a book or taped to a machine to virtual simulations is a long step with many challenges along the way. A variety of augmented reality tools are being explored today for industrial applications that range from simple programmable projections in the work space to 3D displays and head-mounted gear. This paper will review where some of these tools are today and some of the pros and cons being considered for the future worker environment.

  7. Utilization of building information modeling in infrastructure’s design and construction

    NASA Astrophysics Data System (ADS)

    Zak, Josef; Macadam, Helen

    2017-09-01

    Building Information Modeling (BIM) is a concept that has gained its place in the design, construction, and maintenance of buildings in the Czech Republic in recent years. This paper describes the usage, applications, and potential benefits and disadvantages connected with implementing BIM principles in the preparation and construction of infrastructure projects. Part of the paper describes the status of BIM implementation in the Czech Republic, along with a review of several virtual design and construction practices there. Examples of best practice are presented from current infrastructure projects. The paper further summarizes experiences with new technologies gained from applying BIM-related workflows. The focus is on utilizing the BIM model for machine control systems on site, quality assurance, quality management, and construction management.

  8. Examining Interactivity in Synchronous Virtual Classrooms

    ERIC Educational Resources Information Center

    Martin, Florence; Parker, Michele A.; Deale, Deborah F.

    2012-01-01

    Interaction is crucial to student satisfaction in online courses. Adding synchronous components (virtual classroom technologies) to online courses can facilitate interaction. In this study, interaction within a synchronous virtual classroom was investigated by surveying 21 graduate students in an instructional technology program in the…

  9. Liberating Virtual Machines from Physical Boundaries through Execution Knowledge

    DTIC Science & Technology

    2015-12-01

    trivial infrastructures such as VM distribution networks, clients need to wait for an extended period of time before launching a VM. In cloud settings...hardware support. MobiDesk [28] efficiently supports virtual desktops in mobile environments by decoupling the user's workload from host systems and...experiment set-up. VMs are migrated between a pair of source and destination hosts, which are connected through a backend 10 Gbps network for

  10. Mapping, Awareness, and Virtualization Network Administrator Training Tool (MAVNATT) Architecture and Framework

    DTIC Science & Technology

    2015-06-01

    unit may setup and teardown the entire tactical infrastructure multiple times per day. This tactical network administrator training is a critical...language and runs on Linux and Unix based systems. All provisioning is based around the Nagios Core application, a powerful backend solution for network...start up a large number of virtual machines quickly. CORE supports the simulation of fixed and mobile networks. CORE is open-source, written in Python

  11. Humans and machines in space: The vision, the challenge, the payoff; Proceedings of the 29th Goddard Memorial Symposium, Washington, Mar. 14, 15, 1991

    NASA Astrophysics Data System (ADS)

    Johnson, Bradley; May, Gayle L.; Korn, Paula

    The present conference discusses the currently envisioned goals of human-machine systems in spacecraft environments, prospects for human exploration of the solar system, and plausible methods for meeting human needs in space. Also discussed are the problems of human-machine interaction in long-duration space flights, remote medical systems for space exploration, the use of virtual reality for planetary exploration, the alliance between U.S. Antarctic and space programs, and the economic and educational impacts of the U.S. space program.

  12. Fundamental Study about the Landscape Estimation and Analysis by CG

    NASA Astrophysics Data System (ADS)

    Nakashima, Yoshio; Miyagoshi, Takashi; Takamatsu, Mamoru; Sassa, Kazuhiro

    In recent years, it has become expected that the colors of advertising signboards and vending machines on the streets should be harmonized with the surrounding landscape. In this study, we investigated how the colors (red and white) of vending machines virtually installed by CG would affect a traditional landscape. Twenty subjects evaluated landscape samples from Hida-Furukawa using the SD technique. The results of our experiment show that vending machines have a great influence on the surrounding landscape. On the other hand, we confirmed that they can be harmonized with the surrounding landscape through the color used.

  13. A Framework for Analyzing the Whole Body Surface Area from a Single View

    PubMed Central

    Doretto, Gianfranco; Adjeroh, Donald

    2017-01-01

    We present a virtual reality (VR) framework for the analysis of whole human body surface area. Usual methods for determining the whole body surface area (WBSA) are based on well-known formulae that are characterized by large errors when the subject is obese or belongs to certain subgroups. For these situations, we believe that a computer vision approach can overcome these problems and provide a better estimate of this important body indicator. Unfortunately, using machine learning techniques to design a computer vision system able to provide a new body indicator that goes beyond the use of only body weight and height entails a long and expensive data acquisition process. A more viable solution is to use a dataset composed of virtual subjects. Generating a virtual dataset allowed us to build a population with different characteristics (obese, underweight, age, gender). However, synthetic data might differ from a real scenario, typical of the physician’s clinic. For this reason, we develop a new virtual environment to facilitate the analysis of human subjects in 3D. This framework can simulate the acquisition process of a real camera, making it easy to analyze and to create training data for machine learning algorithms. With this virtual environment, we can easily simulate the real setup of a clinic, where a subject is standing in front of a camera, or may assume a different pose with respect to the camera. We use this newly designed environment to analyze the whole body surface area (WBSA). In particular, we show that we can obtain accurate WBSA estimations with just one view, virtually enabling the possibility to use inexpensive depth sensors (e.g., the Kinect) for large-scale quantification of the WBSA from a single view 3D map. PMID:28045895
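
    As a small worked example of the geometric core of WBSA estimation, the sketch below sums triangle areas over a reconstructed body mesh; the toy tetrahedron and the vertex/face layout are assumptions for illustration, not the paper's pipeline.

      import numpy as np

      def surface_area(vertices, faces):
          """Total area of a triangle mesh: half the cross-product norm, summed over all faces."""
          a, b, c = (vertices[faces[:, i]] for i in range(3))
          return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1).sum()

      # Toy mesh (a unit tetrahedron); a real WBSA pipeline would use the body mesh
      # reconstructed from the single-view depth map.
      vertices = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
      faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
      print(f"surface area: {surface_area(vertices, faces):.3f}")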

  14. Virtual Machine Language 2.1

    NASA Technical Reports Server (NTRS)

    Riedel, Joseph E.; Grasso, Christopher A.

    2012-01-01

    VML (Virtual Machine Language) is an advanced computing environment that allows spacecraft to operate using mechanisms ranging from simple, time-oriented sequencing to advanced, multicomponent reactive systems. VML has developed in four evolutionary stages. VML 0 is a core execution capability providing multi-threaded command execution, integer data types, and rudimentary branching. VML 1 added named parameterized procedures, extensive polymorphism, data typing, branching, looping, issuance of commands using run-time parameters, and named global variables. VML 2 added for loops, data verification, telemetry reaction, and an open flight adaptation architecture. VML 2.1 contains major advances in control flow capabilities for executable state machines. On the resource requirements front, VML 2.1 features a reduced memory footprint in order to fit more capability into modestly sized flight processors, and endian-neutral data access for compatibility with Intel little-endian processors. Sequence packaging has been improved with object-oriented programming constructs and the use of implicit (rather than explicit) time tags on statements. Sequence event detection has been significantly enhanced with multi-variable waiting, which allows a sequence to detect and react to conditions defined by complex expressions with multiple global variables. This multi-variable waiting serves as the basis for implementing parallel rule checking, which, in turn, makes possible executable state machines. The new state machine feature in VML 2.1 allows the creation of sophisticated autonomous reactive systems without the need to develop expensive flight software. Users specify named states and transitions, along with the truth conditions required, before taking transitions. Transitions with the same signal name allow separate state machines to coordinate actions: the conditions distributed across all state machines necessary to arm a particular signal are evaluated, and once found true, that signal is raised. The selected signal then causes all identically named transitions in all present state machines to be taken simultaneously. VML 2.1 has relevance to all potential space missions, both manned and unmanned. It was under consideration for use on Orion.
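
    A rough Python sketch of the coordination semantics described above: each state machine arms a signal only when its own guard condition over shared global variables is true, and a signal raised once every machine that names it is armed causes all identically named transitions to be taken at once. The variable names and guard expressions are illustrative, not VML syntax.

      GLOBALS = {"battery_ok": True, "sun_angle": 12.0}   # shared global variables

      class StateMachine:
          def __init__(self, name, start, transitions):
              # transitions: list of (from_state, signal, condition(globals) -> bool, to_state)
              self.name, self.state, self.transitions = name, start, transitions

          def armed_signals(self):
              """Signals whose guard condition is currently true from this machine's state."""
              return {sig for (frm, sig, cond, _) in self.transitions
                      if frm == self.state and cond(GLOBALS)}

          def take(self, signal):
              for frm, sig, _, to in self.transitions:
                  if frm == self.state and sig == signal:
                      self.state = to

      machines = [
          StateMachine("power", "idle",
                       [("idle", "START_OBS", lambda g: g["battery_ok"], "observing")]),
          StateMachine("attitude", "slewing",
                       [("slewing", "START_OBS", lambda g: g["sun_angle"] > 10.0, "pointing")]),
      ]

      # A signal is raised only when every machine that names it has its condition satisfied;
      # it then drives all identically named transitions simultaneously.
      common = set.intersection(*(m.armed_signals() for m in machines))
      for sig in common:
          for m in machines:
              m.take(sig)
      print([(m.name, m.state) for m in machines])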

  15. Virtual Reality and Its Potential Use in Special Education. Identifying Emerging Issues and Trends in Technology for Special Education.

    ERIC Educational Resources Information Center

    Woodward, John

    As part of a 3-year study to identify emerging issues and trends in technology for special education, this paper addresses the possible contributions of virtual reality technology to educational services for students with disabilities. An example of the use of virtual reality in medical imaging introduces the paper and leads to a brief review of…

  16. 3D Technology Selection for a Virtual Learning Environment by Blending ISO 9126 Standard and AHP

    ERIC Educational Resources Information Center

    Cetin, Aydin; Guler, Inan

    2011-01-01

    Web3D presents many opportunities for learners in a virtual world or virtual environment over the web. This is a great opportunity for open-distance education institutions to benefit from web3d technologies to create courses with interactive 3d materials. There are many open source and commercial products offering 3d technologies over the web…

  17. Distributed computing methodology for training neural networks in an image-guided diagnostic application.

    PubMed

    Plagianakos, V P; Magoulas, G D; Vrahatis, M N

    2006-03-01

    Distributed computing is a process through which a set of computers connected by a network is used collectively to solve a single problem. In this paper, we propose a distributed computing methodology for training neural networks for the detection of lesions in colonoscopy. Our approach is based on partitioning the training set across multiple processors using a parallel virtual machine. In this way, interconnected computers of varied architectures can be used for the distributed evaluation of the error function and gradient values, and, thus, training neural networks utilizing various learning methods. The proposed methodology has large granularity and low synchronization, and has been implemented and tested. Our results indicate that the parallel virtual machine implementation of the training algorithms developed leads to considerable speedup, especially when large network architectures and training sets are used.
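
    A compact sketch of the data-parallel idea, with Python multiprocessing standing in for the parallel virtual machine: each worker evaluates the gradient on its partition of the training set, and the partial gradients are summed before each weight update. A toy linear least-squares model replaces the neural network to keep the example short.

      import numpy as np
      from multiprocessing import Pool

      def partial_gradient(args):
          """Least-squares error gradient contribution of one training-set partition."""
          w, X, y = args
          return X.T @ (X @ w - y)

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          X, y = rng.normal(size=(10000, 5)), rng.normal(size=10000)
          w = np.zeros(5)
          partitions = list(zip(np.array_split(X, 4), np.array_split(y, 4)))

          with Pool(4) as pool:
              for _ in range(100):    # simple batch gradient descent over the full (distributed) set
                  grads = pool.map(partial_gradient, [(w, Xp, yp) for Xp, yp in partitions])
                  w -= 1e-4 * np.add.reduce(grads)
          print(w)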

  18. Performance of machine-learning scoring functions in structure-based virtual screening

    PubMed Central

    Wójcikowski, Maciej; Ballester, Pedro J.; Siedlecki, Pawel

    2017-01-01

    Classical scoring functions have reached a plateau in their performance in virtual screening and binding affinity prediction. Recently, machine-learning scoring functions trained on protein-ligand complexes have shown great promise in small tailored studies. They have also raised controversy, specifically concerning model overfitting and applicability to novel targets. Here we provide a new ready-to-use scoring function (RF-Score-VS) trained on 15 426 active and 893 897 inactive molecules docked to a set of 102 targets. We use the full DUD-E data sets along with three docking tools, five classical and three machine-learning scoring functions for model building and performance assessment. Our results show RF-Score-VS can substantially improve virtual screening performance: RF-Score-VS top 1% provides a 55.6% hit rate, whereas that of Vina is only 16.2% (for smaller percentages the difference is even more encouraging: RF-Score-VS top 0.1% achieves an 88.6% hit rate versus 27.5% using Vina). In addition, RF-Score-VS provides much better prediction of measured binding affinity than Vina (Pearson correlation of 0.56 and −0.18, respectively). Lastly, we tested RF-Score-VS on an independent test set from the DEKOIS benchmark and observed comparable results. We provide full data sets to facilitate further research in this area (http://github.com/oddt/rfscorevs) as well as ready-to-use RF-Score-VS (http://github.com/oddt/rfscorevs_binary). PMID:28440302
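
    A short sketch of the headline metric quoted above, the hit rate among the top-scoring fraction of a docked library; the scores and labels below are synthetic stand-ins for RF-Score-VS or Vina output, and the active fraction is chosen only to roughly mimic a DUD-E-sized set.

      import numpy as np

      def hit_rate(scores, is_active, top_fraction=0.01):
          """Fraction of actives within the top-scoring `top_fraction` of the ranked library."""
          n_top = max(1, int(len(scores) * top_fraction))
          top_idx = np.argsort(scores)[::-1][:n_top]     # highest scores first
          return is_active[top_idx].mean()

      rng = np.random.default_rng(0)
      is_active = rng.random(100000) < 0.017                  # roughly the active/inactive ratio above
      scores = rng.normal(size=100000) + 2.0 * is_active      # a better model separates the classes more
      print(f"top 1% hit rate: {hit_rate(scores, is_active):.1%}")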

  19. Decentralized real-time simulation of forest machines

    NASA Astrophysics Data System (ADS)

    Freund, Eckhard; Adam, Frank; Hoffmann, Katharina; Rossmann, Juergen; Kraemer, Michael; Schluse, Michael

    2000-10-01

    To develop realistic forest machine simulators is a demanding task. A useful simulator has to provide a close-to-reality simulation of the forest environment as well as the simulation of the physics of the vehicle. Customers demand a highly realistic three-dimensional forestry landscape and the realistic simulation of the complex motion of the vehicle even in rough terrain in order to be able to use the simulator for operator training under close-to-reality conditions. The realistic simulation of the vehicle, especially with the driver's seat mounted on a motion platform, greatly improves the effect of immersion into the virtual reality of a simulated forest and the achievable level of education of the driver. Thus, the connection of the real control devices of forest machines to the simulation system has to be supported, i.e. the real control devices like the joysticks or the board computer system to control the crane, the aggregate etc. Beyond that, the fusion of the board computer system and the simulation system is realized by means of sensors, i.e. digital and analog signals. The decentralized system structure allows several virtual reality systems to evaluate and visualize the information of the control devices and the sensors. So, while the driver is practicing, the instructor can immerse into the same virtual forest to monitor the session from his own viewpoint. In this paper, we describe the realized structure as well as the necessary software and hardware components and application experiences.

  20. Knowledge Searching and Sharing on Virtual Networks.

    ERIC Educational Resources Information Center

    Helokunnas, Tuija; Herrala, Juha

    2001-01-01

    Describes searching and sharing of knowledge on virtual networks, based on experiences gained when hosting virtual knowledge networks at Tampere University of Technology in Finland. Discusses information and knowledge management studies; role of information technology in knowledge searching and sharing; implementation and experiences of the…

  1. Virtual Reality: A Strategy for Training in Cross-Cultural Communication.

    ERIC Educational Resources Information Center

    Meyer, Catherine; Dunn-Roberts, Richard

    1992-01-01

    Defines virtual reality and explains terminology, theoretical concepts, and enabling technologies. Research and applications are described; limitations of current technology are considered; and future possibilities are discussed, including the use of virtual reality in training for cross-cultural communication. (22 references) (LRW)

  2. Virtual terrain: a security-based representation of a computer network

    NASA Astrophysics Data System (ADS)

    Holsopple, Jared; Yang, Shanchieh; Argauer, Brian

    2008-03-01

    Much research has been put forth towards the detection, correlation, and prediction of cyber attacks in recent years. As this body of research progresses, there is an increasing need for contextual information about a computer network to provide an accurate situational assessment. Typical approaches adopt contextual information as needed; yet such ad hoc effort may lead to unnecessary or even conflicting features. The concept of virtual terrain is, therefore, developed and investigated in this work. Virtual terrain is a common representation of crucial information about network vulnerabilities, accessibilities, and criticalities. A virtual terrain model encompasses operating systems, firewall rules, running services, missions, user accounts, and network connectivity. It is defined as connected graphs with arc attributes defining dynamic relationships among vertices modeling network entities, such as services, users, and machines. The virtual terrain representation is designed to allow feasible development and maintenance of the model, as well as efficacy in terms of the use of the model. This paper will describe the considerations in developing the virtual terrain schema, exemplary virtual terrain models, and algorithms utilizing the virtual terrain model for situation and threat assessment.
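
    A small illustration of such a graph-based terrain model using networkx; the entities, attributes, and reachability query are invented for the example and do not reproduce the schema proposed in the paper.

      import networkx as nx

      vt = nx.DiGraph()
      vt.add_node("web01", kind="machine", os="linux", criticality="high")
      vt.add_node("http", kind="service", cve=["CVE-XXXX-YYYY"])     # placeholder vulnerability id
      vt.add_node("alice", kind="user", privilege="admin")

      vt.add_edge("http", "web01", relation="runs_on")
      vt.add_edge("internet", "http", relation="reachable", port=80)
      vt.add_edge("alice", "web01", relation="can_login")

      def exposed_critical_hosts(g):
          """Hosts that are critical and reachable (directly or transitively) from the internet."""
          reachable = nx.descendants(g, "internet")
          return [n for n in reachable
                  if g.nodes[n].get("kind") == "machine" and g.nodes[n].get("criticality") == "high"]

      print(exposed_critical_hosts(vt))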

  3. Blade runner. Blade server and virtualization technology can help hospitals save money--but they are far from silver bullets.

    PubMed

    Lawrence, Daphne

    2009-03-01

    Blade servers and virtualization can reduce infrastructure, maintenance, heating, electricity, cooling, and equipment costs. Blade server technology is evolving and some elements may become obsolete. There is very little interoperability between blades. Hospitals can virtualize 40 to 60 percent of their servers, and old servers can be reused for testing. Not all applications lend themselves to virtualization--especially those with high memory requirements. CIOs should engage their vendors in virtualization discussions.

  4. [Application of 3D virtual reality technology with multi-modality fusion in resection of glioma located in central sulcus region].

    PubMed

    Chen, T N; Yin, X T; Li, X G; Zhao, J; Wang, L; Mu, N; Ma, K; Huo, K; Liu, D; Gao, B Y; Feng, H; Li, F

    2018-05-08

    Objective: To explore the clinical and teaching value of virtual reality technology in the preoperative planning and intraoperative guidance of gliomas located in the central sulcus region. Method: Ten patients with gliomas in the central sulcus region were scheduled for surgical treatment. Neuroimaging data, including CT, CTA, DSA, MRI, and fMRI, were input into the 3dgo sczhry workstation for image fusion and 3D reconstruction. Spatial relationships between the lesions and the surrounding structures were obtained from the virtual reality images. These images were applied to operative approach design, operation process simulation, intraoperative auxiliary decision-making, and the training of specialist physicians. Results: The intraoperative findings in the 10 patients were highly consistent with the preoperative virtual reality simulations. Preoperative 3D-reconstructed virtual reality images improved the feasibility of operation planning and operative accuracy. The technology not only showed advantages for neurological function protection and lesion resection during surgery, but also improved the efficiency and effectiveness of training dedicated physicians by turning abstract comprehension into virtual reality. Conclusion: Virtual reality technology based on image fusion and 3D reconstruction is helpful in glioma resection for formulating the operation plan, improving operative safety, increasing the total resection rate, and facilitating the teaching and training of specialist physicians.

  5. 75 FR 70691 - International Game Technology (IGT), Machine Accounting and ABS (Bonusing and BEII), Engineering...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-18

    ... DEPARTMENT OF LABOR Employment and Training Administration [TA-W-73,477] International Game..., applicable to workers of International Game Technology (IGT), Machine Accounting and ABS (Bonusing and BEII... International Game Technology (IGT), Machine Accounting and ABS (Bonusing and BEII), and Engineering. The...

  6. Software Support Measurement and Estimating for Oracle Database Applications Using Mark II Function Points

    DTIC Science & Technology

    1992-12-01

    Instructions are defined as lines of code or card images. Thus, a line containing two or more source statements counts as one instruction; a... understand the productivity paradox, recall the concept of virtual machines. When a higher level machine groups together many instructions of a lower level

  7. Operator vision aids for space teleoperation assembly and servicing

    NASA Technical Reports Server (NTRS)

    Brooks, Thurston L.; Ince, Ilhan; Lee, Greg

    1992-01-01

    This paper investigates concepts for visual operator aids required for effective telerobotic control. Operator visual aids, as defined here, mean any operational enhancement that improves man-machine control through the visual system. These concepts were derived as part of a study of vision issues for space teleoperation. Extensive literature on teleoperation, robotics, and human factors was surveyed to definitively specify appropriate requirements. This paper presents these visual aids in three general categories of camera/lighting functions, display enhancements, and operator cues. In the area of camera/lighting functions concepts are discussed for: (1) automatic end effector or task tracking; (2) novel camera designs; (3) computer-generated virtual camera views; (4) computer assisted camera/lighting placement; and (5) voice control. In the technology area of display aids, concepts are presented for: (1) zone displays, such as imminent collision or indexing limits; (2) predictive displays for temporal and spatial location; (3) stimulus-response reconciliation displays; (4) graphical display of depth cues such as 2-D symbolic depth, virtual views, and perspective depth; and (5) view enhancements through image processing and symbolic representations. Finally, operator visual cues (e.g., targets) that help identify size, distance, shape, orientation and location are discussed.

  8. Virtual Team Governance: Addressing the Governance Mechanisms and Virtual Team Performance

    NASA Astrophysics Data System (ADS)

    Zhan, Yihong; Bai, Yu; Liu, Ziheng

    As technology has improved and collaborative software has been developed, virtual teams with geographically dispersed members have become increasingly prominent. Supported by advancing communication technologies, virtual teams are able to largely transcend time and space. They have changed the corporate landscape and are more complex and dynamic than traditional teams, since their members are spread across diverse geographical locations and play different roles within the team. How to achieve good governance of virtual teams, and thereby good virtual team performance, is therefore becoming critical and challenging, since good governance is essential for a high-performance virtual team. This paper explores the performance and the governance mechanisms of virtual teams. It establishes a model to explain the relationship between performance and governance mechanisms in virtual teams, focusing on how virtual teams are managed, and it aims to identify strategies that help business organizations improve the performance of their virtual teams and meet the objectives of good virtual team management.

  9. Nanomedicine: Promising Tiny Machine for the Healthcare in Future-A Review

    PubMed Central

    Saha, Moni

    2009-01-01

    One of the 21st century’s most promising technologies is nanotechnology. Nanomedicine, an offshoot of nanotechnology, refers to highly specific medical intervention at the molecular scale for curing disease or repairing damaged tissues, such as bone, muscle, or nerve. Nanotechnology is a collective term referring to technological developments on the nanometer scale, usually 0.1-100 nm. A nanometer is one-billionth of a meter, too small to be seen with a conventional laboratory microscope. It is at this size scale - about 100 nanometers or less - that biological molecules and structures inside living cells operate. Nanotechnology is therefore engineering and manufacturing at the molecular scale. Applying nanotechnology to the biomedical sciences implies the creation of materials and devices designed to interact with the body at sub-cellular scales with a high degree of specificity. This could potentially be translated into targeted cellular and tissue-specific clinical applications aimed at maximal therapeutic effect with very limited adverse effects. Nanomedicine can offer impressive solutions for various life-threatening diseases. Disease areas that can be expected to benefit most from nanotechnology within the next few years are cancer, diseases of the cardiovascular system, the lungs and blood, neurological (especially neurodegenerative) diseases such as Parkinson’s or Alzheimer’s disease, diabetes, inflammatory/infectious diseases, and orthopaedic problems. In the first half of the 21st century, nanomedicine should eliminate virtually all common diseases of the 20th century, and virtually all medical pain. This article presents an overview of some of the applications of nanotechnology in nanomedicine. PMID:22216376

  10. Virtual Technologies Trends in Education

    ERIC Educational Resources Information Center

    Martín-Gutiérrez, Jorge; Mora, Carlos Efrén; Añorbe-Díaz, Beatriz; González-Marrero, Antonio

    2017-01-01

    Virtual reality captures people's attention. This technology has been applied in many sectors such as medicine, industry, education, video games, or tourism. Perhaps its biggest area of interest has been leisure and entertainment. Regardless the sector, the introduction of virtual or augmented reality had several constraints: it was expensive, it…

  11. Curricular Implications of Virtual World Technology: A Review of Business Applications

    ERIC Educational Resources Information Center

    Cyphert, Dale; Wurtz, M. Susan; Duclos, Leslie K.

    2013-01-01

    As business organizations grow increasingly virtual, traditional principles of organizational communication require examination and modification. This article considers the curricular implications of the growing business uses of virtual world technology through three different lenses--students as employee-users, students as strategic designers and…

  12. Virtual Team Effectiveness: An Empirical Study Using SEM

    ERIC Educational Resources Information Center

    Bhat, Swati Kaul; Pande, Neerja; Ahuja, Vandana

    2016-01-01

    Advances in communication and information technology create new opportunities for organizations to build and manage virtual teams. Virtual teams have become a norm for organizations whose members work across disparate geographical locations, relying primarily or exclusively, on the usage of Information and Communications Technology (ICT) for the…

  13. Simulation-Based Probabilistic Seismic Hazard Assessment Using System-Level, Physics-Based Models: Assembling Virtual California

    NASA Astrophysics Data System (ADS)

    Rundle, P. B.; Rundle, J. B.; Morein, G.; Donnellan, A.; Turcotte, D.; Klein, W.

    2004-12-01

    The research community is rapidly moving towards the development of an earthquake forecast technology based on the use of complex, system-level earthquake fault system simulations. Using these topologically and dynamically realistic simulations, it is possible to develop ensemble forecasting methods similar to those used in weather and climate research. To effectively carry out such a program, one needs 1) a topologically realistic model to simulate the fault system; 2) data sets to constrain the model parameters through a systematic program of data assimilation; 3) a computational technology making use of modern paradigms of high performance and parallel computing systems; and 4) software to visualize and analyze the results. In particular, we focus attention on a new version of our code Virtual California (version 2001) in which we model all of the major strike slip faults in California, from the Mexico-California border to the Mendocino Triple Junction. Virtual California is a "backslip model", meaning that the long-term rate of slip on each fault segment in the model is matched to the observed rate. We use the historic data set of earthquakes larger than magnitude M > 6 to define the frictional properties of 650 fault segments (degrees of freedom) in the model. To compute the dynamics and the associated surface deformation, we use message passing as implemented in the MPICH standard distribution on a Beowulf cluster consisting of >10 CPUs. We will also report results from implementing the code on significantly larger machines so that we can begin to examine much finer spatial scales of resolution, and to assess scaling properties of the code. We present results of simulations both as static images and as MPEG movies, so that the dynamical aspects of the computation can be assessed by the viewer. We compute a variety of statistics from the simulations, including magnitude-frequency relations, and compare these with data from real fault systems. We report recent results on the use of Virtual California for probabilistic earthquake forecasting for several sub-groups of major faults in California. These methods have the advantage that system-level fault interactions are explicitly included, as well as laboratory-based friction laws.
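
    As an illustration of the kind of magnitude-frequency statistic mentioned above (a generic sketch, not code from the Virtual California project), the following computes a cumulative Gutenberg-Richter curve from a list of event magnitudes and fits its b-value; the catalog here is synthetic.

    # Sketch: cumulative magnitude-frequency (Gutenberg-Richter) statistics
    # from a catalog of event magnitudes. Synthetic data; not Virtual California code.
    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic catalog: magnitudes above M = 6 drawn so that b is approximately 1.
    mags = 6.0 + rng.exponential(scale=1.0 / np.log(10), size=5000)

    bins = np.arange(6.0, mags.max(), 0.1)
    # Cumulative counts N(M): number of events with magnitude >= M.
    cum_counts = np.array([(mags >= m).sum() for m in bins])

    # Fit log10 N(M) = a - b*M over bins that still contain events.
    ok = cum_counts > 0
    slope, intercept = np.polyfit(bins[ok], np.log10(cum_counts[ok]), 1)
    print(f"estimated b-value: {-slope:.2f}, a-value: {intercept:.2f}")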

  14. Information integration and diagnosis analysis of equipment status and production quality for machining process

    NASA Astrophysics Data System (ADS)

    Zan, Tao; Wang, Min; Hu, Jianzhong

    2010-12-01

    Multi-sensor machining status monitoring techniques acquire and analyze machining process information to implement abnormality diagnosis and fault warning, while statistical quality control techniques are normally used to distinguish abnormal fluctuations from normal fluctuations through statistical methods. In this paper, by comparing the advantages and disadvantages of the two methods, the necessity and feasibility of their integration and fusion are introduced. An approach that integrates multi-sensor status monitoring and statistical process control, based on artificial intelligence, internet, and database techniques, is then put forward. Based on virtual instrument techniques, the author developed the machining quality assurance system MoniSysOnline, which has been used to monitor the grinding process. By analyzing the quality data and AE signal information of the wheel dressing process, the cause of machining quality fluctuation was identified. The experimental results indicate that the approach is suitable for the status monitoring and analysis of machining processes.
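
    To make the statistical process control element concrete, here is a minimal sketch (not the MoniSysOnline implementation) of Shewhart-style control limits applied to a stream of quality measurements; the data and the mean ± 3σ convention estimated from an in-control reference sample are assumptions.

    # Sketch: flag abnormal fluctuations with a simple Shewhart control chart.
    # Synthetic measurements; not code from the MoniSysOnline system.
    import numpy as np

    rng = np.random.default_rng(1)
    reference = rng.normal(loc=10.0, scale=0.05, size=200)    # in-control sample
    center = reference.mean()
    sigma = reference.std(ddof=1)
    ucl, lcl = center + 3 * sigma, center - 3 * sigma         # control limits

    new_measurements = np.array([10.02, 9.98, 10.21, 9.97, 9.79])
    for i, x in enumerate(new_measurements):
        status = "ALARM" if (x > ucl or x < lcl) else "ok"
        print(f"sample {i}: {x:.2f}  [{status}]  limits=({lcl:.2f}, {ucl:.2f})")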

  15. Emerging technologies in education and training: applications for the laboratory animal science community.

    PubMed

    Ketelhut, Diane Jass; Niemi, Steven M

    2007-01-01

    This article examines several new and exciting communication technologies. Many of the technologies were developed by the entertainment industry; however, other industries are adopting and modifying them for their own needs. These new technologies allow people to collaborate across distance and time and to learn in simulated work contexts. The article explores the potential utility of these technologies for advancing laboratory animal care and use through better education and training. Descriptions include emerging technologies such as augmented reality and multi-user virtual environments, which offer new approaches with different capabilities. Augmented reality interfaces, characterized by the use of handheld computers to infuse the virtual world into the real one, result in deeply immersive simulations. In these simulations, users can access virtual resources and communicate with real and virtual participants. Multi-user virtual environments enable multiple participants to simultaneously access computer-based three-dimensional virtual spaces, called "worlds," and to interact with digital tools. They allow for authentic experiences that promote collaboration, mentoring, and communication. Because individuals may learn or train differently, it is advantageous to combine the capabilities of these technologies and applications with more traditional methods to increase the number of students who are served by using current methods alone. The use of these technologies in animal care and use programs can create detailed training and education environments that allow students to learn the procedures more effectively, teachers to assess their progress more objectively, and researchers to gain insights into animal care.

  16. Telemanipulation, telepresence, and virtual reality for surgery in the year 2000

    NASA Astrophysics Data System (ADS)

    Satava, Richard M.

    1995-12-01

    The new technologic revolution in medicine is based upon information technologies, and telemanipulation, telepresence and virtual reality are essential components. Telepresence surgery returns the look and feel of `open surgery' to the surgeon and promises enhancement of physical capabilities above normal human performance. Virtual reality provides basic medical education, simulation of surgical procedures, medical forces and disaster medicine practice, and virtual prototyping of medical equipment.

  17. Implementation of NASTRAN on the IBM/370 CMS operating system

    NASA Technical Reports Server (NTRS)

    Britten, S. S.; Schumacker, B.

    1980-01-01

    The NASA Structural Analysis (NASTRAN) computer program is operational on the IBM 360/370 series computers. While execution of NASTRAN has been described and implemented under the virtual storage operating systems of the IBM 370 models, the IBM 370/168 computer can also operate in a time-sharing mode under the virtual machine operating system using the Conversational Monitor System (CMS) subset. The changes required to make NASTRAN operational under the CMS operating system are described.

  18. Virtual reality hardware for use in interactive 3D data fusion and visualization

    NASA Astrophysics Data System (ADS)

    Gourley, Christopher S.; Abidi, Mongi A.

    1997-09-01

    Virtual reality has become a tool for use in many areas of research. We have designed and built a VR system for use in range data fusion and visualization. One major VR tool is the CAVE. This is the ultimate visualization tool, but comes with a large price tag. Our design uses a unique CAVE whose graphics are powered by a desktop computer instead of a larger rack machine, making it much less costly. The system consists of a screen eight feet tall by twenty-seven feet wide, giving a variable field-of-view currently set at 160 degrees. A Silicon Graphics Indigo2 MaxImpact with the Impact Channel option is used for display. This gives the capability to drive three projectors at a resolution of 640 by 480 for use in displaying the virtual environment and one 640 by 480 display for a user control interface. This machine is also the first desktop package which has built-in hardware texture mapping. This feature allows us to quickly fuse the range and intensity data and other multi-sensory data. The final goal is a complete 3D texture mapped model of the environment. A dataglove, magnetic tracker, and spaceball are to be used for manipulation of the data and navigation through the virtual environment. This system gives several users the ability to interactively create 3D models from multiple range images.

  19. Selecting a Virtual World Platform for Learning

    ERIC Educational Resources Information Center

    Robbins, Russell W.; Butler, Brian S.

    2009-01-01

    Like any infrastructure technology, Virtual World (VW) platforms provide affordances that facilitate some activities and hinder others. Although it is theoretically possible for a VW platform to support all types of activities, designers make choices that lead technologies to be more or less suited for different learning objectives. Virtual World…

  20. The World's the Limit in the Virtual High School.

    ERIC Educational Resources Information Center

    Berman, Sheldon; Tinker, Robert

    1997-01-01

    Assisted by a U.S. Department of Education Technology Innovation Challenge Grant, the Hudson (Massachusetts) Public Schools, the Concord Consortium Educational Technology Lab, and 30 collaborating high schools across the nation have developed a virtual high school over the Internet. Through Internet-based courses, Virtual High School significantly…

  1. Virtual Teams and Synchronous Presentations: An Online Class Experience

    ERIC Educational Resources Information Center

    Adkins, Joni K.

    2013-01-01

    Global expansion, cost containment, and technology advances have all played a role in the increase of virtual teams in today's workplace. Virtual teams in an online graduate information technology management class prepared and presented synchronous presentations over a business or non-profit sector case. This paper includes a brief literature…

  2. Are We Ready for the Virtual Library? Technology Push, Market Pull and Organisational Response.

    ERIC Educational Resources Information Center

    Gilbert, J. D.

    1993-01-01

    Discusses virtual libraries, i.e., library services available to users via personal computers; considers the issues of technological development, user demands, and organizational response; and describes progress toward virtual libraries in the Netherlands, including networks, online systems, navigation tools, subject classification, coordination…

  3. Virtual Field Sites: Losses and Gains in Authenticity with Semantic Technologies

    ERIC Educational Resources Information Center

    Litherland, Kate; Stott, Tim A.

    2012-01-01

    The authors investigate the potential of semantic web technologies to enhance "Virtual Fieldwork" resources and learning activities in the Geosciences. They consider the difficulties inherent in the concept of Virtual Fieldwork and how these might be reconciled with the desire to provide students with "authentic" tools for…

  4. Integrating 3D facial scanning in a digital workflow to CAD/CAM design and fabricate complete dentures for immediate total mouth rehabilitation

    PubMed Central

    Greven, Marcus; Wismeijer, Daniel

    2017-01-01

    PURPOSE To integrate extra-oral facial scanning information with CAD/CAM complete dentures to immediately rehabilitate terminal dentition. MATERIALS AND METHODS Ten patients with terminal dentition scheduled for total extraction and immediate denture placement were recruited for this study. The patients were submitted to a facial scanning procedure using the in-office PritiMirror scanner with bite registration records in-situ. Definitive stone cast models and bite records were subsequently submitted to a lab scanning procedure using the lab scanner (iSeries DWOS; Dental Wings). The scanned models were used to create a virtual teeth setup of a complete denture. Using the intra-oral bite records as a reference, the virtual setup was incorporated in the facial scan thereby facilitating a virtual clinical evaluation (teeth try-in) phase. After applying necessary adjustments, the virtual setup was submitted to a CAM procedure where a 5-axis industrial milling machine (M7 CNC; Darton AG General) was used to fabricate a full-milled PMMA immediate provisional prosthesis. RESULTS Total extractions were performed, the dentures were immediately inserted, and subjective clinical fit was evaluated. The immediate provisional prostheses were inserted and clinical fit, occlusion/articulation, and esthetics were subjectively assessed; the results were deemed satisfactory. All provisional prostheses remained three months in function with no notable technical complications. CONCLUSION Ten patients with terminal dentition were treated using a complete digital approach to fabricate complete dentures using CAD/CAM technology. The proposed technique has the potential to accelerate the rehabilitation procedure starting from immediate denture to final implant-supported prosthesis leading to more predictable functional and aesthetics outcomes. PMID:29142646

  5. Using virtual reality technology for aircraft visual inspection training: presence and comparison studies.

    PubMed

    Vora, Jeenal; Nair, Santosh; Gramopadhye, Anand K; Duchowski, Andrew T; Melloy, Brian J; Kanki, Barbara

    2002-11-01

    The aircraft maintenance industry is a complex system consisting of several interrelated human and machine components. Recognizing this, the Federal Aviation Administration (FAA) has pursued human factors related research. In the maintenance arena the research has focused on the aircraft inspection process and the aircraft inspector. Training has been identified as the primary intervention strategy to improve the quality and reliability of aircraft inspection. If training is to be successful, it is critical that we provide aircraft inspectors with appropriate training tools and environment. In response to this need, the paper outlines the development of a virtual reality (VR) system for aircraft inspection training. VR has generated much excitement but little formal proof that it is useful. However, since VR interfaces are difficult and expensive to build, the computer graphics community needs to be able to predict which applications will benefit from VR. To address this important issue, this research measured the degree of immersion and presence felt by subjects in a virtual environment simulator. Specifically, it conducted two controlled studies using the VR system developed for visual inspection task of an aft-cargo bay at the VR Lab of Clemson University. Beyond assembling the visual inspection virtual environment, a significant goal of this project was to explore subjective presence as it affects task performance. The results of this study indicated that the system scored high on the issues related to the degree of presence felt by the subjects. As a next logical step, this study, then, compared VR to an existing PC-based aircraft inspection simulator. The results showed that the VR system was better and preferred over the PC-based training tool.

  6. 6 CFR 37.19 - Machine readable technology on the driver's license or identification card.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ..., States must use the ISO/IEC 15438:2006(E) Information Technology—Automatic identification and data... 6 Domestic Security 1 2011-01-01 2011-01-01 false Machine readable technology on the driver's..., Verification, and Card Issuance Requirements § 37.19 Machine readable technology on the driver's license or...

  7. 6 CFR 37.19 - Machine readable technology on the driver's license or identification card.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., States must use the ISO/IEC 15438:2006(E) Information Technology—Automatic identification and data... 6 Domestic Security 1 2010-01-01 2010-01-01 false Machine readable technology on the driver's..., Verification, and Card Issuance Requirements § 37.19 Machine readable technology on the driver's license or...

  8. Emerging CAE technologies and their role in Future Ambient Intelligence Environments

    NASA Astrophysics Data System (ADS)

    Noor, Ahmed K.

    2011-03-01

    Dramatic improvements are on the horizon in Computer Aided Engineering (CAE) and various simulation technologies. The improvements are due, in part, to the developments in a number of leading-edge technologies and their synergistic combinations/convergence. The technologies include ubiquitous, cloud, and petascale computing; ultra high-bandwidth networks, pervasive wireless communication; knowledge based engineering; networked immersive virtual environments and virtual worlds; novel human-computer interfaces; and powerful game engines and facilities. This paper describes the frontiers and emerging simulation technologies, and their role in the future virtual product creation and learning/training environments. The environments will be ambient intelligence environments, incorporating a synergistic combination of novel agent-supported visual simulations (with cognitive learning and understanding abilities); immersive 3D virtual world facilities; development chain management systems and facilities (incorporating a synergistic combination of intelligent engineering and management tools); nontraditional methods; intelligent, multimodal and human-like interfaces; and mobile wireless devices. The Virtual product creation environment will significantly enhance the productivity and will stimulate creativity and innovation in future global virtual collaborative enterprises. The facilities in the learning/training environment will provide timely, engaging, personalized/collaborative and tailored visual learning.

  9. Mission Exploitation Platform PROBA-V

    NASA Astrophysics Data System (ADS)

    Goor, Erwin

    2016-04-01

    VITO and partners developed an end-to-end solution to drastically improve the exploitation of the PROBA-V EO-data archive (http://proba-v.vgt.vito.be/), the past mission SPOT-VEGETATION, and derived vegetation parameters by researchers, service providers and end-users. The analysis of time series of data (+1PB) is addressed, as well as the large scale on-demand processing of near real-time data. From November 2015 an operational Mission Exploitation Platform (MEP) PROBA-V, as an ESA pathfinder project, will be gradually deployed at the VITO data center with direct access to the complete data archive. Several applications will be released to the users, e.g. a time series viewer showing the evolution of PROBA-V bands and derived vegetation parameters for any area of interest; full-resolution viewing services for the complete data archive; on-demand processing chains, e.g. for the calculation of N-daily composites; and a virtual machine with access to the data archive and tools to work with the data, e.g. various toolboxes and support for R and Python. After an initial release in January 2016, a research platform will gradually be deployed allowing users to design, debug and test applications on the platform. From the MEP PROBA-V, access to Sentinel-2 and Landsat data will be addressed as well, e.g. to support the Cal/Val activities of the users. Users can make use of powerful Web based tools and can self-manage virtual machines to perform their work on the infrastructure at VITO with access to the complete data archive. To realise this, private cloud technology (OpenStack) is used and a distributed processing environment is built based on Hadoop. The Hadoop ecosystem offers many technologies (Spark, YARN, Accumulo, etc.) which we integrate with several open-source components. The impact of this MEP on the user community will be high and will completely change the way of working with the data and hence open the large time series to a larger community of users. The presentation will address these benefits for the users and discuss the technical challenges in implementing this MEP.
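
    As a sketch of what an "N-daily composite" processing chain does (an illustration only, not VITO's MEP code), the following reduces a stack of daily images to 10-daily maximum-value composites, a common convention for vegetation index products; the array shapes and the max-value rule are assumptions.

    # Sketch: N-daily maximum-value compositing of a daily image time series.
    # Synthetic data; not code from the PROBA-V Mission Exploitation Platform.
    import numpy as np

    days, height, width = 30, 4, 4           # one month of small test images
    daily = np.random.default_rng(2).random((days, height, width))

    N = 10                                    # composite period in days
    n_periods = days // N
    composites = daily[:n_periods * N].reshape(n_periods, N, height, width).max(axis=1)

    print(composites.shape)                   # (3, 4, 4): three 10-daily composites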

  10. Fire protection for launch facilities using machine vision fire detection

    NASA Astrophysics Data System (ADS)

    Schwartz, Douglas B.

    1993-02-01

    Fire protection of critical space assets, including launch and fueling facilities and manned flight hardware, demands automatic sensors for continuous monitoring, and in certain high-threat areas, fast-reacting automatic suppression systems. Perhaps the most essential characteristic for these fire detection and suppression systems is high reliability; in other words, fire detectors should alarm only on actual fires and not be falsely activated by extraneous sources. Existing types of fire detectors have been greatly improved in the past decade; however, fundamental limitations of their method of operation leave open a significant possibility of false alarms and restrict their usefulness. At the Civil Engineering Laboratory at Tyndall Air Force Base in Florida, a new type of fire detector is under development which 'sees' a fire visually, like a human being, and makes a reliable decision based on known visual characteristics of flames. Hardware prototypes of the Machine Vision (MV) Fire Detection System have undergone live fire tests and demonstrated extremely high accuracy in discriminating actual fires from false alarm sources. In fact, this technology promises to virtually eliminate false activations. This detector could be used to monitor fueling facilities, launch towers, clean rooms, and other high-value and high-risk areas. Applications can extend to space station and in-flight shuttle operations as well; fiber optics and remote camera heads enable the system to see around obstructed areas and crew compartments. The capability of the technology to distinguish fires means that fire detection can be provided even during maintenance operations, such as welding.
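
    The record does not disclose the detector's algorithm; as a loose illustration of combining color and flicker cues for flame detection, the sketch below thresholds an HSV frame for flame-like hues and checks temporal variation across frames. The color range and the scoring rule are arbitrary assumptions, not the MV system's method.

    # Sketch: toy flame-cue detector combining a color mask with temporal flicker.
    # Illustrative only; not the Machine Vision Fire Detection System's algorithm.
    import numpy as np
    import cv2

    def flame_score(frames_bgr):
        """frames_bgr: list of HxWx3 uint8 frames from a fixed camera."""
        masks = []
        for frame in frames_bgr:
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            # Assumed flame-like range: red/orange/yellow hues, high saturation and value.
            masks.append(cv2.inRange(hsv, (0, 120, 180), (35, 255, 255)) > 0)
        masks = np.stack(masks).astype(float)
        coverage = masks.mean()                  # fraction of pixels that look flame-colored
        flicker = masks.std(axis=0).mean()       # temporal variation of that region
        return coverage, flicker

    # Synthetic stand-in frames (random noise); real use would read camera frames.
    frames = [np.random.default_rng(i).integers(0, 256, (120, 160, 3), dtype=np.uint8)
              for i in range(8)]
    print(flame_score(frames))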

  11. A simple ERP method for quantitative analysis of cognitive workload in myoelectric prosthesis control and human-machine interaction.

    PubMed

    Deeny, Sean; Chicoine, Caitlin; Hargrove, Levi; Parrish, Todd; Jayaraman, Arun

    2014-01-01

    Common goals in the development of human-machine interface (HMI) technology are to reduce cognitive workload and increase function. However, objective and quantitative outcome measures assessing cognitive workload have not been standardized for HMI research. The present study examines the efficacy of a simple event-related potential (ERP) measure of cortical effort during myoelectric control of a virtual limb for use as an outcome tool. Participants trained and tested on two methods of control, direct control (DC) and pattern recognition control (PRC), while electroencephalographic (EEG) activity was recorded. Eighteen healthy participants with intact limbs were tested using DC and PRC under three conditions: passive viewing, easy, and hard. Novel auditory probes were presented at random intervals during testing, and significant task-difficulty effects were observed in the P200, P300, and a late positive potential (LPP), supporting the efficacy of ERPs as a cognitive workload measure in HMI tasks. LPP amplitude distinguished DC from PRC in the hard condition with higher amplitude in PRC, consistent with lower cognitive workload in PRC relative to DC for complex movements. Participants completed trials faster in the easy condition using DC relative to PRC, but completed trials more slowly using DC relative to PRC in the hard condition. The results provide promising support for ERPs as an outcome measure for cognitive workload in HMI research such as prosthetics, exoskeletons, and other assistive devices, and can be used to evaluate and guide new technologies for more intuitive HMI control.
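
    A hedged sketch of the core ERP computation implied here (epoching EEG around probe onsets, baseline-correcting, averaging, and reading out amplitude in a late time window); the sampling rate, event times, single-channel data, and window boundaries are assumptions, not the study's pipeline.

    # Sketch: average event-related potential around auditory probe onsets and
    # measure mean amplitude in a late positive potential (LPP) window.
    # Synthetic EEG; not the analysis code from the cited study.
    import numpy as np

    fs = 250                                     # assumed sampling rate (Hz)
    eeg = np.random.default_rng(3).normal(size=60 * fs)    # one channel, 60 s
    probe_onsets_s = [5.0, 12.5, 21.0, 33.2, 44.7, 52.1]   # assumed event times

    pre, post = int(0.2 * fs), int(0.8 * fs)     # -200 ms to +800 ms epochs
    epochs = []
    for t in probe_onsets_s:
        i = int(t * fs)
        epoch = eeg[i - pre:i + post]
        epochs.append(epoch - epoch[:pre].mean())           # baseline correction
    erp = np.mean(epochs, axis=0)

    lpp_window = slice(int((0.2 + 0.4) * fs), int((0.2 + 0.7) * fs))   # 400-700 ms post-onset
    print("mean LPP-window amplitude:", erp[lpp_window].mean())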

  12. Fire protection for launch facilities using machine vision fire detection

    NASA Technical Reports Server (NTRS)

    Schwartz, Douglas B.

    1993-01-01

    Fire protection of critical space assets, including launch and fueling facilities and manned flight hardware, demands automatic sensors for continuous monitoring, and in certain high-threat areas, fast-reacting automatic suppression systems. Perhaps the most essential characteristic for these fire detection and suppression systems is high reliability; in other words, fire detectors should alarm only on actual fires and not be falsely activated by extraneous sources. Existing types of fire detectors have been greatly improved in the past decade; however, fundamental limitations of their method of operation leave open a significant possibility of false alarms and restrict their usefulness. At the Civil Engineering Laboratory at Tyndall Air Force Base in Florida, a new type of fire detector is under development which 'sees' a fire visually, like a human being, and makes a reliable decision based on known visual characteristics of flames. Hardware prototypes of the Machine Vision (MV) Fire Detection System have undergone live fire tests and demonstrated extremely high accuracy in discriminating actual fires from false alarm sources. In fact, this technology promises to virtually eliminate false activations. This detector could be used to monitor fueling facilities, launch towers, clean rooms, and other high-value and high-risk areas. Applications can extend to space station and in-flight shuttle operations as well; fiber optics and remote camera heads enable the system to see around obstructed areas and crew compartments. The capability of the technology to distinguish fires means that fire detection can be provided even during maintenance operations, such as welding.

  13. Seminar for High School Students “Practice on Manufacturing Technology by Advanced Machine Tools”

    NASA Astrophysics Data System (ADS)

    Marui, Etsuo; Yamawaki, Masao; Taga, Yuken; Omoto, Ken'ichi; Miyaji, Reiji; Ogura, Takahiro; Tsubata, Yoko; Sakai, Toshimasa

    The seminar ‘Practice on Manufacturing Technology by Advanced Machine Tools’ for high school students was held at the supporting center for technology education of Gifu University, under the sponsorship of the Japan Society of Mechanical Engineers. The seminar was held in the hope that many students would become interested in manufacturing through the experience. Operating a CNC milling machine and a CNC wire-cut electric discharge machine, the participants made original nameplates and wrote the programs to control the CNC machine tools themselves. In this report, some valuable results obtained through this experience are explained.

  14. An Internet Application To Relieve Constraints in the Flow of Technical Information--The Virtual Technology Market (VTM).

    ERIC Educational Resources Information Center

    Beverly, James E.; Xue, Lan; Lee, Chung-Shing

    1996-01-01

    Reports on the use of the Internet and World Wide Web as a virtual technology market (VTM) for information and technology transfer. The project focuses on creating awareness of technology demand (problems) and linking it to technology supply (solutions) in the field of particle technology and multiphase processes in the chemical industry. Benefits…

  15. Suitability of virtual prototypes to support human factors/ergonomics evaluation during the design.

    PubMed

    Aromaa, Susanna; Väänänen, Kaisa

    2016-09-01

    In recent years, the use of virtual prototyping has increased in product development processes, especially in the assessment of complex systems targeted at end-users. The purpose of this study was to evaluate the suitability of virtual prototyping to support human factors/ergonomics evaluation (HFE) during the design phase. Two different virtual prototypes were used: augmented reality (AR) and virtual environment (VE) prototypes of a maintenance platform of a rock crushing machine. Nineteen designers and other stakeholders were asked to assess the suitability of the prototypes for HFE evaluation. Results indicate that the system model characteristics and user interface affect the experienced suitability. The VE system was rated as more suitable than the AR system for supporting the assessment of visibility, reach, and the use of tools. The findings of this study can be used as guidance for implementing virtual prototypes in the product development process. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Design of virtual SCADA simulation system for pressurized water reactor

    NASA Astrophysics Data System (ADS)

    Wijaksono, Umar; Abdullah, Ade Gafar; Hakim, Dadang Lukman

    2016-02-01

    The virtual SCADA system is a software-based human-machine interface that can visualize the process of a plant. This paper describes the design of a virtual SCADA simulation system that aims to illustrate the operating principles of a Pressurized Water Reactor nuclear power plant. The simulation uses technical data from the Olkiluoto 3 nuclear power plant unit in Finland. The system was developed using Wonderware InTouch and is equipped with manual books for each component, animation links, alarm systems, real-time and historical trending, and a security system. The results showed that, in general, the system can clearly demonstrate the principles of energy flow and energy conversion in Pressurized Water Reactors. This virtual SCADA simulation system can be used as instructional media to introduce the principles of the Pressurized Water Reactor.

  17. Simulation based energy-resource efficient manufacturing integrated with in-process virtual management

    NASA Astrophysics Data System (ADS)

    Katchasuwanmanee, Kanet; Cheng, Kai; Bateman, Richard

    2016-09-01

    As energy efficiency is one of the key essentials of sustainability, the development of an energy-resource efficient manufacturing system is among the great challenges facing current industry. Meanwhile, the availability of advanced technological innovation has created more complex manufacturing systems that involve a large variety of processes and machines serving different functions. To extend the limited knowledge on energy-efficient scheduling, the research presented in this paper attempts to model the production schedule of an operation process by considering the balance of energy consumption reduction, production work flow (productivity) and quality. An innovative systematic approach to manufacturing energy-resource efficiency is proposed with virtual simulation as a predictive modelling enabler, which provides real-time manufacturing monitoring, virtual displays and decision-making, and consequently an analytical, multidimensional correlation analysis of the interdependent relationships among energy consumption, work flow and quality errors. The regression analysis results demonstrate positive relationships between work flow and quality errors and between work flow and energy consumption. When production scheduling is controlled through optimization of work flow, quality errors and overall energy consumption, energy-resource efficiency can be achieved in production. Together, the proposed multidimensional modelling and analysis approach provides optimal conditions for production scheduling in the manufacturing system by taking account of production quality, energy consumption and resource efficiency, which can lead to key competitive advantages and sustainability of system operations in industry.
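
    As a sketch of the kind of correlation/regression analysis described (an illustration, not the paper's model or data), the following fits ordinary least-squares relationships between work flow, quality errors, and energy consumption using synthetic records; the units and coefficients are assumptions.

    # Sketch: OLS regression linking work flow to quality errors and energy use.
    # Synthetic data; illustrates the analysis style, not the paper's results.
    import numpy as np

    rng = np.random.default_rng(4)
    work_flow = rng.uniform(50, 150, size=200)                     # parts per hour (assumed unit)
    quality_errors = 0.02 * work_flow + rng.normal(0, 0.5, 200)    # assumed positive relation
    energy_kwh = 1.5 * work_flow + rng.normal(0, 10, 200)          # assumed positive relation

    def ols_slope(x, y):
        # Fit y = b0 + b1*x and return the slope b1.
        X = np.column_stack([np.ones_like(x), x])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coef[1]

    print("d(quality errors)/d(work flow):", round(ols_slope(work_flow, quality_errors), 3))
    print("d(energy kWh)/d(work flow):   ", round(ols_slope(work_flow, energy_kwh), 3))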

  18. Educational Uses of Virtual Reality Technology.

    DTIC Science & Technology

    1998-01-01

    technology. It is affordable in that a basic level of technology can be achieved on most existing personal computers at either no cost or some minimal... actually present in a virtual environment is termed "presence" and is an artifact of being visually immersed in the computer-generated virtual world... Carolina University, VREL, Teachers, 1996 onward... VR in Education, University of Illinois, National Center for Supercomputing Applications

  19. Utilization of virtual learning environments in the allied health professions.

    PubMed

    Butina, Michelle; Brooks, Donna; Dominguez, Paul J; Mahon, Gwendolyn M

    2013-01-01

    Multiple technology-based tools have been used to enhance skill development in allied health education, which now includes virtual learning environments. The purpose of this study was to explore whether, and how, this latest instructional technology is being adapted in allied health education. An online survey was circulated to all Association of Schools of Allied Health Professions (ASAHP) member institutions and focused on three broad areas of virtual learning environments: the uses of, the perceived pros and cons of, and the outcomes of utilizing them. Results show that 40% (17 of 42) of the respondents use some form of the technology. The use of virtual learning technology in other healthcare professions (e.g., medicine) demonstrates the potential benefits to allied health education.

  20. Multivariate statistical analysis software technologies for astrophysical research involving large data bases

    NASA Technical Reports Server (NTRS)

    Djorgovski, S. George

    1994-01-01

    We developed a package to process and analyze the data from the digital version of the Second Palomar Sky Survey. This system, called SKICAT, incorporates the latest in machine learning and expert systems software technology, in order to classify the detected objects objectively and uniformly, and facilitate handling of the enormous data sets from digital sky surveys and other sources. The system provides a powerful, integrated environment for the manipulation and scientific investigation of catalogs from virtually any source. It serves three principal functions: image catalog construction, catalog management, and catalog analysis. Through use of the GID3* Decision Tree artificial induction software, SKICAT automates the process of classifying objects within CCD and digitized plate images. To exploit these catalogs, the system also provides tools to merge them into a large, complete database which may be easily queried and modified when new data or better methods of calibrating or classifying become available. The most innovative feature of SKICAT is the facility it provides to experiment with and apply the latest in machine learning technology to the tasks of catalog construction and analysis. SKICAT provides a unique environment for implementing these tools for any number of future scientific purposes. Initial scientific verification and performance tests have been made using galaxy counts and measurements of galaxy clustering from small subsets of the survey data, and a search for very high redshift quasars. All of the tests were successful, and produced new and interesting scientific results. Attachments to this report give detailed accounts of the technical aspects for multivariate statistical analysis of small and moderate-size data sets, called STATPROG. The package was tested extensively on a number of real scientific applications, and has produced real, published results.
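
    The GID3* induction software mentioned above is not sketched in this record; as a stand-in illustration of decision-tree object classification in the SKICAT spirit, here is a scikit-learn example on synthetic star/galaxy feature vectors. The feature names and data are invented for the example.

    # Sketch: decision-tree star/galaxy classification in the spirit of SKICAT.
    # Uses scikit-learn's DecisionTreeClassifier as a stand-in for GID3*; data are synthetic.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(5)
    n = 1000
    # Assumed features per detected object: magnitude, concentration index, ellipticity.
    stars = np.column_stack([rng.normal(18, 2, n), rng.normal(0.9, 0.05, n), rng.normal(0.05, 0.03, n)])
    galaxies = np.column_stack([rng.normal(20, 2, n), rng.normal(0.5, 0.10, n), rng.normal(0.30, 0.10, n)])
    X = np.vstack([stars, galaxies])
    y = np.array([0] * n + [1] * n)           # 0 = star, 1 = galaxy

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = DecisionTreeClassifier(max_depth=4).fit(X_tr, y_tr)
    print("held-out accuracy:", round(clf.score(X_te, y_te), 3))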

  1. Energy Consumption Management of Virtual Cloud Computing Platform

    NASA Astrophysics Data System (ADS)

    Li, Lin

    2017-11-01

    Research on energy consumption management for virtual cloud computing platforms requires a deeper understanding of how virtual machines and the underlying cloud platform consume energy; only then can the problems facing energy consumption management be solved. The key problem is the high energy consumption of data centers, which calls for new scientific techniques. Virtualization technology and cloud computing have become powerful tools in everyday life, work, and production because of their many advantages; they are developing rapidly and offer very high resource utilization rates, so their presence is essential in the constantly developing information age. This paper summarizes, explains, and further analyzes the energy consumption management issues of virtual cloud computing platforms, giving readers a clearer understanding of energy consumption management on such platforms and offering help in various aspects of life and work.

  2. Automation and robotics technology for intelligent mining systems

    NASA Technical Reports Server (NTRS)

    Welsh, Jeffrey H.

    1989-01-01

    The U.S. Bureau of Mines is approaching the problems of accidents and efficiency in the mining industry through the application of automation and robotics to mining systems. This technology can increase safety by removing workers from hazardous areas of the mines or from performing hazardous tasks. The short-term goal of the Automation and Robotics program is to develop technology that can be implemented in the form of an autonomous mining machine using current continuous mining machine equipment. In the longer term, the goal is to conduct research that will lead to new intelligent mining systems that capitalize on the capabilities of robotics. The Bureau of Mines Automation and Robotics program has been structured to produce the technology required for the short- and long-term goals. The short-term goal of application of automation and robotics to an existing mining machine, resulting in autonomous operation, is expected to be accomplished within five years. Key technology elements required for an autonomous continuous mining machine are well underway and include machine navigation systems, coal-rock interface detectors, machine condition monitoring, and intelligent computer systems. The Bureau of Mines program is described, including status of key technology elements for an autonomous continuous mining machine, the program schedule, and future work. Although the program is directed toward underground mining, much of the technology being developed may have applications for space systems or mining on the Moon or other planets.

  3. Realizing a partial general quantum cloning machine with superconducting quantum-interference devices in a cavity QED

    NASA Astrophysics Data System (ADS)

    Fang, Bao-Long; Yang, Zhen; Ye, Liu

    2009-05-01

    We propose a scheme for implementing a partial general quantum cloning machine with superconducting quantum-interference devices coupled to a nonresonant cavity. By regulating the time parameters, our system can perform optimal symmetric (asymmetric) universal quantum cloning, optimal symmetric (asymmetric) phase-covariant cloning, and optimal symmetric economical phase-covariant cloning. In the scheme the cavity is only virtually excited, thus, the cavity decay is suppressed during the cloning operations.
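
    For context on the optimality claims, the standard 1-to-2 qubit cloning fidelity bounds that such machines aim to saturate can be stated as follows (textbook results quoted for reference, not derived in this record):

    \[
      F_{\mathrm{univ}} = \frac{5}{6} \approx 0.833,
      \qquad
      F_{\mathrm{pc}} = \frac{1}{2} + \frac{1}{\sqrt{8}} \approx 0.854,
    \]

    where \(F_{\mathrm{univ}}\) is the optimal fidelity of symmetric universal (state-independent) cloning and \(F_{\mathrm{pc}}\) that of symmetric phase-covariant cloning of equatorial qubit states.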

  4. A Unified Access Model for Interconnecting Heterogeneous Wireless Networks

    DTIC Science & Technology

    2015-05-01

    Defined Networking, OpenFlow, WiFi, LTE ... Machine Configurations with WiFi and LTE ... Three Virtual Machine Configurations with WiFi and LTE ... Results and Discussion ... Summary and ... WiFi and long-term evolution (LTE), and created a communication pathway between them via a central controller node. Our simulation serves as a...

  5. The Users' Views on Different Types of Instructional Materials Provided in Virtual Reality Technologies

    ERIC Educational Resources Information Center

    Yildirim, Gürkan

    2017-01-01

    Today, it is seen that developing technologies are tried to be used continuously in the learning environments. These technologies have rapidly been diversifying and changing. Recently, virtual reality technology has become one of the technologies that experts have often been dwelling on. The present research tries to determine users' opinions and…

  6. Virtual reality in surgery and medicine.

    PubMed

    Chinnock, C

    1994-01-01

    This report documents the state of development of enhanced and virtual reality-based systems in medicine. Virtual reality systems seek to simulate a surgical procedure in a computer-generated world in order to improve training. Enhanced reality systems seek to augment or enhance reality by providing improved imaging alternatives for specific patient data. Virtual reality represents a paradigm shift in the way we teach and evaluate the skills of medical personnel. Driving the development of virtual reality-based simulators is laparoscopic abdominal surgery, where there is a perceived need for better training techniques; within a year, systems will be fielded for second-year residency students. Further refinements over perhaps the next five years should allow surgeons to evaluate and practice new techniques in a simulator before using them on patients. Technical developments are rapidly improving the realism of these machines to an amazing degree, as well as bringing the price down to affordable levels. In the next five years, many new anatomical models, procedures, and skills are likely to become available on simulators. Enhanced reality systems are generally being developed to improve visualization of specific patient data. Three-dimensional (3-D) stereovision systems for endoscopic applications, head-mounted displays, and stereotactic image navigation systems are being fielded now, with neurosurgery and laparoscopic surgery being major driving influences. Over perhaps the next five years, enhanced and virtual reality systems are likely to merge. This will permit patient-specific images to be used on virtual reality simulators or computer-generated landscapes to be input into surgical visualization instruments. Percolating all around these activities are developments in robotics and telesurgery. An advanced information infrastructure eventually will permit remote physicians to share video, audio, medical records, and imaging data with local physicians in real time. Surgical robots are likely to be deployed for specific tasks in the operating room (OR) and to support telesurgery applications. Technical developments in robotics and motion control are key components of many virtual reality systems. Since almost all of the virtual reality and enhanced reality systems will be digitally based, they are also capable of being put "on-line" for tele-training, consulting, and even surgery. Advancements in virtual and enhanced reality systems will be driven in part by consumer applications of this technology. Many of the companies that will supply systems for medical applications are also working on commercial products. A big consumer hit can benefit the entire industry by increasing volumes and bringing down costs.(ABSTRACT TRUNCATED AT 400 WORDS)

  7. Understanding dental CAD/CAM for restorations - dental milling machines from a mechanical engineering viewpoint. Part A: chairside milling machines.

    PubMed

    Lebon, Nicolas; Tapie, Laurent; Duret, Francois; Attal, Jean-Pierre

    2016-01-01

    The dental milling machine is an important device in the dental CAD/CAM chain. Nowadays, dental numerical controlled (NC) milling machines are available for dental surgeries (chairside solution). This article provides a mechanical engineering approach to NC milling machines to help dentists understand the involvement of technology in digital dentistry practice. First, some technical concepts and definitions associated with NC milling machines are described from a mechanical engineering viewpoint. The technical and economic criteria of four chairside dental NC milling machines that are available on the market are then described. The technical criteria are focused on the capacities of the embedded technologies of these milling machines to mill both prosthetic materials and types of shape restorations. The economic criteria are focused on investment costs and interoperability with third-party software. The clinical relevance of the technology is assessed in terms of the accuracy and integrity of the restoration.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bates, Robert; McConnell, Elizabeth

    Machining methods across many industries generally require multiple operations to machine and process advanced materials, features with micron precision, and complex shapes. The resulting multiple machining platforms can significantly affect manufacturing cycle time and the precision of the final parts, with a resultant increase in cost and energy consumption. Ultrafast lasers represent a transformative and disruptive technology that removes material with micron precision and in a single step manufacturing process. Such precision results from athermal ablation without modification or damage to the remaining material, which is the key differentiator between ultrafast laser technologies and traditional laser technologies or mechanical processes. Athermal ablation without modification or damage to the material eliminates post-processing or multiple manufacturing steps. Combined with the appropriate technology to control the motion of the work piece, ultrafast lasers are excellent candidates to provide breakthrough machining capability for difficult-to-machine materials. At the project onset in early 2012, the project team recognized that substantial effort was necessary to improve the application of ultrafast laser and precise motion control technologies (for micromachining difficult-to-machine materials) to further the aggregate throughput and yield improvements over conventional machining methods. The project described in this report advanced these leading-edge technologies through the development and verification of two platforms: a hybrid enhanced laser chassis and a multi-application testbed.

  9. Virtual reality in a children's hospital.

    PubMed

    Nihei, K; Shirakawa, K; Isshiki, N; Hirose, M; Iwata, H; Kobayashi, N

    1999-01-01

    We used virtual reality technology to improve the quality of life and amenity of in-patients in a children's hospital. Children in the hospital could visit a virtual zoo, amusement park, and aquarium, and could play virtual soccer, ski, and ride horses. They could communicate with people outside the hospital and attend the school they had gone to before admission, and they played music with children admitted to other children's hospitals. By using this virtual technology, the quality of life of children suffering from psychological and physiological stress in the hospital greatly improved; it is useful not only for their QOL but also for the healing of illness. However, such methods are still very rare. Our system is the first of its kind to be reported in a Japanese children's hospital, and both the software and the hardware of virtual reality technology for increasing the QOL of sick children need further development.

  10. Robotic Technology: An Assessment and Forecast,

    DTIC Science & Technology

    1984-07-01

    Research Associates, Inc.; Dr. Roger Nagel, Lehigh University; Dr. Charles Rosen, Machine Intelligence Corporation; and Mr. Jack Thornton, Robot Insider... (Subcontractors: systems for assembly and Adept Technology, inspection, Stanford University, SRI) AFSC MANTECH... McDonnell Douglas... Machine... supervisory control, man-machine interaction and system integration... Foreign R&D: The U.S. faces a strong technological challenge in robotics from...

  11. Machine Tool Advanced Skills Technology (MAST). Common Ground: Toward a Standards-Based Training System for the U.S. Machine Tool and Metal Related Industries. Volume 1: Executive Summary, of a 15-Volume Set of Skills Standards and Curriculum Training Materials for the Precision Manufacturing Industry.

    ERIC Educational Resources Information Center

    Texas State Technical Coll., Waco.

    The Machine Tool Advanced Skills Technology (MAST) consortium was formed to address the shortage of skilled workers for the machine tools and metals-related industries. Featuring six of the nation's leading advanced technology centers, the MAST consortium developed, tested, and disseminated industry-specific skill standards and model curricula for…

  12. Plan for conducting an international machine tool task force

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sutton, G.P.; McClure, E.R.; Schuman, J.F.

    1978-08-28

    The basic objectives of the Machine Tool Task Force (MTTF) are to characterize and summarize the state of the art of cutting machine tool technology and to identify promising areas of future R and D. These goals will be accomplished with a series of multidisciplinary teams of prominent experts and individuals experienced in the specialized technologies of machine tools or in the management of machine tool operations. Experts will be drawn from all areas of the machine tool community: machine tool users or buyer organizations, builders, and R and D establishments including universities and government laboratories, both domestic and foreign. A plan for accomplishing this task is presented. The area of machine tool technology has been divided into about two dozen technology subjects on which teams of one or more experts will work. These teams are, in turn, organized into four principal working groups dealing, respectively, with machine tool accuracy, mechanics, control, and management systems/utilization. Details are presented on specific subjects to be covered, the organization of the Task Force and its four working groups, and the basic approach to determining the state of the art of technology and the future directions of this technology. The planned review procedure, the potential benefits, our management approach, and the schedule, as well as the key participating personnel and their background, are discussed. The initial meeting of MTTF members will be held at a plenary session on October 16 and 17, 1978, in Scottsdale, AZ. The MTTF study will culminate in a conference on September 1, 1980, in Chicago, IL, immediately preceding the 1980 International Machine Tool Show. At this time, our results will be released to the public; a series of reports will be published in late 1980.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garzoglio, Gabriele

    The Fermilab Grid and Cloud Computing Department and the KISTI Global Science experimental Data hub Center are working on a multi-year Collaborative Research and Development Agreement. Building on the knowledge developed in the first year on how to provision and manage a federation of virtual machines through cloud management systems, in this second year we expanded the work on provisioning and federation, increasing both the scale and the diversity of solutions, and we started to build on-demand services on the established fabric, introducing the paradigm of Platform as a Service to assist with the execution of scientific workflows. We have enabled scientific workflows of stakeholders to run on multiple cloud resources at the scale of 1,000 concurrent machines. The demonstrations have been in the areas of (a) Virtual Infrastructure Automation and Provisioning, (b) Interoperability and Federation of Cloud Resources, and (c) On-demand Services for Scientific Workflows.
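
    To illustrate programmatic virtual-machine provisioning of the kind discussed (a generic sketch using the openstacksdk client, not the collaboration's actual provisioning stack), the example below boots a batch of worker VMs; the cloud name, image, flavor, and network are placeholders assumed to exist in a local clouds.yaml.

    # Sketch: boot a batch of worker VMs on an OpenStack cloud with openstacksdk.
    # Generic illustration; cloud/image/flavor/network names are placeholders.
    import openstack

    conn = openstack.connect(cloud="example-cloud")           # entry in clouds.yaml (assumed)
    image = conn.compute.find_image("worker-node-image")      # placeholder image name
    flavor = conn.compute.find_flavor("m1.large")             # placeholder flavor
    network = conn.network.find_network("private-net")        # placeholder network

    servers = []
    for i in range(10):                                       # scale up as needed
        server = conn.compute.create_server(
            name=f"worker-{i:03d}",
            image_id=image.id,
            flavor_id=flavor.id,
            networks=[{"uuid": network.id}],
        )
        servers.append(server)

    # Block until each VM reaches ACTIVE before handing it to the job framework.
    for server in servers:
        conn.compute.wait_for_server(server)
        print(server.name, "is ACTIVE")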

  14. Learning Motion Features for Example-Based Finger Motion Estimation for Virtual Characters

    NASA Astrophysics Data System (ADS)

    Mousas, Christos; Anagnostopoulos, Christos-Nikolaos

    2017-09-01

    This paper presents a methodology for estimating the motion of a character's fingers based on the use of motion features provided by a virtual character's hand. In the presented methodology, firstly, the motion data is segmented into discrete phases. Then, a number of motion features are computed for each motion segment of a character's hand. The motion features are pre-processed using restricted Boltzmann machines, and by using the different variations of semantically similar finger gestures in a support vector machine learning mechanism, the optimal weights for each feature assigned to a metric are computed. The advantages of the presented methodology in comparison to previous solutions are the following: First, we automate the computation of optimal weights that are assigned to each motion feature counted in our metric. Second, the presented methodology achieves an increase (about 17%) in correctly estimated finger gestures in comparison to a previous method.
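
    A hedged sketch of the weighting idea described above, reduced to training a linear SVM on pairs of semantically similar versus dissimilar gesture segments and reading feature weights off the learned hyperplane; the features, data, and the reduction to a linear SVM are simplifying assumptions, not the paper's exact mechanism.

    # Sketch: derive per-feature weights for a gesture-matching metric from a linear SVM.
    # Synthetic feature vectors; a simplification of the approach described in the record.
    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(6)
    n, n_features = 300, 8                       # assumed number of motion features per segment

    # Absolute feature differences between pairs of hand-motion segments;
    # label 1 if the pair shows semantically similar gestures, 0 otherwise.
    diff_same = np.abs(rng.normal(0.0, 0.3, (n, n_features)))       # small differences
    diff_other = np.abs(rng.normal(1.0, 0.5, (n, n_features)))      # larger differences
    X = np.vstack([diff_same, diff_other])
    y = np.array([1] * n + [0] * n)

    svm = LinearSVC(C=1.0, max_iter=10000).fit(X, y)
    weights = np.abs(svm.coef_[0])
    weights /= weights.sum()                     # normalized weights for the distance metric
    print("per-feature metric weights:", np.round(weights, 3))

    def weighted_distance(f_a, f_b):
        """Weighted metric used to rank candidate finger-motion examples."""
        return float(np.sqrt(np.sum(weights * (f_a - f_b) ** 2)))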

  15. A Cloud-based Approach to Medical NLP

    PubMed Central

    Chard, Kyle; Russell, Michael; Lussier, Yves A.; Mendonça, Eneida A; Silverstein, Jonathan C.

    2011-01-01

    Natural Language Processing (NLP) enables access to deep content embedded in medical texts. To date, NLP has not fulfilled its promise of enabling robust clinical encoding, clinical use, quality improvement, and research. We submit that this is in part due to poor accessibility, scalability, and flexibility of NLP systems. We describe here an approach and system which leverages cloud-based approaches such as virtual machines and Representational State Transfer (REST) to extract, process, synthesize, mine, compare/contrast, explore, and manage medical text data in a flexibly secure and scalable architecture. Available architectures in which our Smntx (pronounced as semantics) system can be deployed include: virtual machines in a HIPAA-protected hospital environment, brought up to run analysis over bulk data and destroyed in a local cloud; a commercial cloud for a large complex multi-institutional trial; and within other architectures such as caGrid, i2b2, or NHIN. PMID:22195072

  16. A cloud-based approach to medical NLP.

    PubMed

    Chard, Kyle; Russell, Michael; Lussier, Yves A; Mendonça, Eneida A; Silverstein, Jonathan C

    2011-01-01

    Natural Language Processing (NLP) enables access to deep content embedded in medical texts. To date, NLP has not fulfilled its promise of enabling robust clinical encoding, clinical use, quality improvement, and research. We submit that this is in part due to poor accessibility, scalability, and flexibility of NLP systems. We describe here an approach and system which leverages cloud-based approaches such as virtual machines and Representational State Transfer (REST) to extract, process, synthesize, mine, compare/contrast, explore, and manage medical text data in a flexibly secure and scalable architecture. Available architectures in which our Smntx (pronounced as semantics) system can be deployed include: virtual machines in a HIPAA-protected hospital environment, brought up to run analysis over bulk data and destroyed in a local cloud; a commercial cloud for a large complex multi-institutional trial; and within other architectures such as caGrid, i2b2, or NHIN.

  17. Hardware/software codesign for embedded RISC core

    NASA Astrophysics Data System (ADS)

    Liu, Peng

    2001-12-01

    This paper describes a hardware/software codesign method for the extendible embedded RISC core VIRGO, which is based on the MIPS-I instruction set architecture. VIRGO is described in the Verilog hardware description language, has a five-stage pipeline with a shared 32-bit cache/memory interface, and is controlled by a distributed control scheme. Every pipeline stage has one small controller, which controls the status of that stage and the cooperation among the pipeline phases. Because the description uses a high-level language and the control structure is distributed, the VIRGO core is highly extensible and can meet the requirements of the application. Taking the high-definition television MPEG2 MPHL decoder chip as an example, we constructed a hardware/software codesign virtual prototyping machine that supports research on the VIRGO core instruction set architecture, system-on-chip memory size requirements, system-on-chip software, and related issues. The system-on-chip design and the RISC instruction set can also be evaluated on this virtual prototyping machine platform.

  18. Interactive Virtual and Physical Manipulatives for Improving Students' Spatial Skills

    ERIC Educational Resources Information Center

    Ha, Oai; Fang, Ning

    2018-01-01

    An innovative educational technology called interactive virtual and physical manipulatives (VPM) is developed to improve students' spatial skills. With VPM technology, not only can students touch and play with real-world physical manipulatives in their hands but also they can see how the corresponding virtual manipulatives (i.e., computer…

  19. Application of Virtual World Technologies to Undersea Warfare Learning

    DTIC Science & Technology

    2009-08-20

    Virtual World Technologies (VWTs): "Using Virtual Worlds To Shape the Future," by Dr. Susan U. Stucky, IBM Almaden Research.

  20. Virtual reality and physical rehabilitation: a new toy or a new research and rehabilitation tool?

    PubMed Central

    Keshner, Emily A

    2004-01-01

    Virtual reality (VR) technology is rapidly becoming a popular application for physical rehabilitation and motor control research. But questions remain about whether this technology really extends our ability to influence the nervous system or whether moving within a virtual environment just motivates the individual to perform. I served as guest editor of this month's issue of the Journal of NeuroEngineering and Rehabilitation (JNER) for a group of papers on augmented and virtual reality in rehabilitation. These papers demonstrate a variety of approaches taken for applying VR technology to physical rehabilitation. The papers by Kenyon et al. and Sparto et al. address critical questions about how this technology can be applied to physical rehabilitation and research. The papers by Sveistrup and Viau et al. explore whether action within a virtual environment is equivalent to motor performance within the physical environment. Finally, papers by Riva et al. and Weiss et al. discuss the important characteristics of a virtual environment that will be most effective for obtaining changes in the motor system. PMID:15679943

  1. Human intestinal transporter database: QSAR modeling and virtual profiling of drug uptake, efflux and interactions.

    PubMed

    Sedykh, Alexander; Fourches, Denis; Duan, Jianmin; Hucke, Oliver; Garneau, Michel; Zhu, Hao; Bonneau, Pierre; Tropsha, Alexander

    2013-04-01

    Membrane transporters mediate many biological effects of chemicals and play a major role in pharmacokinetics and drug resistance. The selection of viable drug candidates among biologically active compounds requires the assessment of their transporter interaction profiles. Using public sources, we have assembled and curated the largest, to our knowledge, human intestinal transporter database (>5,000 interaction entries for >3,700 molecules). This data was used to develop thoroughly validated classification Quantitative Structure-Activity Relationship (QSAR) models of transport and/or inhibition of several major transporters including MDR1, BCRP, MRP1-4, PEPT1, ASBT, OATP2B1, OCT1, and MCT1. QSAR models have been developed with advanced machine learning techniques such as Support Vector Machines, Random Forest, and k Nearest Neighbors using Dragon and MOE chemical descriptors. These models afforded high external prediction accuracies of 71-100% estimated by 5-fold external validation, and showed hit retrieval rates with up to 20-fold enrichment in the virtual screening of DrugBank compounds. The compendium of predictive QSAR models developed in this study can be used for virtual profiling of drug candidates and/or environmental agents with the optimal transporter profiles.
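
    A hedged, illustrative sketch of the kind of classification QSAR workflow described above: a Random Forest trained on chemical descriptors, with 5-fold cross-validation standing in for the study's external validation, followed by ranking a screening library by predicted activity. The descriptor matrix, labels, and library are synthetic placeholders for Dragon/MOE descriptors and transporter interaction data.

```python
# Sketch of a classification QSAR workflow (not the study's exact models or data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 100))     # descriptor matrix (placeholder)
y = rng.integers(0, 2, size=500)    # 1 = substrate/inhibitor, 0 = inactive (placeholder)

model = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="balanced_accuracy")
print(f"5-fold accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

# Rank a hypothetical screening library by predicted probability of activity,
# analogous to virtual profiling of DrugBank compounds.
model.fit(X, y)
library = rng.normal(size=(50, 100))
ranked = np.argsort(model.predict_proba(library)[:, 1])[::-1]
print("top-ranked library compounds:", ranked[:5])
```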

  2. A Distributed Parallel Genetic Algorithm of Placement Strategy for Virtual Machines Deployment on Cloud Platform

    PubMed Central

    Dong, Yu-Shuang; Xu, Gao-Chao; Fu, Xiao-Dong

    2014-01-01

    The cloud platform provides various services to users. More and more cloud centers provide infrastructure as the main way of operating. To improve the utilization rate of the cloud center and to decrease the operating cost, the cloud center provides services according to the requirements of users by sharding the resources with virtualization. Considering both QoS for users and cost saving for cloud computing providers, we try to maximize performance and minimize energy cost as well. In this paper, we propose a distributed parallel genetic algorithm (DPGA) as a placement strategy for virtual machine deployment on the cloud platform. In the first stage, it executes the genetic algorithm in parallel and in a distributed fashion on several selected physical hosts. It then executes the genetic algorithm of the second stage with the solutions obtained from the first stage as the initial population. The solution calculated by the genetic algorithm of the second stage is the optimal one of the proposed approach. The experimental results show that the proposed placement strategy for VM deployment can ensure QoS for users and is more effective and more energy efficient than other placement strategies on the cloud platform. PMID:25097872
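
    As a rough illustration of the core encode/evaluate/evolve loop behind such a placement strategy, the sketch below runs a single-population genetic algorithm that assigns VMs to hosts while minimizing an energy-like cost (powered-on hosts plus an overload penalty). It deliberately omits the paper's two-stage, distributed execution; host capacities and VM demands are hypothetical.

```python
# Minimal single-population GA for VM placement (illustration only, not the DPGA itself).
import random

HOSTS, VMS = 8, 20
host_capacity = [16] * HOSTS                          # CPU cores per host (placeholder)
vm_demand = [random.randint(1, 4) for _ in range(VMS)]

def cost(placement):
    """Energy-like cost: number of powered-on hosts plus a penalty for overloaded hosts."""
    load = [0] * HOSTS
    for vm, host in enumerate(placement):
        load[host] += vm_demand[vm]
    active = sum(1 for l in load if l > 0)
    overload = sum(max(0, l - c) for l, c in zip(load, host_capacity))
    return active + 10 * overload

def evolve(pop_size=60, generations=200, mutation=0.1):
    pop = [[random.randrange(HOSTS) for _ in range(VMS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        parents = pop[: pop_size // 2]                # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, VMS)
            child = a[:cut] + b[cut:]                 # one-point crossover
            if random.random() < mutation:            # mutate one gene
                child[random.randrange(VMS)] = random.randrange(HOSTS)
            children.append(child)
        pop = parents + children
    return min(pop, key=cost)

best = evolve()
print("best placement:", best, "cost:", cost(best))
```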

  3. Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud

    NASA Astrophysics Data System (ADS)

    Limosani, Antonio; Boland, Lucien; Coddington, Paul; Crosby, Sean; Huang, Joanna; Sevior, Martin; Wilson, Ross; Zhang, Shunde

    2014-06-01

    The Australian Government is making a AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocations of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.
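
    A hedged sketch, not the site's actual scripts, of how such worker VMs might be launched programmatically on an OpenStack cloud using the openstacksdk library, with configuration handed to Puppet via cloud-init user data. The cloud name, image, flavor, network, and Puppet server names are hypothetical placeholders.

```python
# Assumed setup: credentials in clouds.yaml for a cloud named "research-cloud".
import base64
import openstack

conn = openstack.connect(cloud="research-cloud")

image = conn.compute.find_image("sl6-worker-base")    # base Scientific Linux image (placeholder)
flavor = conn.compute.find_flavor("m1.large")
network = conn.network.find_network("tier2-net")

# cloud-init script that triggers a Puppet run on first boot (hypothetical server name).
user_data = base64.b64encode(
    b"#!/bin/bash\npuppet agent --test --server puppet.example.org\n"
).decode()

servers = []
for i in range(4):                                    # launch a small batch of workers
    server = conn.compute.create_server(
        name=f"torque-worker-{i:02d}",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
        user_data=user_data,
    )
    servers.append(conn.compute.wait_for_server(server))

print([s.name for s in servers])
```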

  4. A distributed parallel genetic algorithm of placement strategy for virtual machines deployment on cloud platform.

    PubMed

    Dong, Yu-Shuang; Xu, Gao-Chao; Fu, Xiao-Dong

    2014-01-01

    The cloud platform provides various services to users. More and more cloud centers provide infrastructure as the main way of operating. To improve the utilization rate of the cloud center and to decrease the operating cost, the cloud center provides services according to the requirements of users by sharding the resources with virtualization. Considering both QoS for users and cost saving for cloud computing providers, we try to maximize performance and minimize energy cost as well. In this paper, we propose a distributed parallel genetic algorithm (DPGA) as a placement strategy for virtual machine deployment on the cloud platform. In the first stage, it executes the genetic algorithm in parallel and in a distributed fashion on several selected physical hosts. It then executes the genetic algorithm of the second stage with the solutions obtained from the first stage as the initial population. The solution calculated by the genetic algorithm of the second stage is the optimal one of the proposed approach. The experimental results show that the proposed placement strategy for VM deployment can ensure QoS for users and is more effective and more energy efficient than other placement strategies on the cloud platform.

  5. Virtual reality and neuropsychological assessment: The reliability of a virtual kitchen to assess daily-life activities in victims of traumatic brain injury.

    PubMed

    Besnard, Jeremy; Richard, Paul; Banville, Frederic; Nolin, Pierre; Aubin, Ghislaine; Le Gall, Didier; Richard, Isabelle; Allain, Phillippe

    2016-01-01

    Traumatic brain injury (TBI) causes impairments affecting instrumental activities of daily living (IADL). However, few studies have considered virtual reality as an ecologically valid tool for the assessment of IADL in patients who have sustained a TBI. The main objective of the present study was to examine the use of the Nonimmersive Virtual Coffee Task (NI-VCT) for IADL assessment in patients with TBI. We analyzed the performance of 19 adults suffering from TBI and 19 healthy controls (HCs) in the real and virtual tasks of making coffee with a coffee machine, as well as in global IQ and executive functions. Patients performed worse than HCs on both real and virtual tasks and on all tests of executive functions. Correlation analyses revealed that NI-VCT scores were related to scores on the real task. Moreover, regression analyses demonstrated that performance on NI-VCT matched real-task performance. Our results support the idea that the virtual kitchen is a valid tool for IADL assessment in patients who have sustained a TBI.

  6. Virtualization - A Key Cost Saver in NASA Multi-Mission Ground System Architecture

    NASA Technical Reports Server (NTRS)

    Swenson, Paul; Kreisler, Stephen; Sager, Jennifer A.; Smith, Dan

    2014-01-01

    With science team budgets being slashed, and a lack of adequate facilities for science payload teams to operate their instruments, there is a strong need for innovative new ground systems that are able to provide the necessary levels of capability, processing power, system availability and redundancy while maintaining a small footprint in terms of physical space, power utilization and cooling. The ground system architecture being presented is based on heritage from several other projects currently in development or operations at Goddard, but was designed and built specifically to meet the needs of the Science and Planetary Operations Control Center (SPOCC) as a low-cost payload command, control, planning and analysis operations center. However, the SPOCC architecture was designed to be generic enough to be re-used partially or in whole by other labs and missions (since its inception that has already happened in several cases!). The SPOCC architecture leverages a highly available VMware-based virtualization cluster with shared SAS Direct-Attached Storage (DAS) to provide an extremely high-performing, low-power-utilization and small-footprint compute environment that provides Virtual Machine resources shared among the various tenant missions in the SPOCC. The storage is also expandable, allowing future missions to chain up to 7 additional 2U chassis of storage at an extremely competitive cost if they require additional archive or virtual machine storage space. The software architecture provides a fully redundant GMSEC-based message bus architecture based on the ActiveMQ middleware to track all health and safety status within the SPOCC ground system. All virtual machines utilize the GMSEC system agents to report system host health over the GMSEC bus, and spacecraft payload health is monitored using the Hammers Integrated Test and Operations System (ITOS) Galaxy Telemetry and Command (TC) system, which performs near-real-time limit checking and data processing on the downlinked data stream and injects messages into the GMSEC bus that are monitored to automatically page the on-call operator or Systems Administrator (SA) when an off-nominal condition is detected. This architecture, like the LTSP thin clients, is shared across all tenant missions. Other required IT security controls are implemented at the ground system level, including physical access controls, logical system-level authentication and authorization management, auditing and reporting, network management, and a NIST 800-53 FISMA-Moderate IT Security Plan, Risk Assessment and Contingency Plan, helping multiple missions share the cost of compliance with agency-mandated directives. The SPOCC architecture provides science payload control centers and backup mission operations centers with a cost-effective, standardized approach to virtualizing and monitoring resources that traditionally required multiple racks full of physical machines. The increased agility in deploying new virtual systems and thin-client workstations can provide significant savings in personnel costs for maintaining the ground system. The cost savings in procurement, power, rack footprint and cooling, as well as the shared multi-mission design, greatly reduce the upfront cost for missions moving into the facility. Overall, the authors hope that this architecture will become a model for how future NASA operations centers are constructed!

  7. [The urologist of the future and new technologies].

    PubMed

    Peinado, Francois; Fernández, Atanasio; Teba, Fernando; Celada, Guillermo; Acosta, Marco Antonio

    2018-01-01

    The last 25 years have brought about revolutionary changes for medicine and in particular for urology: internet was only in its infancy, medical records were written on paper, searches for medical information were done in the hospital library, medical articles were photocopied and our relationship with patients only existed face to face. Social networks had not yet appeared and even Google did not exist. Just imagine what might happen during the next 25 years, we're going to see even more radical changes. The urologist of the future is going to see the arrival of artificial intelligence, collaborative medicine, telemedicine, machine learning, the Internet of Things and personalized robotics; in the meantime, social media will continue to transform the interaction between physician and patient. The training of urologists will also be different thanks to new learning technologies such as virtual reality or augmented reality. IBM Watson Health through its system of artificial intelligence and its learning algorithms will become our essential travel companion. The urologist of the future, as well as physician, will have to acquire the necessary technological skills in order to use all these new tools which are already on the horizon.

  8. Media-Augmented Exercise Machines

    NASA Astrophysics Data System (ADS)

    Krueger, T.

    2002-01-01

    Cardio-vascular exercise has been used to mitigate the muscle and cardiac atrophy associated with adaptation to micro-gravity environments. Several hours per day may be required. In confined spaces and long duration missions this kind of exercise is inevitably repetitive and rapidly becomes uninteresting. At the same time, there are pressures to accomplish as much as possible given the cost per hour for humans occupying orbiting or interplanetary spacecraft. Media augmentation provides the means to overlap activities in time by supplementing the exercise with social, recreational, training or collaborative activities, thereby reducing time pressures. In addition, the machine functions as an interface to a wide range of digital environments, allowing for spatial variety in an otherwise confined environment. We hypothesize that the adoption of media-augmented exercise machines will have a positive effect on psycho-social well-being on long duration missions. By organizing exercise machines, data acquisition hardware, computers and displays into an interacting system, this proposal increases functionality with limited additional mass. This paper reviews preliminary work on a project to augment exercise equipment in a manner that addresses these issues and at the same time opens possibilities for additional benefits. A testbed augmented exercise machine uses a specially built cycle trainer both as input to a virtual environment and as an output device from it, using spatialized sound, visual displays, vibration transducers and variable resistance. The resulting interactivity increases a sense of engagement in the exercise and provides a rich experience of the digital environments. Activities in the virtual environment and accompanying physiological and psychological indicators may be correlated to track and evaluate the health of the crew.

  9. Factors to keep in mind when introducing virtual microscopy.

    PubMed

    Glatz-Krieger, Katharina; Spornitz, Udo; Spatz, Alain; Mihatsch, Michael J; Glatz, Dieter

    2006-03-01

    Digitization of glass slides and delivery of so-called virtual slides (VS) emulating a real microscope over the Internet have become reality due to recent improvements in technology. We have implemented a virtual microscope for instruction of medical students and for continuing medical education. Up to 30,000 images per slide are captured using a microscope with an automated stage. The images are post-processed and then served by a plain hypertext transfer protocol (http)-server. A virtual slide client (vMic) based on Macromedia's Flash MX, a highly accepted technology available on every modern Web browser, has been developed. All necessary virtual slide parameters are stored in an XML file together with the image. Evaluation of the courses by questionnaire indicated that most students and many but not all pathologists regard virtual slides as an adequate replacement for traditional slides. All our virtual slides are publicly accessible over the World Wide Web (WWW) at http://vmic.unibas.ch . Recently, several commercially available virtual slide acquisition systems (VSAS) have been developed that use various technologies to acquire and distribute virtual slides. These systems differ in speed, image quality, compatibility, viewer functionalities and price. This paper gives an overview of the factors to keep in mind when introducing virtual microscopy.

  10. Medical robotics.

    PubMed

    Ferrigno, Giancarlo; Baroni, Guido; Casolo, Federico; De Momi, Elena; Gini, Giuseppina; Matteucci, Matteo; Pedrocchi, Alessandra

    2011-01-01

    Information and communication technology (ICT) and mechatronics play a basic role in medical robotics and computer-aided therapy. In the last three decades, in fact, ICT technology has strongly entered the health-care field, bringing in new techniques to support therapy and rehabilitation. In this frame, medical robotics is an expansion of service and professional robotics, as well as of other technologies such as surgical navigation, which has been introduced especially in minimally invasive surgery. Localization systems also provide treatments in radiotherapy and radiosurgery with high precision. Virtual or augmented reality plays a role both for surgical training and planning and for safe rehabilitation in the first stage of the recovery from neurological diseases. Also, in the chronic phase of motor diseases, robotics helps with special assistive devices and prostheses. Although, in the past, the actual need for and advantage of navigation, localization, and robotics in surgery and therapy have been in doubt, today the availability of better hardware (e.g., microrobots) and more sophisticated algorithms (e.g., machine learning and other cognitive approaches) has largely increased the field of applications of these technologies, making it more likely that, in the near future, their presence will be dramatically increased, taking advantage of the generational change of the end users and the increasing request for quality in health-care delivery and management.

  11. Social energy: mining energy from the society

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jun Jason; Gao, David Wenzhong; Zhang, Yingchen

    The inherent nature of energy, i.e., physicality, sociality and informatization, implies the inevitable and intensive interaction between energy systems and social systems. From this perspective, we define 'social energy' as a complex sociotechnical system of energy systems, social systems and the derived artificial virtual systems which characterize the intense intersystem and intra-system interactions. The recent advancement in intelligent technology, including artificial intelligence and machine learning technologies, sensing and communication in Internet of Things technologies, and massive high performance computing and extreme-scale data analytics technologies, enables the possibility of substantial advancement in socio-technical system optimization, scheduling, control and management. In this paper, we provide a discussion on the nature of energy, and then propose the concept and intention of social energy systems for electrical power. A general methodology of establishing and investigating social energy is proposed, which is based on the ACP approach, i.e., 'artificial systems' (A), 'computational experiments' (C) and 'parallel execution' (P), and parallel system methodology. A case study on the University of Denver (DU) campus grid is provided and studied to demonstrate the social energy concept. In the concluding remarks, we discuss the technical pathway, in both social and nature sciences, to social energy, and our vision on its future.

  12. Reducing Time to Science: Unidata and JupyterHub Technology Using the Jetstream Cloud

    NASA Astrophysics Data System (ADS)

    Chastang, J.; Signell, R. P.; Fischer, J. L.

    2017-12-01

    Cloud computing can accelerate scientific workflows, discovery, and collaborations by reducing research and data friction. We describe the deployment of Unidata and JupyterHub technologies on the NSF-funded XSEDE Jetstream cloud. With the aid of virtual machines and Docker technology, we deploy a Unidata JupyterHub server co-located with a Local Data Manager (LDM), THREDDS data server (TDS), and RAMADDA geoscience content management system. We provide Jupyter Notebooks and the pre-built Python environments needed to run them. The notebooks can be used for instruction and as templates for scientific experimentation and discovery. We also supply a large quantity of NCEP forecast model results to allow data-proximate analysis and visualization. In addition, users can transfer data using Globus command line tools, and perform their own data-proximate analysis and visualization with Notebook technology. These data can be shared with others via a dedicated TDS server for scientific distribution and collaboration. There are many benefits of this approach. Not only is the cloud computing environment fast, reliable and scalable, but scientists can analyze, visualize, and share data using only their web browser. No local specialized desktop software or a fast internet connection is required. This environment will enable scientists to spend less time managing their software and more time doing science.

  13. MO-FG-202-09: Virtual IMRT QA Using Machine Learning: A Multi-Institutional Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valdes, G; Scheuermann, R; Solberg, T

    Purpose: To validate a machine learning approach to Virtual IMRT QA for accurately predicting gamma passing rates using different QA devices at different institutions. Methods: A Virtual IMRT QA model was constructed using a machine learning algorithm based on 416 IMRT plans, in which QA measurements were performed using diode-array detectors and a 3% local/3 mm gamma criterion with a 10% threshold. An independent set of 139 IMRT measurements from a different institution, with QA data based on portal dosimetry using the same gamma index and 10% threshold, was used to further test the algorithm. Plans were characterized by 90 different complexity metrics. A weighted Poisson regression with Lasso regularization was trained to predict passing rates using the complexity metrics as input. Results: In addition to predicting passing rates with 3% accuracy for all composite plans using diode-array detectors, passing rates for portal dosimetry on a per-beam basis were predicted with an error <3.5% for 120 IMRT measurements. The remaining measurements (19) had large areas of low CU, where portal dosimetry has larger disagreement with the calculated dose and, as such, large errors were expected. These beams need to be further modeled to correct the under-response in low dose regions. Important features selected by Lasso to predict gamma passing rates were: complete irradiated area outline (CIAO) area, jaw position, fraction of MLC leaves with gaps smaller than 20 mm or 5 mm, fraction of the area receiving less than 50% of the total CU, fraction of the area receiving dose from the penumbra, weighted Average Irregularity Factor, and duty cycle, among others. Conclusion: We have demonstrated that Virtual IMRT QA can predict passing rates using different QA devices and across multiple institutions. Prediction of QA passing rates could have profound implications for the current IMRT process.
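
    A rough, hedged stand-in for the modeling step described above: the study trained a weighted Poisson regression with Lasso regularization, whereas the sketch below substitutes scikit-learn's plain cross-validated Lasso to show the general idea of predicting gamma passing rates from plan complexity metrics and inspecting which metrics survive the L1 penalty. All data are synthetic placeholders.

```python
# Illustration only: Lasso regression on synthetic "complexity metrics" standing in
# for the paper's weighted Poisson-Lasso model and its 416 measured plans.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
X = rng.normal(size=(416, 90))                  # 90 complexity metrics per plan (placeholder)
true_w = np.zeros(90)
true_w[:6] = rng.normal(size=6)                 # only a few metrics actually matter
y = 98 - np.abs(X @ true_w) + rng.normal(scale=0.5, size=416)   # synthetic passing rates (%)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)
model = LassoCV(cv=5).fit(scaler.transform(X_train), y_train)

pred = model.predict(scaler.transform(X_test))
print("mean absolute error (%):", np.mean(np.abs(pred - y_test)))
print("metrics retained by the L1 penalty:", np.flatnonzero(model.coef_))
```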

  14. Virtualization and cloud computing in dentistry.

    PubMed

    Chow, Frank; Muftu, Ali; Shorter, Richard

    2014-01-01

    The use of virtualization and cloud computing has changed the way we use computers. Virtualization is a method of placing software called a hypervisor on the hardware of a computer or a host operating system. It allows a guest operating system to run on top of the physical computer with a virtual machine (i.e., virtual computer). Virtualization allows multiple virtual computers to run on top of one physical computer and to share its hardware resources, such as printers, scanners, and modems. This increases the efficient use of the computer by decreasing costs (e.g., hardware, electricity, administration, and management) since only one physical computer is needed and running. This virtualization platform is the basis for cloud computing. It has expanded into areas of server and storage virtualization. One of the commonly used dental storage systems is cloud storage. Patient information is encrypted as required by the Health Insurance Portability and Accountability Act (HIPAA) and stored on off-site private cloud services for a monthly service fee. As computer costs continue to increase, so too will the need for more storage and processing power. Virtual and cloud computing will be a method for dentists to minimize costs and maximize computer efficiency in the near future. This article will provide some useful information on current uses of cloud computing.

  15. Using SecondLife Online Virtual World Technology to Introduce Educators to the Digital Culture

    NASA Technical Reports Server (NTRS)

    Jamison, John

    2008-01-01

    The rapidly changing culture resulting from new technologies and digital gaming has created an increasing language gap between traditional educators and today's learners (Natkin, 2006; Seely-Brown, 2000). This study seeks to use the online virtual world of SecondLife.com as a tool to introduce educators to this new environment for learning. This study observes the activities and perceptions of a group of educators given unscripted access to this virtual environment. The results suggest that although serious technology limitations do currently exist, the potential of this virtual world environment as a learning experience for educators is strong.

  16. Application of virtual surgical planning with computer assisted design and manufacturing technology to cranio-maxillofacial surgery.

    PubMed

    Zhao, Linping; Patel, Pravin K; Cohen, Mimis

    2012-07-01

    Computer aided design and manufacturing (CAD/CAM) technology today is the standard in the manufacturing industry. The application of CAD/CAM technology, together with the emerging 3D medical image-based virtual surgical planning (VSP) technology, to craniomaxillofacial reconstruction has been gaining increasing attention from reconstructive surgeons. This article illustrates the components, system and clinical management of the VSP and CAD/CAM technology, including data acquisition, virtual surgical and treatment planning, individual implant design and fabrication, and outcome assessment. It focuses primarily on the technical aspects of the VSP and CAD/CAM system to improve the predictability of the planning and the outcome.

  17. Virtual reality and planetary exploration

    NASA Technical Reports Server (NTRS)

    Mcgreevy, Michael W.

    1992-01-01

    NASA-Ames is intensively developing virtual-reality (VR) capabilities that can extend and augment computer-generated and remote spatial environments. VR is envisioned not only as a basis for improving human/machine interactions involved in planetary exploration, but also as a medium for the more widespread sharing of the experience of exploration, thereby broadening the support base for the lunar and planetary-exploration endeavors. Imagery representative of Mars is being gathered for VR presentation at such terrestrial sites as Antarctica and Death Valley.

  18. Is there a superior simulator for human anatomy education? How virtual dissection can overcome the anatomic and pedagogic limitations of cadaveric dissection.

    PubMed

    Darras, Kathryn E; de Bruin, Anique B H; Nicolaou, Savvas; Dahlström, Nils; Persson, Anders; van Merriënboer, Jeroen; Forster, Bruce B

    2018-03-23

    Educators must select the best tools to teach anatomy to future physicians and traditionally, cadavers have always been considered the "gold standard" simulator for living anatomy. However, new advances in technology and radiology have created new teaching tools, such as virtual dissection, which provide students with new learning opportunities. Virtual dissection is a novel way of studying human anatomy through patient computed tomography (CT) scans. Through touchscreen technology, students can work together in groups to "virtually dissect" the CT scans to better understand complex anatomic relationships. This article presents the anatomic and pedagogic limitations of cadaveric dissection and explains what virtual dissection is and how this new technology may be used to overcome these limitations.

  19. Impact of Collaborative Work on Technology Acceptance: A Case Study from Virtual Computing

    ERIC Educational Resources Information Center

    Konak, Abdullah; Kulturel-Konak, Sadan; Nasereddin, Mahdi; Bartolacci, Michael R.

    2017-01-01

    Aim/Purpose: This paper utilizes the Technology Acceptance Model (TAM) to examine the extent to which acceptance of Remote Virtual Computer Laboratories (RVCLs) is affected by students' technological backgrounds and the role of collaborative work. Background: RVCLs are widely used in information technology and cyber security education to provide…

  20. Optoelectronics technologies for Virtual Reality systems

    NASA Astrophysics Data System (ADS)

    Piszczek, Marek; Maciejewski, Marcin; Pomianek, Mateusz; Szustakowski, Mieczysław

    2017-08-01

    Solutions in the field of virtual reality are very strongly associated with optoelectronic technologies. This applies to both the design process and the operation of VR applications. Technologies such as 360 cameras and 3D scanners significantly improve the design work. What is more, HMD displays with a high field of view, optoelectronic Motion Capture systems and 3D cameras guarantee an extraordinary experience in immersive VR applications. This article reviews selected technologies from the perspective of their use in a broadly defined process of creating and implementing solutions for virtual reality. There is also the ability to create, modify and adapt new approaches that showcase the team's own work (SteamVR tracker). Most of the introduced examples are effectively used by the authors to create different VR applications. The use of optoelectronic technology in virtual reality is presented in terms of the design and operation of the system as well as with reference to specific applications. Designers and users of VR systems should take a close look at new optoelectronic solutions, as they can significantly contribute to increased work efficiency and offer completely new opportunities for virtual world reception.

  1. Industrial Inspection with Open Eyes: Advance with Machine Vision Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zheng; Ukida, H.; Niel, Kurt

    Machine vision systems have evolved significantly with the technology advances to tackle the challenges from modern manufacturing industry. A wide range of industrial inspection applications for quality control are benefiting from visual information captured by different types of cameras variously configured in a machine vision system. This chapter screens the state of the art in machine vision technologies in the light of hardware, software tools, and major algorithm advances for industrial inspection. Inspection beyond the visual spectrum offers a significant complement to visual inspection. The combination of multiple technologies makes it possible for the inspection to achieve a better performance and efficiency in varied applications. The diversity of the applications demonstrates the great potential of machine vision systems for industry.

  2. Mediating Data and Building Community for Informed, Intelligent Decision Making for the Polar Regions

    NASA Astrophysics Data System (ADS)

    Pulsifer, P. L.; Stieglitz, M.

    2017-12-01

    Much has been written about the state of data and related systems for the polar regions, however work remains to be done to achieve an envisioned integrated and well-defined pan-Arctic observing and data network that enables access to high quality data, expertise and information in support of scientific understanding, stakeholder needs, and agency operations. In this paper we argue that priorities for establishing such a network are in the area of machine-enhanced data mediation and the human aspects of community building. The authors have engaged in a U.S.-based, multi-agency process with the goal of applying modern cyberinfrastructure to improve capabilities for integrating data. A particular case-study focuses on establishing a carbon budget for the Arctic region. This effort contributes to broader global efforts aimed at establishing an international observing and data network. Results are based on a series of meetings, workshops, systems design activities, and publications. Analysis reveals that there are a large number of polar data resources interacting in a network that functions as a data ecosystem. Given the size and complexity of the network, achieving broad data discovery and access and meaningful data integration (i.e. developing a carbon budget) will require advanced techniques including machine learning, semantic mediation, and the use of highly connected virtual research environments. To achieve the aforementioned goal will require a community of engaged researchers, technologists, and stakeholders to establish requirements and the social and organizational context needed for effective machine-based approaches. The results imply that: i) the polar research and application community must be more aware of advances in technology; ii) funders must adopt a long-term, sustainable infrastructure approach to systems development; iii) the community must work together to enable interoperability; iv) we must recognize that the challenge is socio-technical and that social and community-building aspects are as important as technology. Establishing cyberinfrastructure for informed, intelligent decision making for the polar regions will require an innovative combination of emerging technologies and community-building across stakeholders.

  3. Human factors issues for interstellar spacecraft

    NASA Technical Reports Server (NTRS)

    Cohen, Marc M.; Brody, Adam R.

    1991-01-01

    Developments in research on space human factors are reviewed in the context of a self-sustaining interstellar spacecraft based on the notion of traveling space settlements. Assumptions about interstellar travel are set forth addressing costs, mission durations, and the need for multigenerational space colonies. The model of human motivation by Maslow (1970) is examined and directly related to the design of space habitat architecture. Human-factors technology issues encompass the human-machine interface, crew selection and training, and the development of spaceship infrastructure during transtellar flight. A scenario for feasible interstellar travel is based on a speed of 0.5c, a timeframe of about 100 yr, and an expandable multigenerational crew of about 100 members. Crew training is identified as a critical human-factors issue requiring the development of perceptual and cognitive aids such as expert systems and virtual reality.

  4. Volunteer Clouds and Citizen Cyberscience for LHC Physics

    NASA Astrophysics Data System (ADS)

    Aguado Sanchez, Carlos; Blomer, Jakob; Buncic, Predrag; Chen, Gang; Ellis, John; Garcia Quintas, David; Harutyunyan, Artem; Grey, Francois; Lombrana Gonzalez, Daniel; Marquina, Miguel; Mato, Pere; Rantala, Jarno; Schulz, Holger; Segal, Ben; Sharma, Archana; Skands, Peter; Weir, David; Wu, Jie; Wu, Wenjing; Yadav, Rohit

    2011-12-01

    Computing for the LHC, and for HEP more generally, is traditionally viewed as requiring specialized infrastructure and software environments, and therefore not compatible with the recent trend in "volunteer computing", where volunteers supply free processing time on ordinary PCs and laptops via standard Internet connections. In this paper, we demonstrate that with the use of virtual machine technology, at least some standard LHC computing tasks can be tackled with volunteer computing resources. Specifically, by presenting volunteer computing resources to HEP scientists as a "volunteer cloud", essentially identical to a Grid or dedicated cluster from a job submission perspective, LHC simulations can be processed effectively. This article outlines both the technical steps required for such a solution and the implications for LHC computing as well as for LHC public outreach and for participation by scientists from developing regions in LHC research.

  5. Automated Inference of Chemical Discriminants of Biological Activity.

    PubMed

    Raschka, Sebastian; Scott, Anne M; Huertas, Mar; Li, Weiming; Kuhn, Leslie A

    2018-01-01

    Ligand-based virtual screening has become a standard technique for the efficient discovery of bioactive small molecules. Following assays to determine the activity of compounds selected by virtual screening, or other approaches in which dozens to thousands of molecules have been tested, machine learning techniques make it straightforward to discover the patterns of chemical groups that correlate with the desired biological activity. Defining the chemical features that generate activity can be used to guide the selection of molecules for subsequent rounds of screening and assaying, as well as help design new, more active molecules for organic synthesis. The quantitative structure-activity relationship machine learning protocols we describe here, using decision trees, random forests, and sequential feature selection, take as input the chemical structure of a single, known active small molecule (e.g., an inhibitor, agonist, or substrate) for comparison with the structure of each tested molecule. Knowledge of the atomic structure of the protein target and its interactions with the active compound are not required. These protocols can be modified and applied to any data set that consists of a series of measured structural, chemical, or other features for each tested molecule, along with the experimentally measured value of the response variable you would like to predict or optimize for your project, for instance, inhibitory activity in a biological assay or the ΔG of binding. To illustrate the use of different machine learning algorithms, we step through the analysis of a dataset of inhibitor candidates from virtual screening that were tested recently for their ability to inhibit GPCR-mediated signaling in a vertebrate.
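
    A minimal, hedged sketch of the sequential-feature-selection step mentioned above: a decision tree is wrapped in scikit-learn's SequentialFeatureSelector to pick a small subset of chemical features that best predicts assay activity. The presence/absence feature matrix and the activity rule are synthetic placeholders, not the paper's screening data.

```python
# Illustration only: forward sequential feature selection with a decision tree.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(120, 30)).astype(float)   # presence/absence of 30 chemical groups
y = (X[:, 3] * (1 - X[:, 17]) > 0).astype(int)          # synthetic "active" rule for illustration

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
selector = SequentialFeatureSelector(tree, n_features_to_select=5, direction="forward", cv=5)
selector.fit(X, y)

selected = np.flatnonzero(selector.get_support())
print("selected feature indices:", selected)

# Refit the tree on the selected features only and check training accuracy.
tree.fit(X[:, selected], y)
print("training accuracy on selected features:", tree.score(X[:, selected], y))
```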

  6. Designing communication and remote controlling of virtual instrument network system

    NASA Astrophysics Data System (ADS)

    Lei, Lin; Wang, Houjun; Zhou, Xue; Zhou, Wenjian

    2005-01-01

    In this paper, a virtual instrument network over a LAN, and ultimately remote control of virtual instruments, is realized based on virtual instrument technology and the LabWindows/CVI software platform. The virtual instrument network system is made up of three subsystems: a server subsystem, a telnet client subsystem and a local instrument control subsystem. The structure of the LabWindows-based virtual instrument network is introduced in detail. The essential techniques are described, including the application design of the virtual instrument network communication, the client/server programming mode, the realization of communication between a remote PC and the server, the hand-over of control of the workstation, and the server program. The virtual instrument network may also be connected to the wider Internet. The above technology has been verified through its application in an electronic measurement virtual instrument network that has already been built, confirming its practical value. Experiments and applications validate that this design is effective.
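
    A hedged illustration, in Python rather than LabWindows/CVI, of the client/server pattern described above: a server process fronts a virtual instrument and executes simple text commands sent by a remote telnet-style client over the LAN. The command names, canned reading, and port number are hypothetical.

```python
# Toy virtual-instrument server plus a remote client, communicating over TCP.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5025

def instrument_server():
    """Accept one client and answer simple measurement/control commands."""
    state = {"range": "10V"}
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            for line in conn.makefile("r"):
                cmd = line.strip()
                if cmd == "MEAS?":
                    conn.sendall(b"+1.2345E+00\n")      # canned reading for the sketch
                elif cmd.startswith("RANGE "):
                    state["range"] = cmd.split()[1]
                    conn.sendall(b"OK\n")
                elif cmd == "QUIT":
                    break

threading.Thread(target=instrument_server, daemon=True).start()
time.sleep(0.2)                                          # give the server time to bind

# Remote client: send control and query commands over the LAN connection.
with socket.create_connection((HOST, PORT)) as cli:
    f = cli.makefile("rw")
    for command in ("RANGE 1V", "MEAS?", "QUIT"):
        f.write(command + "\n")
        f.flush()
        if command != "QUIT":
            print(command, "->", f.readline().strip())
```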

  7. Health care globalization: a need for virtual leadership.

    PubMed

    Holland, J Brian; Malvey, Donna; Fottler, Myron D

    2009-01-01

    As health care organizations expand and move into global markets, they face many leadership challenges, including the difficulty of leading individuals who are geographically dispersed. This article provides global managers with guidelines for leading and motivating individuals or teams from a distance while overcoming the typical challenges that "virtual leaders" and "virtual teams" face: employee isolation, confusion, language barriers, cultural differences, and technological breakdowns. Fortunately, technological advances in communications have provided various methods to accommodate geographically dispersed or "global virtual teams." Health care leaders now have the ability to lead global teams from afar by becoming "virtual leaders" with a responsibility to lead a "virtual team." Three models of globalization presented and discussed are outsourcing of health care services, medical tourism, and telerobotics. These models require global managers to lead virtually, and a positive relationship between the virtual leader and the virtual team member is vital in the success of global health care organizations.

  8. Conception et mise au point d'un emulateur de machine Synchrone trapezoidale a aimants permanents

    NASA Astrophysics Data System (ADS)

    Lessard, Francois

    The development of technology leads inevitably to higher systems' complexity faced by engineers. Over time, tools are often developed in parallel with the main systems to ensure their sustainability. The work presented in this document provides a new tool for testing motor drives. In general, this project refers to active loads, which are complex dynamic loads emulated electronically with a static converter. Specifically, this document proposes and implements a system whose purpose is to recreate the behaviour of a trapezoidal permanent magnet synchronous machine. The ultimate goal is to connect a motor drive to the three terminals of the motor emulator, as it would be connected to a real motor. The emulator's response, when subjected to disturbances from the motor drive, is then ideally identical to that of a real motor. The motor emulator adds significant versatility to a test bench because the electrical and mechanical parameters of the application can be easily modified. The work is divided into two main parts: the static converter and the real-time simulation. Together, these two entities form a PHIL (Power Hardware-in-the-Loop) real-time simulation. The static converter enables the exchange of real power between the motor drive and the real-time simulation. The latter gives the application the intelligence needed to interact with the motor drive in such a way that the desired behaviour is recreated. The main partner of this project, Opal-RT, supports this development. Keywords: virtual machine, PHIL, real-time simulation, electronic load
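
    A hedged sketch of the core equations such an emulator reproduces in real time: the normalized trapezoidal back-EMF shape function of a BLDC-style machine, the per-phase back-EMFs, and the electromagnetic torque. The machine parameters below are hypothetical placeholders, not those of the thesis test bench.

```python
# Trapezoidal PMSM (BLDC-style) back-EMF and torque model, for illustration only.
import math

KE = 0.05          # back-EMF constant per phase, V.s/rad (placeholder)
POLE_PAIRS = 4     # placeholder

def trap(theta_e):
    """Normalized trapezoidal shape function: flat at +/-1 over 120-degree spans."""
    t = theta_e % (2 * math.pi)
    if t < 2 * math.pi / 3:
        return 1.0
    if t < math.pi:
        return 1.0 - (6 / math.pi) * (t - 2 * math.pi / 3)
    if t < 5 * math.pi / 3:
        return -1.0
    return -1.0 + (6 / math.pi) * (t - 5 * math.pi / 3)

def back_emfs(theta_e, omega_m):
    """Per-phase back-EMFs (e_a, e_b, e_c) at electrical angle theta_e and mechanical speed omega_m."""
    return tuple(KE * omega_m * trap(theta_e - k * 2 * math.pi / 3) for k in range(3))

def torque(theta_e, omega_m, i_abc):
    """Electromagnetic torque T_e = (e_a*i_a + e_b*i_b + e_c*i_c) / omega_m."""
    e = back_emfs(theta_e, omega_m)
    return sum(ei * ii for ei, ii in zip(e, i_abc)) / omega_m if omega_m else 0.0

# Example: rotor at 100 rad/s mechanical, with a simple two-phase conduction current pattern.
theta_m, omega_m = 0.3, 100.0
theta_e = POLE_PAIRS * theta_m
print(back_emfs(theta_e, omega_m), torque(theta_e, omega_m, (5.0, -5.0, 0.0)))
```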

  9. Electro-chemical grinding

    NASA Technical Reports Server (NTRS)

    Feagans, P. L.

    1972-01-01

    Electro-chemical grinding technique has rotation speed control, constant feed rates, and contour control. Hypersonic engine parts of nickel alloys can be almost 100% machined, keeping tool pressure at virtual zero. Technique eliminates galling and permits constant surface finish and burr-free interrupted cutting.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simpson, Wayne; Borders, Tammie

    INL successfully developed a proof of concept for "Software Defined Anything" by emulating the laboratory's business applications that run on Virtual Machines. The work INL conducted demonstrates to industry how this methodology can be used to improve security, automate and repeat processes, and improve consistency.

  11. An Examination of Usability of a Virtual Environment for Students Enrolled in a College of Agriculture

    ERIC Educational Resources Information Center

    Murphrey, Theresa Pesl; Rutherford, Tracy A.; Doerfert, David L.; Edgar, Leslie D.; Edgar, Don W.

    2014-01-01

    Educational technology continues to expand with multi-user virtual environments (e.g., Second Life™) being the latest technology. Understanding a virtual environment's usability can enhance educational planning and effective use. Usability includes the interaction quality between an individual and the item being assessed. The purpose was to assess…

  12. Virtual Classroom Instruction and Academic Performance of Educational Technology Students in Distance Education, Enugu State

    ERIC Educational Resources Information Center

    Akpan, Sylvester J.; Etim, Paulinus J.; Udom, Stella Ogechi

    2016-01-01

    The virtual classroom and distance education have created new teaching pedagogy. This study was carried out to investigate Virtual Classroom Instruction on Academic Performance of Educational Technology Students in Distance Education, Enugu State. The population for this study was limited to the Students in National Open University, Enugu study…

  13. The Blended Classroom Revolution: Virtual Technology Goes to School

    ERIC Educational Resources Information Center

    Weil, Marty

    2009-01-01

    While virtual schools, which currently serve only a tiny fraction of the nation's 48 million K-12 students, get all the buzz, a much bigger, largely untold story of online learning is unfolding in America's brick-and-mortar classrooms: a simple yet profound merger of virtual-school technology and the traditional classroom is taking place. This…

  14. Knowledge Navigation for Virtual Vehicles

    NASA Technical Reports Server (NTRS)

    Gomez, Julian E.

    2004-01-01

    A virtual vehicle is a digital model of the knowledge surrounding a potentially real vehicle. Knowledge consists not only of the tangible information, such as CAD, but also what is known about the knowledge - its metadata. This paper is an overview of technologies relevant to building a virtual vehicle, and an assessment of how to bring those technologies together.

  15. Virtual Worlds to Support Patient Group Communication? A Questionnaire Study Investigating Potential for Virtual World Focus Group Use by Respiratory Patients

    ERIC Educational Resources Information Center

    Taylor, Michael J.; Taylor, Dave; Vlaev, Ivo; Elkin, Sarah

    2017-01-01

    Recent advances in communication technologies enable potential provision of remote education for patients using computer-generated environments known as virtual worlds. Previous research has revealed highly variable levels of patient receptiveness to using information technologies for healthcare-related purposes. This preliminary study involved…

  16. Develop, Build, and Test a Virtual Lab to Support a Vulnerability Training System

    DTIC Science & Technology

    2004-09-01

    Dell PowerEdge 1650 dual-processor blade servers were configured as host machines running VMware and VNC on a Linux RedHat 9 kernel, and an Apache-Tomcat web server was configured as the external interface.

  17. Increasing Realism in Virtual Marksmanship Simulators

    DTIC Science & Technology

    2012-12-01

    ISMT has the capability to use a wide variety of weapons, including the M16 5.56 mm service rifle, the M2 .50-caliber machine gun, the M240 7.62 mm machine gun, and the M9 9 mm Beretta, and provides immediate feedback to the instructor and trainee on weapon trigger pull, cant position, and barrel...

  18. Mega-Amp Opening Switch with Nested Electrodes/Pulsed Generator of Ion and Ion Cluster Beams

    DTIC Science & Technology

    1987-07-30

    The use of a plasma focus as a mega-amp opening switch has been demonstrated by two modes of operation: (a) Single shot mode; (b) Repetitive Mode...energy level and under the same voltage and filling-pressure conditions but without field distortion elements. Misfirings of the plasma focus machine...are also virtually eliminated by using FDE at the coaxial electrode breech. The tests (based on about 10000 shots and five plasma focus machines

  19. [THE VIRTUAL CYTOLOGIC SLIDES FOR EXTERNAL EVALUATION OF QUALITY OF IMPLEMENTATION OF CYTOLOGIC ANALYSES IN CLINICAL DIAGNOSTIC LABORATORIES: POSSIBILITIES AND PERSPECTIVES].

    PubMed

    Djangirova, T V; Shabalova, I P; Pronichev, A N; Polyakov, E V

    2015-08-01

    The article considers the application of cytological slide analysis technology in the external quality control of clinical diagnostic laboratories. The advantages of virtual slides are demonstrated against other technologies applied in external quality evaluation, i.e., slide plates and digital micro-photography. The conditions for preparing virtual slides for the external quality evaluation of clinical diagnostic laboratories are presented, and the technology of their application is described. The success of the practical application of the considered technology in the Federal system of external quality evaluation is emphasized.

  20. 'Putting it on the table': direct-manipulative interaction and multi-user display technologies for semi-immersive environments and augmented reality applications.

    PubMed

    Encarnação, L Miguel; Bimber, Oliver

    2002-01-01

    Collaborative virtual environments for diagnosis and treatment planning are increasingly gaining importance in our global society. Virtual and Augmented Reality approaches promised to provide valuable means for the involved interactive data analysis, but the underlying technologies still create a cumbersome work environment that is inadequate for clinical employment. This paper addresses two of the shortcomings of such technology: Intuitive interaction with multi-dimensional data in immersive and semi-immersive environments as well as stereoscopic multi-user displays combining the advantages of Virtual and Augmented Reality technology.
