Computer Associates International, CA-ACF2/VM Release 3.1
1987-09-09
Computer Associates CA-ACF2/VM Bibliography:
International Business Machines Corporation, IBM Virtual Machine/Directory Maintenance Program Logic Manual, publication number LY20-0889
International Business Machines Corporation, IBM System/370 Principles of Operation, publication number GA22-7000
International Business Machines Corporation, IBM Virtual Machine/Directory Maintenance Installation and System Administrator's...
LHCb experience with running jobs in virtual machines
NASA Astrophysics Data System (ADS)
McNab, A.; Stagni, F.; Luzzi, C.
2015-12-01
The LHCb experiment has been running production jobs in virtual machines since 2013 as part of its DIRAC-based infrastructure. We describe the architecture of these virtual machines and the steps taken to replicate the WLCG worker node environment expected by user and production jobs. This relies on the uCernVM system for providing root images for virtual machines. We use the CernVM-FS distributed filesystem to supply the root partition files, the LHCb software stack, and the bootstrapping scripts necessary to configure the virtual machines for us. Using this approach, we have been able to minimise the amount of contextualisation which must be provided by the virtual machine managers. We explain the process by which the virtual machine is able to receive payload jobs submitted to DIRAC by users and production managers, and how this differs from payloads executed within conventional DIRAC pilot jobs on batch queue based sites. We describe our operational experiences in running production on VM based sites managed using Vcycle/OpenStack, Vac, and HTCondor Vacuum. Finally we show how our use of these resources is monitored using Ganglia and DIRAC.
NASA Astrophysics Data System (ADS)
Berzano, D.; Blomer, J.; Buncic, P.; Charalampidis, I.; Ganis, G.; Meusel, R.
2015-12-01
Cloud resources nowadays contribute an essential share of resources for computing in high-energy physics. Such resources can be provided either by private or public IaaS clouds (e.g. OpenStack, Amazon EC2, Google Compute Engine) or by volunteer computers (e.g. LHC@Home 2.0). In any case, experiments need to prepare a virtual machine image that provides the execution environment for the physics application at hand. The CernVM virtual machine since version 3 is a minimal and versatile virtual machine image capable of booting different operating systems. The virtual machine image is less than 20 megabytes in size. The actual operating system is delivered on demand by the CernVM File System. CernVM 3 has matured from a prototype to a production environment. It is used, for instance, to run LHC applications in the cloud, to tune event generators using a network of volunteer computers, and as a container for the historic Scientific Linux 5 and Scientific Linux 4 based software environments in the course of long-term data preservation efforts of the ALICE, CMS, and ALEPH experiments. We present experience and lessons learned from the use of CernVM at scale. We also provide an outlook on the upcoming developments. These developments include adding support for Scientific Linux 7, the use of container virtualization, such as provided by Docker, and the streamlining of virtual machine contextualization towards the cloud-init industry standard.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S; Henz, Brian J
2012-01-01
In prior work (Yoginath and Perumalla, 2011; Yoginath, Perumalla and Henz, 2012), the motivation, challenges and issues were articulated in favor of virtual time ordering of Virtual Machines (VMs) in network simulations hosted on multi-core machines. Two major components in the overall virtualization challenge are (1) virtual timeline establishment and scheduling of VMs, and (2) virtualization of inter-VM communication. Here, we extend prior work by presenting scaling results for the first component, with experiment results on up to 128 VMs scheduled in virtual time order on a single 12-core host. We also explore the solution space of design alternatives for the second component, and present performance results from a multi-threaded, multi-queue implementation of inter-VM network control for synchronized execution with VM scheduling, incorporated in our NetWarp simulation system.
CFCC: A Covert Flows Confinement Mechanism for Virtual Machine Coalitions
NASA Astrophysics Data System (ADS)
Cheng, Ge; Jin, Hai; Zou, Deqing; Shi, Lei; Ohoussou, Alex K.
Normally, virtualization technology is adopted to construct the infrastructure of a cloud computing environment. Resources are managed and organized dynamically through virtual machine (VM) coalitions in accordance with the requirements of applications. Enforcing mandatory access control (MAC) on VM coalitions greatly improves the security of VM-based cloud computing. However, existing MAC models lack a mechanism to confine covert flows and struggle to eliminate covert channels. In this paper, we propose a covert flows confinement mechanism for virtual machine coalitions (CFCC), which introduces dynamic conflicts of interest based on the activity history of VMs, each of which is attached with a label. The proposed mechanism can be used to confine covert flows between VMs in different coalitions. We implement a prototype system, evaluate its performance, and show that our mechanism is practical.
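The dynamic conflict-of-interest idea resembles a Chinese-Wall policy evaluated over VM activity histories. The following minimal sketch is an illustration of that general idea only; the class, labels, and data structures are assumptions, not taken from the CFCC paper.

```python
# Hypothetical Chinese-Wall-style covert-flow check over VM histories.
# All names here are illustrative assumptions, not CFCC's actual design.

class CovertFlowGuard:
    """Tracks each VM's activity history and denies flows that would
    join two coalitions declared to be in conflict."""

    def __init__(self, conflicts):
        # conflicts: iterable of coalition-label pairs that must not mix
        self.conflicts = {frozenset(pair) for pair in conflicts}
        self.history = {}  # vm id -> set of coalition labels it has touched

    def allow_flow(self, src_vm, dst_vm, src_label, dst_label):
        src_hist = self.history.setdefault(src_vm, set()) | {src_label}
        dst_hist = self.history.setdefault(dst_vm, set()) | {dst_label}
        # A flow is denied if the combined history would mix two
        # conflicting coalitions (the covert-flow confinement step).
        combined = src_hist | dst_hist
        for a in combined:
            for b in combined:
                if frozenset((a, b)) in self.conflicts:
                    return False
        # Record the flow in both histories (the "dynamic" part).
        self.history[src_vm] = combined
        self.history[dst_vm] = combined
        return True

guard = CovertFlowGuard(conflicts=[("bank_a", "bank_b")])
assert guard.allow_flow("vm1", "vm2", "bank_a", "bank_a")
assert not guard.allow_flow("vm2", "vm3", "bank_a", "bank_b")
```

Because histories grow as flows occur, a VM that has touched one coalition is dynamically walled off from its conflicting coalition, even through intermediaries.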
A Cooperative Approach to Virtual Machine Based Fault Injection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naughton III, Thomas J; Engelmann, Christian; Vallee, Geoffroy R
Resilience investigations often employ fault injection (FI) tools to study the effects of simulated errors on a target system. It is important to keep the target system under test (SUT) isolated from the controlling environment in order to maintain control of the experiment. Virtual machines (VMs) have been used to aid these investigations due to the strong isolation properties of system-level virtualization. A key challenge in fault injection tools is to gain proper insight and context about the SUT. In VM-based FI tools, this challenge of target context is increased due to the separation between host and guest (VM). We discuss an approach to VM-based FI that leverages virtual machine introspection (VMI) methods to gain insight into the target's context running within the VM. The key to this environment is the ability to provide basic information to the FI system that can be used to create a map of the target environment. We describe a proof-of-concept implementation and a demonstration of its use to introduce simulated soft errors into an iterative solver benchmark running in user-space of a guest VM.
Optimized Hypervisor Scheduler for Parallel Discrete Event Simulations on Virtual Machine Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S
2013-01-01
With the advent of virtual machine (VM)-based platforms for parallel computing, it is now possible to execute parallel discrete event simulations (PDES) over multiple virtual machines, in contrast to executing in native mode directly over hardware as has traditionally been done over the past decades. While mature VM-based parallel systems now offer new, compelling benefits such as serviceability, dynamic reconfigurability and overall cost effectiveness, the runtime performance of parallel applications can be significantly affected. In particular, most VM-based platforms are optimized for general workloads, but PDES execution exhibits unique dynamics significantly different from other workloads. Here we first present results from experiments that highlight the gross deterioration of the runtime performance of VM-based PDES simulations when executed using traditional VM schedulers, quantitatively showing the bad scaling properties of the scheduler as the number of VMs is increased. The mismatch is fundamental in nature in the sense that any fairness-based VM scheduler implementation would exhibit this mismatch with PDES runs. We also present a new scheduler optimized specifically for PDES applications, and describe its design and implementation. Experimental results obtained from running PDES benchmarks (PHOLD and vehicular traffic simulations) over VMs show over an order of magnitude improvement in the run time of the PDES-optimized scheduler relative to the regular VM scheduler, with over 20× reduction in run time of simulations using up to 64 VMs. The observations and results are timely in the context of emerging systems such as cloud platforms and VM-based high performance computing installations, highlighting to the community the need for PDES-specific support, and the feasibility of significantly reducing the runtime overhead for scalable PDES on VM platforms.
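The core mismatch can be made concrete with a toy model. The sketch below is an assumption-laden illustration, not the scheduler described in the abstract: it dispatches whichever VM has the lowest virtual time, whereas a fairness-based scheduler would rotate through VMs regardless of their virtual clocks.

```python
# Minimal sketch of lowest-virtual-time-first VM dispatch (illustrative;
# not the actual hypervisor scheduler from the paper).
import heapq

def schedule_lvt_first(vm_clocks, quantum, steps):
    """vm_clocks: dict vm -> current virtual time.
    Returns the dispatch order over `steps` scheduling decisions."""
    heap = [(t, vm) for vm, t in vm_clocks.items()]
    heapq.heapify(heap)
    order = []
    for _ in range(steps):
        t, vm = heapq.heappop(heap)              # VM furthest behind in virtual time
        order.append(vm)
        heapq.heappush(heap, (t + quantum, vm))  # it advances by one quantum
    return order

# The laggard VM is dispatched repeatedly until it catches up, which is
# exactly what a fairness-based scheduler refuses to do.
order = schedule_lvt_first({"vm_a": 0.0, "vm_b": 4.5}, quantum=1.0, steps=6)
assert order == ["vm_a"] * 5 + ["vm_b"]
```

In PDES terms, letting the slowest virtual clock run first keeps logical processes close in virtual time and reduces time-ordering violations across VMs.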
Global Detection of Live Virtual Machine Migration Based on Cellular Neural Networks
Xie, Kang; Yang, Yixian; Zhang, Ling; Jing, Maohua; Xin, Yang; Li, Zhongxian
2014-01-01
To meet the demands of monitoring large-scale, autoscaling, and heterogeneous virtual resources in existing cloud computing, a new live virtual machine (VM) migration detection algorithm based on cellular neural networks (CNNs) is presented. Through analysis of the detection process, the parameter relationship of the CNN is mapped to an optimization problem, which is solved with an improved particle swarm optimization algorithm based on bubble sort. Experimental results demonstrate that the proposed method can display the VM migration process intuitively. Compared with the best-fit heuristic algorithm, this approach reduces processing time, and the evidence indicates that it is amenable to parallelism and analog very large scale integration (VLSI) implementation, allowing VM migration detection to be performed better. PMID:24959631
ERIC Educational Resources Information Center
Sukwong, Orathai
2013-01-01
Virtualization enables the consolidation of multiple servers on a single physical machine, increasing infrastructure utilization. Maximizing the ratio of server virtual machines (VMs) to physical machines, namely the consolidation ratio, becomes an important goal toward infrastructure cost saving in a cloud. However, the consolidation…
Modeling the Virtual Machine Launching Overhead under Fermicloud
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garzoglio, Gabriele; Wu, Hao; Ren, Shangping
FermiCloud is a private cloud developed by the Fermi National Accelerator Laboratory for scientific workflows. The Cloud Bursting module of FermiCloud enables it, when more computational resources are needed, to automatically launch virtual machines on available resources such as public clouds. One of the main challenges in developing the cloud bursting module is deciding when and where to launch a VM so that all resources are most effectively and efficiently utilized and system performance is optimized. However, based on FermiCloud's operational data, the VM launching overhead is not constant: it varies with physical resource (CPU, memory, I/O device) utilization at the time the VM is launched. Hence, to make judicious decisions as to when and where a VM should be launched, a VM launch overhead reference model is needed. This paper develops such a reference model based on operational data obtained on FermiCloud and uses it to guide the cloud bursting process.
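In its simplest form, a launch overhead reference model of this kind is a regression of observed launch times against host utilization at launch. The sketch below fits a one-variable ordinary-least-squares model; the data and coefficients are synthetic assumptions, not FermiCloud measurements.

```python
# Illustrative linear reference model of VM launch overhead vs. host
# utilization, fitted by ordinary least squares on made-up data.

def fit_overhead_model(samples):
    """samples: list of (utilization, launch_seconds) observations.
    Returns (intercept, slope) of overhead ~ intercept + slope * utilization."""
    n = len(samples)
    mean_u = sum(u for u, _ in samples) / n
    mean_t = sum(t for _, t in samples) / n
    cov = sum((u - mean_u) * (t - mean_t) for u, t in samples)
    var = sum((u - mean_u) ** 2 for u, _ in samples)
    slope = cov / var
    return mean_t - slope * mean_u, slope

def predict(model, utilization):
    intercept, slope = model
    return intercept + slope * utilization

# Launch overhead grows with utilization in this synthetic data set
# (t = 30 + 20u seconds).
model = fit_overhead_model([(0.1, 32.0), (0.5, 40.0), (0.9, 48.0)])
assert abs(predict(model, 0.5) - 40.0) < 1e-9
```

A cloud bursting policy could then compare `predict(model, current_utilization)` across candidate hosts before choosing where to launch; the real model would of course use CPU, memory, and I/O utilization jointly.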
ERIC Educational Resources Information Center
Srinivasan, Deepa
2013-01-01
Recent rapid malware growth has exposed the limitations of traditional in-host malware-defense systems and motivated the development of secure virtualization-based solutions. By running vulnerable systems as virtual machines (VMs) and moving security software from inside VMs to the outside, the out-of-VM solutions securely isolate the anti-malware…
A Novel Artificial Bee Colony Approach of Live Virtual Machine Migration Policy Using Bayes Theorem
Xu, Gaochao; Hu, Liang; Fu, Xiaodong
2013-01-01
Green cloud data centers have become a research hotspot of virtualized cloud computing architecture. Since live virtual machine (VM) migration technology is widely used and studied in cloud computing, we have focused on the VM placement selection of live migration for power saving. We present a novel heuristic approach called PS-ABC. Its algorithm has two parts. The first combines the artificial bee colony (ABC) idea with uniform random initialization, binary search, and a Boltzmann selection policy to achieve an improved ABC-based approach with better global exploration and local exploitation ability. The second uses Bayes' theorem to further optimize the improved ABC-based process and reach the final optimal solution faster. As a result, the whole approach achieves a long-term efficient optimization for power saving. The experimental results demonstrate that PS-ABC evidently reduces the total incremental power consumption and better protects the performance of VM running and migrating compared with the existing research, making the result of live VM migration more effective and meaningful. PMID:24385877
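One ingredient the abstract names, Boltzmann selection, can be sketched concretely. The energy function, temperature, and host names below are illustrative assumptions, not PS-ABC's actual parameters.

```python
# Hedged sketch of Boltzmann selection over candidate hosts, as used
# inside ABC-style searches. Energies and hosts are made-up examples.
import math, random

def boltzmann_select(candidates, energy, temperature, rng=random.random):
    """Pick a candidate with probability proportional to exp(-E/T), so
    low-energy (low incremental power) placements dominate as T falls."""
    weights = [math.exp(-energy(c) / temperature) for c in candidates]
    total = sum(weights)
    r = rng() * total
    for c, w in zip(candidates, weights):
        r -= w
        if r <= 0:
            return c
    return candidates[-1]

# With a deterministic rng and a low temperature, the minimum-power
# host is selected.
hosts = ["host_a", "host_b", "host_c"]
power = {"host_a": 120.0, "host_b": 90.0, "host_c": 150.0}.__getitem__
assert boltzmann_select(hosts, power, temperature=1.0, rng=lambda: 0.5) == "host_b"
```

At high temperature the selection is nearly uniform (exploration); lowering the temperature concentrates probability on the best placement (exploitation), which is why such a policy pairs naturally with swarm-style searches.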
Angiuoli, Samuel V; Matalka, Malcolm; Gussman, Aaron; Galens, Kevin; Vangala, Mahesh; Riley, David R; Arze, Cesar; White, James R; White, Owen; Fricke, W Florian
2011-08-30
Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines to distribute pre-packaged, pre-configured software. We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. The CloVR VM and associated architecture lowers the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high throughput data processing.
Efficiently Scheduling Multi-core Guest Virtual Machines on Multi-core Hosts in Network Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S
2011-01-01
Virtual machine (VM)-based simulation is a method used by network simulators to incorporate realistic application behaviors by executing actual VMs as high-fidelity surrogates for simulated end-hosts. A critical requirement in such a method is the simulation time-ordered scheduling and execution of the VMs. Prior approaches such as time dilation are less efficient due to the high degree of multiplexing possible when multiple multi-core VMs are simulated on multi-core host systems. We present a new simulation time-ordered scheduler to efficiently schedule multi-core VMs on multi-core real hosts, with a virtual clock realized on each virtual core. The distinguishing features of our approach are: (1) customizable granularity of the VM scheduling time unit on the simulation time axis, (2) ability of VMs to take arbitrary leaps in virtual time to maximize the utilization of host (real) cores when guest virtual cores idle, and (3) empirically determinable optimality in the tradeoff between total execution (real) time and time-ordering accuracy levels. Experiments show that it is possible to get nearly perfect time-ordered execution, with a slight cost in total run time, relative to optimized non-simulation VM schedulers. Interestingly, with our time-ordered scheduler, it is also possible to reduce the time-ordering error from over 50% with a non-simulation scheduler to less than 1% with our scheduler, with almost the same run time efficiency as that of the highly efficient non-simulation VM schedulers.
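Feature (2), the arbitrary leap in virtual time, can be illustrated with a small sketch. The semantics below are assumed for illustration and do not come from the NetWarp implementation: a virtual core whose next event lies beyond the current quantum simply leaps its clock to that event and yields its host core.

```python
# Sketch (assumed semantics, not NetWarp code) of virtual-time leaps:
# an idle virtual core jumps its clock to its next event time, freeing
# the host core instead of burning a quantum doing nothing.

def next_dispatch(cores, quantum):
    """cores: dict core -> {"clock": t, "next_event": t_or_None}.
    Cores idle beyond the current quantum leap forward; the rest are
    dispatched in virtual-time order."""
    runnable = []
    for name, c in cores.items():
        if c["next_event"] is not None and c["next_event"] > c["clock"] + quantum:
            c["clock"] = c["next_event"]   # arbitrary leap in virtual time
        else:
            runnable.append(name)
    return sorted(runnable, key=lambda n: cores[n]["clock"])

cores = {
    "vcore0": {"clock": 0.0, "next_event": 0.5},    # busy soon: runnable
    "vcore1": {"clock": 0.0, "next_event": 100.0},  # idle: leaps to t=100
}
assert next_dispatch(cores, quantum=1.0) == ["vcore0"]
assert cores["vcore1"]["clock"] == 100.0
```

The leap is what keeps host cores utilized: without it, the scheduler would repeatedly dispatch an idle virtual core just to advance its clock in small steps.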
Virtual Machine-level Software Transactional Memory: Principles, Techniques, and Implementation
2015-08-13
[Fragmentary excerpt] The VM-level STM is evaluated against non-VM STMs, with the same algorithm run inside the VM versus outside it; competitor non-VM STMs include Deuce, ObjectFabric, Multiverse, and JVSTM. Fig. 1 compares linked-list throughput (higher is better) at (a) 20% writes and (b) 80% writes, for configurations including ByteSTM/NOrec, Non-VM/NOrec, Deuce/TL2, ObjectFabric, Multiverse, and JVSTM. [Citation fragment: "When Scalability Meets Consistency: Genuine Multiversion Update-Serializable Partial Data Replication," ICDCS, pages 455-465, 2012.]
Liberating Virtual Machines from Physical Boundaries through Execution Knowledge
2015-12-01
[Fragmentary excerpt] ...trivial infrastructures such as VM distribution networks, clients need to wait for an extended period of time before launching a VM. In cloud settings... hardware support. MobiDesk [28] efficiently supports virtual desktops in mobile environments by decoupling the user's workload from host systems and... experiment set-up. VMs are migrated between a pair of source and destination hosts, which are connected through a backend 10 Gbps network for...
Feasibility of Virtual Machine and Cloud Computing Technologies for High Performance Computing
2014-05-01
[Fragmentary excerpt from the list of acronyms: Red Hat Enterprise Linux; SaaS = software as a service; VM = virtual machine; vNUMA = virtual non-uniform memory access; WRF = weather research and forecasting.] ...previously mentioned in Chapter I Section B1 of this paper, which is used to run the weather research and forecasting (WRF) model in their experiments... against a VMware virtualization solution of WRF. The experiment consisted of running WRF in a standard configuration between the D-VTM and VMware while...
Kim, Dong Seong; Park, Jong Sou
2014-01-01
It is important to assess the availability of virtualized systems in IT business infrastructures. Previous work on availability modeling and analysis of virtualized systems used a simplified configuration and assumption in which only one virtual machine (VM) runs on a virtual machine monitor (VMM) hosted on a physical server. In this paper, we show a comprehensive availability model using stochastic reward nets (SRN). The model takes into account (i) the detailed failure and recovery behaviors of multiple VMs, (ii) various other failure modes and corresponding recovery behaviors (e.g., hardware faults, failure and recovery due to Mandelbugs and aging-related bugs), and (iii) dependencies between subcomponents (e.g., between physical host failure and the VMM) in a virtualized server system. We also show numerical analysis of steady-state availability, downtime in hours per year, transaction loss, and sensitivity. This model provides a new finding on how to increase system availability by wisely combining software rejuvenation at both the VM and VMM levels. PMID:25165732
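The quantities such a model produces can be illustrated with a deliberately simplified calculation. A full SRN captures dependencies, Mandelbugs, and rejuvenation; the sketch below only shows steady-state availability for independent layers in series, with made-up MTTF/MTTR values rather than anything from the paper.

```python
# Back-of-envelope sketch (not the SRN model): steady-state availability
# of host, VMM, and VM treated as independent layers in series.

def layer_availability(mttf_hours, mttr_hours):
    # Classic steady-state formula: A = MTTF / (MTTF + MTTR).
    return mttf_hours / (mttf_hours + mttr_hours)

def system_availability(layers):
    """Layers in series (host, VMM, VM): all must be up simultaneously."""
    a = 1.0
    for mttf, mttr in layers:
        a *= layer_availability(mttf, mttr)
    return a

# Synthetic values: host fails yearly, VMM twice yearly, VM monthly.
layers = [(8760.0, 4.0), (4380.0, 1.0), (720.0, 0.25)]
a = system_availability(layers)
downtime_hours_per_year = (1.0 - a) * 8760.0
assert 0.99 < a < 1.0
assert 8.5 < downtime_hours_per_year < 9.5
```

The series product shows why the paper's dependency modeling matters: the naive independent-layer estimate over- or under-states availability whenever a host failure takes down the VMM and VMs with it.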
2015-09-28
[Fragmentary excerpt] ...the performance of log-and-replay can degrade significantly for VMs configured with multiple virtual CPUs, since the shared memory communication... whether based on checkpoint replication or log-and-replay, existing HA approaches use in-memory backups. The backup VM sits in the memory of a... efficiently. Subject terms: high-availability virtual machines, live migration, memory and traffic overheads, application suspension, Java.
myChEMBL: a virtual machine implementation of open data and cheminformatics tools.
Ochoa, Rodrigo; Davies, Mark; Papadatos, George; Atkinson, Francis; Overington, John P
2014-01-15
myChEMBL is a completely open platform, which combines public domain bioactivity data with open source database and cheminformatics technologies. myChEMBL consists of a Linux (Ubuntu) Virtual Machine featuring a PostgreSQL schema with the latest version of the ChEMBL database, as well as the latest RDKit cheminformatics libraries. In addition, a self-contained web interface is available, which can be modified and improved according to user specifications. The VM is available at: ftp://ftp.ebi.ac.uk/pub/databases/chembl/VM/myChEMBL/current. The web interface and web services code is available at: https://github.com/rochoa85/myChEMBL.
Zhao, Jia; Hu, Liang; Ding, Yan; Xu, Gaochao; Hu, Ming
2014-01-01
The field of live VM (virtual machine) migration has been a hotspot in green cloud computing. The live VM migration problem divides into two research aspects: the migration mechanism and the migration policy. Meanwhile, with the development of energy-aware computing, we have focused on the VM placement selection of live migration, namely the live VM migration policy for energy saving. In this paper, a novel heuristic approach, PS-ES, is presented. Its main idea has two parts. One combines the PSO (particle swarm optimization) idea with the SA (simulated annealing) idea to achieve an improved PSO-based approach with better global search ability. The other uses probability theory and mathematical statistics, and once again the SA idea, to process the data obtained from the improved PSO-based stage and derive the final solution. The whole approach thus achieves a long-term optimization for energy saving, as it considers not only the current problem scenario but also future ones. The experimental results demonstrate that PS-ES evidently reduces the total incremental energy consumption and better protects the performance of VM running and migrating compared with random and optimal-greedy migration. As a result, the proposed PS-ES approach makes the results of live VM migration events more effective and valuable. PMID:25251339
DOE Office of Scientific and Technical Information (OSTI.GOV)
Priedhorsky, Reid; Randles, Tim
Charliecloud is a set of scripts to let users run a virtual cluster of virtual machines (VMs) on a desktop or supercomputer. Key functions include: 1. Creating (typically by installing an operating system from vendor media) and updating VM images; 2. Running a single VM; 3. Running multiple VMs in a virtual cluster. The virtual machines can talk to one another over the network and (in some cases) the outside world. This is accomplished by calling external programs such as QEMU and the Virtual Distributed Ethernet (VDE) suite. The goal is to let users have a virtual cluster containing nodes where they have privileged access, while isolating that privilege within the virtual cluster so it cannot affect the physical compute resources. Host configuration enforces security; this is not included in Charliecloud, though security guidelines are included in its documentation and Charliecloud is designed to facilitate such configuration. Charliecloud manages passing information from host computers into and out of the virtual machines, such as parameters of the virtual cluster, input data specified by the user, output data from virtual compute jobs, VM console display, and network connections (e.g., SSH or X11). Parameters for the virtual cluster (number of VMs, RAM and disk per VM, etc.) are specified by the user or gathered from the environment (e.g., SLURM environment variables). Example job scripts are included. These include computation examples (such as a "hello world" MPI job) as well as performance tests. They also include a security test script to verify that the virtual cluster is appropriately sandboxed. Tests include: 1. Pinging hosts inside and outside the virtual cluster to explore connectivity; 2. Port scans (again inside and outside) to see what services are available; 3. Sniffing tests to see what traffic is visible to running VMs; 4. IP address spoofing to test network functionality in this case; 5. File access tests to make sure host access permissions are enforced. This test script is not a comprehensive scanner and does not test for specific vulnerabilities. Importantly, no information about physical hosts or network topology is included in this script (or any of Charliecloud); while part of a sensible test, such information is specified by the user when the test is run. That is, one cannot learn anything about the LANL network or computing infrastructure by examining Charliecloud code.
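Gathering virtual-cluster parameters from the scheduler environment, as the description mentions, might look like the following. The SLURM variable names are real, but the defaults and the parameter mapping are assumptions for illustration, not Charliecloud's actual logic.

```python
# Illustrative only: deriving virtual-cluster parameters from SLURM
# environment variables (defaults and mapping are assumptions).
import os

def cluster_params(env=os.environ):
    n_vms = int(env.get("SLURM_NTASKS", "1"))          # one VM per task
    ram_mb = int(env.get("SLURM_MEM_PER_NODE", "2048"))  # RAM per VM, MB
    return {"n_vms": n_vms, "ram_mb_per_vm": ram_mb}

# Outside a SLURM job the defaults apply; inside one, the allocation wins.
assert cluster_params({}) == {"n_vms": 1, "ram_mb_per_vm": 2048}
assert cluster_params({"SLURM_NTASKS": "8"})["n_vms"] == 8
```

Accepting an explicit `env` dict keeps the function testable and lets user-specified parameters override the scheduler's values.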
A location selection policy of live virtual machine migration for power saving and load balancing.
Zhao, Jia; Ding, Yan; Xu, Gaochao; Hu, Liang; Dong, Yushuang; Fu, Xiaodong
2013-01-01
Green cloud data centers have become a research hotspot of virtualized cloud computing architecture. Load balancing has also been one of the most important goals in cloud data centers. Since live virtual machine (VM) migration technology is widely used and studied in cloud computing, we have focused on location selection (migration policy) of live VM migration for power saving and load balancing. We propose a novel approach, MOGA-LS, which is a heuristic and self-adaptive multiobjective optimization algorithm based on an improved genetic algorithm (GA). This paper presents the specific design and implementation of MOGA-LS, including the genetic operators, fitness values, and elitism. We introduce Pareto dominance theory and the simulated annealing (SA) idea into MOGA-LS and present the specific process for obtaining the final solution; thus, the whole approach achieves a long-term efficient optimization for power saving and load balancing. The experimental results demonstrate that MOGA-LS evidently reduces the total incremental power consumption, better protects the performance of VM migration, and achieves balancing of system load compared with the existing research. It makes the result of live VM migration more effective and meaningful.
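The Pareto dominance test that multiobjective approaches like this rely on is compact enough to sketch. The two minimized objectives and the candidate tuples below are synthetic examples, not values or code from the paper.

```python
# Small illustrative sketch of Pareto dominance over two minimized
# objectives: incremental power and load imbalance. Data is synthetic.

def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (both objectives are minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o != c)]

# Tuples are (incremental_power_watts, load_imbalance).
candidates = [(50.0, 0.30), (40.0, 0.40), (60.0, 0.20), (55.0, 0.35)]
front = pareto_front(candidates)
assert (55.0, 0.35) not in front          # dominated by (50.0, 0.30)
assert set(front) == {(50.0, 0.30), (40.0, 0.40), (60.0, 0.20)}
```

The GA's elitism then operates over this nondominated front, and an SA-style acceptance rule can pick one compromise solution from it as the migration target.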
Security model for VM in cloud
NASA Astrophysics Data System (ADS)
Kanaparti, Venkataramana; Naveen K., R.; Rajani, S.; Padmvathamma, M.; Anitha, C.
2013-03-01
Cloud computing is a new approach that has emerged to meet the ever-increasing demand for computing resources and to reduce the operational costs and capital expenditure of IT services. As this way of computing allows data and applications to be stored away from the corporate server, it raises additional security issues such as virtualization security, distributed computing, application security, identity management, access control, and authentication. Even though virtualization forms the basis of cloud computing, it poses many threats to securing the cloud. Since most security threats lie at the virtualization layer, we propose a new Security Model for Virtual Machines in Cloud (SMVC), in which every process is authenticated by a Trusted Agent (TA) in the hypervisor as well as in the VM. Our proposed model is designed to withstand attacks by unauthorized processes that pose a threat to applications related to data mining, OLAP systems, and image processing, which require huge resources in the cloud deployed on one or more VMs.
sRNAtoolboxVM: Small RNA Analysis in a Virtual Machine.
Gómez-Martín, Cristina; Lebrón, Ricardo; Rueda, Antonio; Oliver, José L; Hackenberg, Michael
2017-01-01
High-throughput sequencing (HTS) data for small RNAs (noncoding RNA molecules that are 20-250 nucleotides in length) can now be routinely generated by minimally equipped wet laboratories; however, the bottleneck in HTS-based research has shifted to the analysis of these huge amounts of data. One reason is that many analysis types require a Linux environment, but computers, system administrators, and bioinformaticians represent additional costs that often cannot be afforded by small to mid-sized groups or laboratories. Web servers are an alternative that can be used if the data is not subject to privacy issues (which is very often an important concern with medical data). However, they are in any case less flexible than stand-alone programs, limiting the number of workflows and analysis types that can be carried out. We show in this protocol how virtual machines can be used to overcome these problems and limitations. sRNAtoolboxVM is a virtual machine that can be executed on all common operating systems through virtualization programs like VirtualBox or VMware, providing the user with a large number of preinstalled programs, such as sRNAbench for small RNA analysis, without the need to maintain additional servers and/or operating systems.
Traffic-Sensitive Live Migration of Virtual Machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deshpande, Umesh; Keahey, Kate
2015-01-01
In this paper we address the problem of network contention between migration traffic and VM application traffic during the live migration of co-located virtual machines (VMs). When VMs are migrated with pre-copy, they run at the source host during the migration; therefore, VM applications with predominantly outbound traffic contend with the outgoing migration traffic at the source host. Similarly, during post-copy migration, the VMs run at the destination host; therefore, VM applications with predominantly inbound traffic contend with the incoming migration traffic at the destination host. Such contention increases the total migration time of the VMs and degrades the performance of the VM applications. Here, we propose a traffic-sensitive live VM migration technique that reduces the contention between migration traffic and VM application traffic. It uses a combination of pre-copy and post-copy techniques for migrating the co-located VMs, instead of relying on any single pre-determined technique for all the VMs. We base the selection of migration techniques on the VMs' network traffic profiles, so that the direction of the migration traffic complements the direction of most of the VM application traffic. We have implemented a prototype of traffic-sensitive migration on the KVM/QEMU platform. In the evaluation, we compare traffic-sensitive migration against approaches that use only pre-copy or only post-copy for VM migration. We show that our approach minimizes the network contention for migration, thus reducing the total migration time and the application degradation.
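The selection rule described above, matching each VM's migration technique to its dominant traffic direction so the two flows do not share a link direction, can be sketched as follows (a simplification under assumed names; the paper's prototype works inside KVM/QEMU rather than at this level):

```python
def choose_technique(outbound_bps, inbound_bps):
    """Pre-copy runs the VM at the source, where migration traffic is
    outgoing, so it suits inbound-heavy VMs; post-copy runs the VM at
    the destination, where migration traffic is incoming, so it suits
    outbound-heavy VMs."""
    return "post-copy" if outbound_bps > inbound_bps else "pre-copy"

def plan_migrations(traffic_profiles):
    """traffic_profiles: {vm: (outbound_bps, inbound_bps)} ->
    {vm: chosen migration technique}."""
    return {vm: choose_technique(out, inb)
            for vm, (out, inb) in traffic_profiles.items()}
```

With this rule, an outbound-heavy web server would be migrated with post-copy while an inbound-heavy ingest node would use pre-copy, so neither competes with the migration stream on its busy direction.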
An Analysis of Hardware-Assisted Virtual Machine Based Rootkits
2014-06-01
…certain aspects of TPM implementation, to name a few. HyperWall is an architecture proposed by Szefer and Lee to protect guest VMs from… The use of virtual machine (VM) technology has expanded rapidly since AMD and Intel implemented… Intel VT-x implementations of Blue Pill to identify commonalities in the respective versions' attack methodologies from both a functional and technical…
Prediction based proactive thermal virtual machine scheduling in green clouds.
Kinger, Supriya; Kumar, Rajesh; Sharma, Anju
2014-01-01
Cloud computing has rapidly emerged as a widely accepted computing paradigm, but research on Cloud computing is still at an early stage. Cloud computing provides many advanced features, but it still has some shortcomings, such as relatively high operating costs and environmental hazards like increasing carbon footprints. These hazards can be reduced to some extent by efficient scheduling of Cloud resources. The working temperature at which a machine is currently running can be taken as a criterion for virtual machine (VM) scheduling. This paper proposes a new proactive technique that considers the current and maximum threshold temperatures of server machines (SMs) before making scheduling decisions, with the help of a temperature predictor, so that the maximum temperature is never reached. Different workload scenarios have been taken into consideration. The results obtained show that the proposed system is better than existing VM scheduling systems, which do not consider the current temperature of nodes before making scheduling decisions. Thus, a reduction in the need for cooling systems in a Cloud environment has been obtained and validated.
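The core of the proactive decision, never placing a VM where the predicted temperature would cross the server's maximum threshold, can be sketched as follows (a hypothetical illustration; the paper's predictor is more elaborate than the additive toy model assumed here):

```python
def schedule_vm(vm_heat, servers, predict):
    """Proactive thermal-aware placement sketch: keep only servers
    whose predicted temperature after hosting the VM stays below
    their maximum threshold, then pick the coolest of them."""
    safe = [s for s in servers if predict(s, vm_heat) < s["max_temp"]]
    if not safe:
        return None  # defer the VM rather than let any server overheat
    return min(safe, key=lambda s: predict(s, vm_heat))

# A toy predictor: current temperature plus the VM's expected heat load.
predict = lambda s, vm_heat: s["temp"] + vm_heat
```

A server already running at 60 °C with a 70 °C threshold would thus be skipped for a VM expected to add 15 °C, even if it is otherwise the best candidate.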
USDA-ARS?s Scientific Manuscript database
Infrastructure-as-a-service (IaaS) clouds provide a new medium for deployment of environmental modeling applications. Harnessing advancements in virtualization, IaaS clouds can provide dynamic scalable infrastructure to better support scientific modeling computational demands. Providing scientific m...
NASA Astrophysics Data System (ADS)
Bamiah, Mervat Adib; Brohi, Sarfraz Nawaz; Chuprat, Suriayati
2012-01-01
Virtualization is one of the hottest research topics nowadays. Several academic researchers and developers from the IT industry are designing approaches to solve the security and manageability issues of virtual machines (VMs) residing on virtualized cloud infrastructures. Moving an application from a physical to a virtual platform increases efficiency and flexibility and reduces management cost and effort. Cloud computing adopts the paradigm of virtualization; using this technique, memory, CPU, and computational power are provided to clients' VMs by utilizing the underlying physical hardware. Besides these advantages, there are a few challenges in adopting virtualization, such as management of VMs and network traffic, unexpected additional cost, and resource allocation. The Virtual Machine Monitor (VMM), or hypervisor, is the tool used by cloud providers to manage VMs in the cloud. There are several heterogeneous hypervisors provided by various vendors, including VMware, Hyper-V, Xen, and Kernel Virtual Machine (KVM). Considering the challenge of VM management, this paper describes several techniques to monitor and manage virtualized cloud infrastructures.
CernVM WebAPI - Controlling Virtual Machines from the Web
NASA Astrophysics Data System (ADS)
Charalampidis, I.; Berzano, D.; Blomer, J.; Buncic, P.; Ganis, G.; Meusel, R.; Segal, B.
2015-12-01
Lately, there is a trend in scientific projects to look for computing resources in the volunteering community. In addition, to reduce the development effort required to port the scientific software stack to all known platforms, the use of virtual machines (VMs) is becoming increasingly popular. Unfortunately, their use further complicates software installation and operation, restricting the volunteer audience to sufficiently expert people. CernVM WebAPI is a software solution addressing this specific case in a way that opens up wide new application opportunities. It offers a very simple API for setting up, controlling, and interfacing with a VM instance on the user's computer, while at the same time relieving the user of the burden of downloading, installing, and configuring the hypervisor. WebAPI comes with a lightweight JavaScript library that guides the user through the application installation process. Malicious usage is prevented by a per-domain PKI validation mechanism. In this contribution we give an overview of this new technology, discuss its security features, and examine some test cases where it is already in use.
A Secure and Robust Approach to Software Tamper Resistance
NASA Astrophysics Data System (ADS)
Ghosh, Sudeep; Hiser, Jason D.; Davidson, Jack W.
Software tamper-resistance mechanisms have increasingly assumed significance as a technique to prevent unintended uses of software. Closely related to anti-tampering techniques are obfuscation techniques, which make code difficult to understand or analyze and therefore challenging to modify meaningfully. This paper describes a secure and robust approach to software tamper resistance and obfuscation using process-level virtualization. The proposed techniques involve novel uses of software checksumming guards and encryption to protect an application. In particular, a virtual machine (VM) is assembled with the application at software build time such that the application cannot run without the VM. The VM provides just-in-time decryption of the program and dynamism for the application's code. The application's code is used to protect the VM to ensure a level of circular protection. Finally, to prevent the attacker from obtaining an analyzable snapshot of the code, the VM periodically discards all decrypted code. We describe a prototype implementation of these techniques and evaluate the run-time performance of applications using our system. We also discuss how our system provides stronger protection against tampering attacks than previously described tamper-resistance approaches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S
2013-01-01
Virtual machine (VM) technologies, especially those offered via Cloud platforms, present new dimensions with respect to performance and cost in executing parallel discrete event simulation (PDES) applications. Due to the introduction of overall cost as a metric, the choice of the highest-end computing configuration is no longer the most economical one. Moreover, runtime dynamics unique to VM platforms introduce new performance characteristics, and the variety of possible VM configurations gives rise to a range of choices for hosting a PDES run. Here, an empirical study of these issues is undertaken to guide an understanding of the dynamics, trends and trade-offs in executing PDES on VM/Cloud platforms. Performance results and cost measures are obtained from actual execution of a range of scenarios in two PDES benchmark applications on the Amazon Cloud offerings and on a high-end VM host machine. The data reveals interesting insights into the new VM-PDES dynamics that come into play and also leads to counter-intuitive guidelines with respect to choosing the best and second-best configurations when overall cost of execution is considered. In particular, it is found that choosing the highest-end VM configuration guarantees neither the best runtime nor the least cost. Interestingly, choosing a (suitably scaled) low-end VM configuration provides the least overall cost without adversely affecting the total runtime.
An efficient approach for improving virtual machine placement in cloud computing environment
NASA Astrophysics Data System (ADS)
Ghobaei-Arani, Mostafa; Shamsi, Mahboubeh; Rahmanian, Ali A.
2017-11-01
The ever-increasing demand for cloud services requires more data centres. Power consumption in data centres is a challenging problem for cloud computing that has not been considered properly by data-centre developer companies. In particular, large data centres struggle with power costs and the production of greenhouse gases. Hence, employing power-efficient mechanisms is necessary to mitigate these effects. Moreover, virtual machine (VM) placement can be used as an effective method to reduce power consumption in data centres. In this paper, by grouping both virtual and physical machines and taking into account the maximum absolute deviation during VM placement, both the power consumption and the service level agreement (SLA) violations in data centres are reduced. To this end, the best-fit decreasing algorithm is utilised in the simulation, reducing power consumption by about 5% compared to the modified best-fit decreasing algorithm while at the same time improving SLA violations by 6%. Finally, learning automata are used to achieve a trade-off between power consumption reduction on one side and SLA violation percentage on the other.
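The best-fit decreasing heuristic mentioned above packs the largest VMs first into the tightest-fitting hosts, so that fewer hosts need to stay powered on. A minimal sketch with hypothetical names (the paper's version additionally weighs the maximum absolute deviation of the VM and host groups):

```python
def best_fit_decreasing(vms, hosts):
    """vms: {vm: demand}; hosts: {host: capacity}. Returns {vm: host},
    with None for VMs that no host can accommodate. Larger VMs are
    placed first, each on the host with the least remaining capacity
    that still fits (the 'best fit')."""
    placement = {}
    free = dict(hosts)  # remaining capacity per host
    for vm, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
        fits = {h: cap for h, cap in free.items() if cap >= demand}
        if not fits:
            placement[vm] = None
            continue
        host = min(fits, key=fits.get)  # tightest remaining fit
        free[host] -= demand
        placement[vm] = host
    return placement
```

Packing tightly this way leaves other hosts empty, and empty hosts are the ones that can be switched off to save power.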
PyGirl: Generating Whole-System VMs from High-Level Prototypes Using PyPy
NASA Astrophysics Data System (ADS)
Bruni, Camillo; Verwaest, Toon
Virtual machines (VMs) emulating hardware devices are generally implemented in low-level languages for performance reasons. This results in unmaintainable systems that are difficult to understand. In this paper we report on our experience using the PyPy toolchain to improve the portability and reduce the complexity of whole-system VM implementations. As a case study we implement a VM prototype for a Nintendo Game Boy, called PyGirl, in which the high-level model is separated from low-level VM implementation issues. We shed light on the process of refactoring from a low-level VM implementation in Java to a high-level model in RPython. We show that our whole-system VM written with PyPy is significantly less complex than standard implementations, without substantial loss in performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Hao; Garzoglio, Gabriele; Ren, Shangping
FermiCloud is a private cloud developed at Fermi National Accelerator Laboratory to provide elastic and on-demand resources for different scientific research experiments. The design goal of FermiCloud is to automatically allocate resources for different scientific applications so that the QoS required by these applications is met and the operational cost of FermiCloud is minimized. Our earlier research shows that VM launching overhead has large variations. If such variations are not taken into consideration when making resource allocation decisions, they may lead to poor performance and resource waste. In this paper, we show how a VM launching overhead reference model may be used to minimize VM launching overhead. In particular, we first present a training algorithm that automatically tunes a given reference model to accurately reflect the FermiCloud environment. Based on the tuned reference model for virtual machine launching overhead, we develop an overhead-aware best-fit resource allocation algorithm that decides where and when to allocate resources so that the average virtual machine launching overhead is minimized. The experimental results indicate that the developed overhead-aware best-fit resource allocation algorithm can significantly improve VM launching time when a large number of VMs are launched simultaneously.
An Efficient Virtual Machine Consolidation Scheme for Multimedia Cloud Computing.
Han, Guangjie; Que, Wenhui; Jia, Gangyong; Shu, Lei
2016-02-18
Cloud computing has innovated the IT industry in recent years, as it can deliver subscription-based services to users in the pay-as-you-go model. Meanwhile, multimedia cloud computing is emerging on top of cloud computing to provide a variety of media services on the Internet. However, with the growing popularity of multimedia cloud computing, its large energy consumption can not only contribute to greenhouse gas emissions but also raise cloud users' costs. Therefore, multimedia cloud providers should try to minimize energy consumption as much as possible while satisfying consumers' resource requirements and guaranteeing quality of service (QoS). In this paper, we propose a remaining utilization-aware (RUA) algorithm for virtual machine (VM) placement and a power-aware (PA) algorithm that finds proper hosts to shut down for energy saving. These two algorithms have been combined and applied to cloud data centers to complete the process of VM consolidation. Simulation results show that there is a trade-off between the cloud data center's energy consumption and service-level agreement (SLA) violations. Moreover, the RUA algorithm is able to deal with variable workloads to prevent hosts from overloading after VM placement and to reduce SLA violations dramatically.
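One way to read the RUA/PA combination is: place each VM so the chosen host keeps enough spare capacity to absorb workload variation, then shut down hosts that end up empty. A hedged sketch under assumed names (utilizations as fractions of capacity; the paper's actual algorithms differ in detail):

```python
def rua_place(vm_load, hosts, headroom=0.2):
    """Remaining-utilization-aware placement sketch: consider only
    hosts that keep at least `headroom` spare capacity after taking
    the VM (guarding against overload under variable workload), and
    among them pick the most utilized to keep consolidation tight."""
    safe = {h: u for h, u in hosts.items() if u + vm_load <= 1.0 - headroom}
    return max(safe, key=safe.get) if safe else None

def shutdown_candidates(hosts):
    """Power-aware sketch: hosts left completely idle after
    consolidation are candidates to be powered down."""
    return [h for h, u in hosts.items() if u == 0.0]
```

The headroom parameter is what distinguishes this from plain best-fit packing: it trades a little consolidation for fewer SLA violations when load spikes.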
"Tactic": Traffic Aware Cloud for Tiered Infrastructure Consolidation
ERIC Educational Resources Information Center
Sangpetch, Akkarit
2013-01-01
Large-scale enterprise applications are deployed as distributed applications. These applications consist of many inter-connected components with heterogeneous roles and complex dependencies. Each component typically consumes 5-15% of the server capacity. Deploying each component as a separate virtual machine (VM) allows us to consolidate the…
Cloud services for the Fermilab scientific stakeholders
Timm, S.; Garzoglio, G.; Mhashilkar, P.; ...
2015-12-23
As part of the Fermilab/KISTI cooperative research project, Fermilab has successfully run an experimental simulation workflow at scale on a federation of Amazon Web Services (AWS), FermiCloud, and local FermiGrid resources. We used the CernVM-FS (CVMFS) file system to deliver the application software. We established Squid caching servers in AWS as well, using the Shoal system to let each individual virtual machine find the closest squid server. We also developed an automatic virtual machine conversion system so that we could transition virtual machines made on FermiCloud to Amazon Web Services. We used this system to successfully run a cosmic ray simulation of the NOvA detector at Fermilab, making use of both AWS spot pricing and network bandwidth discounts to minimize the cost. On FermiCloud we were also able to run the workflow at the scale of 1000 virtual machines, using a private network routable inside of Fermilab. We present in detail the technological improvements that were used to make this work a reality.
On localization attacks against cloud infrastructure
NASA Astrophysics Data System (ADS)
Ge, Linqiang; Yu, Wei; Sistani, Mohammad Ali
2013-05-01
One of the key characteristics of cloud computing is device and location independence, which enables users to access systems regardless of their location. Because cloud computing is heavily based on resource sharing, it is vulnerable to cyber attacks. In this paper, we investigate a localization attack that enables the adversary to leverage central processing unit (CPU) resources to localize the physical location of the server used by victims. By increasing and reducing CPU usage through a malicious virtual machine (VM), the response time of the victim VM increases and decreases correspondingly. In this way, by embedding a probing signal into the CPU usage and correlating it with the same pattern in the response time of the victim VM, the adversary can find the location of the victim VM. To determine attack accuracy, we investigate features in both the time and frequency domains. We conduct both theoretical and experimental studies to demonstrate the effectiveness of such an attack.
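The correlation step at the heart of the attack can be sketched as a normalised cross-correlation between the injected CPU-usage probe pattern and the victim's observed response times (an illustrative simplification with assumed names; the paper also analyses frequency-domain features):

```python
def probe_correlation(probe, response_times):
    """Pearson correlation between the probe pattern and the victim's
    response times; values near 1.0 suggest the probing VM and the
    victim VM share physical hardware."""
    n = len(probe)
    mp = sum(probe) / n
    mr = sum(response_times) / n
    cov = sum((p - mp) * (r - mr) for p, r in zip(probe, response_times))
    sp = sum((p - mp) ** 2 for p in probe) ** 0.5
    sr = sum((r - mr) ** 2 for r in response_times) ** 0.5
    return cov / (sp * sr) if sp and sr else 0.0
```

A flat response-time series (no variance) yields 0.0, i.e. no evidence of co-location, while a series that rises and falls in lockstep with the probe yields a value near 1.0.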
Efficient Checkpointing of Virtual Machines using Virtual Machine Introspection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aderholdt, Ferrol; Han, Fang; Scott, Stephen L
Cloud computing environments rely heavily on system-level virtualization. This is due to the inherent benefits of virtualization, including fault tolerance through checkpoint/restart (C/R) mechanisms. Because clouds are the abstraction of large data centers, and large data centers have a higher potential for failure, it is imperative that a C/R mechanism for such an environment provide minimal latency as well as a small checkpoint file size. Recently, there has been much research into C/R with respect to virtual machines (VMs), providing excellent solutions to reduce either checkpoint latency or checkpoint file size. However, these approaches do not provide both. This paper presents a method of checkpointing VMs by utilizing virtual machine introspection (VMI). Through the use of VMI, we are able to determine which pages of memory within the guest are used or free, and we are better able to reduce the number of pages written to disk during a checkpoint. We have validated this work by using various benchmarks to measure the latency along with the checkpoint size. With respect to checkpoint file size, our approach results in file sizes within 24% or less of the actual used memory within the guest. Additionally, the checkpoint latency of our approach is up to 52% faster than KVM's default method.
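The space saving comes from consulting the introspection layer's view of the guest's page allocator before writing the checkpoint. A minimal sketch under assumed names (real VMI reads guest kernel data structures, not a Python dict):

```python
def checkpoint_used_pages(guest_memory, is_free_page):
    """VMI-guided checkpoint sketch: keep only page frames the guest
    OS actually uses; frames the introspection layer reports as free
    are skipped, shrinking the file a naive full-memory dump would
    produce."""
    return {pfn: data for pfn, data in guest_memory.items()
            if not is_free_page(pfn)}
```

With most guest memory typically free or reclaimable, filtering by the allocator's free list is what lets the checkpoint size approach the guest's actual used memory rather than its configured RAM size.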
A performance study of live VM migration technologies: VMotion vs XenMotion
NASA Astrophysics Data System (ADS)
Feng, Xiujie; Tang, Jianxiong; Luo, Xuan; Jin, Yaohui
2011-12-01
Due to the growing demand for flexible resource management in cloud computing services, research on live virtual machine migration has attracted more and more attention. Live migration of virtual machines across different hosts has become a powerful tool to facilitate system maintenance, load balancing, fault tolerance, and so on. In this paper, we use a measurement-based approach to compare the performance of two major live migration technologies, VMotion and XenMotion, under certain network conditions. The results show that VMotion transfers much less data than XenMotion when migrating identical VMs. However, in networks with moderate packet loss and delay, which are typical in a VPN (virtual private network) scenario used to connect data centers, XenMotion outperforms VMotion in total migration time. We hope that this study can be helpful to data center administrators in choosing suitable virtualization environments and in optimizing existing live migration mechanisms.
Multiplexing Low and High QoS Workloads in Virtual Environments
NASA Astrophysics Data System (ADS)
Verboven, Sam; Vanmechelen, Kurt; Broeckhove, Jan
Virtualization technology has introduced new ways of managing IT infrastructure. The flexible deployment of applications through self-contained virtual machine images has removed the barriers to multiplexing, suspending, and migrating applications with their entire execution environment, allowing for a more efficient use of the infrastructure. These developments have given rise to an important challenge regarding the optimal scheduling of virtual machine workloads. In this paper, we specifically address the VM scheduling problem in which workloads that require guaranteed levels of CPU performance are mixed with workloads that do not require such guarantees. We introduce a framework to analyze this scheduling problem and evaluate to what extent such mixed service delivery is beneficial for a provider of virtualized IT infrastructure. Traditionally, providers offer IT resources under a guaranteed and fixed performance profile, which can lead to underutilization. The findings of our simulation study show that, through proper tuning of a limited set of parameters, the proposed scheduling algorithm allows for a significant increase in utilization without sacrificing performance dependability.
Automatic Configuration of Programmable Logic Controller Emulators
2015-03-01
…Example tree generated using UPGMA [Edw13]… Example sequence alignment for two… …ordered by appearance in the session, and then they are clustered again using the Unweighted Pair Group Method with Arithmetic Mean (UPGMA) with a distance matrix based…
The Integration of CloudStack and OCCI/OpenNebula with DIRAC
NASA Astrophysics Data System (ADS)
Méndez Muñoz, Víctor; Fernández Albor, Víctor; Graciani Diaz, Ricardo; Casajús Ramo, Adriàn; Fernández Pena, Tomás; Merino Arévalo, Gonzalo; José Saborido Silva, Juan
2012-12-01
The increasing availability of Cloud resources is emerging as a realistic alternative to the Grid as a paradigm for enabling scientific communities to access large distributed computing resources. The DIRAC framework for distributed computing is an easy way to access resources from both systems efficiently. This paper explains the integration of DIRAC with two open-source Cloud managers: OpenNebula (taking advantage of the OCCI standard) and CloudStack. These are computing tools to manage the complexity and heterogeneity of distributed data center infrastructures, allowing virtual clusters to be created on demand, including public, private, and hybrid clouds. This approach has required developing an extension to the previous DIRAC Virtual Machine engine, which was developed for Amazon EC2, allowing the connection with these new cloud managers. In the OpenNebula case, the development has been based on the CernVM Virtual Software Appliance with appropriate contextualization, while in the case of CloudStack, the infrastructure has been kept more general, permitting other virtual machine sources and operating systems to be used. In both cases, the CernVM File System has been used to facilitate software distribution to the computing nodes. With the resulting infrastructure, the cloud resources are transparent to the users through a friendly interface, like the DIRAC Web Portal. The main purpose of this integration is to obtain a system that can manage cloud and grid resources at the same time. This particular feature pushes DIRAC towards a new conceptual denomination as interware, integrating different middleware. Users from different communities do not need to care about the installation of the standard software that is available at the nodes, nor about the operating system of the host machine, which is transparent to the user. This paper presents an analysis of the overhead of the virtual layer, with tests comparing the proposed approach with the existing Grid solution.
License Notice: Published under licence in Journal of Physics: Conference Series by IOP Publishing Ltd.
A VM-shared desktop virtualization system based on OpenStack
NASA Astrophysics Data System (ADS)
Liu, Xi; Zhu, Mingfa; Xiao, Limin; Jiang, Yuanjie
2018-04-01
With the increasing popularity of cloud computing, desktop virtualization has risen in recent years as a branch of virtualization technology. However, existing desktop virtualization systems are mostly designed in a one-to-one mode, in which one VM can be accessed by only one user. Previous desktop virtualization systems also perform poorly in terms of response time and cost saving. This paper proposes a novel VM-shared desktop virtualization system based on the OpenStack platform. We modified the connection process and the display-data transmission process of the remote display protocol SPICE to support the VM-shared function, and we propose a server-push display mode to improve the interactive user experience. The experimental results show that our system performs well in response time and achieves low CPU consumption.
1986-04-29
COMPILER VALIDATION SUMMARY REPORT: International Business Machines Corporation IBM Development System for the Ada Language for VM/CMS, Version 1.0 IBM 4381...tested using command scripts provided by International Business Machines Corporation. These scripts were reviewed by the validation team. Tests were run...s): IBM 4381 (System/370) Operating System: VM/CMS, release 3.6 International Business Machines Corporation has made no deliberate extensions to the
CAGE IIIA Distributed Simulation Design Methodology
2014-05-01
2 VHF Very High Frequency VLC Video LAN Codec – an Open-source cross-platform multimedia player and framework VM Virtual Machine VOIP Voice Over...Implementing Defence Experimentation (GUIDEx). The key challenges for this methodology are with understanding how to: • design it o define the...operation and to be available in the other nation’s simulations. The challenge for the CAGE campaign of experiments is to continue to build upon this
2015-06-01
GEOINT geospatial intelligence GFC ground force commander GPS global positioning system GUI graphical user interface HA/DR humanitarian...transport stream UAS unmanned aerial system . See UAV. UAV unmanned aerial vehicle. See UAS. VM virtual machine VMU Marine Unmanned Aerial Vehicle... Unmanned Air Systems (UASs). Current programs promise to dramatically increase the number of FMV feeds in the near future. However, there are too
A three phase optimization method for precopy based VM live migration.
Sharma, Sangeeta; Chawla, Meenu
2016-01-01
Virtual machine live migration moves a virtual machine across hosts within a virtualized datacenter. It provides significant benefits for administrators in managing the datacenter efficiently, and it reduces service interruption by transferring the virtual machine without stopping it at the source. However, transferring a large number of virtual machine memory pages results in long migration time as well as long downtime, which also affects overall system performance. The situation becomes unbearable when migration takes place over a slower network or across a long distance within a cloud. In this paper, the precopy-based virtual machine live migration method is thoroughly analyzed to trace the issues responsible for its performance drops. To address these issues, the paper proposes a three phase optimization (TPO) method, which works as follows: (i) reduce the transfer of memory pages in the first phase, (ii) reduce the transfer of duplicate pages by classifying frequently and non-frequently updated pages, and (iii) reduce the data sent in the last iteration of migration by applying simple RLE compression. Each phase respectively reduces the total pages transferred, the total migration time, and the downtime. The proposed TPO method is evaluated using different representative workloads in a Xen virtualized environment. Experimental results show that the TPO method reduces total pages transferred by 71%, total migration time by 70%, and downtime by 3% for a higher workload, and that it does not impose significant overhead compared to the traditional precopy method. The TPO method is also compared with other methods to show its effectiveness, and both TPO and precopy are tested at different numbers of iterations; TPO gives better performance even with fewer iterations.
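Phase (iii)'s compression step can be illustrated with a minimal sketch of byte-level run-length encoding. The paper does not specify its encoding format; the `(count, byte)` pair scheme and the function names below are illustrative assumptions, chosen because mostly-zero dirty pages compress extremely well under RLE:

```python
def rle_encode(data: bytes) -> bytes:
    """Run-length encode as (count, byte) pairs, with runs capped at 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes((run, data[i]))
        i += run
    return bytes(out)

def rle_decode(encoded: bytes) -> bytes:
    """Invert rle_encode: expand each (count, byte) pair."""
    out = bytearray()
    for j in range(0, len(encoded), 2):
        count, value = encoded[j], encoded[j + 1]
        out += bytes([value]) * count
    return bytes(out)

# A dirtied-but-mostly-zero 4 KiB memory page shrinks to a few dozen bytes:
dirty_page = b"\x00" * 100 + b"\xff" * 8 + b"\x00" * 3988
assert rle_decode(rle_encode(dirty_page)) == dirty_page
```

Sending the encoded form of the final-iteration pages, rather than the raw pages, is what shortens the stop-and-copy window and hence the downtime.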
Taming Wild Horses: The Need for Virtual Time-based Scheduling of VMs in Network Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S; Henz, Brian J
2012-01-01
The next generation of scalable network simulators employ virtual machines (VMs) as high-fidelity models of traffic producer/consumer nodes in simulated networks. However, network simulations can be inaccurate if VMs are not scheduled according to virtual time, especially when many VMs are hosted per simulator core in a multi-core simulator environment. Since VMs are by default free-running, it is not clear at the outset whether, and to what extent, their untamed execution affects the results of simulated scenarios. Here, we provide the first quantitative basis for establishing the need for generalized virtual-time scheduling of VMs in network simulators, based on an actual prototyped implementation. To exercise breadth, our system is tested with multiple disparate applications: (a) a set of message-passing parallel programs, (b) a computer worm propagation phenomenon, and (c) a mobile ad-hoc wireless network simulation. We define and use error metrics and benchmarks in scaled tests to empirically report the poor match of traditional, fairness-based VM scheduling to VM-based network simulation, and to show the better performance of our simulation-specific scheduler, with up to 64 VMs hosted on a 12-core simulator node.
Virtual machine-based simulation platform for mobile ad-hoc network-based cyber infrastructure
Yoginath, Srikanth B.; Perumalla, Kalyan S.; Henz, Brian J.
2015-09-29
In modeling and simulating complex systems such as mobile ad-hoc networks (MANETs) in defense communications, it is a major challenge to reconcile multiple important considerations: the rapidity of unavoidable changes to the software (network layers and applications), the difficulty of modeling the critical, implementation-dependent behavioral effects, the need to sustain larger scale scenarios, and the desire for faster simulations. Here we present our approach in successfully reconciling them using a virtual time-synchronized virtual machine (VM)-based parallel execution framework that accurately lifts both the devices and the network communications to a virtual time plane while retaining full fidelity. At the core of our framework is a scheduling engine that operates at the level of a hypervisor scheduler, offering a unique ability to execute multi-core guest nodes over multi-core host nodes in an accurate, virtual time-synchronized manner. In contrast to other related approaches that suffer from either speed or accuracy issues, our framework provides MANET node-wise scalability, high fidelity of software behaviors, and time-ordering accuracy. The design and development of this framework is presented, and an actual implementation based on the widely used Xen hypervisor system is described. Benchmarks with synthetic and actual applications are used to identify the benefits of our approach. The time inaccuracy of traditional emulation methods is demonstrated, in comparison with the accurate execution of our framework, verified against the theoretically correct results expected from analytical models of the same scenarios.
In the largest high-fidelity tests, we are able to perform virtual time-synchronized simulation of 64-node VM-based full-stack, actual software behaviors of MANETs containing a mix of static and mobile (unmanned airborne vehicle) nodes, hosted on a 32-core host, with full fidelity of unmodified ad-hoc routing protocols, unmodified application executables, and user-controllable physical layer effects including inter-device wireless signal strength, reachability, and connectivity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Hao; Ren, Shangping; Garzoglio, Gabriele
Cloud bursting is one of the key research topics in the cloud computing community. A well designed cloud bursting module enables private clouds to automatically launch virtual machines (VMs) on public clouds when more resources are needed. One of the main challenges in developing a cloud bursting module is deciding when and where to launch a VM so that all resources are used effectively and efficiently and system performance is optimized. However, based on operational data obtained from FermiCloud, a private cloud developed by the Fermi National Accelerator Laboratory for scientific workflows, the VM launching overhead is not constant: it varies with physical resource utilization, such as CPU and I/O device utilization, at the time the VM is launched. Hence, to make judicious decisions about when and where a VM should be launched, a VM launching overhead reference model is needed. In this paper, we first develop such a reference model based on operational data obtained on FermiCloud. Second, we apply the model on FermiCloud and compare the calculated VM launching overhead values with measured values. Our empirical results indicate that the reference model is accurate. We believe that, guided by this model, efficient resource allocation algorithms can be developed for the cloud bursting process to minimize operational cost and resource waste.
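The idea of a launching-overhead reference model can be sketched as a table of operational samples with interpolation between them, consulted by the bursting decision. The sample values, function names, and the piecewise-linear form below are invented for illustration; they are not the model or the data from the FermiCloud study:

```python
from bisect import bisect_left

# Hypothetical operational samples: (host CPU utilization, observed VM launch
# overhead in seconds). The shape — overhead grows with load — mirrors the
# paper's observation that launching overhead is not constant.
samples = [(0.1, 18.0), (0.3, 22.0), (0.5, 30.0), (0.7, 45.0), (0.9, 80.0)]

def predicted_overhead(cpu_util: float) -> float:
    """Piecewise-linear interpolation over the sampled reference points."""
    xs = [u for u, _ in samples]
    if cpu_util <= xs[0]:
        return samples[0][1]
    if cpu_util >= xs[-1]:
        return samples[-1][1]
    i = bisect_left(xs, cpu_util)
    (x0, y0), (x1, y1) = samples[i - 1], samples[i]
    return y0 + (y1 - y0) * (cpu_util - x0) / (x1 - x0)

def pick_host(hosts: dict[str, float]) -> str:
    """Burst decision: launch on the host with the lowest predicted overhead."""
    return min(hosts, key=lambda h: predicted_overhead(hosts[h]))
```

For example, `pick_host({"busy": 0.9, "idle": 0.2})` selects the lightly loaded host, which is the kind of judicious placement decision the reference model is meant to enable.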
A nested virtualization tool for information technology practical education.
Pérez, Carlos; Orduña, Juan M; Soriano, Francisco R
2016-01-01
A common problem of some information technology courses is the difficulty of providing practical exercises. Although different approaches have been followed to solve this problem, it remains an open issue, especially in security and computer network courses. This paper proposes NETinVM, a tool based on nested virtualization that includes a fully functional lab, comprising several computers and networks, in a single virtual machine. It also analyzes and evaluates how the tool has been used in different teaching environments. The results show that it makes possible demos, labs and practical exercises that are greatly appreciated by the students and would otherwise be unfeasible. Its portability also allows classroom activities, as well as the students' autonomous work, to be reproduced elsewhere.
Efficient operating system level virtualization techniques for cloud resources
NASA Astrophysics Data System (ADS)
Ansu, R.; Samiksha; Anju, S.; Singh, K. John
2017-11-01
Cloud computing is an advancing technology which provides infrastructure, platform and software as services. Virtualization and utility computing are the keys to cloud computing. The number of cloud users is increasing day by day, so it is the need of the hour to make resources available on demand to satisfy user requirements. Virtualization is the technique by which resources, namely storage, processing power, memory and network or I/O, are abstracted. Various virtualization techniques are available for executing operating systems: full system virtualization and paravirtualization. In full virtualization, the whole hardware architecture is duplicated virtually; no modifications are required in the guest OS, as the OS deals with the VM hypervisor directly. In paravirtualization, the OS must be modified to run alongside other OSs, and for the guest OS to access the hardware, the host OS must provide a virtual machine interface. OS virtualization has many advantages, such as transparent application migration, server consolidation, online OS maintenance and improved security. This paper outlines both virtualization techniques and discusses the issues in OS-level virtualization.
Security-aware Virtual Machine Allocation in the Cloud: A Game Theoretic Approach
2015-01-13
predecessor, however, this paper used empirical evidence and actual data from running experiments on the Amazon EC2 cloud. They began by running all 5...is through effective VM allocation management of the cloud provider to ensure delivery of maximum security for all cloud users. The negative...
Open access for ALICE analysis based on virtualization technology
NASA Astrophysics Data System (ADS)
Buncic, P.; Gheata, M.; Schutz, Y.
2015-12-01
Open access is an important lever for long-term data preservation in a HEP experiment. To guarantee the usability of data analysis tools beyond the experiment's lifetime, it is crucial that third-party users from the scientific community have access to the data and associated software. The ALICE Collaboration has developed a layer of lightweight components built on top of virtualization technology to hide the complexity and details of the experiment-specific software. Users can perform basic analysis tasks within CernVM, a lightweight generic virtual machine, paired with an ALICE-specific contextualization. Once the virtual machine is launched, a graphical user interface starts automatically without any additional configuration. This interface allows downloading the base ALICE analysis software and running a set of ALICE analysis modules. The available tools currently include fully documented tutorials for ALICE analysis, such as the measurement of strange particle production or of the nuclear modification factor in Pb-Pb collisions. The interface can easily be extended with an arbitrary number of additional analysis modules. We present the current status of the tools used by ALICE through the CERN open access portal, and the plans for future extensions of this system.
Mobile Edge Computing Empowers Internet of Things
NASA Astrophysics Data System (ADS)
Ansari, Nirwan; Sun, Xiang
In this paper, we propose a Mobile Edge Internet of Things (MEIoT) architecture by leveraging fiber-wireless access technology, the cloudlet concept, and the software-defined networking framework. The MEIoT architecture brings computing and storage resources close to Internet of Things (IoT) devices in order to speed up IoT data sharing and analytics. Specifically, the IoT devices belonging to the same user are associated with a specific proxy virtual machine (VM) in the nearby cloudlet. The proxy VM stores and analyzes the data generated by its IoT devices in real time. Moreover, we introduce semantic and social IoT technology in the context of MEIoT to solve the interoperability and inefficient access control problems in the IoT system. In addition, we propose two dynamic proxy VM migration methods, one to minimize the end-to-end delay between proxy VMs and their IoT devices and one to minimize the total on-grid energy consumption of the cloudlets. The performance of the proposed methods is validated via extensive simulations.
GO, an exec for running the programs: CELL, COLLIDER, MAGIC, PATRICIA, PETROS, TRANSPORT, and TURTLE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shoaee, H.
1982-05-01
An exec has been written and placed on the PEP group's public disk (PUBRL 192) to facilitate the use of several PEP-related computer programs available on VM. The exec's program list currently includes CELL, COLLIDER, MAGIC, PATRICIA, PETROS, TRANSPORT, and TURTLE, and provisions have been made to allow new programs to be added to this list as they become available. The GO exec is directly callable from inside the Wylbur editor (in fact, currently this is the only way to use it). It provides the option of running any of the above programs in either interactive or batch mode. In the batch mode, the GO exec sends the data in the Wylbur active file, along with the information required to run the job, to the batch monitor (BMON, a virtual machine that schedules and controls execution of batch jobs). This enables the user to proceed with other VM activities at his/her terminal while the job executes, making it of particular interest to users with jobs requiring much CPU time and/or those wishing to run multiple jobs independently. In the interactive mode, useful for small jobs requiring less CPU time, the job is executed by the user's own virtual machine using the data in the active file as input. At the termination of an interactive job, the GO exec facilitates examination of the output by placing it in the Wylbur active file.
NASA Astrophysics Data System (ADS)
Lanciotti, E.; Merino, G.; Bria, A.; Blomer, J.
2011-12-01
In a distributed computing model such as WLCG, experiment-specific application software has to be efficiently distributed to any site on the Grid. Application software is currently installed in a shared area of the site, visible to all worker nodes (WNs) through some protocol (NFS, AFS or other). The software is installed at the site by jobs which run on a privileged node of the computing farm where the shared area is mounted in write mode. This model presents several drawbacks which cause a non-negligible rate of job failure. An alternative model for software distribution based on the CERN Virtual Machine File System (CernVM-FS) has been tried at PIC, the Spanish Tier-1 site of WLCG. The test bed used and the results are presented in this paper.
NASA Astrophysics Data System (ADS)
Tudose, Alexandru; Terstyansky, Gabor; Kacsuk, Peter; Winter, Stephen
Grid Application Repositories vary greatly in terms of access interface, security system, implementation technology, communication protocols and repository model. This diversity has become a significant limitation in terms of interoperability and inter-repository access. This paper presents the Grid Application Meta-Repository System (GAMRS) as a solution that offers better options for the management of Grid applications. GAMRS proposes a generic repository architecture, which allows any Grid Application Repository (GAR) to be connected to the system independent of their underlying technology. It also presents applications in a uniform manner and makes applications from all connected repositories visible to web search engines, OGSI/WSRF Grid Services and other OAI (Open Archive Initiative)-compliant repositories. GAMRS can also function as a repository in its own right and can store applications under a new repository model. With the help of this model, applications can be presented as embedded in virtual machines (VM) and therefore they can be run in their native environments and can easily be deployed on virtualized infrastructures allowing interoperability with new generation technologies such as cloud computing, application-on-demand, automatic service/application deployments and automatic VM generation.
NASA Technical Reports Server (NTRS)
Mehhtz, Peter
2005-01-01
JPF is an explicit-state software model checker for Java bytecode. Today, JPF is a Swiss army knife for all sorts of runtime-based verification purposes. This basically means JPF is a Java virtual machine that executes your program not just once (like a normal VM), but theoretically in all possible ways, checking for property violations like deadlocks or unhandled exceptions along all potential execution paths. If it finds an error, JPF reports the whole execution that leads to it. Unlike a normal debugger, JPF keeps track of every step by which it got to the defect.
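JPF itself works on Java bytecode, but the core idea of explicit-state model checking — exhaustively exploring every thread interleaving and flagging states that violate a property — can be sketched in a few lines. The toy "program" below (two threads performing a non-atomic increment) and all function names are illustrative; nothing here is a JPF API:

```python
from collections import deque

# Each thread increments a shared counter non-atomically: read, then write.
# A state is (shared, ((pc, tmp), (pc, tmp))) where pc 0 = about to read,
# 1 = about to write tmp + 1 back, 2 = done.
def successors(state):
    shared, pcs = state
    for t in (0, 1):
        pc, tmp = pcs[t]
        if pc == 0:                       # read shared into tmp
            new = list(pcs); new[t] = (1, shared)
            yield (shared, tuple(new))
        elif pc == 1:                     # write tmp + 1 back to shared
            new = list(pcs); new[t] = (2, tmp)
            yield (tmp + 1, tuple(new))

def check():
    """Breadth-first search over all interleavings; return a final state
    violating the property shared == 2, or None if none exists."""
    init = (0, ((0, 0), (0, 0)))
    seen, frontier = {init}, deque([init])
    while frontier:
        state = frontier.popleft()
        shared, pcs = state
        if all(pc == 2 for pc, _ in pcs):
            if shared != 2:               # property violation: lost update
                return state
            continue
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None

violation = check()
```

The interleaving read1, read2, write1, write2 loses an update, so the search finds a violating final state with `shared == 1`; keeping the predecessor of each state (omitted here for brevity) is what lets a checker like JPF report the whole execution leading to the defect.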
Scalable computing for evolutionary genomics.
Prins, Pjotr; Belhachemi, Dominique; Möller, Steffen; Smant, Geert
2012-01-01
Genomic data analysis in evolutionary biology is becoming so computationally intensive that analysis of multiple hypotheses and scenarios takes too long on a single desktop computer. In this chapter, we discuss techniques for scaling computations through parallelization of calculations, after giving a quick overview of advanced programming techniques. Unfortunately, parallel programming is difficult and requires special software design. The alternative, especially attractive for legacy software, is to introduce poor man's parallelization by running whole programs in parallel as separate processes, using job schedulers. Such pipelines are often deployed on bioinformatics computer clusters. Recent advances in PC virtualization have made it possible to run a full computer operating system, with all of its installed software, on top of another operating system, inside a "box," or virtual machine (VM). Such a VM can flexibly be deployed on multiple computers, in a local network, e.g., on existing desktop PCs, and even in the Cloud, to create a "virtual" computer cluster. Many bioinformatics applications in evolutionary biology can be run in parallel, running processes in one or more VMs. Here, we show how a ready-made bioinformatics VM image, named BioNode, effectively creates a computing cluster, and pipeline, in a few steps. This allows researchers to scale up computations from their desktop, using available hardware, anytime it is required. BioNode is based on Debian Linux and can run on networked PCs and in the Cloud. Over 200 bioinformatics and statistical software packages of interest to evolutionary biology are included, such as PAML, Muscle, MAFFT, MrBayes, and BLAST. Most of these software packages are maintained through the Debian Med project. In addition, BioNode contains convenient configuration scripts for parallelizing bioinformatics software.
Where Debian Med encourages packaging free and open source bioinformatics software through one central project, BioNode encourages creating free and open source VM images, for multiple targets, through one central project. BioNode can be deployed on Windows, OSX, Linux, and in the Cloud. Next to the downloadable BioNode images, we provide tutorials online, which empower bioinformaticians to install and run BioNode in different environments, as well as information for future initiatives, on creating and building such images.
Integration of virtualized worker nodes in standard batch systems
NASA Astrophysics Data System (ADS)
Büge, Volker; Hessling, Hermann; Kemp, Yves; Kunze, Marcel; Oberst, Oliver; Quast, Günter; Scheurer, Armin; Synge, Owen
2010-04-01
Current experiments in HEP use only a limited number of operating system flavours, and their software might only be validated on a single OS platform. Resource providers might prefer other operating systems for the installation of the batch infrastructure, especially if a cluster is shared with other communities, or with communities that have stricter security requirements. One solution would be to statically divide the cluster into separate sub-clusters; in such a scenario, no opportunistic distribution of the load can be achieved, resulting in poor overall utilization efficiency. Another approach is to make the batch system aware of virtualization and to provide each community with its favoured operating system in a virtual machine. Here, the scheduler has full flexibility, resulting in better overall efficiency of the resources. In our contribution, we present a lightweight concept for the integration of virtual worker nodes into standard batch systems. The virtual machines are started on the worker nodes just before jobs are executed there; no meta-scheduling is introduced. We demonstrate two prototype implementations, one based on the Sun Grid Engine (SGE), the other using Maui/Torque as a batch system. Both solutions support local as well as Grid job submission. The hypervisors currently used are Xen and KVM; a port to another system is easily conceivable. To better handle different virtual machines on the physical host, the management solution VmImageManager is being developed. We present first experiences from running the two prototype implementations, and finally show the potential future use of this lightweight concept when integrated into high-level (i.e. Grid) workflows.
ERIC Educational Resources Information Center
Mukherjee, Maheswari S.
2012-01-01
Traditionally, cytotechnology (CT) students have been trained by using light microscopy (LM) and glass slides. However, this method of training has some drawbacks. Several other educational programs with similar issues have incorporated virtual microscopy (VM) in their curricula. In VM, the specimens on glass slides are converted into virtual…
Electronic Blending in Virtual Microscopy
ERIC Educational Resources Information Center
Maybury, Terrence S.; Farah, Camile S.
2010-01-01
Virtual microscopy (VM) is a relatively new technology that transforms the computer into a microscope. In essence, VM allows for the scanning and transfer of glass slides from light microscopy technology to the digital environment of the computer. This transition is also a function of the change from print knowledge to electronic knowledge, or as…
Dong, Yu-Shuang; Xu, Gao-Chao; Fu, Xiao-Dong
2014-01-01
The cloud platform provides various services to users, and more and more cloud centers provide infrastructure as their main mode of operation. To improve the utilization rate of the cloud center and to decrease operating costs, the cloud center provides services according to users' requirements by sharding resources through virtualization. Considering both QoS for users and cost savings for cloud computing providers, we try to maximize performance and minimize energy cost at the same time. In this paper, we propose a distributed parallel genetic algorithm (DPGA) as a placement strategy for virtual machine deployment on the cloud platform. In the first stage, it executes the genetic algorithm in parallel, distributed across several selected physical hosts. It then executes a second-stage genetic algorithm with the solutions obtained from the first stage as the initial population; the solution calculated by the second stage is the final result of the proposed approach. The experimental results show that the proposed VM placement strategy can ensure QoS for users and is more effective and more energy efficient than other placement strategies on the cloud platform. PMID:25097872
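The two-stage scheme can be sketched as follows. The fitness function (count of active hosts plus a capacity-overflow penalty, standing in for energy cost and QoS), the problem sizes, and all GA parameters below are illustrative assumptions, not values from the paper; the independent stage-1 runs stand in for runs distributed across physical hosts:

```python
import random

HOSTS, CAP = 6, 4      # physical hosts and per-host VM capacity (illustrative)
VMS = 12               # VMs to place; a placement maps each VM index to a host

def fitness(placement):
    """Lower is better: count active hosts, heavily penalise overfull ones."""
    load = [0] * HOSTS
    for h in placement:
        load[h] += 1
    active = sum(1 for l in load if l > 0)
    overflow = sum(max(0, l - CAP) for l in load)
    return active + 10 * overflow

def evolve(pop, generations=60):
    """Simple GA: truncation selection, one-point crossover, point mutation."""
    for _ in range(generations):
        pop.sort(key=fitness)
        keep = pop[:len(pop) // 2]
        children = []
        while len(keep) + len(children) < len(pop):
            a, b = random.sample(keep, 2)
            cut = random.randrange(VMS)
            child = a[:cut] + b[cut:]
            if random.random() < 0.3:          # mutation: move one VM
                child[random.randrange(VMS)] = random.randrange(HOSTS)
            children.append(child)
        pop = keep + children
    return min(pop, key=fitness)

random.seed(1)
def rand_pop(n):
    return [[random.randrange(HOSTS) for _ in range(VMS)] for _ in range(n)]

# Stage 1: independent GA runs (standing in for runs on separate hosts).
stage1 = [evolve(rand_pop(20)) for _ in range(4)]
# Stage 2: a final GA seeded with the stage-1 winners in its initial population.
best = evolve(stage1 + rand_pop(16))
```

With 12 VMs and a capacity of 4, an optimal packing activates 3 hosts (fitness 3); seeding stage 2 with the stage-1 winners guarantees the final solution is at least as good as any single first-stage run, which is the point of the two-stage design.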
Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud
NASA Astrophysics Data System (ADS)
Limosani, Antonio; Boland, Lucien; Coddington, Paul; Crosby, Sean; Huang, Joanna; Sevior, Martin; Wilson, Ross; Zhang, Shunde
2014-06-01
The Australian Government is making a AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocations of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.
Dong, Yu-Shuang; Xu, Gao-Chao; Fu, Xiao-Dong
2014-01-01
The cloud platform provides various services to users. More and more cloud centers provide infrastructure as their main mode of operation. To improve the utilization rate of the cloud center and to decrease operating costs, the cloud center provides services according to users' requirements by sharding resources through virtualization. Considering both QoS for users and cost savings for cloud computing providers, we aim to maximize performance while minimizing energy cost. In this paper, we propose a distributed parallel genetic algorithm (DPGA) as a placement strategy for virtual machine deployment on a cloud platform. In the first stage, it executes the genetic algorithm in parallel, distributed across several selected physical hosts. It then runs a second-stage genetic algorithm with the solutions obtained from the first stage as the initial population. The solution calculated by the second-stage genetic algorithm is the final result of the proposed approach. The experimental results show that the proposed placement strategy for VM deployment can ensure QoS for users and is more effective and more energy efficient than other placement strategies on the cloud platform.
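The two-stage genetic-algorithm placement described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the fitness function (capacity overload as a QoS proxy, active-host count as an energy proxy), the island-style first stage, and all parameter values are assumptions of this sketch.

```python
import random

def fitness(placement, vm_demands, host_capacity):
    """Lower is better: heavily penalize overloaded hosts (QoS proxy),
    then count active hosts (energy proxy)."""
    load = {}
    for vm, host in enumerate(placement):
        load[host] = load.get(host, 0) + vm_demands[vm]
    overload = sum(max(0, l - host_capacity) for l in load.values())
    return overload * 1000 + len(load)

def run_ga(vm_demands, n_hosts, host_capacity, seed_pop=None,
           pop_size=30, generations=80, rng=None):
    """One genetic-algorithm run over VM-to-host assignments."""
    rng = rng or random.Random(0)
    n_vms = len(vm_demands)
    pop = list(seed_pop or [])
    while len(pop) < pop_size:                       # random initial population
        pop.append([rng.randrange(n_hosts) for _ in range(n_vms)])
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, vm_demands, host_capacity))
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_vms)            # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                   # mutation
                child[rng.randrange(n_vms)] = rng.randrange(n_hosts)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda p: fitness(p, vm_demands, host_capacity))

def two_stage_placement(vm_demands, n_hosts, host_capacity, n_islands=4):
    # Stage 1: independent GA runs ("islands"), mimicking the distributed stage.
    seeds = [run_ga(vm_demands, n_hosts, host_capacity,
                    rng=random.Random(i)) for i in range(n_islands)]
    # Stage 2: a final GA seeded with the stage-1 winners as initial population.
    return run_ga(vm_demands, n_hosts, host_capacity, seed_pop=seeds,
                  rng=random.Random(99))
```

Seeding the second stage with the first-stage winners is what distinguishes this scheme from a single flat GA run.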
Accessing the public MIMIC-II intensive care relational database for clinical research.
Scott, Daniel J; Lee, Joon; Silva, Ikaro; Park, Shinhyuk; Moody, George B; Celi, Leo A; Mark, Roger G
2013-01-10
The Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC-II) database is a free, public resource for intensive care research. The database was officially released in 2006, and has attracted a growing number of researchers in academia and industry. We present the two major software tools that facilitate accessing the relational database: the web-based QueryBuilder and a downloadable virtual machine (VM) image. QueryBuilder and the MIMIC-II VM have been developed successfully and are freely available to MIMIC-II users. Simple example SQL queries and the resulting data are presented. Clinical studies pertaining to acute kidney injury and prediction of fluid requirements in the intensive care unit are shown as typical examples of research performed with MIMIC-II. In addition, MIMIC-II has also provided data for annual PhysioNet/Computing in Cardiology Challenges, including the 2012 Challenge "Predicting mortality of ICU Patients". QueryBuilder is a web-based tool that provides easy access to MIMIC-II. For more computationally intensive queries, one can locally install a complete copy of MIMIC-II in a VM. Both publicly available tools provide the MIMIC-II research community with convenient querying interfaces and complement the value of the MIMIC-II relational database.
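A "simple example SQL query" of the kind mentioned above might look like the sketch below. The table and column names (`icustay`, `los_hours`) are invented stand-ins, not the actual MIMIC-II schema, and an in-memory SQLite database is used only to keep the example self-contained.

```python
import sqlite3

# Illustrative toy schema only -- not the real MIMIC-II tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE icustay (subject_id INTEGER, los_hours REAL);
    INSERT INTO icustay VALUES (1, 30.5), (2, 72.0), (3, 12.25), (4, 96.0);
""")

def long_stays(conn, threshold_hours):
    """Return subject_ids whose ICU length of stay exceeds the threshold."""
    rows = conn.execute(
        "SELECT subject_id FROM icustay WHERE los_hours > ? ORDER BY subject_id",
        (threshold_hours,))
    return [r[0] for r in rows]
```

Against a local MIMIC-II VM installation, the same parameterized-query pattern would simply target the real relational schema instead of this toy table.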
NASA Astrophysics Data System (ADS)
Zhou, Jianfeng; Xu, Benda; Peng, Chuan; Yang, Yang; Huo, Zhuoxi
2015-08-01
AIRE-Linux is a dedicated Linux system for astronomers. Modern astronomy faces two big challenges: massive volumes of raw observational data covering the whole electromagnetic spectrum, and the breadth of professional data-processing skills required, which exceeds the abilities of individuals or even small teams. AIRE-Linux, a specially designed Linux distribution delivered to users as Virtual Machine (VM) images in Open Virtualization Format (OVF), helps astronomers confront these challenges. Most astronomical software packages, such as IRAF, MIDAS, CASA, HEASoft, etc., will be integrated into AIRE-Linux. Astronomers can easily configure and customize the system and use just what they need. When incorporated into cloud computing platforms, AIRE-Linux will be able to handle data-intensive and compute-intensive tasks for astronomers. Currently, a beta version of AIRE-Linux is ready for download and testing.
NASA Astrophysics Data System (ADS)
McNab, A.
2017-10-01
This paper describes GridPP’s Vacuum Platform for managing virtual machines (VMs), which has been used to run production workloads for WLCG and other HEP experiments. The platform provides a uniform interface between VMs and the sites they run at, whether the site is organised as an Infrastructure-as-a-Service cloud system such as OpenStack, or an Infrastructure-as-a-Client system such as Vac. The paper describes our experience in using this platform, in developing and operating VM lifecycle managers Vac and Vcycle, and in interacting with VMs provided by LHCb, ATLAS, ALICE, CMS, and the GridPP DIRAC service to run production workloads.
Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community.
Krampis, Konstantinos; Booth, Tim; Chapman, Brad; Tiwari, Bela; Bicak, Mesude; Field, Dawn; Nelson, Karen E
2012-03-19
A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation, and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and a VirtualBox appliance are also publicly available for download and use by researchers with access to private clouds. Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud.
An automated and configurable process builds Virtual Machines, allowing the development of highly customized versions from a shared code base. This shared community toolkit enables application specific analysis platforms on the cloud by minimizing the effort required to prepare and maintain them.
ERIC Educational Resources Information Center
Olympiou, Georgios; Zacharia, Zacharias C.
2012-01-01
This study aimed to investigate the effect of experimenting with physical manipulatives (PM), virtual manipulatives (VM), and a blended combination of PM and VM on undergraduate students' understanding of concepts in the domain of "Light and Color." A pre-post comparison study design was used for the purposes of this study that involved 70…
ERIC Educational Resources Information Center
Zacharia, Zacharias C.; Olympiou, Georgios; Papaevripidou, Marios
2008-01-01
This study aimed to investigate the comparative value of experimenting with physical manipulatives (PM) in a sequential combination with virtual manipulatives (VM), with the use of PM preceding the use of VM, and of experimenting with PM alone, with respect to changes in students' conceptual understanding in the domain of heat and temperature. A…
Exploiting GPUs in Virtual Machine for BioCloud
Jo, Heeseung; Jeong, Jinkyu; Lee, Myoungho; Choi, Dong Hoon
2013-01-01
Recently, biological applications have begun to be reimplemented to exploit the many cores of GPUs for better computation performance. By providing virtualized GPUs to VMs in a cloud computing environment, many biological applications can move into the cloud to enhance their computation performance and utilize virtually unlimited cloud computing resources while reducing the expense of computation. In this paper, we propose a BioCloud system architecture that enables VMs to use GPUs in a cloud environment. Because much of the previous research has focused on mechanisms for sharing GPUs among VMs, it cannot achieve sufficient performance for biological applications, for which computation throughput is more crucial than sharing. The proposed system exploits the pass-through mode of the PCI Express (PCI-E) channel. By allowing each VM to access the underlying GPUs directly, applications achieve almost the same performance as in a native environment. In addition, our scheme multiplexes GPUs by using the hot plug-in/out device features of the PCI-E channel. By adding or removing GPUs in each VM in an on-demand manner, VMs in the same physical host can time-share the GPUs. We implemented the proposed system using the Xen VMM and NVIDIA GPUs and showed that our prototype is highly effective for biological GPU applications in a cloud environment. PMID:23710465
Yu, Si; Gui, Xiaolin; Lin, Jiancai; Tian, Feng; Zhao, Jianqiang; Dai, Min
2014-01-01
Cloud computing is attracting increasing attention for its capacity to relieve developers of infrastructure management tasks. However, recent work reveals that side channel attacks can lead to privacy leakage in the cloud. Enhancing isolation between users is an effective way to eliminate such attacks. In this paper, we investigate an isolation enhancement scheme from the perspective of virtual machine (VM) management. The security-aware VM management scheme (SVMS), a VM isolation enhancement scheme to defend against side channel attacks, is proposed. First, we use the aggressive conflict-of-interest relation (ACIR) and the aggressive in-ally-with relation (AIAR) to describe user constraint relations. Second, based on the Chinese wall policy, we put forward four isolation rules. Third, VM placement and migration algorithms are designed to enforce VM isolation between conflicting users. Finally, based on the normal distribution, we conduct a series of experiments to evaluate SVMS. The experimental results show that SVMS is efficient in guaranteeing isolation between VMs owned by conflicting users, while only slightly decreasing the resource utilization rate.
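A Chinese-wall placement constraint of the kind described above can be illustrated with a minimal sketch. This is not the paper's SVMS algorithm: the conflict classes, user names, and the greedy first-fit placement policy are all assumptions of this illustration.

```python
def conflicts(user_of, conflict_sets, host_vms, new_vm):
    """True if placing new_vm on this host would co-locate VMs of users
    belonging to the same conflict-of-interest class (Chinese wall)."""
    new_user = user_of[new_vm]
    for vm in host_vms:
        u = user_of[vm]
        if u == new_user:
            continue                     # same user never conflicts with itself
        for cls in conflict_sets:
            if new_user in cls and u in cls:
                return True
    return False

def place_vm(user_of, conflict_sets, hosts, new_vm):
    """Greedy first-fit: pick the first host that preserves isolation;
    open a new host when every existing one would violate the wall."""
    for host_id, host_vms in enumerate(hosts):
        if not conflicts(user_of, conflict_sets, host_vms, new_vm):
            host_vms.append(new_vm)
            return host_id
    hosts.append([new_vm])
    return len(hosts) - 1
```

The trade-off measured in the paper shows up directly here: enforcing the wall can force extra hosts open, which is why resource utilization decreases somewhat.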
An adaptive process-based cloud infrastructure for space situational awareness applications
NASA Astrophysics Data System (ADS)
Liu, Bingwei; Chen, Yu; Shen, Dan; Chen, Genshe; Pham, Khanh; Blasch, Erik; Rubin, Bruce
2014-06-01
Space situational awareness (SSA) and defense space control capabilities are top priorities for groups that own or operate man-made spacecraft. Also, with the growing amount of space debris, there is an increasing demand for contextual understanding that necessitates the capability of collecting and processing a vast amount of sensor data. Cloud computing, which features scalable and flexible storage and computing services, has been recognized as an ideal candidate to meet the large-data contextual challenges of SSA. Cloud computing consists of physical service providers and middleware virtual machines together with infrastructure, platform, and software as a service (IaaS, PaaS, SaaS) models. However, the typical Virtual Machine (VM) abstraction is on a per-operating-system basis, which is too low-level and limits the flexibility of a mission application architecture. In response to this technical challenge, a novel adaptive process-based cloud infrastructure for SSA applications is proposed in this paper. In addition, the design rationale and a prototype are examined in detail. The SSA Cloud (SSAC) conceptual capability will potentially support space situation monitoring and tracking, object identification, and threat assessment. Lastly, the benefits of more granular and flexible allocation of cloud computing resources are illustrated for data processing and implementation considerations within a representative SSA system environment. We show that container-based virtualization performs better than hypervisor-based virtualization technology in an SSA scenario.
ERIC Educational Resources Information Center
Gatumu, Margaret K.; MacMillan, Frances M.; Langton, Philip D.; Headley, P. Max; Harris, Judy R.
2014-01-01
This article describes the introduction of a virtual microscope (VM) that has allowed preclinical histology teaching to be fashioned to better suit the needs of approximately 900 undergraduate students per year studying medicine, dentistry, or veterinary science at the University of Bristol, United Kingdom. Features of the VM implementation…
1988-05-20
AVF Control Number: AVF-VSR-84.1087 87-03-10-TEL Ada® COMPILER VALIDATION SUMMARY REPORT: International Business Machines Corporation IBM...System, Version 1.1.0, International Business Machines Corporation, Wright-Patterson AFB. IBM 4381 under VM/SP CMS, Release 3.6 (host) and IBM 4381...an IBM 4381 operating under MVS, Release 3.8. On-site testing was performed 18 May 1987 through 20 May 1987 at International Business Machines
1988-03-28
International Business Machines Corporation IBM Development System for the Ada Language, Version 2.1.0, IBM 4381 under VM/HPO, host and target...International Business Machines Corporation, IBM Development System for the Ada Language, Version 2.1.0, IBM...in the compiler listed in this declaration. I declare that International Business Machines Corporation is the owner of record of the object code of the
1989-04-20
International Business Machines Corporation, IBM Development System for the Ada Language, VM/CMS Ada Compiler, Version 2.1.1, Wright-Patterson AFB, IBM 3083...890420W1.10073 International Business Machines Corporation IBM Development System for the Ada Language VM/CMS Ada Compiler Version 2.1.1 IBM 3083...International Business Machines Corporation and reviewed by the validation team. The compiler was tested using all default option settings except for the
1988-03-28
International Business Machines Corporation IBM Development System for the Ada Language, Version 2.1.0, IBM 4381 under VM/HPO, host; IBM 4381 under MVS/XA, target...Program Office, AJPO...International Business Machines Corporation, IBM...Standard ANSI/MIL-STD-1815A in the compiler listed in this declaration. I declare that International Business Machines Corporation is the owner of record
New, Susan A; Livingstone, M Barbara E
2003-08-01
Availability of confectionery from vending machines in secondary schools provides a convenient point of purchase. There is concern that this may lead to 'over-indulgence' and hence an increase in susceptibility to obesity and poor 'dietary quality'. The study objective was to investigate the association between the frequency of consumption of confectionery purchased from vending machines and other sources and related lifestyle factors in adolescent boys and girls. A secondary school-based, cross-sectional study. A total of 504 subjects were investigated (age range 12-15 years), from three schools in southern and northern England. Using a lifestyle questionnaire, frequency of confectionery consumption (CC) from all sources (AS) and vending machines (VM) was recorded for a typical school week. Subjects were categorised into non-consumers, low, medium and high consumers using the following criteria: none, 0 times per week; low, 1-5 times per week; medium, 6-9 times per week; high, 10 times per week or greater. No differences were found in the frequency of CC from AS or VM between those who consumed breakfast and lunch and those who did not. No differences were found in the frequency of fruit and vegetable intake in high VM CC vs. none VM CC groups, or in any of the VM CC groups. Confectionery consumption from AS (but not VM) was found to be higher in subjects who were physically active on the journey to school (P<0.01) but also higher in those who spent more time watching television and playing computer games (P<0.01). No associations were found between smoking habits or alcohol consumption and frequency of CC. These results do not show a link between consumption of confectionery purchased from vending machines and 'poor' dietary practice or 'undesirable' lifestyle habits. Findings for total confectionery consumption showed some interesting trends, but the results were not consistent, either for a negative or positive effect.
NASA Astrophysics Data System (ADS)
Elantkowska, Magdalena; Ruczkowski, Jarosław; Sikorski, Andrzej; Dembczyński, Jerzy
2017-11-01
A parametric analysis of the hyperfine structure (hfs) for the even-parity configurations of atomic terbium (Tb I) is presented in this work. We introduce the complete set of 4f^N-core states in our high-performance computing (HPC) calculations. For calculations of the huge hyperfine-structure matrix, requiring approximately 5000 hours when run on a single CPU, we propose methods utilizing a personal computer cluster or, alternatively, a cluster of Microsoft Azure virtual machines (VM). These methods give a factor-of-12 performance boost, enabling the calculations to complete in an acceptable time.
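The reported factor-of-12 boost corresponds to simple ideal-scaling arithmetic: 5000 single-CPU hours divided by the speedup factor. The helper below is only an illustrative back-of-the-envelope estimate, not the authors' tooling, and the efficiency parameter is an assumption.

```python
def wall_clock_hours(total_cpu_hours, speedup, efficiency=1.0):
    """Ideal-scaling estimate of elapsed time given an overall speedup
    factor (e.g. from a cluster of VMs) and a parallel-efficiency fudge."""
    return total_cpu_hours / (speedup * efficiency)
```

With a factor-of-12 boost, the ~5000-hour matrix calculation drops to roughly 417 hours, i.e. under three weeks of wall-clock time.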
Kernodle, Michael W; McKethan, Robert N; Rabinowitz, Erik
2008-10-01
Traditional and virtual modeling were compared during learning of a multiple degree-of-freedom skill (fly casting) to assess the effect of the presence or absence of an authority figure on observational learning via virtual modeling. Participants were randomly assigned to one of four groups: Virtual Modeling with an authority figure present (VM-A) (n = 16), Virtual Modeling without an authority figure (VM-NA) (n = 16), Traditional Instruction (n = 17), and Control (n = 19). Results showed significant between-group differences on Form and Skill Acquisition scores. Except for one instance, all three learning procedures resulted in significant learning of fly casting. Virtual modeling with or without an authority figure present was as effective as traditional instruction; however, learning without an authority figure was less effective with regard to Accuracy scores.
Renkema, Kristin R; Li, Gang; Wu, Angela; Smithey, Megan J; Nikolich-Žugich, Janko
2014-01-01
Naive T cell responses are eroded with aging. We and others have recently shown that unimmunized old mice lose ≥ 70% of Ag-specific CD8 T cell precursors and that many of the remaining precursors acquire a virtual (central) memory (VM; CD44(hi)CD62L(hi)) phenotype. In this study, we demonstrate that unimmunized TCR transgenic (TCRTg) mice also undergo massive VM conversion with age, exhibiting rapid effector function upon both TCR and cytokine triggering. Age-related VM conversion in TCRTg mice directly depended on replacement of the original TCRTg specificity by endogenous TCRα rearrangements, indicating that TCR signals must be critical in VM conversion. Importantly, we found that VM conversion had adverse functional effects in both old wild-type and old TCRTg mice; that is, old VM, but not old true naive, T cells exhibited blunted TCR-mediated, but not IL-15-mediated, proliferation. This selective proliferative senescence correlated with increased apoptosis in old VM cells in response to peptide, but decreased apoptosis in response to homeostatic cytokines IL-7 and IL-15. Our results identify TCR as the key factor in differential maintenance and function of Ag-specific precursors in unimmunized mice with aging, and they demonstrate that two separate age-related defects--drastic reduction in true naive T cell precursors and impaired proliferative capacity of their VM cousins--combine to reduce naive T cell responses with aging.
Devi, D Chitra; Uthariaraj, V Rhymend
2016-01-01
Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs so that resources are shared effectively. The scheduling of nonpreemptive tasks in the cloud computing environment is an irrecoverable restraint, and hence tasks have to be assigned to the most appropriate VMs at the initial placement itself. In practice, arriving jobs consist of multiple interdependent tasks, and their independent tasks may execute on multiple VMs or on multiple cores of the same VM. Also, jobs arrive during the run time of the server at varying random intervals under various load conditions. The participating heterogeneous resources are managed by allocating tasks to appropriate resources through static or dynamic scheduling, which makes cloud computing more efficient and thus improves user satisfaction. The objective of this work is to introduce and evaluate the proposed scheduling and load balancing algorithm, which considers the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. The performance of the proposed algorithm is studied by comparison with existing methods.
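A scheduler that weighs the three factors named above (VM capability, task length, and task interdependency) can be sketched as a greedy earliest-finish-time heuristic. This is an illustrative stand-in, not the authors' algorithm; the dependency handling via a simple topological scan is an assumption of the sketch.

```python
def schedule(tasks, deps, vm_speeds):
    """Greedy earliest-finish-time assignment.
    tasks: {name: length}; deps: {name: set of prerequisite names};
    vm_speeds: relative VM capabilities (higher = faster)."""
    vm_free = [0.0] * len(vm_speeds)     # when each VM next becomes idle
    finish = {}                          # task -> finish time
    done, order = set(), []
    # Topological order via repeated scans (fine for small examples).
    while len(order) < len(tasks):
        for t in tasks:
            if t not in done and deps.get(t, set()) <= done:
                order.append(t)
                done.add(t)
    assignment = {}
    for t in order:
        # A task may start only after all its prerequisites finish.
        ready = max((finish[d] for d in deps.get(t, set())), default=0.0)
        # Pick the VM on which this task would finish earliest.
        best = min(range(len(vm_speeds)),
                   key=lambda v: max(vm_free[v], ready) + tasks[t] / vm_speeds[v])
        start = max(vm_free[best], ready)
        finish[t] = start + tasks[t] / vm_speeds[best]
        vm_free[best] = finish[t]
        assignment[t] = best
    return assignment, finish
```

Because finish time folds in both the VM's current queue and its speed, long tasks gravitate to fast VMs while independent tasks spread across idle ones.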
Cloud flexibility using DIRAC interware
NASA Astrophysics Data System (ADS)
Fernandez Albor, Víctor; Seco Miguelez, Marcos; Fernandez Pena, Tomas; Mendez Muñoz, Victor; Saborido Silva, Juan Jose; Graciani Diaz, Ricardo
2014-06-01
Communities at different locations run their computing jobs on dedicated infrastructures without the need to worry about software, hardware, or even the site where their programs will be executed. Nevertheless, this usually implies that they are restricted to certain types or versions of an operating system, because either their software needs a particular version of a system library or a specific platform is required by the collaboration to which they belong. In this scenario, if a data center wants to provide software services to incompatible communities, it has to split its physical resources among those communities. This splitting inevitably leads to an underuse of resources, because the data centers are bound to have periods where one or more of their subclusters are idle. It is in this situation that cloud computing provides the flexibility and reduction in computational cost that data centers are searching for. This paper describes a set of realistic tests that we ran on one such implementation. The tests comprise software from three different HEP communities (Auger, LHCb, and QCD phenomenologists) and the PARSEC benchmark suite running on one or more of three Linux flavors (SL5, Ubuntu 10.04, and Fedora 13). The implemented infrastructure has, at the cloud level, CloudStack, which manages the virtual machines (VMs) and the hosts on which they run, and, at the user level, the DIRAC framework along with a VM extension that submits, monitors, and keeps track of the user jobs and also requests CloudStack to start or stop the necessary VMs. In this infrastructure, the community software is distributed via CernVM-FS, which has been proven to be a reliable and scalable software distribution system. With the resulting infrastructure, users can send their jobs transparently to the data center.
The main purpose of this system is the creation of a flexible, multiplatform cluster with a scalable method of software distribution for several VOs. Users from different communities need not care about the installation of the standard software available at the nodes, nor about the operating system of the host machine, which is transparent to the user.
1988-05-19
System for the Ada Language System, Version 1.1.0, 1.% International Business Machines Corporation, Wright-Patterson AFB. IBM 4381 under VM/SP CMS...THIS PAGE (When Data Enre’ed) AVF Control Number: AVF-VSR-82.1087 87-03-10-TEL ! Ada® COMPILER VALIDATION SUMMARY REPORT: International Business Machines...Organization (AVO). On-site testing was conducted from !8 May 1987 through 19 May 1987 at International Business Machines -orporation, San Diego CA. 1.2
Workflow as a Service in the Cloud: Architecture and Scheduling Algorithms.
Wang, Jianwu; Korambath, Prakashan; Altintas, Ilkay; Davis, Jim; Crawl, Daniel
2014-01-01
With more and more workflow systems adopting the cloud as their execution environment, it becomes increasingly challenging to efficiently manage various workflows, virtual machines (VMs) and workflow executions on VM instances. To make the system scalable and easy to extend, we design a Workflow as a Service (WFaaS) architecture with independent services. A core part of the architecture is how to efficiently respond to continuous workflow requests from users and schedule their executions in the cloud. Based on different targets, we propose four heuristic workflow scheduling algorithms for the WFaaS architecture, and analyze the differences and best use cases of the algorithms in terms of performance, cost and the price/performance ratio via experimental studies.
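The abstract does not spell out the four heuristics, but the flavour of price/performance-aware VM selection for a workflow request can be sketched as below. The `VmType` fields, the deadline rule, and all numbers are illustrative assumptions, not the WFaaS algorithms themselves:

```python
# Sketch of a greedy cost-vs-deadline heuristic for scheduling one workflow
# request (illustrative only; not one of the paper's four algorithms).
from dataclasses import dataclass

@dataclass(frozen=True)
class VmType:
    name: str
    cost_per_hour: float
    speedup: float  # execution speed relative to the baseline VM

def pick_vm(vm_types, base_hours, deadline_hours):
    """Cheapest VM type whose runtime still meets the deadline."""
    feasible = [v for v in vm_types if base_hours / v.speedup <= deadline_hours]
    if not feasible:
        return None
    return min(feasible, key=lambda v: v.cost_per_hour * base_hours / v.speedup)

types = [VmType("small", 0.05, 1.0), VmType("medium", 0.12, 2.0), VmType("large", 0.40, 4.0)]
print(pick_vm(types, base_hours=8, deadline_hours=5).name)   # medium
```

Swapping the objective between total cost, runtime, and the cost/runtime product gives the kind of performance/cost/price-performance variants the paper compares.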
Performance implications of sizing a VM on multi-core systems: a data analytics application's view
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, Seung-Hwan; Horey, James L; Begoli, Edmon
In this paper, we present a quantitative performance analysis of data analytics applications running on multi-core virtual machines. Such environments form the core of cloud computing. In addition, data analytics applications, such as Cassandra and Hadoop, are becoming increasingly popular on cloud computing platforms. This convergence necessitates a better understanding of the performance and cost implications of such hybrid systems. For example, the very first step in hosting applications in virtualized environments requires the user to configure the number of virtual processors and the size of memory. To understand the performance implications of this step, we benchmarked three Yahoo Cloud Serving Benchmark (YCSB) workloads in a virtualized multi-core environment. Our measurements indicate that the performance of Cassandra for YCSB workloads does not heavily depend on the processing capacity of a system, while the size of the data set relative to allocated memory is critical to performance. We also identified a strong relationship between the running time of workloads and various hardware events (last-level cache loads, misses, and CPU migrations). From this analysis, we provide several suggestions to improve the performance of data analytics applications running in cloud computing environments.
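The reported relationship between running time and hardware event counts is the kind of result a Pearson correlation over per-run performance counters would surface. The data below is synthetic, not the paper's measurements:

```python
# Sketch: testing for a linear relationship between workload running time and a
# hardware event count (e.g. last-level-cache misses). All data is synthetic.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

llc_misses = [1.0e9, 2.1e9, 2.9e9, 4.2e9, 5.0e9]   # per-run counts (synthetic)
runtime_s  = [110,   205,   300,   420,   500]      # seconds (synthetic)
print(round(pearson(llc_misses, runtime_s), 3))     # close to 1.0
```

In practice the counters would come from `perf stat` or a similar tool, one sample per benchmark run.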
Gatumu, Margaret K; MacMillan, Frances M; Langton, Philip D; Headley, P Max; Harris, Judy R
2014-01-01
This article describes the introduction of a virtual microscope (VM) that has allowed preclinical histology teaching to be fashioned to better suit the needs of approximately 900 undergraduate students per year studying medicine, dentistry, or veterinary science at the University of Bristol, United Kingdom. Features of the VM implementation include: (1) the facility for students and teachers to make annotations on the digital slides; (2) in-house development of VM-based quizzes that are used for both formative and summative assessments; (3) archiving of teaching materials generated each year, enabling students to access their personalized learning resources throughout their programs; and (4) retention of light microscopy capability alongside the VM. Student feedback on the VM is particularly positive about its ease of use, the value of the annotation tool, the quizzes, and the accessibility of all components off-campus. Analysis of login data indicates considerable, although variable, use of the VM by students outside timetabled teaching. The median number of annual logins per student account for every course exceeded the number of timetabled histology classes for that course (1.6–3.5 times). The total number of annual student logins across all cohorts increased from approximately 9,000 in the year 2007–2008 to 22,000 in the year 2010–2011. The implementation of the VM has improved teaching and learning in practical classes within the histology laboratory and facilitated consolidation and revision of material outside the laboratory. Discussion is provided of some novel strategies that capitalize on the benefits of introducing a VM, as well as strategies adopted to overcome some potential challenges.
Development of a Virtual Museum Including a 4d Presentation of Building History in Virtual Reality
NASA Astrophysics Data System (ADS)
Kersten, T. P.; Tschirschwitz, F.; Deggim, S.
2017-02-01
In the last two decades the definition of the term "virtual museum" has changed due to rapid technological developments. Using today's available 3D technologies, a virtual museum is no longer just a presentation of collections on the Internet or a virtual tour of an exhibition using panoramic photography. On the one hand, a virtual museum should enhance a museum visitor's experience by providing access to additional materials for review and deepening of knowledge either before or after the real visit. On the other hand, a virtual museum should also be usable as teaching material in the context of museum education. The Laboratory for Photogrammetry & Laser Scanning of the HafenCity University Hamburg has developed a virtual museum (VM) of the museum "Alt-Segeberger Bürgerhaus", a historic town house. The VM offers two options for visitors wishing to explore the museum without travelling to the city of Bad Segeberg, Schleswig-Holstein, Germany: (a) an interactive, computer-based tour in which visitors explore the exhibition and collect information of interest, or (b) immersion in 3D virtual reality with the HTC Vive Virtual Reality System.
VMCast: A VM-Assisted Stability Enhancing Solution for Tree-Based Overlay Multicast
Gu, Weidong; Zhang, Xinchang; Gong, Bin; Zhang, Wei; Wang, Lu
2015-01-01
Tree-based overlay multicast is an effective group communication method for media streaming applications. However, a group member's departure causes all of its descendants to be disconnected from the multicast tree for some time, which results in poor performance. This problem is difficult to address because the overlay multicast tree is intrinsically unstable. In this paper, we propose a novel stability-enhancing solution, VMCast, for tree-based overlay multicast. The solution uses two types of on-demand cloud virtual machines (VMs): multicast VMs (MVMs) and compensation VMs (CVMs). MVMs disseminate the multicast data, whereas CVMs offer streaming compensation. The VMs in the same cloud datacenter constitute a VM cluster. Each VM cluster is responsible for a service domain (VMSD), and each group member belongs to a specific VMSD. The data source delivers the multicast data to MVMs through a reliable path, and MVMs further disseminate the data to group members along domain overlay multicast trees. This approach structurally improves the stability of the overlay multicast tree. We further utilize CVM-based streaming compensation to enhance the stability of data distribution in the VMSDs. VMCast can be used as an extension to existing tree-based overlay multicast solutions, to provide better services for media streaming applications. We applied VMCast to two application instances (HMTP and HCcast). The results show that it clearly enhances the stability of the data distribution. PMID:26562152
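The structural idea, a VM-rooted tree in which the domain MVM absorbs the orphans left by a departing member, can be sketched with a tiny parent map. `DomainTree` and its re-parenting rule are an illustrative simplification, not the actual VMCast protocol:

```python
# Sketch of a VM-rooted domain multicast tree: when a member leaves, its
# orphaned children fall back to the domain MVM so the stream keeps flowing.
# (Illustrative structure only, not the actual VMCast protocol.)
class DomainTree:
    def __init__(self, mvm="MVM"):
        self.mvm = mvm
        self.parent = {}          # member -> parent (another member or the MVM)

    def join(self, member, parent=None):
        self.parent[member] = parent or self.mvm

    def leave(self, member):
        """Remove a member; its direct children re-attach to the MVM."""
        self.parent.pop(member, None)
        for child, p in self.parent.items():
            if p == member:
                self.parent[child] = self.mvm

tree = DomainTree()
tree.join("a")            # a under the MVM
tree.join("b", "a")       # b under a
tree.join("c", "b")       # c under b
tree.leave("a")           # b falls back to the MVM; c keeps b as parent
print(tree.parent)        # {'b': 'MVM', 'c': 'b'}
```

The stability gain comes from the root of every subtree being a VM rather than a departure-prone member; CVM-based compensation would additionally replay missed data to the re-attached members.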
Enhanced virtual microscopy for collaborative education.
Triola, Marc M; Holloway, William J
2011-01-26
Curricular reform efforts and a desire to use novel educational strategies that foster student collaboration are challenging the traditional microscope-based teaching of histology. Computer-based histology teaching tools and virtual microscopes (VM), computer-based digital slide viewers, have been shown to be effective and efficient educational strategies. We developed an open-source VM system based on the Google Maps engine to transform our histology education and introduce new teaching methods. This VM allows students and faculty to collaboratively create content and annotate slides with markers, and it is enhanced with social networking features to give the community of learners more control over the system. We currently have 1,037 slides in our VM system, comprising 39,386,941 individual JPEG files that occupy 349 gigabytes of server storage space. Of those slides, 682 are for general teaching and available to our students and the public; the remaining 355 slides are used for practical exams and have restricted access. The system has seen extensive use, with 289,352 unique slide views to date. Students viewed an average of 56.3 slides per month during the histology course and accessed the system at all hours of the day. Of the 621 annotations added to 126 slides, 26.2% were added by faculty and 73.8% by students. The use of the VM system reduced the amount of time faculty spent administering the course by 210 hours, but did not reduce the number of laboratory sessions or the number of required faculty. Laboratory sessions were reduced from three hours to two hours each due to the efficiencies in the workflow of the VM system. Our virtual microscope system has been an effective solution to the challenges facing traditional histopathology laboratories and the novel needs of our revised curriculum. The web-based system allowed us to empower learners to have greater control over their content, as well as the ability to work together in collaborative groups.
The VM system saved faculty time, and there was no significant difference in student performance on an identical practical exam before and after its adoption. We have made the source code of our VM freely available and encourage use of the publicly available slides on our website.
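JPEG counts of this magnitude follow directly from the tile-pyramid arithmetic of map-style viewers such as the Google Maps engine: each slide is cut into fixed-size tiles at every zoom level. A minimal sketch, assuming 256-pixel tiles and halving of resolution per level (both assumptions; the actual system's parameters are not stated here):

```python
# Sketch of deep-zoom pyramid arithmetic for a tiled virtual slide
# (256-px tiles and power-of-two levels are assumptions, not the NYU setup).
import math

def pyramid_tiles(width_px, height_px, tile=256):
    """Total tiles over all zoom levels, halving resolution per level."""
    total = 0
    w, h = width_px, height_px
    while True:
        total += math.ceil(w / tile) * math.ceil(h / tile)
        if w <= tile and h <= tile:
            break
        w, h = max(1, w // 2), max(1, h // 2)
    return total

# A single 100,000 x 60,000 px slide already needs well over 100,000 tiles:
print(pyramid_tiles(100_000, 60_000))
```

At roughly this tile count per large slide, a thousand-slide collection readily reaches tens of millions of JPEG files, consistent with the figures reported above.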
Implementing virtual microscopy improves outcomes in a hematology morphology course.
Brueggeman, Mauri S; Swinehart, Cheryl; Yue, Mary Jane; Conway-Klaassen, Janice M; Wiesner, Stephen M
2012-01-01
In this study, we evaluated the efficacy of virtual microscopy as the primary mode of laboratory instruction in undergraduate level clinical hematology teaching. Distance education (DE) has become a popular option for expanding education and optimizing expenses but continues to be controversial. The challenge of delivering an equitable curriculum to distant locations along with the need to preserve our slide collection directed our effort to digitize the slide sets used in our teaching laboratories. Students enrolled at two performance sites were randomly assigned to either traditional microscopy (TM) or virtual microscopy (VM) instruction. The VM group performed significantly better than the TM group. We anticipate that this approach will play a central role in the distributed delivery of hematology through distance education as new programs are initiated to address workforce shortage needs.
Design and performance of the virtualization platform for offline computing on the ATLAS TDAQ Farm
NASA Astrophysics Data System (ADS)
Ballestrero, S.; Batraneanu, S. M.; Brasolin, F.; Contescu, C.; Di Girolamo, A.; Lee, C. J.; Pozo Astigarraga, M. E.; Scannicchio, D. A.; Twomey, M. S.; Zaytsev, A.
2014-06-01
With the LHC collider at CERN currently going through the period of Long Shutdown 1, there is an opportunity to use the computing resources of the experiments' large trigger farms for other data processing activities. In the case of the ATLAS experiment, the TDAQ farm, consisting of more than 1500 compute nodes, is suitable for running Monte Carlo (MC) production jobs that are mostly CPU and not I/O bound. This contribution gives a thorough review of the design and deployment of a virtualized platform running on this computing resource and of its use to run large groups of CernVM based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to guarantee the security and the usability of the ATLAS private network, and to minimize interference with TDAQ's usage of the farm. OpenStack has been chosen to provide a cloud management layer. The experience gained in the last 3.5 months shows that the use of the TDAQ farm for the MC simulation contributes to the ATLAS data processing at the level of a large Tier-1 WLCG site, despite the opportunistic nature of the underlying computing resources being used.
Liu, Tian-Shuang; Li, Zhen-Chun; Chen, Xiao-Dong
2009-04-01
To investigate the interfacial bond and thermal compatibility between Mark II machining ceramic and Vita VM9 veneering porcelain. A bar-shaped specimen (30 mm x 15 mm x 1 mm in size) of a Mark II block was prepared, with a 0.5 mm-deep notch (perpendicular to the long axis of the specimen) at the middle of the bottom surface. The upper surface was veneered with 0.3 mm of VM9 dentin base porcelain. The specimen was then fractured at the notch site, and the fracture surface was examined under a scanning electron microscope (SEM) and an electron microprobe analyzer (EMPA) with an electron beam 1 micrometer in diameter. Another ten specimens (30 mm x 15 mm x 1.5 mm in size) were fabricated, and their thermal shock resistance temperature was tested. SEM observation showed a tight bond between the two materials, and EMPA results showed that penetration of Al from the Mark II block into the veneering porcelain, and of Ca from the veneering porcelain into the Mark II block, occurred after the sintering bake. The average thermal shock resistance temperature for specimens in this study was (194.0 ± 10.3) °C. Cracks were mainly distributed in the veneering porcelain. A chemical bond exists between the Mark II machining ceramic and Vita VM9 veneering porcelain, and there is good thermal compatibility between them.
Hanson, G Jay; Michalak, Gregory J; Childs, Robert; McCollough, Brian; Kurup, Anil N; Hough, David M; Frye, Judson M; Fidler, Jeff L; Venkatesh, Sudhakar K; Leng, Shuai; Yu, Lifeng; Halaweish, Ahmed F; Harmsen, W Scott; McCollough, Cynthia H; Fletcher, J G
2018-06-01
Single-energy low tube potential (SE-LTP) and dual-energy virtual monoenergetic (DE-VM) CT images both increase the conspicuity of hepatic lesions by increasing iodine signal. Our purpose was to compare the conspicuity of proven liver lesions, artifacts, and radiologist preferences in dose-matched SE-LTP and DE-VM images. Thirty-one patients with 72 proven liver lesions (21 benign, 51 malignant) underwent full-dose contrast-enhanced dual-energy CT (DECT). Half-dose images were obtained using single tube reconstruction of the dual-source SE-LTP projection data (80 or 100 kV), and by inserting noise into dual-energy projection data, with DE-VM images reconstructed from 40 to 70 keV. Three blinded gastrointestinal radiologists evaluated half-dose SE-LTP and DE-VM images, ranking and grading liver lesion conspicuity and diagnostic confidence (4-point scale) on a per-lesion basis. Image quality (noise, artifacts, sharpness) was evaluated, and overall image preference was ranked on per-patient basis. Lesion-to-liver contrast-to-noise ratio (CNR) was compared between techniques. Mean lesion size was 1.5 ± 1.2 cm. Across the readers, the mean conspicuity ratings for 40, 45, and 50 keV half-dose DE-VM images were superior compared to other half-dose image sets (p < 0.0001). Per-lesion diagnostic confidence was similar between half-dose SE-LTP compared to half-dose DE-VM images (p ≥ 0.05; 1.19 vs. 1.24-1.32). However, SE-LTP images had less noise and artifacts and were sharper compared to DE-VM images less than 70 keV (p < 0.05). On a per-patient basis, radiologists preferred SE-LTP images the most and preferred 40-50 keV the least (p < 0.0001). Lesion CNR was also higher in SE-LTP images than DE-VM images (p < 0.01). For the same applied dose level, liver lesions were more conspicuous using DE-VM compared to SE-LTP; however, SE-LTP images were preferred more than any single DE-VM energy level, likely due to lower noise and artifacts.
The Virtual Mission - A step-wise approach to large space missions
NASA Technical Reports Server (NTRS)
Hansen, Elaine; Jones, Morgan; Hooke, Adrian; Pomphrey, Richard
1992-01-01
Attention is given to the Virtual Mission (VM) concept, wherein multiple scientific instruments will be on different platforms, in different orbits, operated from different control centers, at different institutions, and reporting to different user groups. The VM concept enables NASA's science and application users to accomplish their broad science goals with a fleet made up of smaller, more focused spacecraft and to alleviate the difficulties involved with single, large, complex spacecraft. The concept makes possible the stepwise 'go-as-you-pay' extensible approach recommended by Augustine (1990). It enables scientists to mix and match the use of many smaller satellites in novel ways to respond to new scientific ideas and needs.
NASA Astrophysics Data System (ADS)
Lee, Ho Ki; Baek, Kye Hyun; Shin, Kyoungsub
2017-06-01
As semiconductor devices are scaled down to sub-20 nm, the process window of plasma etching becomes extremely small, so process drift or shift becomes more significant. This study addresses a typical process drift issue caused by the erosion of consumable parts over time and provides a feasible solution using virtual metrology (VM) based wafer-to-wafer control. Since the erosion of a shower head has a center-to-edge area dependency, critical dimensions (CDs) at the wafer center and edge areas reverse over time. That CD trend is successfully estimated on a wafer-to-wafer basis by a partial least squares (PLS) model which combines variables from optical emission spectroscopy (OES), a VI-probe and equipment state gauges. R² of the PLS model reaches 0.89, and its prediction performance is confirmed in a mass production line. As a result, the model can be exploited as a VM for wafer-to-wafer control. With the VM, an advanced process control (APC) strategy is implemented to solve the CD drift. The 3σ of CD across the wafer is improved from 1.3–2.9 nm to 0.79–1.7 nm. We hope the results introduced in this paper will contribute to accelerating the implementation of VM-based APC strategies in the semiconductor industry.
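A classic way to realize such wafer-to-wafer APC is an EWMA run-to-run controller driven by the VM's per-wafer CD estimate. The control law, gains, and numbers below are a generic textbook sketch, not the controller described in the paper:

```python
# Generic EWMA run-to-run controller sketch for wafer-to-wafer CD control.
# The VM supplies a CD estimate after each wafer; the controller trims the
# recipe to cancel slow drift. Gains and the process model are illustrative.
def run_to_run(cd_estimates, target, gain=0.3, sensitivity=1.0):
    """Yield a recipe offset after each wafer's VM-estimated CD."""
    disturbance = 0.0
    for cd in cd_estimates:
        error = cd - target
        disturbance = (1 - gain) * disturbance + gain * error  # EWMA of drift
        yield round(-disturbance / sensitivity, 3)             # next wafer's trim

# Drifting CD estimates (nm) coming from the virtual metrology model:
drift = [20.4, 20.6, 20.9, 21.1, 21.4]
print(list(run_to_run(drift, target=20.0)))
```

The trim grows steadily against the drift, which is the behaviour needed to counter slow consumable-part erosion between preventive maintenance events.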
The ALICE Software Release Validation cluster
NASA Astrophysics Data System (ADS)
Berzano, D.; Krzewicki, M.
2015-12-01
One of the most important steps of the software lifecycle is Quality Assurance: this process comprises both automatic tests and manual reviews, and all of them must pass successfully before the software is approved for production. Some tests, such as source code static analysis, are executed on a single dedicated service: in High Energy Physics, a full simulation and reconstruction chain on a distributed computing environment, backed with a sample "golden" dataset, is also necessary for the quality sign-off. The ALICE experiment uses dedicated and virtualized computing infrastructures for the Release Validation in order not to taint the production environment (i.e. CVMFS and the Grid) with non-validated software and validation jobs: the ALICE Release Validation cluster is a disposable virtual cluster appliance based on CernVM and the Virtual Analysis Facility, capable of deploying on demand, with a single command, a dedicated virtual HTCondor cluster with an automatically scalable number of virtual workers on any cloud supporting the standard EC2 interface. Input and output data are externally stored on EOS, and a dedicated CVMFS service is used to provide the software to be validated. We will show how the Release Validation cluster's deployment and disposal are completely transparent for the Release Manager, who simply triggers the validation from the ALICE build system's web interface. CernVM 3, based entirely on CVMFS, makes it possible to boot any past snapshot of the operating system: we will show how this allows us to certify each ALICE software release for an exact CernVM snapshot, addressing the problem of Long-Term Data Preservation by ensuring a consistent environment for software execution and data reprocessing in the future.
Medial prefrontal pathways for the contextual regulation of extinguished fear in humans
Åhs, Fredrik; Kragel, Philip A.; Zielinski, David J.; Brady, Rachael; LaBar, Kevin S.
2015-01-01
The maintenance of anxiety disorders is thought to depend, in part, on deficits in extinction memory, possibly due to reduced contextual control of extinction that leads to fear renewal. Animal studies suggest that the neural circuitry responsible for fear renewal includes the hippocampus, amygdala, and dorsomedial (dmPFC) and ventromedial (vmPFC) prefrontal cortex. However, the neural mechanisms of context-dependent fear renewal in humans remain poorly understood. We used functional magnetic resonance imaging (fMRI), combined with psychophysiology and immersive virtual reality, to elucidate how the hippocampus, amygdala, and dmPFC and vmPFC interact to drive the context-dependent renewal of extinguished fear. Healthy human participants encountered dynamic fear-relevant conditioned stimuli (CSs) while navigating through 3-D virtual reality environments in the MRI scanner. Conditioning and extinction were performed in two different virtual contexts. Twenty-four hours later, participants were exposed to the CSs without reinforcement while navigating through both contexts in the MRI scanner. Participants showed enhanced skin conductance responses (SCRs) to the previously-reinforced CS+ in the acquisition context on Day 2, consistent with fear renewal, and sustained responses in the dmPFC. In contrast, participants showed low SCRs to the CSs in the extinction context on Day 2, consistent with extinction recall, and enhanced vmPFC activation to the non-reinforced CS−. Structural equation modeling revealed that the dmPFC fully mediated the effect of the hippocampus on right amygdala activity during fear renewal, whereas the vmPFC partially mediated the effect of the hippocampus on right amygdala activity during extinction recall. These results indicate dissociable contextual influences of the hippocampus on prefrontal pathways, which, in turn, determine the level of reactivation of fear associations. PMID:26220745
Evaluation of virtual microscopy in medical histology teaching.
Mione, Sylvia; Valcke, Martin; Cornelissen, Maria
2013-01-01
Histology stands as a major discipline in the life science curricula, and the practice of teaching it is based on theoretical didactic strategies along with practical training. Traditionally, students achieve practical competence in this subject by learning optical microscopy. Today, students can use newer information and communication technologies in the study of digital microscopic images. A virtual microscopy program was recently introduced at Ghent University. Since little empirical evidence is available concerning the impact of virtual microscopy (VM) versus optical microscopy (OM) on the acquisition of histology knowledge, this study was set up in the Faculty of Medicine and Health Sciences. A pretest-posttest cross-over design was adopted. In the first phase, the experiment yielded two groups in a total population of 199 students, Group 1 performing the practical sessions with OM versus Group 2 performing the same sessions with VM. In the second phase, the research subjects switched conditions. The prior knowledge level of all research subjects was assessed with a pretest. Knowledge acquisition was measured with a posttest after each phase (T1 and T2). Analysis of covariance was carried out to study the differential gain in knowledge at T1 and T2, considering the possible differences in prior knowledge at the start of the study. The results pointed to non-significant differences at T1 and at T2. This supports the assumption that the acquisition of histology knowledge is independent of the microscopy representation mode (VM versus OM) of the learning material. The conclusion that VM is equivalent to OM offers new directions in view of ongoing innovations in medical education technology. Copyright © 2013 American Association of Anatomists.
NASA Astrophysics Data System (ADS)
Timm, S.; Cooper, G.; Fuess, S.; Garzoglio, G.; Holzman, B.; Kennedy, R.; Grassano, D.; Tiradani, A.; Krishnamurthy, R.; Vinayagam, S.; Raicu, I.; Wu, H.; Ren, S.; Noh, S.-Y.
2017-10-01
The Fermilab HEPCloud Facility Project aims to extend the current Fermilab facility interface to provide transparent access to disparate resources, including commercial and community clouds, grid federations, and HPC centers. This facility enables experiments to perform the full spectrum of computing tasks, including data-intensive simulation and reconstruction. We have evaluated the use of the commercial cloud to provide elasticity in response to peaks of demand without overprovisioning local resources. Full-scale data-intensive workflows have been successfully completed on Amazon Web Services for two High Energy Physics experiments, CMS and NOνA, at the scale of 58,000 simultaneous cores. This paper describes the significant improvements that were made to the virtual machine provisioning system, the code caching system, and the data movement system to accomplish this work. The virtual image provisioning and contextualization service was extended to multiple AWS regions and to support experiment-specific data configurations. A prototype Decision Engine was written to determine the optimal availability zone and instance type to run on, minimizing cost and job interruptions. We have deployed a scalable on-demand caching service to deliver code and database information to jobs running on the commercial cloud. It uses the frontier-squid server and CERN VM File System (CVMFS) clients on EC2 instances and utilizes various services provided by AWS to build the infrastructure (stack). We discuss the architecture and load-testing benchmarks on the squid servers. We also describe various approaches that were evaluated to transport experimental data to and from the cloud, and the optimal solutions that were used for the bulk of the data transport. Finally, we summarize lessons learned from this scale test, and our future plans to expand and improve the Fermilab HEPCloud Facility.
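The Decision Engine's choice of availability zone and instance type can be pictured as constrained cost minimization over a table of offers. The offer tuples, prices, and the interruption-rate threshold below are made-up illustrations, not real AWS data or the HEPCloud implementation:

```python
# Sketch of a decision-engine choice: pick the (zone, instance type) that
# minimizes per-core cost while capping the expected interruption rate.
# All offers and thresholds are invented for illustration.
def choose(offers, cores_needed, max_interrupt=0.05):
    """offers: list of (zone, itype, cores, usd_per_core_hour, interrupt_rate)."""
    feasible = [o for o in offers
                if o[2] >= cores_needed and o[4] < max_interrupt]
    return min(feasible, key=lambda o: o[3], default=None)

offers = [
    ("us-east-1a", "c4.4xlarge", 16, 0.021, 0.02),
    ("us-east-1b", "c4.4xlarge", 16, 0.018, 0.08),   # cheapest, but too interruptible
    ("us-west-2a", "m4.4xlarge", 16, 0.019, 0.01),
]
best = choose(offers, cores_needed=16)
print(best[0], best[1])   # us-west-2a m4.4xlarge
```

A production engine would additionally weight spot-price history and job checkpointing cost, but the feasibility-filter-then-minimize shape is the same.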
Quinn, Kylie M; Fox, Annette; Harland, Kim L; Russ, Brendan E; Li, Jasmine; Nguyen, Thi H O; Loh, Liyen; Olshanksy, Moshe; Naeem, Haroon; Tsyganov, Kirill; Wiede, Florian; Webster, Rosela; Blyth, Chantelle; Sng, Xavier Y X; Tiganis, Tony; Powell, David; Doherty, Peter C; Turner, Stephen J; Kedzierska, Katherine; La Gruta, Nicole L
2018-06-19
Age-associated decreases in primary CD8+ T cell responses occur, in part, due to direct effects on naive CD8+ T cells that reduce intrinsic functionality, but the precise nature of this defect remains undefined. Aging also causes accumulation of antigen-naive but semi-differentiated "virtual memory" (TVM) cells, but their contribution to age-related functional decline is unclear. Here, we show that TVM cells are poorly proliferative in aged mice and humans, despite being highly proliferative in young individuals, while conventional naive T cells (TN cells) retain proliferative capacity in both aged mice and humans. Adoptive transfer experiments in mice illustrated that naive CD8 T cells can acquire a proliferative defect imposed by the aged environment, but age-related proliferative dysfunction could not be rescued by a young environment. Molecular analyses demonstrate that aged TVM cells exhibit a profile consistent with senescence, marking an observation of senescence in an antigenically naive T cell population. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
2014-11-20
techniques to defend against stealthy malware, i.e., rootkits. For example, we have been developing a new virtualization-based security service called AirBag ...for mobile devices. AirBag is a virtualization-based system that enables dynamic switching of (guest) Android images in one VM, with one image
ERIC Educational Resources Information Center
Zacharia, Zacharias C.; de Jong, Ton
2014-01-01
This study investigates whether Virtual Manipulatives (VM) within a Physical Manipulatives (PM)-oriented curriculum affect conceptual understanding of electric circuits and related experimentation processes. A pre-post comparison study randomly assigned 194 undergraduates in an introductory physics course to one of five conditions: three…
Susceptibility to social pressure following ventromedial prefrontal cortex damage
Chen, Kuan-Hua; Rusch, Michelle L.; Dawson, Jeffrey D.; Rizzo, Matthew; Anderson, Steven W.
2015-01-01
Social pressure influences human behavior including risk taking, but the psychological and neural underpinnings of this process are not well understood. We used the human lesion method to probe the role of ventromedial prefrontal cortex (vmPFC) in resisting adverse social pressure in the presence of risk. Thirty-seven participants (11 with vmPFC damage, 12 with brain damage outside the vmPFC and 14 without brain damage) were tested in driving simulator scenarios requiring left-turn decisions across oncoming traffic with varying time gaps between the oncoming vehicles. Social pressure was applied by a virtual driver who honked aggressively from behind. Participants with vmPFC damage were more likely to select smaller and potentially unsafe gaps under social pressure, while gap selection by the comparison groups did not change under social pressure. Participants with vmPFC damage also showed prolonged elevated skin conductance responses (SCR) under social pressure. Comparison groups showed similar initial elevated SCR, which then declined prior to making left-turn decisions. The findings suggest that the vmPFC plays an important role in resisting explicit and immediately present social pressure with potentially negative consequences. The vmPFC appears to contribute to the regulation of emotional responses and the modulation of decision making to optimize long-term outcomes. PMID:25816815
The StratusLab cloud distribution: Use-cases and support for scientific applications
NASA Astrophysics Data System (ADS)
Floros, E.
2012-04-01
The StratusLab project is integrating an open cloud software distribution that enables organizations to set up and provide their own private or public IaaS (Infrastructure as a Service) computing clouds. The StratusLab distribution capitalizes on popular infrastructure virtualization solutions like KVM, the OpenNebula virtual machine manager, the Claudia service manager, and the SlipStream deployment platform, which are further enhanced and expanded with additional components developed within the project. The StratusLab distribution covers the core aspects of a cloud IaaS architecture, namely Computing (life-cycle management of virtual machines), Storage, Appliance management, and Networking. The resulting software stack provides a packaged turn-key solution for deploying cloud computing services. The cloud computing infrastructures deployed using StratusLab can support a wide range of scientific and business use cases. Grid computing has been the primary use case pursued by the project, and for this reason the initial priority has been support for the deployment and operation of fully virtualized production-level grid sites; a goal that has already been achieved by operating such a site as part of EGI's (European Grid Initiative) pan-European grid infrastructure. In this area the project is currently working to provide non-trivial capabilities such as elastic and autonomic management of grid site resources. Although grid computing has been the motivating paradigm, StratusLab's cloud distribution can support a wider range of use cases. In this direction, we have developed and currently provide support for setting up general-purpose computing solutions like Hadoop, MPI, and Torque clusters. Regarding scientific applications, the project is collaborating closely with the bioinformatics community to prepare VM appliances and deploy optimized services for bioinformatics applications.
In a similar manner, additional scientific disciplines like Earth Science can take advantage of StratusLab cloud solutions. Interested users are welcome to join StratusLab's user community by getting access to the reference cloud services deployed by the project and offered to the public.
Implementation of dual-energy technique for virtual monochromatic and linearly mixed CBCTs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Hao; Giles, William; Ren Lei
Purpose: To implement a dual-energy imaging technique for virtual monochromatic (VM) and linearly mixed (LM) cone beam CTs (CBCTs) and to demonstrate their potential applications in metal artifact reduction and contrast enhancement in image-guided radiation therapy (IGRT). Methods: A bench-top CBCT system was used to acquire 80 kVp and 150 kVp projections, with an additional 0.8 mm tin filtration. To implement the VM technique, these projections were first decomposed into acrylic and aluminum basis material projections to synthesize VM projections, which were then used to reconstruct VM CBCTs. The effect of VM CBCT on metal artifact reduction was evaluated with an in-house titanium-BB phantom. The optimal VM energy to maximize contrast-to-noise ratio (CNR) for iodine contrast and minimize beam hardening in VM CBCT was determined using a water phantom containing two iodine concentrations. The LM technique was implemented by linearly combining the low-energy (80 kVp) and high-energy (150 kVp) CBCTs. The dose partitioning between low-energy and high-energy CBCTs was varied (20%, 40%, 60%, and 80% for low-energy) while keeping total dose approximately equal to single-energy CBCTs, measured using an ion chamber. Noise levels and CNRs for four tissue types were investigated for dual-energy LM CBCTs in comparison with single-energy CBCTs at 80, 100, 125, and 150 kVp. Results: The VM technique showed substantial reduction of metal artifacts at 100 keV with a 40% reduction in the background standard deviation compared to a 125 kVp single-energy scan of equal dose. The VM energy to maximize CNR for both iodine concentrations and minimize beam hardening in the metal-free object was 50 keV and 60 keV, respectively. The difference of average noise levels measured in the phantom background was 1.2% between dual-energy LM CBCTs and equivalent-dose single-energy CBCTs. CNR values in the LM CBCTs of any dose partitioning are better than those of 150 kVp single-energy CBCTs.
The average CNR for four tissue types with 80% dose fraction at low-energy showed 9.0% and 4.1% improvement relative to 100 kVp and 125 kVp single-energy CBCTs, respectively. CNRs for low-contrast objects improved as dose partitioning was more heavily weighted toward low-energy (80 kVp) for LM CBCTs. Conclusions: Dual-energy CBCT imaging techniques were implemented to synthesize VM CBCT and LM CBCTs. VM CBCT was effective at achieving metal artifact reduction. Depending on the dose-partitioning scheme, LM CBCT demonstrated the potential to improve CNR for low contrast objects compared to single-energy CBCT acquired with equivalent dose.
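The LM technique described above linearly combines the low- and high-energy reconstructions. A toy per-pixel sketch, assuming the mixing weight simply tracks the low-energy dose fraction (a simplification; the paper does not give its exact weighting):

```python
# Illustrative linear-mixing step for one pixel of a dual-energy LM CBCT.
# The weighting-by-dose-fraction assumption and the HU-like values are
# invented for illustration.

def linear_mix(low_kvp_px, high_kvp_px, low_dose_fraction):
    """Weighted sum of low- and high-energy pixel values."""
    w = low_dose_fraction
    return w * low_kvp_px + (1.0 - w) * high_kvp_px

# e.g. an 80 kVp pixel of 100.0 and a 150 kVp pixel of 60.0,
# with 80% of the dose given to the low-energy scan
lm_pixel = linear_mix(100.0, 60.0, 0.8)
```

In practice the mix is applied to whole reconstructed volumes, and the dose fraction is one of the partitioning schemes (20%-80%) evaluated in the study.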
Jakobsen, Markus Due; Sundstrup, Emil; Andersen, Christoffer H; Bandholm, Thomas; Thorborg, Kristian; Zebis, Mette K; Andersen, Lars L
2012-12-01
While elastic resistance training targeting the upper body is effective for strength training, the effect of elastic resistance training on lower body muscle activity remains questionable. The purpose of this study was to evaluate the EMG-angle relationship of the quadriceps muscle during 10-RM knee-extensions performed with elastic tubing and an isotonic strength training machine. 7 women and 9 men aged 28-67 years (mean age 44 and 41 years, respectively) participated. Electromyographic (EMG) activity was recorded in 10 muscles during the concentric and eccentric contraction phases of a knee extension exercise performed with elastic tubing and in a training machine, and normalized to maximal voluntary isometric contraction (MVC) EMG (nEMG). Knee joint angle was measured during the exercises using electronic inclinometers (range of motion 0-90°). When comparing the machine and elastic resistance exercises there were no significant differences in peak EMG of the rectus femoris (RF), vastus lateralis (VL), and vastus medialis (VM) during the concentric contraction phase. However, during the eccentric phase, peak EMG was significantly higher (p<0.01) in RF and VM when performing knee extensions using the training machine. In VL and VM the EMG-angle pattern was different between the two training modalities (significant angle by exercise interaction). When using elastic resistance, the EMG-angle pattern peaked towards full knee extension (0°), whereas the angle at peak EMG occurred closer to the knee flexion position (90°) during the machine exercise. Perceived loading (Borg CR10) was similar during knee extensions performed with elastic tubing (5.7±0.6) compared with knee extensions performed in the training machine (5.9±0.5).
Knee extensions performed with elastic tubing induce similarly high (>70% nEMG) quadriceps muscle activity during the concentric contraction phase, but slightly lower activity during the eccentric contraction phase, compared with knee extensions performed using an isotonic training machine. During the concentric contraction phase the two conditions displayed reciprocal EMG-angle patterns over the range of motion.
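The nEMG values reported above express exercise EMG relative to the maximal voluntary isometric contraction. A one-line sketch of that normalization, with invented amplitudes:

```python
# Toy illustration of EMG normalization to %MVC (nEMG). The amplitudes
# below are invented, not data from the study.

def normalized_emg(exercise_emg_uv, mvc_emg_uv):
    """Express exercise EMG amplitude as a percentage of MVC EMG."""
    return 100.0 * exercise_emg_uv / mvc_emg_uv

# e.g. a vastus medialis peak of 450 uV against an MVC peak of 600 uV
nemg = normalized_emg(450.0, 600.0)
```

A value above 70 would count as "high" muscle activity by the threshold the abstract uses.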
Computer network defense system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urias, Vincent; Stout, William M. S.; Loverro, Caleb
A method and apparatus for protecting virtual machines. A computer system creates a copy of a group of the virtual machines from an operating network in a deception network, forming a group of cloned virtual machines in the deception network, when the group of virtual machines is accessed by an adversary. The computer system creates an emulation of components from the operating network in the deception network. The components are accessible by the group of cloned virtual machines as if the cloned virtual machines were in the operating network. The computer system then moves the network connections used by the adversary from the group of virtual machines in the operating network to the group of cloned virtual machines, protecting the virtual machines from actions performed by the adversary.
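The workflow in the abstract (clone on adversary access, emulate reachable components, move the live connection) can be sketched in a few lines. All class and VM names here are hypothetical; this is a schematic of the control logic, not the patented implementation:

```python
# Schematic sketch of the deception-network workflow: on detected
# adversary access, clone the touched VMs into the deception network and
# redirect the adversary's connection there.

class Defense:
    def __init__(self):
        self.operating = {"web-01", "db-01"}   # VMs in the operating network
        self.deception = set()                 # cloned VMs
        self.connections = {}                  # connection id -> network name

    def on_adversary_access(self, accessed_vms, conn_id):
        for vm in accessed_vms & self.operating:
            # clone the VM into the deception network
            self.deception.add(vm + "-clone")
        # (emulation of components reachable from the clones would be
        #  set up here, so the clones behave as if still in production)
        # move the adversary's live connection to the clones
        self.connections[conn_id] = "deception"

d = Defense()
d.on_adversary_access({"web-01"}, "conn-42")
```

The key property, as in the abstract, is that the operating VMs are never touched again by the redirected connection.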
Software-defined optical network for metro-scale geographically distributed data centers.
Samadi, Payman; Wen, Ke; Xu, Junjie; Bergman, Keren
2016-05-30
The emergence of cloud computing and big data has rapidly increased the deployment of small and mid-sized data centers. Enterprises and cloud providers require an agile network among these data centers to support application reliability and flexible scalability. We present a software-defined inter-data-center network to enable on-demand scale-out of data centers on a metro-scale optical network. The architecture consists of a combined space/wavelength switching platform and a Software-Defined Networking (SDN) control plane equipped with a wavelength and routing assignment module. It enables establishing transparent and bandwidth-selective connections from L2/L3 switches on demand. The architecture is evaluated in a testbed consisting of 3 data centers, 5-25 km apart. We successfully demonstrated end-to-end bulk data transfer and Virtual Machine (VM) migrations across data centers with less than 100 ms connection setup time and close to full link capacity utilization.
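The SDN control plane above includes a wavelength and routing assignment module. As an illustration of what such a module must do, here is a first-fit wavelength-assignment sketch under the wavelength-continuity constraint (the heuristic is a common textbook choice; the paper does not state its actual algorithm):

```python
# First-fit wavelength assignment sketch: pick the lowest-indexed
# wavelength that is free on every link of the chosen path. Links are
# keyed by endpoint pairs; all topology data below is invented.

def first_fit_wavelength(path_links, in_use, num_wavelengths=8):
    """Return the lowest wavelength free on all links, or None if blocked."""
    for w in range(num_wavelengths):
        if all(w not in in_use.get(link, set()) for link in path_links):
            for link in path_links:              # reserve it end to end
                in_use.setdefault(link, set()).add(w)
            return w
    return None   # connection blocked: no common free wavelength

usage = {("DC1", "DC2"): {0}, ("DC2", "DC3"): {0, 1}}
w = first_fit_wavelength([("DC1", "DC2"), ("DC2", "DC3")], usage)
```

With wavelengths 0 and 1 partially occupied, the request is carried on wavelength 2 across both hops.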
Computationally efficient models of neuromuscular recruitment and mechanics.
Song, D; Raphael, G; Lan, N; Loeb, G E
2008-06-01
We have improved the stability and computational efficiency of a physiologically realistic, virtual muscle (VM 3.*) model (Cheng et al 2000 J. Neurosci. Methods 101 117-30) by a simpler structure of lumped fiber types and a novel recruitment algorithm. In the new version (VM 4.0), the mathematical equations are reformulated into state-space representation and structured into a CMEX S-function in SIMULINK. A continuous recruitment scheme approximates the discrete recruitment of slow and fast motor units under physiological conditions. This makes it possible to predict force output during smooth recruitment and derecruitment without having to simulate explicitly a large number of independently recruited units. We removed the intermediate state variable, effective length (Leff), which had been introduced to model the delayed length dependency of the activation-frequency relationship, but which had little effect and could introduce instability under physiological conditions of use. Both of these changes greatly reduce the number of state variables with little loss of accuracy compared to the original VM. The performance of VM 4.0 was validated by comparison with VM 3.1.5 for both single-muscle force production and a multi-joint task. The improved VM 4.0 model is more suitable for the analysis of neural control of movements and for design of prosthetic systems to restore lost or impaired motor functions. VM 4.0 is available via the internet and includes options to use the original VM model, which remains useful for detailed simulations of single motor unit behavior.
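VM 4.0's key change is replacing discrete motor-unit recruitment with a continuous scheme. A toy sketch of that idea, using a sigmoid so total activation of each lumped fiber type rises smoothly with neural drive (the thresholds and sigmoid shape are illustrative assumptions, not the published model equations):

```python
import math

# Toy continuous-recruitment sketch: activation of a lumped fiber type
# rises smoothly with neural drive past a recruitment threshold, instead
# of simulating many independently recruited discrete units.

def recruited_fraction(drive, threshold, steepness=10.0):
    """Smooth 0..1 recruitment of a lumped fiber type vs. neural drive."""
    return 1.0 / (1.0 + math.exp(-steepness * (drive - threshold)))

# At moderate drive, slow units (low threshold) are nearly fully
# recruited while fast units (high threshold) are mostly silent.
slow = recruited_fraction(0.5, threshold=0.2)
fast = recruited_fraction(0.5, threshold=0.7)
```

Because the curve is differentiable, force output during smooth recruitment and derecruitment can be predicted without stepping through discrete unit-by-unit recruitment, which is the efficiency gain the abstract describes.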
Static Memory Deduplication for Performance Optimization in Cloud Computing.
Jia, Gangyong; Han, Guangjie; Wang, Hao; Yang, Xuan
2017-04-27
In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand for memory capacity and a subsequent increase in energy consumption in the cloud. Insufficient memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques, which reduce memory demand through page sharing, are being adopted. However, such techniques suffer from overhead in terms of the number of online comparisons required for deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce memory capacity requirements and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces memory capacity requirements and improves performance. We demonstrate that, compared to other approaches, the cost in terms of response time is negligible.
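The core of page-sharing deduplication, as restricted to the code segment above, is detecting pages with identical contents and keeping a single copy. A minimal sketch using content hashing (the page size and byte patterns are toy values; SMD operates on real VM memory offline, not Python byte strings):

```python
import hashlib

# Illustrative page-deduplication sketch: hash fixed-size pages of a
# "code segment" and keep only one physical copy per distinct content.

PAGE = 4096

def dedup(code_segment):
    """Return (unique_pages, index) where index maps digest -> page slot."""
    pages, index = [], {}
    for off in range(0, len(code_segment), PAGE):
        page = code_segment[off:off + PAGE]
        digest = hashlib.sha256(page).hexdigest()
        if digest not in index:          # first copy of this content
            index[digest] = len(pages)
            pages.append(page)
        # later copies would simply reference index[digest]
    return pages, index

# 3 identical pages plus 1 distinct page -> only 2 physical pages needed
segment = b"\x90" * (PAGE * 3) + b"\xcc" * PAGE
unique_pages, page_index = dedup(segment)
```

SMD's point is that this detection runs offline, so the hashing and comparison cost is not paid on the response-time critical path.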
X3DOM as Carrier of the Virtual Heritage
NASA Astrophysics Data System (ADS)
Jung, Y.; Behr, J.; Graf, H.
2011-09-01
Virtual Museums (VM) are a new model of communication that aims at creating a personalized, immersive, and interactive way to enhance our understanding of the world around us. The term "VM" is a shortcut that comprehends various types of digital creations. One of the carriers for communicating virtual heritage at the future-internet level, as a de facto standard, is the browser front-end presenting the content and assets of museums. A major driving technology for the documentation and presentation of heritage-driven media is real-time 3D content, imposing new strategies for web inclusion. 3D content must become a first-class web media type that can be created, modified, and shared in the same way text, images, audio, and video are handled on the web right now. A new integration model based on DOM integration into the web browser's architecture opens up new possibilities for declarative 3D content on the web and paves the way for new application scenarios for virtual heritage at the future-internet level. With special regard to the X3DOM project as an enabling technology for declarative 3D in HTML, this paper describes application scenarios and analyses the technological requirements for efficient presentation and manipulation of virtual heritage assets on the web.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harding, Samuel F.; Romero-Gomez, Pedro D. J.; Richmond, Marshall C.
Standards such as PTC-18 and IEC-41 provide recommendations for best practices in the installation of current meters for measuring fluid flow in closed conduits. Both of these standards refer to the requirements of ISO Standard 3354 for cases where the velocity distribution is assumed to be regular and the flow steady. Due to the nature of the short converging intakes of Kaplan hydroturbines, these assumptions may be invalid if current meters are intended to be used to characterize turbine flows. In this study, we examine a combination of measurement guidelines from both ISO standards by means of virtual current meters (VCM) set up over a simulated hydroturbine flow field. To this purpose, a computational fluid dynamics (CFD) model was developed to model the velocity field of a short converging intake of the Ice Harbor Dam on the Snake River, in the State of Washington. The detailed geometry and resulting wake of the submersible traveling screen (STS) at the first gate slot was of particular interest in the development of the CFD model using a detached eddy simulation (DES) turbulence solution. An array of virtual point velocity measurements was extracted from the resulting velocity field to simulate VCM at two virtual measurement (VM) locations at different distances downstream of the STS. The discharge through each bay was calculated from the VM using the graphical integration solution to the velocity-area method. This method of representing practical velocimetry techniques in a numerical flow field has been successfully used in a range of marine and conventional hydropower applications. A sensitivity analysis was performed to observe the effect of the VCM array resolution on the discharge error. The downstream VM section required 11–33% fewer VCM in the array than the upstream VM location to achieve a given discharge error.
In general, more instruments were required to quantify the discharge at high levels of accuracy when the STS was introduced, because of the increased spatial variability of the flow velocity.
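The velocity-area method used above approximates discharge by summing each meter's point velocity times the cross-sectional area that meter represents. A minimal numerical sketch (the grid, velocities, and areas are invented; the study uses graphical integration over a real VCM array):

```python
# Velocity-area method sketch: Q = sum(v_i * A_i) over the measurement
# grid, in m^3/s. All values below are illustrative.

def discharge(velocities, cell_areas):
    """Approximate discharge from point velocities and represented areas."""
    return sum(v * a for v, a in zip(velocities, cell_areas))

v = [1.2, 1.5, 1.4, 1.1]   # m/s at four virtual current meters
a = [2.0, 2.0, 2.0, 2.0]   # m^2 of the bay cross-section per meter
q = discharge(v, a)
```

Refining the array (more meters, smaller cells) reduces the discharge error, which is exactly the sensitivity the study quantifies against VCM count.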
Modeling Constellation Virtual Missions Using the Vdot(Trademark) Process Management Tool
NASA Technical Reports Server (NTRS)
Hardy, Roger; ONeil, Daniel; Sturken, Ian; Nix, Michael; Yanez, Damian
2011-01-01
The authors have identified a software tool suite that will support NASA's Virtual Mission (VM) effort. This is accomplished by transforming a spreadsheet database of mission events, task inputs and outputs, timelines, and organizations into process visualization tools and a Vdot process management model that includes embedded analysis software as well as requirements and information related to data manipulation and transfer. This paper describes the progress to date, the application of the Virtual Mission not only to Constellation but to other architectures, and the pertinence to other aerospace applications. Vdot's intuitive visual interface brings VMs to life by turning static, paper-based processes into active, electronic processes that can be deployed, executed, managed, verified, and continuously improved. A VM can be executed using a computer-based, human-in-the-loop, real-time format, under the direction and control of the NASA VM Manager. Engineers in the various disciplines will not have to be Vdot-proficient but rather can fill out online, Excel-type databases with the mission information discussed above. The authors' tool suite converts this database into several process visualization tools for review and into Microsoft Project, which can be imported directly into Vdot. Many tools can be embedded directly into Vdot, and when the necessary data/information is received from a preceding task, the analysis can be initiated automatically. Other NASA analysis tools are too complex for this process, but Vdot automatically notifies the tool user that the data has been received and analysis can begin. The VM can be simulated end-to-end using the authors' tool suite.
The planned approach for the Vdot-based process simulation is to generate the process model from a database. Other advantages of this semi-automated approach are that participants can be geographically remote and that, after refining the process models via human-in-the-loop simulation, the system can evolve into a process management server for the actual process.
Review of Enabling Technologies to Facilitate Secure Compute Customization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aderholdt, Ferrol; Caldwell, Blake A; Hicks, Susan Elaine
High performance computing environments are often used for a wide variety of workloads ranging from simulation, data transformation and analysis, and complex workflows, to name just a few. These systems may process data for a variety of users, often requiring strong separation between job allocations. There are many challenges to establishing these secure enclaves within the shared infrastructure of high-performance computing (HPC) environments. The isolation mechanisms in the system software are the basic building blocks for enabling secure compute enclaves. There are a variety of approaches, and the focus of this report is to review the different virtualization technologies that facilitate the creation of secure compute enclaves. The report reviews current operating system (OS) protection mechanisms and modern virtualization technologies to better understand their performance/isolation properties. We also examine the feasibility of running "virtualized" computing resources as non-privileged users, and providing controlled administrative permissions for standard users running within a virtualized context. Our examination includes technologies such as Linux containers (LXC [32], Docker [15]) and full virtualization (KVM [26], Xen [5]). We categorize these different approaches to virtualization into two broad groups: OS-level virtualization and system-level virtualization. OS-level virtualization uses containers to allow a single OS kernel to be partitioned to create Virtual Environments (VE), e.g., LXC. The resources within the host's kernel are only virtualized in the sense of separate namespaces. In contrast, system-level virtualization uses hypervisors to manage multiple OS kernels and virtualize the physical resources (hardware) to create Virtual Machines (VM), e.g., Xen, KVM.
This terminology of VE and VM, detailed in Section 2, is used throughout the report to distinguish between the two different approaches to providing virtualized execution environments. As part of our technology review we analyzed several current virtualization solutions to assess their vulnerabilities. This included a review of common vulnerabilities and exposures (CVEs) for Xen, KVM, LXC and Docker to gauge their susceptibility to different attacks. The complete details are provided in Section 5 on page 33. Based on this review we concluded that system-level virtualization solutions have many more vulnerabilities than OS level virtualization solutions. As such, security mechanisms like sVirt (Section 3.3) should be considered when using system-level virtualization solutions in order to protect the host against exploits. The majority of vulnerabilities related to KVM, LXC, and Docker are in specific regions of the system. Therefore, future "zero day attacks" are likely to be in the same regions, which suggests that protecting these areas can simplify the protection of the host and maintain the isolation between users. The evaluations of virtualization technologies done thus far are discussed in Section 4. This includes experiments with 'user' namespaces in VEs, which provides the ability to isolate user privileges and allow a user to run with different UIDs within the container while mapping them to non-privileged UIDs in the host. We have identified Linux namespaces as a promising mechanism to isolate shared resources, while maintaining good performance. In Section 4.1 we describe our tests with LXC as a non-root user and leveraging namespaces to control UID/GID mappings and support controlled sharing of parallel file-systems. We highlight several of these namespace capabilities in Section 6.2.3. The other evaluations that were performed during this initial phase of work provide baseline performance data for comparing VEs and VMs to purely native execution. 
In Section 4.2 we performed tests using the High-Performance Computing Conjugate Gradient (HPCCG) benchmark to establish baseline performance for a scientific application when run on the native (host) machine in contrast with execution under Docker and KVM. Our tests verified prior studies showing roughly 2-4% overheads in application execution time and MFlops when running in hypervisor-based environments (VMs), as compared to near-native performance with VEs. For more details, see Figures 4.5 (page 28), 4.6 (page 28), and 4.7 (page 29). Additionally, in Section 4.3 we include network measurements for TCP bandwidth performance over the 10GigE interface in our testbed. The native and Docker-based tests achieved >= ~9 Gbits/sec, while the KVM configuration only achieved 2.5 Gbits/sec (Table 4.6 on page 32). This may be a configuration issue with our KVM installation, and is a point for further testing as we refine the network settings in the testbed. The initial network tests were done using a bridged networking configuration. The report outline is as follows: Section 1 introduces the report and clarifies the scope of the proj...
NASA Technical Reports Server (NTRS)
Shearrow, Charles A.
1999-01-01
One of the identified goals of EM3 is to implement virtual manufacturing by the end of the year 2000. To realize this goal of a true virtual manufacturing enterprise, the initial development of a machinability database and its infrastructure must be completed. This will consist of containing the existing EM-NET problems and developing machine, tooling, and common materials databases. To integrate the virtual manufacturing enterprise with normal day-to-day operations, the development of a parallel virtual manufacturing machinability database, virtual manufacturing database, virtual manufacturing paradigm, implementation/integration procedure, and testable verification models must be completed. Common and virtual machinability databases will include the four distinct areas of machine tools, available tooling, common machine tool loads, and a materials database. The machine tools database will include the machine envelope, special machine attachments, tooling capacity, location within NASA-JSC or with a contractor, and availability/scheduling. The tooling database will include available standard tooling, custom in-house tooling, tool properties, and availability. The common materials database will include material thickness ranges, strengths, types, and availability. The virtual manufacturing databases will consist of virtual machines and virtual tooling directly related to the common and machinability databases. The items to be completed are the design and construction of the machinability databases, a virtual manufacturing paradigm for NASA-JSC, an implementation timeline, a VNC model of one bridge mill, and troubleshooting of existing software and hardware problems with EN4NET. The final step of this virtual manufacturing project will be to integrate other production sites into the databases, bringing JSC's EM3 into position to become a clearinghouse for NASA's digital manufacturing needs and creating a true virtual manufacturing enterprise.
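The machine-tool and materials databases described above can be sketched as simple record types. The field names below are illustrative guesses based on the attributes the abstract lists (envelope, tooling capacity, location, availability; thickness range, strength), not the actual EM3/NASA-JSC schema:

```python
from dataclasses import dataclass

# Hypothetical record types for the machine tools and common materials
# databases; fields mirror the attributes named in the text.

@dataclass
class MachineTool:
    name: str
    envelope_mm: tuple       # (x, y, z) working envelope
    tooling_capacity: int    # number of tool positions
    location: str            # NASA-JSC or contractor site
    available: bool          # availability/scheduling flag

@dataclass
class Material:
    name: str
    thickness_range_mm: tuple
    tensile_strength_mpa: float

bridge_mill = MachineTool("bridge mill", (3000, 1500, 900), 40, "NASA-JSC", True)
al6061 = Material("Al 6061-T6", (0.5, 150.0), 310.0)
```

Linking such records to virtual machine and virtual tooling entries is what would tie the common databases to their virtual manufacturing counterparts.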
Generation of virtual monochromatic CBCT from dual kV/MV beam projections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Hao; Liu, Bo; Yin, Fang-Fang, E-mail: fangfang.yin@duke.edu
2013-12-15
Purpose: To develop a novel on-board imaging technique which allows generation of virtual monochromatic (VM) cone-beam CT (CBCT) with a selected energy from combined kilovoltage (kV)/megavoltage (MV) beam projections. Methods: With the current orthogonal kV/MV imaging hardware equipped in modern linear accelerators, both MV projections (from gantry angle of 0°–100°) and kV projections (90°–200°) were acquired as the gantry rotated a total of 110°. A selected range of overlap projections between 90° and 100° were then decomposed into two material projections using experimentally determined parameters from orthogonally stacked aluminum and acrylic step-wedges. Given attenuation coefficients of aluminum and acrylic at a predetermined energy, one set of VM projections could be synthesized from two corresponding sets of decomposed projections. Two linear functions were generated using projection information at overlap angles to convert kV and MV projections at nonoverlap angles to approximate VM projections for CBCT reconstruction. The contrast-to-noise ratios (CNRs) were calculated for different inserts in VM CBCTs of a CatPhan phantom with various selected energies and compared with those in kV and MV CBCTs. The effect of overlap projection number on CNR was evaluated. Additionally, the effect of beam orientation was studied by scanning the CatPhan sandwiched with two 5 cm solid-water phantoms on both lateral sides and an electronic density phantom with two metal bolt inserts. Results: Proper selection of VM energy [30 and 40 keV for low-density polyethylene (LDPE), polymethylpentene, 2 MeV for Delrin] provided comparable or even better CNR results as compared with kV or MV CBCT. An increased number of overlap kV and MV projections demonstrated only marginal improvements of CNR for different inserts (with the exception of LDPE) and therefore one projection overlap was found to be sufficient for the CatPhan study.
It was also evident that the optimal CBCT image quality was achieved when MV beams penetrated through the heavy attenuation direction of the object. Conclusions: A novel technique was developed to generate VM CBCTs from kV/MV projections. This technique has the potential to improve CNR at selected VM energies and to suppress artifacts at appropriate beam orientations.
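The synthesis step described above, which combines decomposed aluminum and acrylic thickness projections into a single monochromatic line integral per pixel, can be sketched as follows. The attenuation coefficients and pixel values are illustrative placeholders, not the experimentally determined parameters from the paper:

```python
# Hypothetical linear attenuation coefficients (cm^-1) for aluminum and
# acrylic at one chosen VM energy; real values are energy dependent.
MU_AL = 0.35
MU_ACRYLIC = 0.12

def synthesize_vm_projection(t_al, t_acrylic, mu_al=MU_AL, mu_acrylic=MU_ACRYLIC):
    """Combine two decomposed material-thickness projections (in cm) into
    one virtual monochromatic line integral per detector pixel."""
    return [mu_al * a + mu_acrylic * c for a, c in zip(t_al, t_acrylic)]

# Three detector pixels with made-up decomposed material thicknesses.
t_al = [0.0, 1.0, 2.0]    # equivalent aluminum thickness per pixel
t_ac = [10.0, 8.0, 5.0]   # equivalent acrylic thickness per pixel
print(synthesize_vm_projection(t_al, t_ac))
```

Changing the two coefficients to values for a different energy yields the VM projection at that energy from the same decomposed data, which is the core of the technique.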
Modeling and simulation of five-axis virtual machine based on NX
NASA Astrophysics Data System (ADS)
Li, Xiaoda; Zhan, Xianghui
2018-04-01
Virtual technology is playing a growing role in the machinery manufacturing industry. In this paper, the Siemens NX software is used to model a virtual CNC machine tool, and the parameters of the virtual machine are defined according to the actual parameters of the machine tool so that the virtual simulation can be carried out without loss of accuracy. The use of the CAM module's machine builder to define the machine's kinematic chain and components is described. Simulation with the virtual machine can alert users to tool collisions and overcutting during machining, and can evaluate and forecast the soundness of the process plan.
Perspective: Electronic systems of knowledge in the world of virtual microscopy.
Maybury, Terrence; Farah, Camile S
2009-09-01
Across a broad range of medical disciplines, learning how to use an optical or light microscope has been a mandatory inclusion in the undergraduate curriculum. The development of virtual microscopy (VM) technology during the past 10 years has called into question the use of the optical microscope in educational contexts. VM allows slide specimens to be digitized, which, in turn, allows the computer to mimic the workings of the light microscope. This move from analog technology (the light microscope) to digital technology (the computer as microscope) is part of the many significant changes going on in education, a singular manifestation of the broader move from print-literate traditions of knowledge (requiring literacy) to an electronics-literate, or "electrate," mode (requiring "electracy"). VM is here used as an exemplar of this broad transition from literacy to electracy, some components of which include data deluge, a multimodal structure, and modularity. Understandably, this transition is important to clarify educationally, especially in a global context mediated via digital means. A related aspect of these educational changes is the move from teacher-directed learning to student-centered learning, or "user-led education," which points to a redefinition of "pedagogy" as "andragogy." The dissemination of the specific value of VM, then, is critical to both learners and teachers and to a more coherent understanding of electracy. A practical consequence of this clarity might be a better application of this knowledge in the evolving fields of computer simulation and telemedicine, areas in which today's medical students will need future expertise.
Integration of XRootD into the cloud infrastructure for ALICE data analysis
NASA Astrophysics Data System (ADS)
Kompaniets, Mikhail; Shadura, Oksana; Svirin, Pavlo; Yurchenko, Volodymyr; Zarochentsev, Andrey
2015-12-01
Cloud technologies allow easy load balancing between different tasks and projects. From the viewpoint of data analysis in the ALICE experiment, the cloud makes it possible to deploy software using the CERN Virtual Machine (CernVM) and the CernVM File System (CVMFS), to run different (including outdated) versions of software for long-term data preservation, and to dynamically allocate resources for different computing activities, e.g. a grid site, the ALICE Analysis Facility (AAF), and possible usage for local projects or other LHC experiments. We present a cloud solution for Tier-3 sites based on OpenStack and Ceph distributed storage with an integrated XRootD-based storage element (SE). One key feature of the solution is that Ceph is used as the backend for OpenStack's Cinder Block Storage service and, at the same time, as a storage backend for XRootD, with redundancy and availability of data preserved by the Ceph settings. For faster and easier OpenStack deployment, the Packstack solution, based on the Puppet configuration management system, was applied. Ceph installation and configuration operations are structured and converted to Puppet manifests describing node configurations and integrated into Packstack. This solution can be easily deployed, maintained and used even by small groups with limited computing resources and by small organizations, which usually lack IT support. The proposed infrastructure has been tested on two different clouds (SPbSU & BITP) and integrates successfully with the ALICE data analysis model.
Context-sensitive trace inlining for Java.
Häubl, Christian; Wimmer, Christian; Mössenböck, Hanspeter
2013-12-01
Method inlining is one of the most important optimizations in method-based just-in-time (JIT) compilers. It widens the compilation scope and therefore allows optimizing multiple methods as a whole, which increases performance. However, if method inlining is used too frequently, the compilation time increases and too much machine code is generated. This has negative effects on performance. Trace-based JIT compilers only compile frequently executed paths, so-called traces, instead of whole methods. This may result in faster compilation, less generated machine code, and better optimized machine code. In previous work, we implemented a trace recording infrastructure and a trace-based compiler for [Formula: see text], by modifying the Java HotSpot VM. Based on this work, we evaluate the effect of trace inlining on performance and the amount of generated machine code. Trace inlining has several major advantages when compared to method inlining. First, trace inlining is more selective than method inlining, because only frequently executed paths are inlined. Second, the recorded traces may capture information about virtual calls, which simplifies inlining. A third advantage is that trace information is context sensitive, so that different method parts can be inlined depending on the specific call site. These advantages allow more aggressive inlining while the amount of generated machine code is still reasonable. We evaluate several inlining heuristics on the benchmark suites DaCapo 9.12 Bach, SPECjbb2005, and SPECjvm2008 and show that our trace-based compiler achieves an up to 51% higher peak performance than the method-based Java HotSpot client compiler. Furthermore, we show that the large compilation scope of our trace-based compiler has a positive effect on other compiler optimizations such as constant folding or null check elimination.
Cyber Event Artifact Investigation Training in a Virtual Environment
2017-12-01
Thesis by Simone M. Mims and Tye R. Wylkynsone, December 2017; Thesis Advisor: J.D. Fulp. Fragmentary excerpts: "...Rolling Box) and several Windows versions with few patches, often having only the 1st Service Pack. We selected a WinOS VM for our Training and..." "...or services are currently in use by that account. In the Training Lab, the suspicious (i.e., attacker-created) account is viewable from the login..."
Virtualization Technology for System of Systems Test and Evaluation
2012-06-01
Fragmentary excerpts: "Peterson, Tillman, & Hatfield (1972) outlined the capabilities of virtualization in the early days of VM with some guiding principles. The following..." "...Sheikh, based on the work of Balci (1994, 1995), and Balci et al. (1996), seeks to organize types of tests and to align requirements to the appropriate..." "...Verification, validation, and testing in software engineering (pp. 155–184). Hershey, PA: Idea Group. Adair, R. J., Bayles, R. U., Comeau, L. W..."
Fast-Solving Quasi-Optimal LS-S3VM Based on an Extended Candidate Set.
Ma, Yuefeng; Liang, Xun; Kwok, James T; Li, Jianping; Zhou, Xiaoping; Zhang, Haiyan
2018-04-01
The semisupervised least squares support vector machine (LS-S3VM) is an important enhancement of least squares support vector machines in semisupervised learning. Given that most data collected from the real world are without labels, semisupervised approaches are more applicable than standard supervised approaches. Although a few training methods for LS-S3VM exist, the problem of deriving the optimal decision hyperplane efficiently and effectively has not been solved. In this paper, a fully weighted model of LS-S3VM is proposed, and a simple integer programming (IP) model is introduced through an equivalent transformation to solve the model. Based on the distances between the unlabeled data and the decision hyperplane, a new indicator is designed to represent the possibility that the label of an unlabeled datum should be reversed in each iteration during training. Using the indicator, we construct an extended candidate set consisting of the indices of unlabeled data with high possibilities, which integrates more information from unlabeled data. Our algorithm degenerates into a special scenario of the previous algorithm when the extended candidate set is reduced to a set with only one element. Two strategies are utilized to determine the descent directions based on the extended candidate set. Furthermore, we developed a novel method for locating a good starting point based on the properties of the equivalent IP model. Combined with the extended candidate set and the carefully computed starting point, a fast algorithm to solve LS-S3VM quasi-optimally is proposed. The choice of quasi-optimal solutions results in low computational cost and avoidance of overfitting. Experiments show that our algorithm equipped with the two designed strategies is more effective than other algorithms in at least one of the following three aspects: 1) computational complexity; 2) generalization ability; and 3) flexibility.
However, our algorithm and other algorithms have similar levels of performance in the remaining aspects.
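As a rough illustration of the distance-based indicator above, the following sketch ranks unlabeled points by their unsigned distance to a linear decision hyperplane and keeps the k closest as an extended candidate set (the points whose tentative labels are most likely to be reversed). This is a simplified stand-in, not the paper's weighted LS-S3VM formulation:

```python
def distance_to_hyperplane(x, w, b):
    """Unsigned distance from point x to the hyperplane w.x + b = 0."""
    num = abs(sum(wi * xi for wi, xi in zip(w, x)) + b)
    norm = sum(wi * wi for wi in w) ** 0.5
    return num / norm

def extended_candidate_set(unlabeled, w, b, k):
    """Indices of the k unlabeled points closest to the hyperplane,
    i.e. those with the highest label-reversal possibility."""
    order = sorted(range(len(unlabeled)),
                   key=lambda i: distance_to_hyperplane(unlabeled[i], w, b))
    return order[:k]

# Four unlabeled 2-D points against the hyperplane x + y = 0.
points = [(3.0, 3.0), (0.1, -0.2), (-2.0, -2.5), (0.4, 0.1)]
print(extended_candidate_set(points, (1.0, 1.0), 0.0, 2))
```

With k = 1 this collapses to the single-candidate scenario the paper describes as a special case of the previous algorithm.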
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tikotekar, Anand A; Vallee, Geoffroy R; Naughton III, Thomas J
2008-01-01
The topic of system-level virtualization has recently begun to receive interest for high performance computing (HPC). This is in part due to the isolation and encapsulation offered by the virtual machine. These traits enable applications to customize their environments and maintain consistent software configurations in their virtual domains. Additionally, there are mechanisms that can be used for fault tolerance, like live virtual machine migration. Given these attractive benefits of virtualization, a fundamental question arises: how does this affect my scientific application? We use this as the premise for our paper and observe a real-world scientific code running on a Xen virtual machine. We studied the effects of running a radiative transfer simulation, Hydrolight, on a virtual machine. We discuss our methodology and report observations regarding the usage of virtualization with this application.
Context-aware distributed cloud computing using CloudScheduler
NASA Astrophysics Data System (ADS)
Seuster, R.; Leavett-Brown, CR; Casteels, K.; Driemel, C.; Paterson, M.; Ring, D.; Sobie, RJ; Taylor, RP; Weldon, J.
2017-10-01
The distributed cloud using the CloudScheduler VM provisioning service is one of the longest-running systems for HEP workloads. It has run millions of jobs for ATLAS and Belle II over the past few years using private and commercial clouds around the world. Our goal is to scale the distributed cloud to the 10,000-core level, with the ability to run any type of application (low I/O, high I/O and high memory) on any cloud. To achieve this goal, we have been implementing changes that utilize context-aware computing designs that are currently employed in the mobile communication industry. Context-awareness makes use of real-time and archived data to respond to user or system requirements. In our distributed cloud, we have many opportunistic clouds with no local HEP services, software or storage repositories. A context-aware design significantly improves the reliability and performance of our system by locating the nearest instance of the required services. We describe how we are collecting and managing contextual information from our workload management systems, the clouds, the virtual machines and our services. This information is used not only to monitor the system but also to carry out automated corrective actions. We are incrementally adding new alerting and response services to our distributed cloud. This will enable us to scale the number of clouds and virtual machines. Further, a context-aware design will enable us to run analysis or high I/O applications on opportunistic clouds. We envisage an open-source HTTP data federation (for example, the DynaFed system at CERN) as a service that would provide us access to existing storage elements used by the HEP experiments.
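A minimal sketch of the context-aware service selection described above: given recent latency measurements (the contextual data), choose the nearest instance of a required service, treating unmeasured endpoints as worst-case. The endpoint names and numbers are invented for illustration:

```python
def pick_nearest_service(candidates, latency_ms):
    """Return the candidate endpoint with the lowest measured latency;
    endpoints with no measurement are treated as infinitely far away."""
    return min(candidates, key=lambda c: latency_ms.get(c, float("inf")))

# Hypothetical software-proxy endpoints near an opportunistic cloud.
candidates = ["proxy.cern.example", "proxy.uvic.example", "proxy.aws.example"]
latency = {"proxy.cern.example": 140.0, "proxy.uvic.example": 12.0}
print(pick_nearest_service(candidates, latency))
```

In a real deployment the latency table would be refreshed continuously from monitoring, so the choice adapts as network conditions change.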
Evaluation of the virtual mentor cataract training program.
Henderson, Bonnie An; Kim, Jae Yong; Golnik, Karl C; Oetting, Thomas A; Lee, Andrew G; Volpe, Nicholas J; Aaron, Maria; Uhler, Tara A; Arnold, Anthony; Dunn, James P; Prajna, N Venkatesh; Lane, Anne Marie; Loewenstein, John I
2010-02-01
To evaluate the effectiveness of an interactive cognitive computer simulation for teaching the hydrodissection portion of cataract surgery compared with standard teaching, and to assess the attitudes of residents about the teaching tools and their perceived confidence in the knowledge gained after using the tools. Case-control study. Residents at academic institutions. A prospective, multicenter, single-masked, controlled trial was performed in 7 academic departments of ophthalmology (Harvard Medical School/Massachusetts Eye and Ear Infirmary, University of Iowa, Emory University, University of Cincinnati, University of Pennsylvania/Scheie Eye Institute, Jefferson Medical College of Thomas Jefferson University/Wills Eye Institute, and the Aravind Eye Institute). All residents from these centers were asked to participate and were randomized into 2 groups. Group A (n = 30) served as the control and received traditional teaching materials; group B (n = 38) received a digital video disc of the Virtual Mentor program. This program is an interactive cognitive simulation, specifically designed to separate cognitive aspects (such as decision making and error recognition) from the motor aspects. Both groups took online anonymous pretests (n = 68) and posttests (n = 58), and answered satisfaction questionnaires (n = 53). Wilcoxon tests were completed to compare pretest and posttest scores between groups. Analysis of variance was performed to assess differences in mean scores between groups. Scores on pretests, posttests, and satisfaction questionnaires. There was no difference in the pretest scores between the 2 groups (P = 0.62). However, group B (Virtual Mentor [VM]) scored significantly higher on the posttest (P = 0.01). The mean difference between pretest and posttest scores was significantly better in the VM group than in the traditional learning group (P = 0.04).
The questionnaire revealed that the VM program was "more fun" to use (24.1% vs 4.2%) and that residents were more likely to use this type of program again compared with the likelihood of using the traditional tools (58.6% vs 4.2%). The VM, a cognitive computer simulation, augmented teaching of the hydrodissection step of phacoemulsification surgery compared with traditional teaching alone. The program was more enjoyable and more likely to be used repetitively by ophthalmology residents. Copyright (c) 2010 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
Managing virtual machines with Vac and Vcycle
NASA Astrophysics Data System (ADS)
McNab, A.; Love, P.; MacMahon, E.
2015-12-01
We compare the Vac and Vcycle virtual machine lifecycle managers and our experiences in providing production job execution services for ATLAS, CMS, LHCb, and the GridPP VO at sites in the UK, France and at CERN. In both the Vac and Vcycle systems, the virtual machines are created outside of the experiment's job submission and pilot framework. In the case of Vac, a daemon runs on each physical host which manages a pool of virtual machines on that host, and a peer-to-peer UDP protocol is used to achieve the desired target shares between experiments across the site. In the case of Vcycle, a daemon manages a pool of virtual machines on an Infrastructure-as-a-Service cloud system such as OpenStack, and has within itself enough information to create the types of virtual machines to achieve the desired target shares. Both systems allow unused shares of one experiment to be temporarily taken up by other experiments with work to be done. The virtual machine lifecycle is managed with a minimum of information, gathered from the virtual machine creation mechanism (such as libvirt or OpenStack) and using the proposed Machine/Job Features API from WLCG. We demonstrate that the same virtual machine designs can be used to run production jobs on Vac and Vcycle/OpenStack sites for ATLAS, CMS, LHCb, and GridPP, and that these technologies allow sites to be operated in a reliable and robust way.
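The target-share behaviour described above can be sketched roughly as follows: the next VM slot goes to the experiment furthest below its target share among those that currently have work, so unused share flows to the others. This is a simplified stand-in, not the actual Vac/Vcycle algorithms:

```python
def next_experiment(target_share, running, has_work):
    """Choose the experiment furthest below its target share among those
    with work available; experiments without work are skipped, so their
    share is temporarily taken up by the others."""
    eligible = [e for e in target_share if has_work.get(e)]
    if not eligible:
        return None
    total = sum(running.values()) or 1
    # Deficit = target fraction minus current fraction of running VMs.
    return max(eligible, key=lambda e: target_share[e] - running.get(e, 0) / total)

shares = {"atlas": 0.4, "cms": 0.3, "lhcb": 0.3}
running = {"atlas": 5, "cms": 1, "lhcb": 4}
work = {"atlas": True, "cms": True, "lhcb": False}
print(next_experiment(shares, running, work))
```

Here CMS is chosen: it is well below its 30% share, while LHCb, although also under-served, has no work and is skipped.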
Vector Meson Production at Hera
NASA Astrophysics Data System (ADS)
Szuba, Dorota
The diffractive production of vector mesons ep→eVMY, with VM=ρ0, ω, ϕ, J/ψ, ψ′ or ϒ and with Y being either the scattered proton or a low-mass hadronic system, has been extensively investigated at HERA. HERA offers a unique opportunity to study the dependences of diffractive processes on different scales: the mass of the vector meson, mVM, the centre-of-mass energy of the γp system, W, the photon virtuality, Q2, and the four-momentum transfer squared at the proton vertex, |t|. Strong interactions can be investigated in the transition from the hard to the soft regime, where the confinement of quarks and gluons occurs.
Virtual Microscope Views of the Apollo 11 and 12 Lunar Samples
NASA Technical Reports Server (NTRS)
Gibson, E. K.; Tindle, A. G.; Kelley, S. P.; Pillinger, J. M.
2016-01-01
The Apollo virtual microscope, a software means of viewing over the Internet polished thin sections of every rock in the Apollo lunar sample collections while duplicating many of the functions of a petrological microscope, is described. Images from the Apollo 11 and 12 missions may be viewed at: www.virtualmicroscope.org/content/apollo. Introduction: During the six NASA missions to the Moon from 1969-72 a total of 382 kilograms of rocks and soils, often referred to as "the legacy of Apollo", were collected and returned to Earth. A unique collection of polished thin sections (PTSs) was made from over 400 rocks by the Lunar Sample Curatorial Facility at the Johnson Space Center (JSC), Houston. These materials have been available for loan to approved PIs but of course they can't be simultaneously investigated by several researchers unless they are co-located or the sample is passed back and forth between them by mail/hand carrying, which is inefficient and very risky for irreplaceable material. When The Open University (OU), the world's largest Distance Learning Higher Education Establishment, found itself facing a comparable problem (how to supply thousands of undergraduate students with an interactive petrological microscope and a personal set of thin sections), it decided to develop a software tool called the Virtual Microscope (VM). As a result it is now able to make the unique and precious collection of Apollo specimens universally available as a resource for concurrent study by anybody in the world's Earth and Planetary Sciences community. Herein, we describe the first steps of a collaborative project between OU and the Johnson Space Center (JSC) Curatorial Facility to record a PTS for every lunar rock, beginning with those collected by the Apollo 11 and 12 missions.
Method: Production of a virtual microscope dedicated to a particular theme divides into four main parts - photography, image processing, building and assembly of virtual microscope components, and publication on a website. Two large research quality microscopes are used to collect all the images required for a virtual microscope. The first is part of an integrated package that utilizes Leica PowerMosaic software and a motorised XYZ stage to generate large area mosaics. It includes a fast acquisition camera and, depending on the PTS size, normally produces seamless mosaic images consisting of 100-500 individual photographs. If the sample is suitable, three mosaics of each sample are recorded - plane polarised light, between crossed polars and reflected light. In order for the VM to be a true petrological microscope it is necessary to recreate the features of a rotating stage and perform observations using filters to produce polarised light. Thus the petrological VM includes the capability of seeing changes in optical properties (pleochroism and birefringence) during rotation, allowing mineral identification. The second microscope in the system provides the functions of the rotating stage. To this microscope we have added a robotically controlled motor to acquire seventy-two images (5-degree intervals) in plane polarised light and between crossed polars. Processing the images acquired from the two microscopes involves a combination of proprietary software (Photoshop) and our own in-house code. The final stage involves assembling all the components in an HTML5 environment. Pathfinder investigations: We have undertaken a number of pilot studies to demonstrate the efficacy of the petrological microscope with lunar samples. The first was to make available on-line images collected from the Educational Package of Apollo samples provided by NASA to the UK STFC (Science and Technical Facilities Council) for loan as educational material, e.g. for schools.
The real PTSs of the samples are no longer sent out to schools, removing the risks associated with transport and accidental breakage and eliminating the possibility of loss. The availability of lunar sample VM-related material was further extended to include twenty-eight specimens from all of the Apollo missions. Some of these samples were made more generally available through an iBook entitled "Moon Rocks: an introduction to the Geology of the Moon," free from the Apple Bookstore. Research possibilities: Although the Virtual Microscope was originally conceived as a teaching aid and was later recognised as a means of public outreach and engagement, we now realize that it also has enormous potential as a high level research tool. Following discussions with the JSC Curators we have received Curation and Analysis Planning Team for Extraterrestrial Materials (CAPTEM) permission to embark on a programme of digitizing the entire lunar sample PTS collection for all three of the above purposes. By the time of the 47th Lunar and Planetary Science Conference (LPSC) we will have completed 81 rocks collected during the Apollo 11 and 12 missions, and the data, with cross-links to the Lunar Sample Compendium, will go live on the Web at the 47th LPSC. The VM images of the Apollo 11 (41 VM images) and 12 (40 VM images) missions can be viewed at: http://www.virtualmicroscope.org/content/apollo. The lunar sample VM will enable large numbers of skilled/unskilled microscopists (professional and amateur researchers, educators and students, enthusiasts and the simply curious non-scientists) to share the information from a single sample. It will mean that all the PTSs already cut, even historical ones, could be available for new joint investigations or private study. The scientific return from the collection will increase exponentially as a result of further debate and discussion.
Simultaneously the VM will remove the need for making unnecessary multiple samplings, avoid consignment of delicate/breakable specimens (all of which are priceless) to insecure mail/courier services and reduce direct labour and indirect costs, travel budgets and unproductive travelling time necessary for co-location of collaborating researchers. For the future we have already recognized further potential for virtual technology. There is nothing that a petrologist likes more than to see the original rock as a hand specimen. It is entirely possible to recreate virtual hand specimens with 3-D hardware and software, already developed for viewing fossils, located within the Curatorial Facility, http://curator.jsc.nasa.gov/lunar/lsc/index.cfm.
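The rotating-stage capture described above (seventy-two frames at 5-degree intervals per polarisation mode) amounts to a simple angle schedule, sketched here:

```python
def capture_angles(step_deg=5, full_turn=360):
    """Stage rotation angles for one virtual-microscope image stack:
    one frame every step_deg degrees over a full turn."""
    return list(range(0, full_turn, step_deg))

angles = capture_angles()
print(len(angles), angles[:4])
```

The same schedule is run twice per sample, once in plane polarised light and once between crossed polars, giving 144 rotation frames in addition to the three large-area mosaics.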
Towards Cloud-based Asynchronous Elasticity for Iterative HPC Applications
NASA Astrophysics Data System (ADS)
da Rosa Righi, Rodrigo; Facco Rodrigues, Vinicius; André da Costa, Cristiano; Kreutz, Diego; Heiss, Hans-Ulrich
2015-10-01
Elasticity is one of the key features of cloud computing. It allows applications to dynamically scale computing and storage resources, avoiding over- and under-provisioning. In high performance computing (HPC), initiatives are normally modeled to handle bag-of-tasks or key-value applications through a load balancer and a loosely-coupled set of virtual machine (VM) instances. For tightly-coupled Message Passing Interface (MPI) HPC applications, addressing cloud elasticity typically requires rewriting source code, prior knowledge of the application, and/or stop-reconfigure-and-go approaches. Moreover, exploiting elasticity in the HPC scope is problematic, since in MPI 2.0 applications programmers must handle communicators themselves, and the sudden consolidation of a VM, together with its process, can compromise the entire execution. To address these issues, we propose a PaaS-based elasticity model, named AutoElastic. It acts as a middleware that allows iterative HPC applications to take advantage of dynamic resource provisioning of cloud infrastructures without any major modification. AutoElastic provides a new concept denoted here as asynchronous elasticity, i.e., it provides a framework to allow applications to either increase or decrease their computing resources without blocking the current execution. The feasibility of AutoElastic is demonstrated through a prototype that runs a CPU-bound numerical integration application on top of the OpenNebula middleware. The results showed a saving of about 3 min at each scaling-out operation, emphasizing the contribution of the new concept in contexts where seconds are precious.
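A minimal sketch of the asynchronous-elasticity decision: monitoring data drives a scaling action while the running application continues unblocked. The thresholds are illustrative, not AutoElastic's actual parameters:

```python
def elasticity_action(cpu_loads, upper=0.85, lower=0.35):
    """Decide a scaling action from the mean CPU load across VMs.
    The caller only requests the action; the application keeps running,
    which is the essence of asynchronous elasticity."""
    mean = sum(cpu_loads) / len(cpu_loads)
    if mean > upper:
        return "scale_out"   # provision a new VM in the background
    if mean < lower:
        return "scale_in"    # mark a VM for consolidation when safe
    return "steady"

print(elasticity_action([0.9, 0.95, 0.88]))
```

In AutoElastic the framework, rather than the MPI application itself, handles the resulting communicator changes, so a VM joining or leaving does not compromise the execution.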
The body unbound: vestibular-motor hallucinations and out-of-body experiences.
Cheyne, J Allan; Girard, Todd A
2009-02-01
Among the varied hallucinations associated with sleep paralysis (SP), out-of-body experiences (OBEs) and vestibular-motor (V-M) sensations represent a distinct factor. Recent studies of direct stimulation of vestibular cortex report a virtually identical set of bodily-self hallucinations. Both programs of research agree on numerous details of OBEs and V-M experiences and suggest similar hypotheses concerning their association. In the present study, self-report data from two on-line surveys of SP-related experiences were employed to assess hypotheses concerning the causal structure of relations among V-M experiences and OBEs during SP episodes. The results complement neurophysiological evidence and are consistent with the hypothesis that OBEs represent a breakdown in the normal binding of bodily-self sensations and suggest that out-of-body feelings (OBFs) are consequences of anomalous V-M experiences and precursors to a particular form of autoscopic experience, out-of-body autoscopy (OBA). An additional finding was that vestibular and motor experiences make relatively independent contributions to OBE variance. Although OBEs are superficially consistent with universal dualistic and supernatural intuitions about the nature of the soul and its relation to the body, recent research increasingly offers plausible alternative naturalistic explanations of the relevant phenomenology.
Software platform virtualization in chemistry research and university teaching
2009-01-01
Background: Modern chemistry laboratories operate with a wide range of software applications under different operating systems, such as Windows, LINUX or Mac OS X. Instead of installing software on different computers, it is possible to install those applications on a single computer using virtual machine software. Software platform virtualization allows a single host operating system to execute multiple guest operating systems on the same computer. We apply and discuss the use of virtual machines in chemistry research and teaching laboratories. Results: Virtual machines are commonly used for cheminformatics software development and testing. Benchmarking multiple chemistry software packages, we have confirmed that the computational speed penalty for using virtual machines is low, at around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open source software in hands-on computer teaching labs. Conclusion: Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and development of software for different operating systems. In order to obtain maximum performance, the virtualization software should be multi-core enabled and allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lead to lower maintenance and hardware costs, especially in small research labs. The use of virtual machines can prevent software virus infections and security breaches when used as a sandbox system for internet access and software testing. Complex software setups can be created with virtual machines and later easily deployed to multiple computers for hands-on teaching classes. We discuss the popularity of bioinformatics compared to cheminformatics, as well as the missing cheminformatics education at universities worldwide. PMID:20150997
Software platform virtualization in chemistry research and university teaching.
Kind, Tobias; Leamy, Tim; Leary, Julie A; Fiehn, Oliver
2009-11-16
Modern chemistry laboratories operate with a wide range of software applications under different operating systems, such as Windows, LINUX or Mac OS X. Instead of installing software on different computers, it is possible to install those applications on a single computer using virtual machine software. Software platform virtualization allows a single host operating system to execute multiple guest operating systems on the same computer. We apply and discuss the use of virtual machines in chemistry research and teaching laboratories. Virtual machines are commonly used for cheminformatics software development and testing. Benchmarking multiple chemistry software packages, we have confirmed that the computational speed penalty for using virtual machines is low, at around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open source software in hands-on computer teaching labs. Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and development of software for different operating systems. In order to obtain maximum performance, the virtualization software should be multi-core enabled and allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lead to lower maintenance and hardware costs, especially in small research labs. The use of virtual machines can prevent software virus infections and security breaches when used as a sandbox system for internet access and software testing. Complex software setups can be created with virtual machines and later easily deployed to multiple computers for hands-on teaching classes. We discuss the popularity of bioinformatics compared to cheminformatics, as well as the missing cheminformatics education at universities worldwide.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duan, X; Guild, J; Arbique, G
2015-06-15
Purpose: To evaluate the image quality and spectral information of a spectral detector CT (SDCT) scanner using virtual monochromatic (VM) energy images. Methods: The SDCT scanner (Philips Healthcare) was equipped with a dual-layer detector and spectral iterative reconstruction (IR), which generates conventional 80–140 kV polychromatic energy (PE) CT images using both detector layers, PE images from the low-energy (upper) and high-energy (lower) detector layers, and VM images. A solid water phantom with iodine (2.0–20.0 mg I/ml) and calcium (50.0–600.0 mg Ca/ml) rod inserts was used to evaluate the effective energy estimate (EEE) and iodine contrast-to-noise ratio (CNR). The EEE corresponding to an insert CT number in a PE image was calculated from a CT number fit to the VM image set. Since the PE image is prone to beam-hardening artifacts, the EEE may underestimate the actual energy separation of the two detector layers. A 30-cm-diameter water phantom was used to evaluate the noise power spectrum (NPS). The phantoms were scanned at 120 and 140 kV with the same CTDIvol. Results: The CT number difference for contrast inserts in VM images (50–150 keV) was 1.3±6% between the 120 and 140 kV scans. The difference in EEE calculated from the low- and high-energy detector images was 11.5 and 16.7 keV for the 120 and 140 kV scans, respectively. The differences calculated from 140 and 100 kV conventional PE images were 12.8 keV, and 20.1 keV from 140 and 80 kV conventional PE images. The iodine CNR increased monotonically with decreasing keV. Compared to conventional PE images, the peaks of the NPS curves from VM images were shifted to lower frequency. Conclusion: The EEE results indicate that SDCT at 120 and 140 kV may have energy separation comparable to 100/140 kV and 80/140 kV dual-kV imaging. The effects of IR on CNR and NPS require further investigation for SDCT. Authors YY and AD are Philips Healthcare employees.
Techno-Mathematical Discourse: A Conceptual Framework for Analyzing Classroom Discussions
ERIC Educational Resources Information Center
Anderson-Pence, Katie L.
2017-01-01
Extensive research has been published on the nature of classroom mathematical discourse and on the impact of technology tools, such as virtual manipulatives (VM), on students' learning, while less research has focused on how technology tools facilitate that mathematical discourse. This paper presents an emerging construct, the Techno-Mathematical…
ACStor: Optimizing Access Performance of Virtual Disk Images in Clouds
Wu, Song; Wang, Yihong; Luo, Wei; ...
2017-03-02
In virtualized data centers, virtual disk images (VDIs) serve as the containers in the virtual environment, so their access performance is critical for the overall system performance. Some distributed VDI chunk storage systems have been proposed in order to alleviate the I/O bottleneck for VM management. As the system scales up to a large number of running VMs, however, the overall network traffic inevitably becomes unbalanced, with hot spots on some VMs, leading to I/O performance degradation when accessing the VMs. Here, we propose an adaptive and collaborative VDI storage system (ACStor) to resolve the above performance issue. In comparison with the existing research, our solution is able to dynamically balance the traffic workloads in accessing VDI chunks, based on the run-time network state. Specifically, compute nodes with lightly loaded traffic are adaptively assigned more chunk access requests from remote VMs and vice versa, which effectively eliminates the above problem and thus improves the I/O performance of VMs. We also implement a prototype based on our ACStor design, and evaluate it with various benchmarks on a real cluster with 32 nodes and a simulated platform with 256 nodes. Experiments show that under different network traffic patterns of data centers, our solution achieves up to a 2-8 times performance gain in VM booting time and VM I/O throughput, in comparison with the other state-of-the-art approaches.
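The core load-balancing idea can be sketched greedily: steer each VDI-chunk request to whichever compute node currently carries the least traffic, updating loads as requests are assigned. The request and load structures below are hypothetical, not ACStor's actual data model:

```python
def assign_chunk_requests(requests, node_traffic):
    """Greedy sketch of traffic-aware chunk placement: each request
    (id, expected transfer cost) goes to the node with the lightest
    current network load, so hot spots are smoothed out."""
    assignment = {}
    load = dict(node_traffic)           # current per-node network load
    for req, cost in requests:
        node = min(load, key=load.get)  # lightest-loaded node first
        assignment[req] = node
        load[node] += cost              # account for the new transfer
    return assignment
```

A run-time system would refresh `node_traffic` from measurements rather than accumulate estimates as this sketch does.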
Hardware assisted hypervisor introspection.
Shi, Jiangyong; Yang, Yuexiang; Tang, Chuan
2016-01-01
In this paper, we introduce hypervisor introspection, an out-of-box way to monitor the execution of hypervisors. Similar to virtual machine introspection, which has been proposed to protect virtual machines in an out-of-box way over the past decade, hypervisor introspection can be used to protect hypervisors, which are the basis of cloud security. Virtual machine introspection tools are usually deployed either in the hypervisor or in privileged virtual machines, which might also be compromised. By utilizing hardware support including nested virtualization, EPT protection and #BP, we are able to monitor all hypercalls belonging to the virtual machines of one hypervisor, including those of the privileged virtual machine, even when the hypervisor is compromised. Furthermore, a hypercall injection method is used to simulate hypercall-based attacks and evaluate the performance of our method. Experiment results show that our method can effectively detect hypercall-based attacks with some performance cost. Lastly, we discuss future approaches to reducing the performance cost and preventing a compromised hypervisor from detecting the existence of our introspector, along with some new scenarios in which to apply our hypervisor introspection system.
A virtual maintenance-based approach for satellite assembling and troubleshooting assessment
NASA Astrophysics Data System (ADS)
Geng, Jie; Li, Ying; Wang, Ranran; Wang, Zili; Lv, Chuan; Zhou, Dong
2017-09-01
In this study, a Virtual Maintenance (VM)-based approach for satellite troubleshooting assessment is proposed. By focusing on various elements in satellite assembly troubleshooting, such as accessibility, ergonomics, wiring, and extent of damage, a systematic, quantitative, and objective assessment model is established to decrease subjectivity in satellite assembly and troubleshooting assessment. Afterwards, based on the established assessment model and a satellite virtual prototype, an application process of this model suitable for a virtual environment is presented. Finally, according to the application process, all the elements in satellite troubleshooting are analyzed and assessed. The corresponding improvements, which realize the transformation from a conventional approach to virtual simulation and assessment, are suggested, and flaws in assembly and troubleshooting are revealed. Assembly or troubleshooting schemes can be improved in the early stage of satellite design with the help of a virtual prototype. Rehearsal in virtual, rather than practical, operation is beneficial to companies as risk and cost are effectively reduced.
An overview of the DII-HEP OpenStack based CMS data analysis
NASA Astrophysics Data System (ADS)
Osmani, L.; Tarkoma, S.; Eerola, P.; Komu, M.; Kortelainen, M. J.; Kraemer, O.; Lindén, T.; Toor, S.; White, J.
2015-05-01
An OpenStack based private cloud with the Cluster File System has been built and used with both CMS analysis and Monte Carlo simulation jobs in the Datacenter Indirection Infrastructure for Secure High Energy Physics (DII-HEP) project. On the cloud we run the ARC middleware that allows running CMS applications without changes on the job submission side. Our test results indicate that the adopted approach provides a scalable and resilient solution for managing resources without compromising on performance and high availability. To manage the virtual machines (VMs) dynamically in an elastic fashion, we are testing the EMI authorization service (Argus) and the Execution Environment Service (Argus-EES). An OpenStack plugin has been developed for Argus-EES. The Host Identity Protocol (HIP) has been designed for mobile networks and provides a secure method for IP multihoming. HIP separates the end-point identifier and locator roles of the IP address, which increases network availability for the applications. Our solution leverages HIP for traffic management. This presentation gives an update on the status of the work and our lessons learned in creating an OpenStack-based cloud for HEP.
One-Click Data Analysis Software for Science Operations
NASA Astrophysics Data System (ADS)
Navarro, Vicente
2015-12-01
One of the important activities of the ESA Science Operations Centre is to provide Data Analysis Software (DAS) to enable users and scientists to process data further to higher levels. During operations and post-operations, Data Analysis Software (DAS) is fully maintained and updated for new OS and library releases. Nonetheless, once a mission goes into the "legacy" phase, funds are very limited and long-term preservation becomes more and more difficult. Building on Virtual Machine (VM), cloud computing and Software as a Service (SaaS) technologies, this project has aimed at providing long-term preservation of Data Analysis Software for the following missions: PIA for ISO (1995), SAS for XMM-Newton (1999), Hipe for Herschel (2009), and EXIA for EXOSAT (1983). The following goals have guided the architecture: support for all operations, post-operations and archive/legacy phases; support for local (user's computer) and cloud environments (ESAC-Cloud, Amazon AWS); support for expert users requiring full capabilities; and provision of a simple web-based interface. This talk describes the architecture, challenges, results and lessons learnt in this project.
An Effective Mechanism for Virtual Machine Placement using Aco in IAAS Cloud
NASA Astrophysics Data System (ADS)
Shenbaga Moorthy, Rajalakshmi; Fareentaj, U.; Divya, T. K.
2017-08-01
Cloud computing provides an effective way to dynamically provide numerous resources to meet customer demands. A major challenge for cloud providers is designing efficient mechanisms for optimal virtual machine placement (OVMP). Such mechanisms enable cloud providers to effectively utilize their available resources and obtain higher profits. In order to provide appropriate resources to clients, an optimal virtual machine placement algorithm is proposed. Virtual machine placement is an NP-hard problem, which can be solved using heuristic algorithms. In this paper, Ant Colony Optimization (ACO) based virtual machine placement is proposed. Our proposed system focuses on minimizing the cost spent on each plan for hosting virtual machines in a multiple cloud provider environment; the response time of each cloud provider is also monitored periodically, so as to minimize the delay in providing resources to users. The performance of the proposed algorithm is compared with a greedy mechanism. The proposed algorithm is simulated in the Eclipse IDE. The results clearly show that the proposed algorithm minimizes cost, response time and the number of migrations.
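The abstract does not give the algorithm's details, but a minimal ACO placement sketch under simple assumptions (one capacity dimension per host, a per-host cost, pheromone reinforced by the cheapest feasible plan found so far) might look like this; every name here is illustrative, not taken from the paper:

```python
import random

def aco_place(vm_demands, host_caps, host_costs, ants=10, iters=30, seed=1):
    """Toy ACO for VM placement: each ant assigns every VM to a feasible
    host, guided by pheromone and an inverse-cost heuristic; the cheapest
    complete placement reinforces its trail after each iteration."""
    rng = random.Random(seed)
    n_vm, n_host = len(vm_demands), len(host_caps)
    tau = [[1.0] * n_host for _ in range(n_vm)]      # pheromone matrix
    best, best_cost = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            free = list(host_caps)                   # remaining capacity
            plan = []
            for v, demand in enumerate(vm_demands):
                feas = [h for h in range(n_host) if free[h] >= demand]
                if not feas:
                    break                            # infeasible partial plan
                w = [tau[v][h] / host_costs[h] for h in feas]
                h = rng.choices(feas, weights=w)[0]
                free[h] -= demand
                plan.append(h)
            else:                                    # complete placement found
                cost = sum(host_costs[h] for h in plan)
                if cost < best_cost:
                    best, best_cost = plan, cost
        for v in range(n_vm):                        # evaporation
            for h in range(n_host):
                tau[v][h] *= 0.9
        if best:                                     # reinforce best plan
            for v, h in enumerate(best):
                tau[v][h] += 1.0 / best_cost
    return best, best_cost
```

A real implementation would add multi-dimensional resources (CPU, memory) and a response-time term to the cost, as the paper's objectives suggest.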
Paging memory from random access memory to backing storage in a parallel computer
Archer, Charles J; Blocksome, Michael A; Inglett, Todd A; Ratterman, Joseph D; Smith, Brian E
2013-05-21
Paging memory from random access memory (`RAM`) to backing storage in a parallel computer that includes a plurality of compute nodes, including: executing a data processing application on a virtual machine operating system in a virtual machine on a first compute node; providing, by a second compute node, backing storage for the contents of RAM on the first compute node; and swapping, by the virtual machine operating system in the virtual machine on the first compute node, a page of memory from RAM on the first compute node to the backing storage on the second compute node.
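The swapping scheme described in the claim can be sketched with a least-recently-used page table whose evictions go to a dictionary standing in for the second compute node's backing store. The names and the LRU policy are illustrative, not taken from the patent:

```python
class RemotePagedMemory:
    """Sketch of paging RAM to a remote node: when local RAM exceeds
    `capacity` pages, the least-recently-used page is swapped out to
    `backing`, which simulates the second compute node's storage."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.ram = {}          # page_id -> data; insertion order tracks LRU
        self.backing = {}      # simulated remote backing storage

    def touch(self, page_id, data=None):
        if page_id in self.ram:
            data = self.ram.pop(page_id)        # refresh LRU position
        elif page_id in self.backing:
            data = self.backing.pop(page_id)    # swap in from remote node
        self.ram[page_id] = data
        if len(self.ram) > self.capacity:       # evict LRU page to remote
            victim, vdata = next(iter(self.ram.items()))
            del self.ram[victim]
            self.backing[victim] = vdata
```

In the patent's setting the transfer would be a network operation performed by the virtual machine operating system; the dictionaries above only model where each page lives.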
Interfacing HTCondor-CE with OpenStack
NASA Astrophysics Data System (ADS)
Bockelman, B.; Caballero Bejar, J.; Hover, J.
2017-10-01
Over the past few years, Grid Computing technologies have reached a high level of maturity. One key aspect of this success has been the development and adoption of newer Compute Elements to interface the external Grid users with local batch systems. These new Compute Elements allow for better handling of jobs requirements and a more precise management of diverse local resources. However, despite this level of maturity, the Grid Computing world is lacking diversity in local execution platforms. As Grid Computing technologies have historically been driven by the needs of the High Energy Physics community, most resource providers run the platform (operating system version and architecture) that best suits the needs of their particular users. In parallel, the development of virtualization and cloud technologies has accelerated recently, making available a variety of solutions, both commercial and academic, proprietary and open source. Virtualization facilitates performing computational tasks on platforms not available at most computing sites. This work attempts to join the technologies, allowing users to interact with computing sites through one of the standard Computing Elements, HTCondor-CE, but running their jobs within VMs on a local cloud platform, OpenStack, when needed. The system will re-route, in a transparent way, end user jobs into dynamically-launched VM worker nodes when they have requirements that cannot be satisfied by the static local batch system nodes. Also, once the automated mechanisms are in place, it becomes straightforward to allow an end user to invoke a custom Virtual Machine at the site. This will allow cloud resources to be used without requiring the user to establish a separate account. Both scenarios are described in this work.
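The routing decision described above reduces to a small rule: jobs whose platform requirements match a static batch node go to the batch system, while anything else triggers a dynamically launched VM. The job and platform structures below are hypothetical, and no actual HTCondor-CE or OpenStack call is made:

```python
def route_job(job_reqs, static_platforms, vm_launch_queue):
    """Sketch of transparent job routing: match the job's required
    (os, arch) platform against the static batch nodes; otherwise
    record a VM worker node to be launched on the cloud."""
    platform = (job_reqs["os"], job_reqs["arch"])
    if platform in static_platforms:
        return "batch"
    vm_launch_queue.append(platform)   # VM to launch for this platform
    return "vm"
```

The transparency in the paper comes from the user submitting through the same Compute Element in both cases; only the back-end execution resource differs.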
Cross-VM Side Channels and Their Use to Extract Private Keys
2012-10-16
clouds such as Amazon EC2 and Rackspace, but also by other Xen use cases. For example, many virtual desktop infrastructure (VDI) solutions (e.g...whose bit length is, for example, 337, 403, or 457 when κ is 2048, 3072, or 4096, respectively. We note that this deviates from standard ElGamal, in
Bauer, José Roberto de Oliveira; Grande, Rosa Helena Miranda; Rodrigues-Filho, Leonardo Eloy; Pinto, Marcelo Mendes; Loguercio, Alessandro Dourado
2012-01-01
The aim of the present study was to evaluate the tensile strength, elongation, microhardness, microstructure and fracture pattern of various metal ceramic alloys cast under different casting conditions. Two Ni-Cr alloys, a Co-Cr alloy and a Pd-Ag alloy were used. The casting conditions were as follows: electromagnetic induction under argon atmosphere, under vacuum, and using a blowtorch without atmosphere control. For each condition, 16 specimens, each measuring 25 mm long and 2.5 mm in diameter, were obtained. Ultimate tensile strength (UTS) and elongation (EL) tests were performed using a Kratos machine. Vickers microhardness (VM), fracture mode and microstructure were analyzed by SEM. UTS, EL and VM data were statistically analyzed using ANOVA. For UTS, alloy composition had a direct influence on the casting condition of the alloys (Wiron 99 and Remanium CD), with higher values shown when cast with flame/air (p < 0.05). The factors "alloy" and "casting condition" influenced the EL and VM results, generally in opposite directions, i.e., the alloy with a high elongation value had lower hardness (Wiron 99), and the casting condition with the lowest EL values had the highest VM values (blowtorch). Both factors had a significant influence on the properties evaluated, and prosthetic laboratories should select the appropriate casting method for each alloy composition to obtain the desired properties.
Purawat, Shweta; Cowart, Charles; Amaro, Rommie E; Altintas, Ilkay
2016-06-01
The BBDTC (https://biobigdata.ucsd.edu) is a community-oriented platform to encourage high-quality knowledge dissemination with the aim of growing a well-informed biomedical big data community through collaborative efforts on training and education. The BBDTC collaborative is an e-learning platform that supports the biomedical community to access, develop and deploy open training materials. The BBDTC supports big data skill training for biomedical scientists at all levels and from varied backgrounds. The natural hierarchy of courses allows them to be broken into and handled as modules. Modules can be reused in the context of multiple courses and reshuffled, producing a new and different, dynamic course called a playlist. Users may create playlists to suit their learning requirements and share them with individual users or the wider public. BBDTC leverages the maturity and design of the HUBzero content-management platform for delivering educational content. To facilitate the migration of existing content, the BBDTC supports importing and exporting course material from the edX platform. Migration tools will be extended in the future to support other platforms. Hands-on training software packages, i.e., toolboxes, are supported through Amazon EC2 and Virtualbox virtualization technologies, and are available as: (i) downloadable lightweight Virtualbox images providing a standardized software tool environment, with software packages and test data, on users' personal machines, and (ii) remotely accessible Amazon EC2 virtual machines for accessing biomedical big data tools and scalable big data experiments. At the moment, the BBDTC site contains three open biomedical big data training courses with lecture content, videos and hands-on training utilizing VM toolboxes, covering diverse topics. The courses have enhanced the hands-on learning environment by providing structured content that users can follow at their own pace. A four-course biomedical big data series is planned for development in 2016.
Future Cyborgs: Human-Machine Interface for Virtual Reality Applications
2007-04-01
Robert R. Powell, Major, USAF. April 2007. Blue Horizons program.
An imperialist competitive algorithm for virtual machine placement in cloud computing
NASA Astrophysics Data System (ADS)
Jamali, Shahram; Malektaji, Sepideh; Analoui, Morteza
2017-05-01
Cloud computing, the recently emerged revolution in the IT industry, is empowered by virtualisation technology. In this paradigm, users' applications run on virtual machines (VMs). The process of selecting proper physical machines to host these virtual machines is called virtual machine placement. It plays an important role in the resource utilisation and power efficiency of cloud computing environments. In this paper, we propose an imperialist competitive-based algorithm for the virtual machine placement problem, called ICA-VMPLC. ICA is chosen as the base optimisation algorithm because of its ease of neighbourhood movement, good convergence rate and suitable terminology. The proposed algorithm investigates the search space in a unique manner to efficiently obtain an optimal placement solution that simultaneously minimises power consumption and total resource wastage. Its final solution performance is compared with several existing methods, such as grouping genetic and ant colony-based algorithms, as well as a bin packing heuristic. The simulation results show that the proposed method is superior to the other tested algorithms in terms of power consumption, resource wastage, CPU usage efficiency and memory usage efficiency.
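A toy version of the two-objective quantity that such placement algorithms minimise can make the trade-off concrete: total host power (modelled here as linear in CPU utilisation) plus total unused capacity on powered-on hosts. The coefficients and the single-resource model are illustrative assumptions, not taken from the paper:

```python
def placement_fitness(plan, vm_cpu, host_cap, p_idle=70.0, p_max=100.0):
    """Toy combined fitness for VM placement: power consumption plus
    resource wastage, summed over powered-on hosts only. Lower is better."""
    util = {}
    for vm, host in enumerate(plan):           # plan[vm] = hosting machine
        util[host] = util.get(host, 0.0) + vm_cpu[vm]
    power = wastage = 0.0
    for host, used in util.items():
        u = used / host_cap[host]              # fractional CPU utilisation
        power += p_idle + (p_max - p_idle) * u # linear power model
        wastage += host_cap[host] - used       # unused capacity on this host
    return power + wastage
```

Consolidating VMs onto fewer hosts scores better under both terms, which is why placement heuristics tend toward packing.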
Ma, Xiao H; Jia, Jia; Zhu, Feng; Xue, Ying; Li, Ze R; Chen, Yu Z
2009-05-01
Machine learning methods have been explored as ligand-based virtual screening tools for facilitating drug lead discovery. These methods predict compounds of specific pharmacodynamic, pharmacokinetic or toxicological properties based on their structure-derived structural and physicochemical properties. Increasing attention has been directed at these methods because of their capability in predicting compounds of diverse structures and complex structure-activity relationships without requiring the knowledge of target 3D structure. This article reviews current progresses in using machine learning methods for virtual screening of pharmacodynamically active compounds from large compound libraries, and analyzes and compares the reported performances of machine learning tools with those of structure-based and other ligand-based (such as pharmacophore and clustering) virtual screening methods. The feasibility to improve the performance of machine learning methods in screening large libraries is discussed.
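As a minimal illustration of ligand-based screening with a machine learning method, the sketch below ranks a compound library by a k-nearest-neighbour vote over labelled training compounds, using hypothetical precomputed descriptor vectors. It is a generic example of the approach, not one of the specific tools reviewed in the article:

```python
def screen(library, actives, inactives, k=3):
    """Rank library compounds (name, descriptor vector) by the fraction
    of active compounds among their k nearest labelled neighbours.
    No target 3D structure is needed, only ligand descriptors."""
    train = [(fp, 1) for fp in actives] + [(fp, 0) for fp in inactives]

    def dist(a, b):
        # Euclidean distance between descriptor vectors
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def score(fp):
        nearest = sorted(train, key=lambda t: dist(fp, t[0]))[:k]
        return sum(label for _, label in nearest) / k

    return sorted(library, key=lambda c: score(c[1]), reverse=True)
```

Real screening pipelines would use molecular fingerprints (and models such as SVMs or random forests) in place of these toy two-dimensional vectors.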
Model-based sensorimotor integration for multi-joint control: development of a virtual arm model.
Song, D; Lan, N; Loeb, G E; Gordon, J
2008-06-01
An integrated, sensorimotor virtual arm (VA) model has been developed and validated for simulation studies of the control of human arm movements. Realistic anatomical features of the shoulder, elbow and forearm joints were captured with a graphic modeling environment, SIMM. The model included 15 musculotendon elements acting at the shoulder, elbow and forearm. Muscle actions on joints were evaluated by SIMM-generated moment arms that were matched to experimentally measured profiles. The Virtual Muscle (VM) model contained an appropriate admixture of slow and fast twitch fibers with realistic physiological properties for force production. A realistic spindle model was embedded in each VM, with inputs of fascicle length and gamma static (gamma(stat)) and dynamic (gamma(dyn)) controls, and outputs of primary (I(a)) and secondary (II) afferents. A piecewise linear model of the Golgi Tendon Organ (GTO) represented the ensemble sampling (I(b)) of the total muscle force at the tendon. All model components were integrated into a Simulink block using a special software tool. The complete VA model was validated with open-loop simulation at discrete hand positions within the full range of alpha and gamma drives to extrafusal and intrafusal muscle fibers. The model behaviors were consistent with a wide variety of physiological phenomena. Spindle afferents were effectively modulated by fusimotor drives and hand positions of the arm. These simulations validated the VA model as a computational tool for studying arm movement control. The VA model is available to researchers at http://pt.usc.edu/cel.
NASA Astrophysics Data System (ADS)
Mouffe, M.; Getirana, A.; Ricci, S. M.; Lion, C.; Biancamaria, S.; Boone, A.; Mognard, N. M.; Rogel, P.
2011-12-01
The Surface Water and Ocean Topography (SWOT) mission is a swath-mapping radar interferometer that will provide global measurements of water surface elevation (WSE). The revisit time depends upon latitude and varies from two (low latitudes) to ten (high latitudes) observations per 22-day orbit repeat period. The high resolution and global coverage of the SWOT data open the way for new hydrology studies. Here, the aim is to investigate the use of virtually generated SWOT data to improve discharge simulation using data assimilation techniques. In the framework of the SWOT virtual mission (VM), this study presents the first results of the automatic calibration of a global flow routing (GFR) scheme using SWOT VM measurements for the Amazon basin. The Hydrological Modeling and Analysis Platform (HyMAP) is used along with the MOCOM-UA multi-criteria global optimization algorithm. HyMAP has a 0.25-degree spatial resolution and runs at a daily time step to simulate discharge, water levels and floodplains. The surface runoff and baseflow drainage derived from the Interactions Sol-Biosphère-Atmosphère (ISBA) model are used as inputs for HyMAP. Previous work showed that the use of ENVISAT data enables the reduction of the uncertainty in some of the hydrological model parameters, such as river width and depth, Manning roughness coefficient and groundwater time delay. In the framework of the SWOT preparation work, the automatic calibration procedure was applied using SWOT VM measurements. For this Observing System Experiment (OSE), the synthetic data were obtained by applying an instrument simulator (representing realistic SWOT errors) for one hydrological year to HyMAP-simulated WSE using a "true" set of parameters. Only pixels representing rivers wider than 100 meters within the Amazon basin are considered to produce SWOT VM measurements. The automatic calibration procedure leads to the estimation of optimal parameters minimizing objective functions that formulate the difference between SWOT observations and modeled WSE using a perturbed set of parameters. Different formulations of the objective function were used, especially to account for SWOT observation errors, as well as various sets of calibration parameters.
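Under a simple weighted least-squares assumption (not necessarily the exact formulation used in the study), such an objective function can be written as

```latex
J(\theta) = \sum_{i=1}^{N} \left( \frac{\mathrm{WSE}_i^{\mathrm{obs}} - \mathrm{WSE}_i^{\mathrm{model}}(\theta)}{\sigma_i} \right)^2
```

where \(\theta\) collects the calibration parameters (river width and depth, Manning roughness coefficient, groundwater time delay), \(\mathrm{WSE}_i^{\mathrm{obs}}\) and \(\mathrm{WSE}_i^{\mathrm{model}}(\theta)\) are the observed and simulated water surface elevations at pixel \(i\), and \(\sigma_i\) is the SWOT observation error, which down-weights noisier measurements.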
Integration of the virtual 3D model of a control system with the virtual controller
NASA Astrophysics Data System (ADS)
Herbuś, K.; Ociepka, P.
2015-11-01
Nowadays the design process includes simulation analysis of different components of a constructed object, which raises the need to integrate different virtual objects in order to simulate the whole technical system under investigation. The paper presents issues related to the integration of a virtual 3D model of a chosen control system with a virtual controller. The goal of the integration is to verify the operation of the adopted object in accordance with the established control program. The object of the simulation work is the drive system of a tunneling machine for trenchless work. In the first stage of work, an interactive visualization of the functioning of the 3D virtual model of the tunneling machine was created. For this purpose, software of the Virtual Reality (VR) class was applied. In the elaborated interactive application, procedures were created for controlling the translatory-motion drive system, the rotary-motion drive system and the drive system of a manipulator. Additionally, a procedure was created for turning on and off the crushing head mounted on the last element of the manipulator. Procedures for receiving input data from external software, based on dynamic data exchange (DDE), were also established in the interactive application; they allow controlling the actuators of the particular control systems of the considered machine. In the next stage of work, the program for the virtual controller was created in the ladder diagram (LD) language. The control program was developed on the basis of the adopted work cycle of the tunneling machine. The element integrating the virtual model of the tunneling machine for trenchless work with the virtual controller is an application written in a high-level language (Visual Basic).
The developed application contains procedures responsible for collecting data from the virtual controller running in simulation mode and transferring them to the interactive application, in which the operation of the adopted research object is verified. The work carried out allowed the integration of the virtual model of the control system of the tunneling machine with the virtual controller, enabling the verification of its operation.
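The data flow described above, a bridge application that polls the simulated controller each cycle and forwards its outputs to the VR visualization, can be sketched as follows. This is a hypothetical Python illustration: the class and method names are stand-ins, not the paper's actual Visual Basic/DDE interfaces.

```python
# Hypothetical sketch of the mediator loop described above. All names
# (VirtualController, InteractiveModel, mediator_loop) are illustrative
# placeholders for the Visual Basic/DDE bridge in the paper.

class VirtualController:
    """Stub for a soft PLC running a ladder-diagram program."""
    def __init__(self):
        self.outputs = {"translation": 0, "rotation": 0, "head_on": False}

    def scan_cycle(self, step):
        # Toggle outputs to mimic a work cycle of the tunneling machine.
        self.outputs["translation"] = step % 2
        self.outputs["head_on"] = step >= 3
        return dict(self.outputs)

class InteractiveModel:
    """Stub for the VR application that animates the 3D machine model."""
    def __init__(self):
        self.log = []

    def apply(self, outputs):
        # In the real system this would drive the 3D model's actuators.
        self.log.append(outputs)

def mediator_loop(controller, model, cycles):
    # The bridge: read controller outputs each cycle, push them to the model.
    for step in range(cycles):
        model.apply(controller.scan_cycle(step))

controller, model = VirtualController(), InteractiveModel()
mediator_loop(controller, model, cycles=5)
print(model.log[-1])
```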
Maestro: an orchestration framework for large-scale WSN simulations.
Riliskis, Laurynas; Osipov, Evgeny
2014-03-18
Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation.
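The instance-selection criterion described above can be reduced to a minimal sketch: rank candidate VM instance types by benchmark score per unit cost and pick the best. The instance names, scores and prices below are made-up placeholders, not Maestro's actual benchmark output.

```python
# Illustrative sketch of performance-per-cost instance selection.
# All numbers are invented placeholders.

instance_benchmarks = {
    # name: (benchmark score, hourly cost in arbitrary units)
    "small":  (100.0, 0.05),
    "medium": (210.0, 0.10),
    "large":  (380.0, 0.40),
}

def best_value(benchmarks):
    # Choose the instance type maximizing score per cost unit.
    return max(benchmarks, key=lambda n: benchmarks[n][0] / benchmarks[n][1])

print(best_value(instance_benchmarks))  # medium: 2100 score per cost unit
```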
Maestro: An Orchestration Framework for Large-Scale WSN Simulations
Riliskis, Laurynas; Osipov, Evgeny
2014-01-01
Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation. PMID:24647123
Novel method of finding extreme edges in a convex set of N-dimension vectors
NASA Astrophysics Data System (ADS)
Hu, Chia-Lun J.
2001-11-01
As we published in the last few years, for a binary neural network pattern recognition system to learn a given mapping {Um → Vm, m = 1 to M}, where Um is an N-dimension analog (pattern) vector and Vm is a P-bit binary (classification) vector, the if-and-only-if (IFF) condition that this network can learn this mapping is that each i-set {Ymi, m = 1 to M} (where Ymi ≡ VmiUm and Vmi = +1 or -1 is the i-th bit of Vm; i = 1 to P, so there are P such sets) is POSITIVELY, LINEARLY INDEPENDENT, or PLI. We have shown that this PLI condition is MORE GENERAL than the convexity condition applied to a set of N-vectors. In the design of older learning machines, we know that if a set of N-dimension analog vectors forms a convex set, and if the machine can learn the boundary vectors (or extreme edges) of this set, then it can definitely learn the inside vectors contained in this POLYHEDRON CONE. This paper reports a new method and a new algorithm to find the boundary vectors of a convex set of N-dimension analog vectors.
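As a hedged illustration of what "finding the boundary vectors (extreme edges) of a convex set" means, the sketch below computes the extreme points in the 2-D special case via Andrew's monotone-chain convex hull. The paper's method targets general N-dimensional sets and is not reproduced here.

```python
# 2-D illustration only: the extreme points of a point set are the
# vertices of its convex hull; interior points are convex combinations
# of them. Andrew's monotone chain, pure standard library.

def cross(o, a, b):
    # Cross product of vectors o->a and o->b (positive = left turn).
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                       # build lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]      # endpoints shared, drop duplicates

# The interior point (1, 1) is not an extreme vector of this set.
hull = convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)])
print(sorted(hull))  # [(0, 0), (0, 2), (2, 0), (2, 2)]
```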
Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv
2014-01-01
JNI in the Android platform is often observed to have low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse CAR-compliant components in Android applications in a seamless and efficient way. A metadata injection mechanism is designed to support the automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled automatically in the HPO-Dalvik virtual machine. The experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, and with no JNI bridging code required.
The ASSERT Virtual Machine Kernel: Support for Preservation of Temporal Properties
NASA Astrophysics Data System (ADS)
Zamorano, J.; de la Puente, J. A.; Pulido, J. A.; Urueña
2008-08-01
A new approach to building embedded real-time software has been developed in the ASSERT project. One of its key elements is the concept of a virtual machine preserving the non-functional properties of the system, and especially its real-time properties, all the way down from high-level design models to executable code. The paper describes one instance of the virtual machine concept that provides support for the preservation of temporal properties both at the source code level (by accepting only "legal" entities, i.e. software components with statically analysable real-time behaviour) and at run time (by monitoring the temporal behaviour of the system). The virtual machine has been validated on several pilot projects carried out by aerospace companies in the framework of the ASSERT project.
Means and method of balancing multi-cylinder reciprocating machines
Corey, John A.; Walsh, Michael M.
1985-01-01
A virtual balancing axis arrangement is described for multi-cylinder reciprocating piston machines for effectively balancing out imbalanced forces and minimizing residual imbalance moments acting on the crankshaft of such machines, without requiring the use of additional parallel-arrayed balancing shafts or complex and expensive gear arrangements. The novel virtual balancing axis arrangement is capable of being designed into multi-cylinder reciprocating piston and crankshaft machines to substantially reduce vibrations induced during operation of such machines, with only a minimal number of additional component parts. Some of the required component parts may be available from parts already required for the operation of auxiliary equipment, such as the oil and water pumps used in certain types of reciprocating piston and crankshaft machines, so that by appropriate location and dimensioning in accordance with the teachings of the invention, the virtual balancing axis arrangement can be built into the machine at little or no additional cost.
Biaxial flexural strength of CAD/CAM ceramics.
Buso, L; Oliveira-Júnior, O B; Hiroshi Fujiy, F; Leão Lombardo, G H; Ramalho Sarmento, H; Campos, F; Assunção Souza, R O
2011-06-01
The aim of the study was to evaluate the biaxial flexural strength of ceramics processed using the Cerec inLab system. The hypothesis was that the flexural strength would be influenced by the type of ceramic. Ten samples (ISO 6872) of each ceramic (N=50/n=10) were made using Cerec inLab (Cerec 3D software) (Ø: 15 mm, thickness: 1.2 mm). Three silica-based ceramics (Vita Mark II [VM], ProCad [PC] and e.max CAD [ECAD]) and two yttria-stabilized tetragonal zirconia polycrystalline (Y-TZP) ceramics (e.max ZirCad [ZrCAD] and Vita In-Ceram 2000 YZ Cubes [VYZ]) were tested. The samples were finished with wet silicon carbide papers up to 1200-grit and polished in a polishing machine with diamond paste (3 µm). The samples were then submitted to biaxial flexural strength testing in a universal testing machine (EMIC) at 1 mm/min. The data (MPa) were analyzed using the Kruskal-Wallis and Dunn (5%) tests. Scanning electron microscopy (SEM) was performed on a representative sample from each group. The values (median, mean±SD) obtained for the experimental groups were: VM (101.7, 102.1±13.65 MPa), PC (165.2, 160±34.7 MPa), ECAD (437.2, 416.1±50.1 MPa), ZrCAD (804.2, 800.8±64.47 MPa) and VYZ (792.7, 807±100.7 MPa). The type of ceramic influenced the flexural strength values (P=0.0001). ECAD, ZrCAD and VYZ presented similar flexural strength values, significantly higher than those of the other groups (PC and VM), which were statistically similar to each other (Dunn's test). The hypothesis was accepted. The polycrystalline (Y-TZP) ceramics should be the material of choice for making FPDs because of their higher flexural strength values.
Hybrid Cloud Computing Environment for EarthCube and Geoscience Community
NASA Astrophysics Data System (ADS)
Yang, C. P.; Qin, H.
2016-12-01
The NSF EarthCube Integration and Test Environment (ECITE) has built a hybrid cloud computing environment that provides cloud resources from private cloud environments, using the cloud system software OpenStack and Eucalyptus, and also manages a public cloud, Amazon Web Services, allowing resource synchronization and bursting between the private and public clouds. On the ECITE hybrid cloud platform, the EarthCube and geoscience communities can deploy and manage applications using base or customized virtual machine images, analyze big datasets using virtual clusters, and monitor virtual resource usage on the cloud in real time. Currently, a number of EarthCube projects have deployed or started migrating their projects to this platform, such as CHORDS, BCube, CINERGI, OntoSoft, and other EarthCube building blocks. To accomplish the deployment or migration, the administrator of the ECITE hybrid cloud platform prepares the specific needs of each project (e.g. images, port numbers, usable cloud capacity, etc.) in advance, based on communications between ECITE and the participant projects; the scientists or IT technicians in those projects then launch one or more virtual machines, access the virtual machine(s) to set up the computing environment if need be, and migrate their code, documents or data without worrying about the heterogeneity in structure and operations among different cloud platforms.
Cooperating reduction machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kluge, W.E.
1983-11-01
This paper presents a concept and a system architecture for the concurrent execution of program expressions of a concrete reduction language based on lambda-expressions. If formulated appropriately, these expressions are well suited for concurrent execution following a demand-driven model of computation. In particular, recursive program expressions with nonlinear expansion may, at run time, recursively be partitioned into a hierarchy of independent subexpressions which can be reduced by a corresponding hierarchy of virtual reduction machines. This hierarchy unfolds and collapses dynamically, with virtual machines recursively assuming the role of masters that create and eventually terminate, or synchronize with, slaves. The paper also proposes a nonhierarchically organized system of reduction machines, each featuring a stack architecture, that effectively supports the allocation of virtual machines to the real machines of the system in compliance with their hierarchical order of creation and termination. 25 references.
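The master/slave unfolding described above can be sketched in miniature: a nested expression is partitioned into independent subexpressions, each handed to its own "virtual reduction machine" (here, a thread-pool task), and the parent synchronizes on its children's results. The tuple-based expression format is a made-up stand-in, not the paper's reduction language.

```python
# Toy demand-driven concurrent reduction of an expression tree.
# Each internal node spawns tasks for its subexpressions and waits
# (synchronizes) for their results, mirroring the master/slave
# hierarchy in the abstract. Expression format is invented:
# an int atom, or (op, left, right) with op in {"+", "*"}.
from concurrent.futures import ThreadPoolExecutor

def reduce_expr(expr, pool):
    if isinstance(expr, int):        # atom: already in normal form
        return expr
    op, left, right = expr
    # Master creates slaves for both subexpressions...
    lf = pool.submit(reduce_expr, left, pool)
    rf = pool.submit(reduce_expr, right, pool)
    lv, rv = lf.result(), rf.result()  # ...then synchronizes with them.
    return lv + rv if op == "+" else lv * rv

# max_workers must comfortably exceed the number of internal nodes,
# since each blocks while waiting on its children.
expr = ("+", ("*", 2, 3), ("+", 4, ("*", 5, 1)))
with ThreadPoolExecutor(max_workers=8) as pool:
    print(reduce_expr(expr, pool))  # (2*3) + (4 + 5*1) = 15
```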
Lu, Hao; Papathomas, Thomas G; van Zessen, David; Palli, Ivo; de Krijger, Ronald R; van der Spek, Peter J; Dinjens, Winand N M; Stubbs, Andrew P
2014-11-25
In the prognosis and therapeutics of adrenal cortical carcinoma (ACC), the selection of the areas of highest proliferative rate (hotspots) within a slide and objective quantification of the immunohistochemical Ki67 Labelling Index (LI) are of critical importance. In addition to intratumoral heterogeneity in proliferative rate, i.e. levels of Ki67 expression within a given ACC, a lack of uniformity and reproducibility in the method of quantification of the Ki67 LI may confound an accurate assessment. We have implemented an open-source toolset, Automated Selection of Hotspots (ASH), for automated hotspot detection and quantification of the Ki67 LI. ASH utilizes a NanoZoomer Digital Pathology Image (NDPI) splitter to convert the NDPI-format digital slide scanned from the Hamamatsu instrument into a conventional tiff or jpeg image for automated segmentation and an adaptive step-finding hotspot detection algorithm. Quantitative hotspot ranking is provided by functionality from the open-source application ImmunoRatio as part of the ASH protocol. The output is a ranked set of hotspots with concomitant quantitative values based on whole-slide ranking. We have implemented open-source automated detection and quantitative ranking of hotspots to support histopathologists in selecting the 'hottest' hotspot areas in adrenocortical carcinoma. To provide the wider community with easy access to ASH, we implemented a Galaxy virtual machine (VM) of ASH, which is available from http://bioinformatics.erasmusmc.nl/wiki/Automated_Selection_of_Hotspots . The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/13000_2014_216.
ERIC Educational Resources Information Center
Chou, Chih-Yueh; Huang, Bau-Hung; Lin, Chi-Jen
2011-01-01
This study proposes a virtual teaching assistant (VTA) to share teacher tutoring tasks in helping students practice program tracing and proposes two mechanisms of complementing machine intelligence and human intelligence to develop the VTA. The first mechanism applies machine intelligence to extend human intelligence (teacher answers) to evaluate…
Wilkins, Leanne K; Girard, Todd A; Herdman, Katherine A; Christensen, Bruce K; King, Jelena; Kiang, Michael; Bohbot, Veronique D
2017-10-30
Different strategies may be spontaneously adopted to solve most navigation tasks. These strategies are associated with dissociable brain systems. Here, we use brain-imaging and cognitive tasks to test the hypothesis that individuals living with Schizophrenia Spectrum Disorders (SSD) have selective impairment using a hippocampal-dependent spatial navigation strategy. Brain activation and memory performance were examined using functional magnetic resonance imaging (fMRI) during the 4-on-8 virtual maze (4/8VM) task, a human analog of the rodent radial-arm maze that is amenable to both response-based (egocentric or landmark-based) and spatial (allocentric, cognitive mapping) strategies to remember and navigate to target objects. SSD (schizophrenia and schizoaffective disorder) participants who adopted a spatial strategy performed more poorly on the 4/8VM task and had less hippocampal activation than healthy comparison participants using either strategy as well as SSD participants using a response strategy. This study highlights the importance of strategy use in relation to spatial cognitive functioning in SSD. Consistent with a selective-hippocampal dependent deficit in SSD, these results support the further development of protocols to train impaired hippocampal-dependent abilities or harness non-hippocampal dependent intact abilities. Copyright © 2017 Elsevier B.V. All rights reserved.
Simplified Virtualization in a HEP/NP Environment with Condor
NASA Astrophysics Data System (ADS)
Strecker-Kellogg, W.; Caramarcu, C.; Hollowell, C.; Wong, T.
2012-12-01
In this work we will address the development of a simple prototype virtualized worker node cluster, using Scientific Linux 6.x as a base OS, KVM and the libvirt API for virtualization, and the Condor batch software to manage virtual machines. The discussion in this paper provides details on our experience with building, configuring, and deploying the various components from bare metal, including the base OS, creation and distribution of the virtualized OS images and the integration of batch services with the virtual machines. Our focus was on simplicity and interoperability with our existing architecture.
Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv
2014-01-01
JNI in the Android platform is often observed to have low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse CAR-compliant components in Android applications in a seamless and efficient way. A metadata injection mechanism is designed to support the automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled automatically in the HPO-Dalvik virtual machine. The experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, and with no JNI bridging code required. PMID:25110745
An incremental anomaly detection model for virtual machines.
Zhang, Hancui; Chen, Shuyu; Liu, Jun; Zhou, Zhen; Wu, Tianshu
2017-01-01
The Self-Organizing Map (SOM) algorithm, as an unsupervised learning method, has been applied in anomaly detection due to its capabilities of self-organization and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Moreover, cloud platforms with large-scale virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing characteristics, which makes the algorithm exhibit low accuracy and poor scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large-scale and highly dynamic features of virtual machines on cloud platforms. To demonstrate the effectiveness, experiments on the common benchmark KDD Cup dataset and a real dataset have been performed. The results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on cloud platforms.
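Two of the ingredients named above, a best-matching-unit search under a Weighted Euclidean Distance (WED) and a neighborhood update, can be shown in a toy SOM training step. This is a minimal illustration of the standard SOM mechanics, not the IISOM algorithm itself; all weights and parameters are placeholders.

```python
# Minimal SOM ingredients: WED-based best-matching-unit (BMU) search
# on a 1-D neuron chain, plus a neighborhood update pulling the BMU
# and its neighbors toward the input. Toy values throughout.
import math

def wed(x, w, feature_weights):
    # Weighted Euclidean distance between input x and neuron weights w.
    return math.sqrt(sum(fw * (a - b) ** 2
                         for a, b, fw in zip(x, w, feature_weights)))

def train_step(neurons, x, feature_weights, lr=0.5, radius=1):
    # Find the BMU...
    bmu = min(range(len(neurons)),
              key=lambda i: wed(x, neurons[i], feature_weights))
    # ...then move the BMU and its chain neighbors toward the input.
    for i, w in enumerate(neurons):
        if abs(i - bmu) <= radius:
            neurons[i] = [a + lr * (b - a) for a, b in zip(w, x)]
    return bmu

neurons = [[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]]
bmu = train_step(neurons, [4.0, 4.0], feature_weights=[1.0, 1.0])
print(bmu)  # neuron 2 is closest to the input
```

At detection time, an input whose WED to its BMU exceeds a threshold would be flagged as anomalous; IISOM's contributions (heuristic initialization, incremental updates, neighborhood-restricted search) build on exactly these pieces.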
An incremental anomaly detection model for virtual machines
Zhang, Hancui; Chen, Shuyu; Liu, Jun; Zhou, Zhen; Wu, Tianshu
2017-01-01
The Self-Organizing Map (SOM) algorithm, as an unsupervised learning method, has been applied in anomaly detection due to its capabilities of self-organization and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Moreover, cloud platforms with large-scale virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing characteristics, which makes the algorithm exhibit low accuracy and poor scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large-scale and highly dynamic features of virtual machines on cloud platforms. To demonstrate the effectiveness, experiments on the common benchmark KDD Cup dataset and a real dataset have been performed. The results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on cloud platforms. PMID:29117245
IBM NJE protocol emulator for VAX/VMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engert, D.E.
1981-01-01
Communications software has been written at Argonne National Laboratory to enable a VAX/VMS system to participate as an end-node in a standard IBM network by emulating the Network Job Entry (NJE) protocol. NJE is actually a collection of programs that support job networking for the operating systems used on most large IBM-compatible computers (e.g., VM/370, MVS with JES2 or JES3, SVS, MVT with ASP or HASP). Files received by the VAX can be printed or saved in user-selected disk files. Files sent to the network can be routed to any node in the network for printing, punching, or job submission, as well as to a VM/370 user's virtual reader. Files sent from the VAX are queued and transmitted asynchronously to allow users to perform other work while files are awaiting transmission. No changes are required to the IBM software.
Analysis towards VMEM File of a Suspended Virtual Machine
NASA Astrophysics Data System (ADS)
Song, Zheng; Jin, Bo; Sun, Yongqing
With the popularity of virtual machines, forensic investigators are challenged with more complicated situations, among which discovering evidence in virtualized environments is of significant importance. This paper mainly analyzes the file suffixed with .vmem in VMware Workstation, which stores all pseudo-physical memory into an image. The internal file structure of the .vmem file is studied and disclosed. Key information about processes and threads of a suspended virtual machine is revealed. Further investigation into the Windows XP SP3 heap contents is conducted and a proof-of-concept tool is provided. Different methods to obtain forensic memory images are introduced, with their advantages and limitations analyzed. We conclude with an outlook.
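A generic first step when examining a raw memory image such as a .vmem file is to sweep it for printable ASCII runs (in the spirit of the Unix `strings` tool) to surface process names and other artifacts. The sketch below illustrates only that generic scan; it does not reproduce the paper's analysis of the .vmem internal structure, and the byte dump is fabricated for the example.

```python
# Sweep a raw byte buffer for printable ASCII runs, a common first
# pass over a memory image before structured parsing. The dump below
# is a made-up stand-in for real .vmem content.
import re

def ascii_strings(data, min_len=4):
    # Return printable ASCII runs of at least min_len bytes.
    return [m.group().decode("ascii")
            for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data)]

dump = b"\x00\x01notepad.exe\x00\xff\x03lsass.exe\x7f\x00hi\x00"
print(ascii_strings(dump))  # ['notepad.exe', 'lsass.exe']
```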
The HEPiX Virtualisation Working Group: Towards a Grid of Clouds
NASA Astrophysics Data System (ADS)
Cass, Tony
2012-12-01
The use of virtual machine images, as for example with Cloud services such as Amazon's Elastic Compute Cloud, is attractive for users as they get a guaranteed execution environment, something that cannot today be provided across sites participating in computing grids such as the Worldwide LHC Computing Grid. However, Grid sites often operate within computer security frameworks which preclude the use of remotely generated images. The HEPiX Virtualisation Working Group was set up with the objective of enabling the use of remotely generated virtual machine images at Grid sites and, to this end, has introduced the idea of trusted virtual machine images, which are guaranteed to be secure and configurable by sites such that security policy commitments can be met. This paper describes the requirements and details of these trusted virtual machine images and presents a model for their use to facilitate the integration of Grid- and Cloud-based computing environments for High Energy Physics.
Human Machine Interfaces for Teleoperators and Virtual Environments
NASA Technical Reports Server (NTRS)
Durlach, Nathaniel I. (Compiler); Sheridan, Thomas B. (Compiler); Ellis, Stephen R. (Compiler)
1991-01-01
In Mar. 1990, a meeting organized around the general theme of teleoperation research into virtual environment display technology was conducted. This is a collection of conference-related fragments that will give a glimpse of the potential of the following fields and how they interplay: sensorimotor performance; human-machine interfaces; teleoperation; virtual environments; performance measurement and evaluation methods; and design principles and predictive models.
1988-02-29
…by memory copying will degrade system performance on shared-memory multiprocessors. Virtual memory (VM) remapping, as opposed to memory copying…
Virtual C Machine and Integrated Development Environment for ATMS Controllers.
DOT National Transportation Integrated Search
2000-04-01
The overall objective of this project is to develop a prototype virtual machine that fits on current Advanced Traffic Management Systems (ATMS) controllers and provides functionality for complex traffic operations.;Prepared in cooperation with Utah S...
Using Impact Modulation to Identify Loose Bolts on a Satellite
2011-10-21
Approved for public release; distribution is unlimited. …shown in the literature to be an effective damage detection method for cracks, delamination, and fatigue in… …to identify loose bolts and fatigue damage using optimized sensor locations, using a Support Vector Machines algorithm to classify the damage. Finally… [48] did preliminary work which showed that VM is effective in detecting fatigue cracks in engineering components despite changes in actuator location.
System-Level Virtualization Research at Oak Ridge National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, Stephen L; Vallee, Geoffroy R; Naughton, III, Thomas J
2010-01-01
System-level virtualization, a technique originally used to share what were then considered large computing resources, subsequently faded from the spotlight as individual workstations gained popularity with a one machine, one user approach; today it is enjoying a rebirth. One reason for this resurgence is that the simple workstation has grown in capability to rival anything available in the past. Thus, computing centers are again looking at the price/performance benefit of sharing a single computing box via server consolidation. However, industry has concentrated on the benefits of using virtualization for server consolidation (enterprise computing), whereas our interest is in leveraging virtualization to advance high-performance computing (HPC). While these two interests may appear orthogonal, one consolidating multiple applications and users on a single machine while the other requires all the power from many machines to be dedicated solely to its purpose, we propose that virtualization does provide attractive capabilities that may be exploited to the benefit of HPC. This raises two fundamental questions: is the concept of virtualization (a machine-sharing technology) really suitable for HPC, and if so, how does one go about leveraging these virtualization capabilities for the benefit of HPC? To address these questions, this document presents ongoing studies on the usage of system-level virtualization in an HPC context. These studies include an analysis of the benefits of system-level virtualization for HPC, a presentation of research efforts based on virtualization for system availability, and a presentation of research efforts for the management of virtual systems. The basis for this document was material presented by Stephen L. Scott at the Collaborative and Grid Computing Technologies meeting held in Cancun, Mexico on April 12-14, 2007.
An element search ant colony technique for solving virtual machine placement problem
NASA Astrophysics Data System (ADS)
Srija, J.; Rani John, Rose; Kanaga, Grace Mary, Dr.
2017-09-01
The data centres in the cloud environment play a key role in providing infrastructure for ubiquitous computing, pervasive computing, mobile computing, etc. This computing technique tries to utilize the available resources in order to provide services; hence, maintaining resource utilization without wasted power consumption has become a challenging task for researchers. In this paper we propose a direct-guidance ant colony system for effectively mapping virtual machines to physical machines with maximal resource utilization and minimal power consumption. The proposed algorithm is compared with an existing ant colony approach to the virtual machine placement problem and proves to provide better results than the existing technique.
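The placement objective above (high utilization, low power) can be made concrete with a deliberately simple baseline: pack VMs onto as few hosts as possible so that idle hosts can be powered off. The first-fit-decreasing heuristic below is only a baseline sketch of that objective, not the paper's ant colony algorithm; capacities and demands are single-dimensional placeholders.

```python
# Baseline VM placement: first-fit decreasing bin packing. Fewer
# active hosts means less idle power draw, which is the objective the
# ant colony approach optimizes more thoroughly. Toy numbers only.

def place_vms(vm_demands, host_capacity):
    # Each host is [remaining capacity, [placed VM demands]].
    hosts = []
    for demand in sorted(vm_demands, reverse=True):
        for host in hosts:
            if host[0] >= demand:        # fits on an already-active host
                host[0] -= demand
                host[1].append(demand)
                break
        else:                            # no fit: power on a new host
            hosts.append([host_capacity - demand, [demand]])
    return hosts

hosts = place_vms([4, 4, 3, 3, 2], host_capacity=8)
print(len(hosts))  # 2 active hosts suffice for 16 units of demand
```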
Performance Analysis of Ivshmem for High-Performance Computing in Virtual Machines
NASA Astrophysics Data System (ADS)
Ivanovic, Pavle; Richter, Harald
2018-01-01
High-performance computing (HPC) is rarely accomplished via virtual machines (VMs). In this paper, we present a remake of ivshmem which can change this. Ivshmem was a shared memory (SHM) between virtual machines on the same server, with SHM-access synchronization included, until about 5 years ago, when newer versions of Linux and its virtualization library libvirt evolved. We restored that SHM-access synchronization feature, because it is indispensable for HPC, and made ivshmem runnable with contemporary versions of Linux, libvirt, KVM, QEMU and especially MPICH, which is an implementation of MPI, the standard HPC communication library. Additionally, MPICH was transparently modified by us to include ivshmem, resulting in a three- to ten-fold performance improvement compared to TCP/IP. Furthermore, we have transparently replaced MPI_PUT, a single-sided MPICH communication mechanism, by our own MPI_PUT wrapper. As a result, our ivshmem even surpasses non-virtualized SHM data transfers for block lengths greater than 512 KBytes, showing the benefits of virtualization. All improvements were possible without using SR-IOV.
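The core idea behind ivshmem, passing data through a shared memory segment instead of copying it through a network stack, can be sketched with Python's standard shared memory support. Here two handles attach to the same segment: a write through one is immediately visible through the other with no intermediate copy. A real ivshmem setup shares the segment between QEMU/KVM guests and adds access synchronization, which this toy single-process example does not attempt.

```python
# Two attachments to one shared memory segment: the "sender" writes
# through one handle, the "receiver" reads through the other. No data
# is copied between them; both views map the same buffer.
from multiprocessing import shared_memory

shm = shared_memory.SharedMemory(create=True, size=16)
try:
    peer = shared_memory.SharedMemory(name=shm.name)  # second attachment
    shm.buf[:5] = b"hello"           # "sender" writes into the segment
    received = bytes(peer.buf[:5])   # "receiver" sees it via its own view
    print(received.decode())         # hello
    peer.close()
finally:
    shm.close()
    shm.unlink()                     # release the segment
```

Requires Python 3.8+. In a real deployment the two attachments would live in different processes (or, with ivshmem, different VMs), which is exactly why the restored access synchronization matters.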
Dynamic simulation of perturbation responses in a closed-loop virtual arm model.
Du, Yu-Fan; He, Xin; Lan, Ning
2010-01-01
A closed-loop virtual arm (VA) model has been developed in the SIMULINK environment by adding spinal reflex circuits and propriospinal neural networks to the open-loop VA model developed in an earlier study [1]. An improved virtual muscle model (VM4.0) is used to speed up simulation and to generate more precise recruitment of muscle force at low levels of muscle activation. Time delays in the reflex loops are determined by their synaptic connections and afferent transmission back to the spinal cord. Reflex gains are properly selected so that closed-loop responses are stable. With the closed-loop VA model, we are developing an approach to evaluate system behaviors by dynamic simulation of perturbation responses. Joint stiffness is calculated from simulated perturbation responses by a least-squares algorithm in MATLAB. This method of dynamic simulation will be essential for further evaluation of feedforward and reflex control of arm movement and position.
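The stiffness-estimation step described above can be sketched as an ordinary least-squares fit: given perturbation responses, recover joint stiffness K and damping B from the assumed linear model tau = K*dq + B*dq_dot. The paper uses MATLAB; this Python sketch solves the 2x2 normal equations in closed form, and the response data below are synthetic, generated from known parameters for illustration.

```python
# Least-squares recovery of stiffness K and damping B from simulated
# perturbation responses, assuming tau = K*dq + B*dq_dot. Solves the
# normal equations A^T A x = A^T b for A = [dq, dq_dot] directly.

def fit_stiffness_damping(dq, dq_dot, tau):
    s11 = sum(q * q for q in dq)
    s12 = sum(q * v for q, v in zip(dq, dq_dot))
    s22 = sum(v * v for v in dq_dot)
    b1 = sum(q * t for q, t in zip(dq, tau))
    b2 = sum(v * t for v, t in zip(dq_dot, tau))
    det = s11 * s22 - s12 * s12          # assumes responses are not collinear
    K = (b1 * s22 - b2 * s12) / det
    B = (b2 * s11 - b1 * s12) / det
    return K, B

# Synthetic responses generated with K=10, B=2 (noise-free for clarity).
dq     = [0.1, 0.2, 0.3, 0.1]
dq_dot = [1.0, 0.5, -0.2, 0.0]
tau    = [10 * q + 2 * v for q, v in zip(dq, dq_dot)]
K, B = fit_stiffness_damping(dq, dq_dot, tau)
print(round(K, 6), round(B, 6))  # 10.0 2.0
```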
Concurrent access to a virtual microscope using a web service oriented architecture
NASA Astrophysics Data System (ADS)
Corredor, Germán.; Iregui, Marcela; Arias, Viviana; Romero, Eduardo
2013-11-01
Virtual microscopy (VM) facilitates visualization and deployment of histopathological virtual slides (VS), a useful tool for education, research and diagnosis. In recent years, it has become popular, yet its use is still limited, mainly because of the very large sizes of VS, typically of the order of gigabytes. Such a volume of data requires efficacious and efficient strategies to access the VS content. In an educative or research scenario, several users may need to access and interact with VS at the same time, so, due to the large data size, a very expensive and powerful infrastructure is usually required. This article introduces a novel JPEG2000-based service oriented architecture for streaming and visualizing very large images under scalable strategies, which in addition does not require very specialized infrastructure. Results suggest that the proposed architecture enables transmission and simultaneous visualization of large images, while using resources efficiently and offering users adequate response times.
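Tile-based access is the core idea that makes gigabyte-scale VS streamable. As an illustration only (the helper name and the 256-pixel tile size are assumptions, not the paper's protocol), the function below computes which tiles intersect a client viewport, so only those tiles, not the whole slide, need to be transmitted:

```python
def tiles_for_viewport(x, y, w, h, tile=256):
    """Return (col, row) indices of every fixed-size tile intersecting the viewport."""
    c0, r0 = x // tile, y // tile
    c1, r1 = (x + w - 1) // tile, (y + h - 1) // tile
    return [(c, r) for r in range(r0, r1 + 1) for c in range(c0, c1 + 1)]

# A 1000x800 viewport at (300, 500) over a gigapixel slide touches a 5x5
# block of 256-pixel tiles: 25 tiles instead of the whole image.
needed = tiles_for_viewport(300, 500, 1000, 800)
print(len(needed))  # 25
```

JPEG2000 goes further by also offering resolution and quality scalability within each tile, but the bandwidth saving starts with this viewport-to-tile mapping.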
NASA Astrophysics Data System (ADS)
Berzano, D.; Blomer, J.; Buncic, P.; Charalampidis, I.; Ganis, G.; Meusel, R.
2015-12-01
During the last years, several Grid computing centres chose virtualization as a better way to manage diverse use cases with self-consistent environments on the same bare infrastructure. The maturity of control interfaces (such as OpenNebula and OpenStack) opened the possibility to easily change the amount of resources assigned to each use case by simply turning on and off virtual machines. Some of those private clouds use, in production, copies of the Virtual Analysis Facility, a fully virtualized and self-contained batch analysis cluster capable of expanding and shrinking automatically upon need: however, resource starvation occurs frequently as expansion has to compete with other virtual machines running long-living batch jobs. Such batch nodes cannot relinquish their resources in a timely fashion: the more jobs they run, the longer it takes to drain them and shut off, and making one-job virtual machines introduces a non-negligible virtualization overhead. By improving several components of the Virtual Analysis Facility we have realized an experimental “Docked” Analysis Facility for ALICE, which leverages containers instead of virtual machines for providing performance and security isolation. We will present the techniques we have used to address practical problems, such as software provisioning through CVMFS, as well as our considerations on the maturity of containers for High Performance Computing. As the abstraction layer is thinner, our Docked Analysis Facilities may feature a more fine-grained sizing, down to single-job node containers: we will show how this approach will positively impact automatic cluster resizing by deploying lightweight pilot containers instead of replacing central queue polls.
Lee, Kyeong-Ryoon; Chae, Yoon-Jee; Cho, Sung-Eel; Chung, Suk-Jae
2011-12-01
A single-dose glass ampoule was developed for ease of administration. When glass ampoules are opened, however, the contents can become contaminated with particulate matter. Reducing this contamination may minimize the risk to patients posed by particulates. This study reports on an attempt to reduce insoluble particulate contamination and to develop methods for its precise measurement. A vacuum machine (VM) was used to reduce the level of insoluble particulate contamination, and microscopy, a scanning electron microscope with energy-dispersive X-ray spectrometer (SEM-EDS) and an inductively coupled plasma atomic emission spectrometer (ICP-AES) were used to evaluate the level of reduction. The method permitted the insoluble particle content to be reduced by up to 87.8 and 89.3% after opening 1 and 2 mL ampoules, respectively. The morphology of the glass particulate contaminants was very sharp and rough, a condition that can be harmful to human health. The total weight of glass particles in the opened ampoules was determined to be 104 ± 72.9 μg and 30.5 ± 1.00 μg after opening 1 and 2 mL ampoules, respectively, when the VM was operated at highest power. These totals correspond to 53.6 and 50.6%, respectively, of the weights measured for 1 and 2 mL ampoules opened by hand. The loss of ampoule contents on opening by the VM was 6.50 and 4.67% for 1 and 2 mL ampoules, respectively. As a result, the VM efficiently reduced glass particulate contamination, and the evaluation methods used were appropriate for quantifying these levels of contamination.
Atmospheric Science Data Center
2013-04-01
MISR Center Block Time Tool The misr_time tool calculates the block center times for MISR Level 1B2 files. This is ... version of the IDL package or by using the IDL Virtual Machine application. The IDL Virtual Machine is bundled with IDL and is ...
System-Level Virtualization for High Performance Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vallee, Geoffroy R; Naughton, III, Thomas J; Engelmann, Christian
2008-01-01
System-level virtualization has been a research topic since the 70's but regained popularity during the past few years because of the availability of efficient solutions such as Xen and the implementation of hardware support in commodity processors (e.g. Intel-VT, AMD-V). However, the majority of system-level virtualization projects are guided by the server consolidation market. As a result, current virtualization solutions appear not to be suitable for high performance computing (HPC), which is typically based on large-scale systems. On the other hand, there is significant interest in exploiting virtual machines (VMs) within HPC for a number of other reasons. By virtualizing the machine, one is able to run a variety of operating systems and environments as needed by the applications. Virtualization allows users to isolate workloads, improving security and reliability. It is also possible to support non-native environments and/or legacy operating environments through virtualization. In addition, it is possible to balance work loads, use migration techniques to relocate applications from failing machines, and isolate faulty systems for repair. This document presents the challenges for the implementation of a system-level virtualization solution for HPC. It also presents a brief survey of the different approaches and techniques to address these challenges.
The Machine / Job Features Mechanism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alef, M.; Cass, T.; Keijser, J. J.
Within the HEPiX virtualization group and the Worldwide LHC Computing Grid’s Machine/Job Features Task Force, a mechanism has been developed which provides access to detailed information about the current host and the current job to the job itself. This allows user payloads to access meta information, independent of the current batch system or virtual machine model. The information can be accessed either locally via the filesystem on a worker node, or remotely via HTTP(S) from a webserver. This paper describes the final version of the specification from 2016 which was published as an HEP Software Foundation technical note, and the design of the implementations of this version for batch and virtual machine platforms. We discuss early experiences with these implementations and how they can be exploited by experiment frameworks.
The machine/job features mechanism
NASA Astrophysics Data System (ADS)
Alef, M.; Cass, T.; Keijser, J. J.; McNab, A.; Roiser, S.; Schwickerath, U.; Sfiligoi, I.
2017-10-01
Within the HEPiX virtualization group and the Worldwide LHC Computing Grid’s Machine/Job Features Task Force, a mechanism has been developed which provides access to detailed information about the current host and the current job to the job itself. This allows user payloads to access meta information, independent of the current batch system or virtual machine model. The information can be accessed either locally via the filesystem on a worker node, or remotely via HTTP(S) from a webserver. This paper describes the final version of the specification from 2016 which was published as an HEP Software Foundation technical note, and the design of the implementations of this version for batch and virtual machine platforms. We discuss early experiences with these implementations and how they can be exploited by experiment frameworks.
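The filesystem access path described above is simple enough to sketch. Assuming the conventional $MACHINEFEATURES and $JOBFEATURES environment variables point at directories containing one file per key (a reading of the specification, not code from the paper), a payload could collect the features like this:

```python
import os

def read_features(env_var: str) -> dict:
    """Read every key file in the features directory into {key: value}."""
    root = os.environ.get(env_var)
    if not root or not os.path.isdir(root):
        return {}  # features not provided on this resource
    features = {}
    for key in os.listdir(root):
        path = os.path.join(root, key)
        if os.path.isfile(path):
            with open(path) as fh:
                features[key] = fh.read().strip()
    return features

# e.g. machine power: float(read_features("MACHINEFEATURES").get("hs06", "0"))
```

The HTTP(S) variant replaces the directory listing with GET requests against the same key names; the payload-side logic is otherwise identical.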
On-demand Simulation of Atmospheric Transport Processes on the AlpEnDAC Cloud
NASA Astrophysics Data System (ADS)
Hachinger, S.; Harsch, C.; Meyer-Arnek, J.; Frank, A.; Heller, H.; Giemsa, E.
2016-12-01
The "Alpine Environmental Data Analysis Centre" (AlpEnDAC) develops a data-analysis platform for high-altitude research facilities within the "Virtual Alpine Observatory" project (VAO). This platform, with its web portal, will support use cases going much beyond data management: On user request, the data are augmented with "on-demand" simulation results, such as air-parcel trajectories for tracing down the source of pollutants when they appear in high concentration. The respective back-end mechanism uses the Compute Cloud of the Leibniz Supercomputing Centre (LRZ) to transparently calculate results requested by the user, as far as they have not yet been stored in AlpEnDAC. The queuing-system operation model common in supercomputing is replaced by a model in which Virtual Machines (VMs) on the cloud are automatically created/destroyed, providing the necessary computing power immediately on demand. From a security point of view, this allows to perform simulations in a sandbox defined by the VM configuration, without direct access to a computing cluster. Within few minutes, the user receives conveniently visualized results. The AlpEnDAC infrastructure is distributed among two participating institutes [front-end at German Aerospace Centre (DLR), simulation back-end at LRZ], requiring an efficient mechanism for synchronization of measured and augmented data. We discuss our iRODS-based solution for these data-management tasks as well as the general AlpEnDAC framework. Our cloud-based offerings aim at making scientific computing for our users much more convenient and flexible than it has been, and to allow scientists without a broad background in scientific computing to benefit from complex numerical simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hasenkamp, Daren; Sim, Alexander; Wehner, Michael
Extensive computing power has been used to tackle issues such as climate change, fusion energy, and other pressing scientific challenges. These computations produce a tremendous amount of data; however, many of the data analysis programs currently run on only a single processor. In this work, we explore the possibility of using the emerging cloud computing platform to parallelize such sequential data analysis tasks. As a proof of concept, we wrap a program for analyzing trends of tropical cyclones in a set of virtual machines (VMs). This approach allows the user to keep their familiar data analysis environment in the VMs, while we provide the coordination and data transfer services to ensure the necessary input and output are directed to the desired locations. This work extensively exercises the networking capability of the cloud computing systems and has revealed a number of weaknesses in the current cloud system software. In our tests, we are able to scale the parallel data analysis job to a modest number of VMs and achieve a speedup that is comparable to running the same analysis task using MPI. However, compared to MPI-based parallelization, the cloud-based approach has a number of advantages. The cloud-based approach is more flexible because the VMs can capture arbitrary software dependencies without requiring the user to rewrite their programs. The cloud-based approach is also more resilient to failure: it can make progress as long as a single VM is running, whereas the whole analysis job fails as soon as one MPI node fails. In short, this initial work demonstrates that a cloud computing system is a viable platform for distributed scientific data analyses traditionally conducted on dedicated supercomputing systems.
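The coordination pattern, independent chunks fanned out to workers, can be sketched compactly. The example below is a stand-in using a thread pool in one process rather than a set of VMs; the chunking and fan-out logic is the point, and the per-chunk analysis function is invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def analyze_chunk(chunk):
    """Stand-in per-worker task: count readings at hurricane force (>= 64 kt)."""
    return sum(1 for wind_kt in chunk if wind_kt >= 64)

def parallel_analysis(records, workers=4, chunk_size=500):
    """Split the records into chunks and fan them out to concurrent workers."""
    chunks = [records[i:i + chunk_size] for i in range(0, len(records), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(analyze_chunk, chunks))

data = [30, 70, 65, 10] * 500   # 2000 synthetic wind readings (knots)
print(parallel_analysis(data))  # 1000
```

The resilience argument in the abstract falls out of this structure: a lost worker costs one chunk, which the coordinator can simply resubmit, whereas an MPI communicator dies whole.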
NASA Astrophysics Data System (ADS)
Bolodurina, I. P.; Parfenov, D. I.
2018-01-01
We have elaborated a neural network model of virtual network flow identification based on the statistical properties of flows circulating in the network of the data center and characteristics that describe the content of packets transmitted through network objects. This enabled us to establish the optimal set of attributes to identify virtual network functions. We have established an algorithm for optimizing the placement of virtual network functions using the data obtained in our research. Our approach uses a hybrid method of virtualization combining virtual machines and containers, which reduces the infrastructure load and the response time in the network of the virtual data center. The algorithmic solution is based on neural networks, which enables it to scale to any number of network function copies.
ERIC Educational Resources Information Center
Liu, S.; Tang, J.; Deng, C.; Li, X.-F.; Gaudiot, J.-L.
2011-01-01
Java Virtual Machine (JVM) education has become essential in training embedded software engineers as well as virtual machine researchers and practitioners. However, due to the lack of suitable instructional tools, it is difficult for students to obtain any kind of hands-on experience and to attain any deep understanding of JVM design. To address…
A comparative analysis of dynamic grids vs. virtual grids using the A3pviGrid framework.
Shankaranarayanan, Avinas; Amaldas, Christine
2010-11-01
With the proliferation of Quad/Multi-core micro-processors in mainstream platforms such as desktops and workstations, a large number of unused CPU cycles can be utilized for running virtual machines (VMs) as dynamic nodes in distributed environments. Grid services and their service oriented business broker, now termed cloud computing, could deploy image based virtualization platforms enabling agent based resource management and dynamic fault management. In this paper we present an efficient way of utilizing heterogeneous virtual machines on idle desktops as an environment for consumption of high performance grid services. Spurious and exponential increases in the size of datasets are constant concerns in medical and pharmaceutical industries due to the constant discovery and publication of large sequence databases. Traditional algorithms are not designed to handle large data sizes under sudden and dynamic changes in the execution environment, as previously discussed. This research was undertaken to compare our previous results with those from running the same test dataset on a virtual Grid platform using virtual machines (virtualization). The implemented architecture, A3pviGrid, utilizes game theoretic optimization and agent based team formation (coalition) algorithms to improve scalability with respect to team formation. Due to the dynamic nature of distributed systems (as discussed in our previous work) all interactions were made local within a team transparently. This paper is a proof of concept of an experimental mini-Grid test-bed compared to running the platform on local virtual machines on a local test cluster. This was done to give every agent its own execution platform, enabling anonymity and better control of the dynamic environmental parameters. We also analyze performance and scalability of BLAST in a multiple virtual node setup and present our findings.
This paper is an extension of our previous research on improving the BLAST application framework using dynamic Grids on virtualization platforms such as the virtual box.
Bipolar stimulation of a three-dimensional bidomain incorporating rotational anisotropy.
Muzikant, A L; Henriquez, C S
1998-04-01
A bidomain model of cardiac tissue was used to examine the effect of transmural fiber rotation during bipolar stimulation in three-dimensional (3-D) myocardium. A 3-D tissue block with unequal anisotropy and two types of fiber rotation (none and moderate) was stimulated along and across fibers via bipolar electrodes on the epicardial surface, and the resulting steady-state interstitial (phi e) and transmembrane (Vm) potentials were computed. Results demonstrate that the presence of rotated fibers does not change the amount of tissue polarized by the point surface stimuli, but does cause changes in the orientation of phi e and Vm in the depth of the tissue, away from the epicardium. Further analysis revealed a relationship between the Laplacian of phi e, regions of virtual electrodes, and fiber orientation that was dependent upon adequacy of spatial sampling and the interstitial anisotropy. These findings help to understand the role of fiber architecture during extracellular stimulation of cardiac muscle.
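The link between the Laplacian of phi e and virtual electrodes can be illustrated numerically. The sketch below is not the bidomain solver: it applies a 2-D five-point stencil (unit grid spacing) to a toy potential field; in the bidomain framework, regions where this quantity is strongly positive or negative are the candidate virtual electrode sites the analysis above refers to.

```python
def laplacian(phi):
    """Five-point discrete Laplacian of a 2-D field (unit spacing, zero edges)."""
    ny, nx = len(phi), len(phi[0])
    lap = [[0.0] * nx for _ in range(ny)]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            lap[j][i] = (phi[j][i - 1] + phi[j][i + 1]
                         + phi[j - 1][i] + phi[j + 1][i]
                         - 4.0 * phi[j][i])
    return lap

# A point elevation of the interstitial potential at the grid centre gives a
# strongly negative Laplacian there, flagging a candidate virtual electrode.
phi = [[0.0] * 5 for _ in range(5)]
phi[2][2] = 1.0
print(laplacian(phi)[2][2])  # -4.0
```

The paper's point about spatial sampling adequacy shows up directly here: a coarser stencil smears the Laplacian and can miss or misplace such sites.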
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engert, D.E.; Raffenetti, C.
NJE is communications software developed to enable a VAX VMS system to participate as an end-node in a standard IBM network by emulating the Network Job Entry (NJE) protocol. NJE supports job networking for the operating systems used on most large IBM-compatible computers (e.g., VM/370, MVS with JES2 or JES3, SVS, MVT with ASP or HASP). Files received by the VAX can be printed or saved in user-selected disk files. Files sent to the network can be routed to any network node for printing, punching, or job submission, or to a VM/370 user's virtual reader. Files sent from the VAX are queued and transmitted asynchronously. No changes are required to the IBM software. DEC VAX11/780; VAX-11 FORTRAN 77 (99%) and MACRO-11 (1%); VMS 2.5; VAX11/780 with DUP-11 UNIBUS interface and 9600 baud synchronous modem.
Impact of Machine Virtualization on Timing Precision for Performance-critical Tasks
NASA Astrophysics Data System (ADS)
Karpov, Kirill; Fedotova, Irina; Siemens, Eduard
2017-07-01
In this paper we present a measurement study to characterize the impact of hardware virtualization on basic software timing, as well as on precise sleep operations of an operating system. We investigated how timer hardware is shared among heavily CPU-, I/O- and network-bound tasks on a virtual machine as well as on the host machine. VMware ESXi and QEMU/KVM have been chosen as commonly used examples of hypervisor- and host-based models. Based on statistical parameters of the retrieved distributions, our results provide a very good estimation of timing behavior. Such estimates are essential for real-time and performance-critical applications such as image processing or real-time control.
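A minimal probe in the spirit of this study can be written with standard timers. The sketch below (not the paper's measurement code) requests a fixed sleep repeatedly and records the overshoot; on a loaded VM the distribution of overshoots widens, and its statistics quantify timing precision.

```python
import time

def sleep_overshoots(requested_s=0.001, samples=50):
    """Per-sample overshoot of time.sleep() beyond the requested duration."""
    overshoots = []
    for _ in range(samples):
        t0 = time.perf_counter()
        time.sleep(requested_s)
        overshoots.append(time.perf_counter() - t0 - requested_s)
    return overshoots

jitter = sleep_overshoots()
print("max overshoot: %.0f us" % (max(jitter) * 1e6))
```

Running the same probe inside a guest and on its host, under CPU-, I/O- and network-bound background load, yields the kind of paired distributions the study compares.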
Software architecture standard for simulation virtual machine, version 2.0
NASA Technical Reports Server (NTRS)
Sturtevant, Robert; Wessale, William
1994-01-01
The Simulation Virtual Machine (SVM) is an Ada architecture which eases the effort involved in real-time software maintenance and sustaining engineering. The Software Architecture Standard defines the infrastructure from which all simulation models are built. SVM was developed for and used in the Space Station Verification and Training Facility.
DOE Office of Scientific and Technical Information (OSTI.GOV)
David Fritz, John Floren
2013-08-27
Minimega is a simple emulytics platform for creating testbeds of networked devices. The platform consists of easily deployable tools to facilitate bringing up large networks of virtual machines including Windows, Linux, and Android. Minimega attempts to allow experiments to be brought up quickly with nearly no configuration. Minimega also includes tools for simple cluster management, as well as tools for creating Linux based virtual machine images.
Web Service Distributed Management Framework for Autonomic Server Virtualization
NASA Astrophysics Data System (ADS)
Solomon, Bogdan; Ionescu, Dan; Litoiu, Marin; Mihaescu, Mircea
Virtualization for the x86 platform has recently established itself as a new technology that can improve the usage of machines in data centers and decrease the cost and energy of running a high number of servers. Similar to virtualization, autonomic computing, and more specifically self-optimization, aims to improve server farm usage through provisioning and deprovisioning of instances as needed by the system. Autonomic systems are able to determine the optimal number of server machines - real or virtual - to use at a given time, and add or remove servers from a cluster in order to achieve optimal usage. While provisioning and deprovisioning of servers is very important, the way the autonomic system is built is also very important, as a robust and open framework is needed. One such management framework is the Web Service Distributed Management (WSDM) system, which is an open standard of the Organization for the Advancement of Structured Information Standards (OASIS). This paper presents an open framework built on top of the WSDM specification, which aims to provide self-optimization for application servers residing on virtual machines.
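The self-optimization decision at the heart of such a system reduces to a small policy. The thresholds and function below are assumptions for illustration, not part of the WSDM specification: scale up when average utilization is high, scale down when it is low, within fixed bounds.

```python
def target_instances(current: int, utilization: float,
                     high=0.80, low=0.30, min_n=1, max_n=32) -> int:
    """Desired number of (virtual) server instances for the next interval."""
    if utilization > high:
        current += 1          # provision one more VM
    elif utilization < low and current > min_n:
        current -= 1          # deprovision an idle VM
    return max(min_n, min(max_n, current))

print(target_instances(4, 0.92))  # 5: scale up under load
print(target_instances(4, 0.10))  # 3: release an idle instance
print(target_instances(1, 0.10))  # 1: never drop below the floor
```

In a WSDM-managed deployment this policy would sit behind the standard management interfaces, with the provisioning actions carried out against the virtual machine layer.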
Design and Development of Functionally Operative and Visually Appealing Remote Firing Room Displays
NASA Technical Reports Server (NTRS)
Quaranto, Kristy
2014-01-01
This internship provided an opportunity for an intern to work with NASA's Ground Support Equipment (GSE) for the Spaceport Command and Control System (SCCS) at Kennedy Space Center as a remote display developer, under NASA mentor Kurt Leucht. The main focus was on creating remote displays for the hypergolic and high pressure helium subsystem team to help control the filling of the respective tanks. As a remote display developer for the GSE hypergolic and high pressure helium subsystem team, the intern was responsible for creating and testing graphical remote displays to be used in the Launch Control Center (LCC) on the Firing Room's computer monitors. To become more familiar with the subsystem, the individual attended multiple project meetings and acquired the team's specific requirements regarding what needed to be included in the remote displays. After receiving the requirements, the next step was to create a display that had both visual appeal and logical order using the Display Editor on the Virtual Machine (VM). In doing so, all Compact Unique Identifiers (CUI), which are associated with specific components within the subsystem, needed to be included in each respective display for the system to run properly. Then, once a display was created, it needed to be tested to ensure that it ran as intended, using the Test Driver, also found on the VM. This Test Driver is a specific application that checks that all the CUIs in the display are running properly and returning the correct form of information. After being created and locally tested, a display needed to go through further testing and evaluation before being deemed suitable for actual use. By the end of the semester-long experience at NASA's Kennedy Space Center, the individual was expected to have gained great knowledge and experience in various areas of display development and testing.
They were able to demonstrate this new knowledge obtained by creating multiple successful remote displays that will one day be used by the hypergolic and high pressure helium subsystem team in one of the LCC's firing rooms to fill the new Orion spacecraft.
NASA Astrophysics Data System (ADS)
Delipetrev, Blagoj
2016-04-01
Presently, most existing software is desktop-based, designed to work on a single computer, which represents a major limitation in many ways, starting from limited processing and storage capacity, accessibility, availability, etc. The only feasible solution lies in the web and cloud. This abstract presents research and development of a cloud computing geospatial application for water resources based on free and open source software and open standards, using a hybrid public-private cloud deployment model running on two separate virtual machines (VMs). The first one (VM1) runs on Amazon Web Services (AWS) and the second one (VM2) runs on a Xen cloud platform. The presented cloud application is developed using free and open source software, open standards and prototype code. The cloud application presents a framework for developing specialized cloud geospatial applications that need only a web browser to be used. This cloud application is the ultimate collaboration geospatial platform because multiple users across the globe with an internet connection and a browser can jointly model geospatial objects, enter attribute data and information, execute algorithms, and visualize results. The presented cloud application is available all the time, accessible from everywhere, scalable, works in a distributed computer environment, creates a real-time multiuser collaboration platform, its programming-language code and components are interoperable, and it is flexible in including additional components. The cloud geospatial application is implemented as a specialized water resources application with three web services for 1) data infrastructure (DI), 2) support for water resources modelling (WRM), 3) user management. The web services run on two VMs that communicate over the internet, providing services to users. The application was tested on the Zletovica river basin case study with concurrent multiple users.
The application is a state-of-the-art cloud geospatial collaboration platform. The presented solution is a prototype and can be used as a foundation for developing any specialized cloud geospatial application. Further research will focus on distributing the cloud application over additional VMs and testing the scalability and availability of services.
Flow-Centric, Back-in-Time Debugging
NASA Astrophysics Data System (ADS)
Lienhard, Adrian; Fierz, Julien; Nierstrasz, Oscar
Conventional debugging tools present developers with means to explore the run-time context in which an error has occurred. In many cases this is enough to help the developer discover the faulty source code and correct it. However, rather often errors occur due to code that has executed in the past, leaving certain objects in an inconsistent state. The actual run-time error only occurs when these inconsistent objects are used later in the program. So-called back-in-time debuggers help developers step back through earlier states of the program and explore execution contexts not available to conventional debuggers. Nevertheless, even back-in-time debuggers do not help answer the question, “Where did this object come from?” The Object-Flow Virtual Machine, which we have proposed in previous work, tracks the flow of objects to answer precisely such questions, but this VM does not provide dedicated debugging support to explore faulty programs. In this paper we present a novel debugger, called Compass, to navigate between conventional run-time stack-oriented control flow views and object flows. Compass enables a developer to effectively navigate from an object contributing to an error back-in-time through all the code that has touched the object. We present the design and implementation of Compass, and we demonstrate how flow-centric, back-in-time debugging can be used to effectively locate the source of hard-to-find bugs.
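The object-flow idea can be illustrated with a toy provenance wrapper. This is not the Object-Flow VM, which tracks flow transparently at the virtual machine level; the hypothetical Tracked class below merely records each location an object passes through, which is exactly the information a flow-centric debugger needs to answer "where did this object come from?"

```python
class Tracked:
    """Value wrapper that records every code location it passes through."""
    def __init__(self, value, origin):
        self.value = value
        self.flow = [origin]      # ordered provenance trail

    def passed_to(self, location):
        self.flow.append(location)
        return self

def make_order():  return Tracked({"qty": -1}, "make_order")   # bad state born here
def validate(o):   return o.passed_to("validate")              # bug: qty unchecked
def ship(o):       return o.passed_to("ship")                  # error surfaces here

order = ship(validate(make_order()))
print(order.flow)  # ['make_order', 'validate', 'ship'] -- the back-in-time trail
```

When the inconsistent qty finally causes a failure in ship, the trail points back through validate (where the check was missed) to make_order (where the bad state originated), without re-running the program.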
LHCbDIRAC as Apache Mesos microservices
NASA Astrophysics Data System (ADS)
Haen, Christophe; Couturier, Benjamin
2017-10-01
The LHCb experiment relies on LHCbDIRAC, an extension of DIRAC, to drive its offline computing. This middleware provides a development framework and a complete set of components for building distributed computing systems. These components are currently installed and run on virtual machines (VM) or bare metal hardware. Due to the increased workload, high availability is becoming more and more important for the LHCbDIRAC services, and the current installation model is showing its limitations. Apache Mesos is a cluster manager which aims at abstracting heterogeneous physical resources on which various tasks can be distributed thanks to so-called “frameworks”. The Marathon framework is suitable for long-running tasks such as the DIRAC services, while the Chronos framework meets the needs of cron-like tasks like the DIRAC agents. A combination of the service discovery tool Consul together with HAProxy allows the running containers to be exposed to the outside world while hiding their dynamic placements. Such an architecture brings a greater flexibility in the deployment of LHCbDIRAC services, allowing for easier deployment, maintenance and scaling of services on demand (e.g., LHCbDIRAC relies on 138 services and 116 agents). Higher reliability is also easier to achieve, as clustering is part of the toolset, which allows constraints on the location of the services. This paper describes the investigations carried out to package the LHCbDIRAC and DIRAC components into Docker containers and orchestrate them using the previously described set of tools.
Virtual Machine Language Controls Remote Devices
NASA Technical Reports Server (NTRS)
2014-01-01
Kennedy Space Center worked with Blue Sun Enterprises, based in Boulder, Colorado, to enhance the company's virtual machine language (VML) to control the instruments on the Regolith and Environment Science and Oxygen and Lunar Volatiles Extraction mission. Now the NASA-improved VML is available for crewed and uncrewed spacecraft, and has potential applications on remote systems such as weather balloons, unmanned aerial vehicles, and submarines.
A Virtual Astronomical Research Machine in No Time (VARMiNT)
NASA Astrophysics Data System (ADS)
Beaver, John
2012-05-01
We present early results of using virtual machine software to help make astronomical research computing accessible to a wider range of individuals. Our Virtual Astronomical Research Machine in No Time (VARMiNT) is an Ubuntu Linux virtual machine with free, open-source software already installed and configured (and in many cases documented). The purpose of VARMiNT is to provide a ready-to-go astronomical research computing environment that can be freely shared between researchers, or between amateur and professional, teacher and student, etc., and to circumvent the often-difficult task of configuring a suitable computing environment from scratch. Thus we hope that VARMiNT will make it easier for individuals to engage in research computing even if they have no ready access to the facilities of a research institution. We describe our current version of VARMiNT and some of the ways it is being used at the University of Wisconsin - Fox Valley, a two-year teaching campus of the University of Wisconsin System, as a means to enhance student independent study research projects and to facilitate collaborations with researchers at other locations. We also outline some future plans and prospects.
Human-machine interface for a VR-based medical imaging environment
NASA Astrophysics Data System (ADS)
Krapichler, Christian; Haubner, Michael; Loesch, Andreas; Lang, Manfred K.; Englmeier, Karl-Hans
1997-05-01
Modern 3D scanning techniques like magnetic resonance imaging (MRI) or computed tomography (CT) produce high-quality images of the human anatomy. Virtual environments open new ways to display and to analyze those tomograms. Compared with today's inspection of 2D image sequences, physicians are empowered to recognize spatial coherencies and examine pathological regions more easily, and diagnosis and therapy planning can be accelerated. For that purpose a powerful human-machine interface is required, which offers a variety of tools and features to enable both exploration and manipulation of the 3D data. Man-machine communication has to be intuitive and efficacious to avoid long familiarization times and to enhance familiarity with and acceptance of the interface. Hence, interaction capabilities in virtual worlds should be comparable to those in the real world to allow utilization of our natural experiences. In this paper the integration of hand gestures and visual focus, two important aspects in modern human-computer interaction, into a medical imaging environment is shown. With the presented human-machine interface, including virtual reality displaying and interaction techniques, radiologists can be supported in their work. Further, virtual environments can even alleviate communication between specialists from different fields or in educational and training applications.
Dynamically allocated virtual clustering management system
NASA Astrophysics Data System (ADS)
Marcus, Kelvin; Cannata, Jess
2013-05-01
The U.S. Army Research Laboratory (ARL) has built a "Wireless Emulation Lab" to support research in wireless mobile networks. In our current experimentation environment, our researchers need the capability to run clusters of heterogeneous nodes to model emulated wireless tactical networks, where each node could contain a different operating system, application set, and physical hardware. To complicate matters, most experiments require the researcher to have root privileges. Our previous solution of using a single shared cluster of statically deployed virtual machines did not sufficiently separate each user's experiment due to undesirable network crosstalk, so only one experiment could be run at a time. In addition, the cluster did not make efficient use of our servers and physical networks. To address these concerns, we created the Dynamically Allocated Virtual Clustering management system (DAVC). This system leverages existing open-source software to create private clusters of nodes that are either virtual or physical machines. These clusters can be utilized for software development, experimentation, and integration with existing hardware and software. The system uses the Grid Engine job scheduler to efficiently allocate virtual machines to idle systems and networks, and it deploys stateless nodes via network booting. The system uses 802.1Q Virtual LANs (VLANs) to prevent experimentation crosstalk and to allow for complex, private networks, eliminating the need to map each virtual machine to a specific switch port. The system monitors the health of the clusters and the underlying physical servers, and it maintains cluster usage statistics for historical trends. Users can start private clusters of heterogeneous nodes with root privileges for the duration of the experiment, and they control when to shut down their clusters.
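The isolation scheme described above (one private VLAN per user's cluster, released at shutdown) can be sketched as a small allocator. This is a toy illustration, not DAVC's actual implementation; the class and method names are invented for the example.

```python
# Toy allocator illustrating per-experiment isolation via 802.1Q VLAN IDs:
# each user's cluster gets a dedicated VLAN, so traffic cannot cross talk,
# and the VLAN returns to the pool when the cluster is shut down.

class ClusterAllocator:
    def __init__(self, vlan_range=range(100, 200)):
        self.free_vlans = list(vlan_range)
        self.clusters = {}  # user -> (vlan_id, [node names])

    def start_cluster(self, user, nodes):
        if user in self.clusters:
            raise ValueError(f"{user} already has a running cluster")
        if not self.free_vlans:
            raise RuntimeError("no free VLANs")
        vlan = self.free_vlans.pop(0)
        self.clusters[user] = (vlan, list(nodes))
        return vlan

    def shutdown_cluster(self, user):
        vlan, _ = self.clusters.pop(user)
        self.free_vlans.append(vlan)

alloc = ClusterAllocator()
v1 = alloc.start_cluster("alice", ["node1", "node2"])
v2 = alloc.start_cluster("bob", ["node3"])
assert v1 != v2  # disjoint VLANs -> no network crosstalk between experiments
```

In the real system the VLAN ID would be pushed to the switches and hypervisors; here it is only bookkeeping to show the lifecycle.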
Open multi-agent control architecture to support virtual-reality-based man-machine interfaces
NASA Astrophysics Data System (ADS)
Freund, Eckhard; Rossmann, Juergen; Brasch, Marcel
2001-10-01
Projective Virtual Reality is a new and promising approach to intuitively operable man-machine interfaces for the commanding and supervision of complex automation systems. The user-interface part of Projective Virtual Reality builds heavily on the latest Virtual Reality techniques, a task-deduction component, and automatic action-planning capabilities. In order to realize man-machine interfaces for complex applications, not only the Virtual Reality part has to be considered; the capabilities of the underlying robot and automation controller are also of great importance. This paper presents a control architecture that has proved to be an ideal basis for the realization of complex robotic and automation systems that are controlled by Virtual Reality based man-machine interfaces. The architecture does not just provide a well-suited framework for the real-time control of a multi-robot system, but also supports Virtual Reality metaphors and augmentations which facilitate the user's job of commanding and supervising a complex system. The developed control architecture has already been used for a number of applications. Its capability to integrate information from sensors at different levels of abstraction in real time helps to make the realized automation system very responsive to real-world changes. In this paper, the architecture is described comprehensively, its main building blocks are discussed, and one realization built on an open-source real-time operating system is presented. The software design and the features of the architecture which make it generally applicable to the distributed control of automation agents in real-world applications are explained. Furthermore, its application to the commanding and control of experiments in the Columbus space laboratory, the European contribution to the International Space Station (ISS), is described as one example.
Staghorn: An Automated Large-Scale Distributed System Analysis Platform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gabert, Kasimir; Burns, Ian; Elliott, Steven
2016-09-01
Conducting experiments on large-scale distributed computing systems is becoming significantly easier with the assistance of emulation. Researchers can now create a model of a distributed computing environment and then generate a virtual, laboratory copy of the entire system composed of potentially thousands of virtual machines, switches, and software. The use of real software, running at clock rate in full virtual machines, allows experiments to produce meaningful results without necessitating a full understanding of all model components. However, the ability to inspect and modify elements within these models is bound by the limitation that such modifications must compete with the model, either running in or alongside it. This inhibits entire classes of analyses from being conducted upon these models. We developed a mechanism to snapshot an entire emulation-based model as it is running. This allows us to "freeze time" and subsequently fork execution, replay execution, modify arbitrary parts of the model, or deeply explore the model. This snapshot includes capturing packets in transit and other input/output state along with the running virtual machines. We were able to build this system in Linux using Open vSwitch and Kernel Virtual Machines on top of Sandia's emulation platform Firewheel. This primitive opens the door to numerous subsequent analyses on models, including state space exploration, debugging distributed systems, performance optimizations, improved training environments, and improved experiment repeatability.
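The key point of the snapshot primitive, that in-flight packets must be captured atomically together with VM state so a fork resumes exactly where time was frozen, can be illustrated with a deliberately tiny stand-in model. Real Staghorn snapshots full KVM and Open vSwitch state; here the "model" is just a dict, and all names are illustrative.

```python
import copy

# Toy illustration of snapshot-and-fork for an emulated distributed system.
# The dict stands in for the whole emulation: per-VM state plus packets
# currently in transit on the virtual network.

def snapshot(model):
    """Freeze time: capture VMs *and* in-flight packets in one atomic copy."""
    return copy.deepcopy(model)

model = {
    "vms": {"web": {"clock": 41}, "db": {"clock": 39}},
    "in_flight": [("web", "db", b"SYN")],
}
snap = snapshot(model)

# Execution continues in the live model...
model["vms"]["web"]["clock"] += 1
model["in_flight"].clear()

# ...but a fork resumes from the frozen state, packets included.
fork = snapshot(snap)
assert fork["in_flight"] == [("web", "db", b"SYN")]
```

Because the copy is deep, the fork can be modified or replayed without disturbing either the live model or the stored snapshot, which is what enables the state-space exploration and replay analyses mentioned above.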
NASA Astrophysics Data System (ADS)
Rivers, M. L.; Gualda, G. A.
2009-05-01
One of the challenges in tomography is the availability of suitable software for image processing and analysis in 3D. We present here 'tomo_display' and 'vol_tools', two packages created in IDL that enable reconstruction, processing, and visualization of tomographic data. They complement in many ways the capabilities offered by Blob3D (Ketcham 2005 - Geosphere, 1: 32-41, DOI: 10.1130/GES00001.1) and, in combination, allow users without programming knowledge to perform all steps necessary to obtain qualitative and quantitative information from tomographic data. The package 'tomo_display' was created and is maintained by Mark Rivers. It allows the user to: (1) preprocess and reconstruct parallel-beam tomographic data, including removal of anomalous pixels, ring artifact reduction, and automated determination of the rotation center, and (2) visualize both raw and reconstructed data, either as individual frames or as a series of sequential frames. The package 'vol_tools' consists of a series of small programs created and maintained by Guilherme Gualda to perform specific tasks not included in other packages. Existing modules include simple tools for cropping volumes, generating intensity histograms, measuring sample volumes (useful for porous samples like pumice), and computing volume differences (for differential absorption tomography). The module 'vol_animate' can be used to generate 3D animations using rendered isosurfaces around objects. Both packages use the same NetCDF-format '.volume' files created using code written by Mark Rivers. Currently, only 16-bit integer volumes are created and read by the packages, but floating point and 8-bit data can easily be stored in the NetCDF format as well. A simple GUI to convert sequences of tiffs into '.volume' files is available within 'vol_tools'.
Both 'tomo_display' and 'vol_tools' include options to (1) generate onscreen output that allows for dynamic visualization in 3D, (2) save sequences of tiffs to disk, and (3) generate MPEG movies for inclusion in presentations, publications, websites, etc. Both are freely available as run-time ('.sav') versions that can be run using the free IDL Virtual Machine, available from ITT Visual Information Solutions: http://www.ittvis.com/ProductServices/IDL/VirtualMachine.aspx. The run-time versions of 'tomo_display' and 'vol_tools' can be downloaded from http://cars.uchicago.edu/software/idl/tomography.html and http://sites.google.com/site/voltools/
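One of the preprocessing steps listed above, automated determination of the rotation center, can be sketched with a standard approach: in parallel-beam geometry the 180° projection is the mirror image of the 0° projection, so the center follows from the shift that best aligns the two. This is an illustrative NumPy sketch of that general technique, not necessarily the algorithm 'tomo_display' itself uses.

```python
import numpy as np

# Find the rotation center of a parallel-beam scan from the 0- and
# 180-degree projections: mirror the 180-degree projection, locate the
# integer shift with the strongest circular cross-correlation (via FFT),
# and convert that shift into a center position on the detector.

def find_rotation_center(proj0, proj180):
    mirrored = proj180[::-1]
    n = len(proj0)
    corr = np.fft.irfft(np.fft.rfft(proj0) * np.conj(np.fft.rfft(mirrored)), n)
    shift = int(np.argmax(corr))
    if shift > n // 2:          # interpret large shifts as negative (wrap-around)
        shift -= n
    # An object at column x appears at 2c - x after a 180-degree rotation,
    # which makes the aligning shift s = 2c - (n - 1), hence:
    return (n - 1) / 2 + shift / 2

# Synthetic check: a point object viewed with the rotation axis at column 35.
n = 64
proj0 = np.zeros(n); proj0[20] = 1.0     # object at detector column 20
proj180 = np.zeros(n); proj180[50] = 1.0  # mirrored about column 35: 2*35 - 20
print(round(find_rotation_center(proj0, proj180), 1))  # → 35.0
```

Real projections are noisy and the optimum is found to sub-pixel precision, but the geometric relation between the two views is the same.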
Cho, Soo-Jin; Kim, Byung-Kun; Kim, Byung-Su; Kim, Jae-Moon; Kim, Soo-Kyoung; Moon, Heui-Soo; Song, Tae-Jin; Cha, Myoung-Jin; Park, Kwang-Yeol; Sohn, Jong-Hee
2016-04-01
Vestibular migraine (VM), the common term for recurrent vestibular symptoms with migraine features, has been recognized in the appendix criteria of the third beta edition of the International Classification of Headache Disorders (ICHD-3β). We applied the criteria for VM in a prospective, multicenter headache registry study. Nine neurologists enrolled consecutive patients visiting outpatient clinics for headache. The presenting headache disorder and additional VM diagnoses were classified according to the ICHD-3β. The rates of patients diagnosed with VM and probable VM using consensus criteria were assessed. A total of 1414 patients were enrolled. Of 631 migraineurs, 65 were classified with VM (10.3%) and 16 with probable VM (2.5%). Accompanying migraine subtypes in VM were migraine without aura (66.2%), chronic migraine (29.2%), and migraine with aura (4.6%). Probable migraine (75%) was common in those with probable VM. The most common vestibular symptom was head motion-induced dizziness with nausea in VM and spontaneous vertigo in probable VM. The clinical characteristics of VM did not differ from those of migraine without VM. We diagnosed VM in 10.3% of first-visit migraineurs in neurology clinics using the ICHD-3β. Applying the diagnosis of probable VM can increase the identification of VM. © International Headache Society 2015.
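The VM versus probable-VM distinction used in the study above reduces to boolean checks over the appendix criteria. The sketch below paraphrases the ICHD-3β criteria loosely for illustration only; the actual criteria contain further qualifiers (symptom intensity, an exclusion clause for other diagnoses) that are omitted here.

```python
# Loosely paraphrased ICHD-3 beta appendix logic for vestibular migraine (VM),
# reduced to three boolean checks. Criterion D (not better accounted for by
# another diagnosis) is deliberately omitted from this illustration.

def classify_vm(n_episodes, episodes_5min_to_72h, migraine_history,
                pct_episodes_with_migraine_features):
    meets_a = n_episodes >= 5 and episodes_5min_to_72h  # episode count/duration
    meets_b = migraine_history                          # current or past migraine
    meets_c = pct_episodes_with_migraine_features >= 50 # migraine features in half+
    if meets_a and meets_b and meets_c:
        return "VM"
    if meets_a and (meets_b or meets_c):
        return "probable VM"
    return "not VM"

print(classify_vm(6, True, True, 80))   # all criteria met
print(classify_vm(6, True, False, 60))  # only one of B/C met
```

The study's observation that applying the probable-VM category increases identification corresponds to the middle branch: patients meeting criterion A plus only one of B or C.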
Slot Machines: Pursuing Responsible Gaming Practices for Virtual Reels and Near Misses
ERIC Educational Resources Information Center
Harrigan, Kevin A.
2009-01-01
Since 1983, slot machines in North America have used a computer and virtual reels to determine the odds. Since at least 1988, a technique called clustering has been used to create a high number of near misses, failures that are close to wins. The result is that what the player sees does not represent the underlying probabilities and randomness,…
Regulation of the spoVM gene of Bacillus subtilis.
Le, Ai Thi Thuy; Schumann, Wolfgang
2008-11-01
The spoVM gene of Bacillus subtilis codes for a 26-amino-acid peptide that is essential for sporulation. Analysis of the expression of the spoVM gene revealed that wild-type cells started to synthesize a spoVM-specific transcript at t2, whereas the SpoVM peptide accumulated at t4. Both the transcript and the peptide were absent from a spoVM knockout strain. The 5' untranslated region of the spoVM transcript increased expression of SpoVM. Possible regulation mechanisms are discussed.
A Tale Of 160 Scientists, Three Applications, a Workshop and a Cloud
NASA Astrophysics Data System (ADS)
Berriman, G. B.; Brinkworth, C.; Gelino, D.; Wittman, D. K.; Deelman, E.; Juve, G.; Rynge, M.; Kinney, J.
2013-10-01
The NASA Exoplanet Science Institute (NExScI) hosts the annual Sagan Workshops, thematic meetings aimed at introducing researchers to the latest tools and methodologies in exoplanet research. The theme of the Summer 2012 workshop, held from July 23 to July 27 at Caltech, was to explore the use of exoplanet light curves to study planetary system architectures and atmospheres. A major part of the workshop was to use hands-on sessions to instruct attendees in the use of three open source tools for the analysis of light curves, especially from the Kepler mission. Each hands-on session involved the 160 attendees using their laptops to follow step-by-step tutorials given by experts. One of the applications, PyKE, is a suite of Python tools designed to reduce and analyze Kepler light curves; these tools can be invoked from the Unix command line or a GUI in PyRAF. The Transit Analysis Package (TAP) uses Markov Chain Monte Carlo (MCMC) techniques to fit light curves under the Interactive Data Language (IDL) environment, and Transit Timing Variations (TTV) uses IDL tools and Java-based GUIs to confirm and detect exoplanets from timing variations in light curve fitting. Rather than attempt to run these diverse applications on the inevitably wide range of environments on attendees' laptops, they were run instead on the Amazon Elastic Compute Cloud (EC2). The cloud offers features ideal for this type of short-term need: computing and storage services are made available on demand for as long as needed, and a processing environment can be customized and replicated as needed. The cloud environment included an NFS file server virtual machine (VM), 20 client VMs for use by attendees, and a VM to enable ftp downloads of the attendees' results. The file server was configured with a 1 TB Elastic Block Storage (EBS) volume (network-attached storage mounted as a device) containing the application software and attendees' home directories.
The clients were configured to mount the applications and home directories from the server via NFS. All VMs were built with CentOS version 5.8. Attendees connected their laptops to one of the client VMs using the Virtual Network Computing (VNC) protocol, which enabled them to interact with a remote desktop GUI during the hands-on sessions. We will describe the mechanisms for handling security, failovers, and licensing of commercial software. In particular, IDL licenses were managed through a server at Caltech, connected to the IDL instances running on Amazon EC2 via a Secure Shell (ssh) tunnel. The system operated flawlessly during the workshop.
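The client-side layout described above (each of the 20 client VMs NFS-mounting the shared application and home volumes from the server) can be sketched as generated /etc/fstab entries. The hostname, export paths, and mount options below are assumptions for illustration, not the workshop's actual configuration.

```python
# Generate /etc/fstab entries for a client VM that mounts shared volumes
# from an NFS server, as in the workshop setup: application software
# mounted read-only, home directories read-write. All names are assumed.

def fstab_lines(server, exports):
    mode = {"/apps": "ro", "/home": "rw"}  # software read-only, homes writable
    return [f"{server}:{p}  {p}  nfs  defaults,{mode[p]}  0 0" for p in exports]

SERVER = "nfs-server.internal"   # assumed internal hostname
EXPORTS = ["/apps", "/home"]

for line in fstab_lines(SERVER, EXPORTS):
    print(line)
```

Baking these entries into the client image is what lets every attendee VM see identical software and per-user home directories without per-machine setup.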
Calcium Channel Block by Cadmium in Chicken Sensory Neurons
NASA Astrophysics Data System (ADS)
Swandulla, D.; Armstrong, C. M.
1989-03-01
Cadmium block of calcium channels was studied in chicken dorsal root ganglion cells by a whole-cell patch clamp that provides high time resolution. Barium ion was the current carrier, and the channel type studied had a high threshold of activation and fast deactivation (type FD). Block of these channels by 20 μ M external Cd2+ is voltage dependent. Cd2+ ions can be cleared from blocked channels by stepping the membrane voltage (Vm) to a negative value. Clearing the channels is progressively faster and more complete as Vm is made more negative. Once cleared of Cd2+, the channels conduct transiently on reopening but reequilibrate with Cd2+ and become blocked within a few milliseconds. Cd2+ equilibrates much more slowly with closed channels, but at a holding potential of -80 mV virtually all channels are blocked at equilibrium. Cd2+ does not slow closing of the channels, as would be expected if it were necessary for Cd2+ to leave the channels before closing occurred. Instead, the data show unambiguously that the channel gate can close when the channel is Cd2+ occupied.
Virtual screening by a new Clustering-based Weighted Similarity Extreme Learning Machine approach
Kudisthalert, Wasu
2018-01-01
Machine learning techniques are becoming popular in virtual screening tasks. One of the powerful machine learning algorithms is the Extreme Learning Machine (ELM), which has been applied to many applications and has recently been applied to virtual screening. We propose the Weighted Similarity ELM (WS-ELM), which is based on a single-layer feed-forward neural network in conjunction with 16 different similarity coefficients as activation functions in the hidden layer. It is known that the performance of conventional ELM is not robust due to random weight selection in the hidden layer. Thus, we propose a Clustering-based WS-ELM (CWS-ELM) that deterministically assigns weights by utilising clustering algorithms, i.e., k-means clustering and support vector clustering. The experiments were conducted on one of the most challenging datasets, the Maximum Unbiased Validation dataset, which contains 17 activity classes carefully selected from PubChem. The proposed algorithms were then compared with other machine learning techniques such as support vector machines, random forests, and similarity searching. The results show that CWS-ELM in conjunction with support vector clustering yields the best performance when utilised together with the Sokal/Sneath(1) coefficient. Furthermore, the ECFP_6 fingerprint presents the best results in our framework compared to the other types of fingerprints, namely ECFP_4, FCFP_4, and FCFP_6. PMID:29652912
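The core ELM idea referenced above, hidden-layer activations that are never trained, with only the output weights solved in closed form, can be sketched with a similarity activation in the spirit of WS-ELM. This is a simplified illustration using the Tanimoto coefficient (one of the 16 the paper considers) and a crude deterministic anchor choice standing in for the clustering step; it is not the authors' exact WS-ELM/CWS-ELM formulation.

```python
import numpy as np

# Sketch of a similarity-activated ELM: each hidden unit scores the input
# fingerprint against a stored reference fingerprint with the Tanimoto
# coefficient, and the output weights are obtained by pseudoinverse.

def tanimoto(a, b):
    union = np.sum(a | b)
    return np.sum(a & b) / union if union else 0.0

def hidden_layer(X, anchors):
    return np.array([[tanimoto(x, a) for a in anchors] for x in X])

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(40, 64))   # toy binary fingerprints
X[0, :2] = 0                            # guarantee both classes appear
X[1, :2] = 1
y = (X[:, 0] | X[:, 1]).astype(float)   # toy "activity" label

# Deterministic anchor choice stands in for CWS-ELM's clustering step:
# one representative fingerprint per activity class, not random weights.
anchors = [X[y == 0][0], X[y == 1][0]]

H = hidden_layer(X, anchors)            # hidden activations, all in [0, 1]
beta = np.linalg.pinv(H) @ y            # closed-form output weights
pred = (hidden_layer(X, anchors) @ beta) > 0.5
print(f"training accuracy: {np.mean(pred == y):.2f}")
```

Replacing the random-projection activations of a conventional ELM with deterministic similarity scores is exactly what removes the run-to-run variance the paper attributes to random weight selection.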
Using Virtualization to Integrate Weather, Climate, and Coastal Science Education
NASA Astrophysics Data System (ADS)
Davis, J. R.; Paramygin, V. A.; Figueiredo, R.; Sheng, Y.
2012-12-01
To better understand and communicate the important roles of weather and climate in the coastal environment, a unique publicly available tool is being developed to support research, education, and outreach activities. This tool uses virtualization technologies to facilitate an interactive, hands-on environment in which students, researchers, and the general public can perform their own numerical modeling experiments. While prior efforts have focused solely on the study of the coastal and estuary environments, this effort incorporates the community-supported weather and climate model (WRF-ARW) into the Coastal Science Educational Virtual Appliance (CSEVA), an education tool used to assist in the learning of coastal transport processes; storm surge and inundation; and evacuation modeling. The Weather Research and Forecasting (WRF) Model is a next-generation, community-developed and community-supported mesoscale numerical weather prediction system designed to be used internationally for research, operations, and teaching. It includes two dynamical solvers (ARW, the Advanced Research WRF, and NMM, the Nonhydrostatic Mesoscale Model) as well as a data assimilation system. WRF-ARW is the ARW dynamics solver combined with other components of the WRF system; it was developed primarily at the National Center for Atmospheric Research (NCAR), with community support provided by NCAR's Mesoscale and Microscale Meteorology (MMM) division. Included with WRF is the WRF Pre-processing System (WPS), a set of programs that prepare input for real-data simulations. The CSEVA is based on the Grid Appliance (GA) framework and is built using virtual machine (VM) and virtual networking technologies. Virtualization supports integration of an operating system; libraries (e.g., the Fortran, C, Perl, and NetCDF libraries necessary to build WRF); a web server; numerical models, grids, and inputs; pre-/post-processing tools (e.g., WPS / RIP4 or UPS); graphical user interfaces; "Cloud"-computing infrastructure; and other tools into a single ready-to-use package. Thus, the previously onerous task of setting up and compiling these tools becomes obsolete, and the researcher, educator, or student can focus on using the tools to study the interactions between weather, climate, and the coastal environment. The incorporation of WRF into the CSEVA has been designed to be synergistic with the extensive online tutorials and biannual tutorials hosted by NCAR. Included are working examples of the idealized test simulations provided with WRF (2D sea breeze and squalls, a large eddy simulation, a Held and Suarez simulation, etc.). To demonstrate the integration of weather, climate, and coastal science education, example applications are being developed to demonstrate how the system can be used to couple a coastal and estuarine circulation, transport, and storm surge model with downscaled reanalysis weather and future climate predictions. Documentation, tutorials, and the enhanced CSEVA itself can be found on the web at: http://cseva.coastal.ufl.edu.
Virtual Facility at Fermilab: Infrastructure and Services Expand to Public Clouds
Timm, Steve; Garzoglio, Gabriele; Cooper, Glenn; ...
2016-02-18
In preparation for its new Virtual Facility Project, Fermilab has launched a program of work to determine the requirements for running a computation facility on-site, in public clouds, or a combination of both. This program builds on the work we have done to successfully run experimental workflows of 1000-VM scale both on an on-site private cloud and on Amazon AWS. To do this at scale we deployed dynamically launched and discovered caching services on the cloud. We are now testing the deployment of more complicated services on Amazon AWS using the native load balancing and auto-scaling features they provide. The Virtual Facility Project will design and develop a facility, including infrastructure and services, that can live on the site of Fermilab, off-site, or a combination of both. We expect to need this capacity to meet the peak computing requirements in the future. The Virtual Facility is intended to provision resources on the public cloud on behalf of the facility as a whole instead of having each experiment or Virtual Organization do it on their own. We will describe the policy aspects of a distributed Virtual Facility, the requirements, and plans to make a detailed comparison of the relative cost of the public and private clouds. Furthermore, this talk will present the details of the technical mechanisms we have developed to date, and the plans currently taking shape for a Virtual Facility at Fermilab.
Active tactile exploration using a brain-machine-brain interface.
O'Doherty, Joseph E; Lebedev, Mikhail A; Ifft, Peter J; Zhuang, Katie Z; Shokur, Solaiman; Bleuler, Hannes; Nicolelis, Miguel A L
2011-10-05
Brain-machine interfaces use neuronal activity recorded from the brain to establish direct communication with external actuators, such as prosthetic arms. It is hoped that brain-machine interfaces can be used to restore the normal sensorimotor functions of the limbs, but so far they have lacked tactile sensation. Here we report the operation of a brain-machine-brain interface (BMBI) that both controls the exploratory reaching movements of an actuator and allows signalling of artificial tactile feedback through intracortical microstimulation (ICMS) of the primary somatosensory cortex. Monkeys performed an active exploration task in which an actuator (a computer cursor or a virtual-reality arm) was moved using a BMBI that derived motor commands from neuronal ensemble activity recorded in the primary motor cortex. ICMS feedback occurred whenever the actuator touched virtual objects. Temporal patterns of ICMS encoded the artificial tactile properties of each object. Neuronal recordings and ICMS epochs were temporally multiplexed to avoid interference. Two monkeys operated this BMBI to search for and distinguish one of three visually identical objects, using the virtual-reality arm to identify the unique artificial texture associated with each. These results suggest that clinical motor neuroprostheses might benefit from the addition of ICMS feedback to generate artificial somatic perceptions associated with mechanical, robotic or even virtual prostheses.
Will Anything Useful Come Out of Virtual Reality? Examination of a Naval Application
1993-05-01
The term virtual reality can encompass varying meanings, but some generally accepted attributes of a virtual environment are that it is immersive... technology, but at present there are few practical applications which are utilizing the broad range of virtual reality technology. This paper will discuss an... Keywords: Operability, operator functions, virtual reality, man-machine interface, decision aids/decision making, decision support, ASW.
Migrating EO/IR sensors to cloud-based infrastructure as service architectures
NASA Astrophysics Data System (ADS)
Berglie, Stephen T.; Webster, Steven; May, Christopher M.
2014-06-01
The Night Vision Image Generator (NVIG), a product of US Army RDECOM CERDEC NVESD, is a visualization tool used widely throughout Army simulation environments to provide fully attributed, synthesized, full-motion video using physics-based sensor and environmental effects. The NVIG relies heavily on contemporary hardware-based acceleration and GPU processing techniques, which push the envelope of both enterprise and commodity-level hypervisor support for providing virtual machines with direct access to hardware resources. The NVIG has successfully been integrated into fully virtual environments where system architectures leverage cloud-based technologies to various extents in order to streamline infrastructure and service management. This paper details the challenges presented to engineers seeking to migrate GPU-bound processes, such as the NVIG, to virtual machines and, ultimately, cloud-based IaaS architectures. In addition, it presents the path that led to success for the NVIG. A brief overview of cloud-based infrastructure management tool sets is provided, and several virtual desktop solutions are outlined. A distinction is made between general-purpose virtual desktop technologies and technologies that expose GPU-specific capabilities, including direct rendering and hardware-based video encoding. Candidate hypervisor/virtual machine configurations that nominally satisfy the virtualized hardware-level GPU requirements of the NVIG are presented, and each is subsequently reviewed in light of its implications on higher-level cloud management techniques. Implementation details are included from the hardware level, through the operating system, to the 3D graphics APIs required by the NVIG and similar GPU-bound tools.
Examination of drivers' cell phone use behavior at intersections by using naturalistic driving data.
Xiong, Huimin; Bao, Shan; Sayer, James; Kato, Kazuma
2015-09-01
Many driving simulator studies have shown that cell phone use while driving greatly degrades driving performance. In terms of safety analysis, many factors including drivers, vehicles, and driving situations need to be considered. Controlled or simulated studies cannot always account for the full effects of these factors, especially situational factors such as road condition, traffic density, and weather and lighting conditions. Naturalistic driving by its nature provides a natural and realistic way to examine drivers' behaviors and associated factors for cell phone use while driving. In this study, driving speed while using a cell phone (conversation or visual/manual tasks) was compared to two baselines (baseline 1: normal driving, which only excludes driving while using a cell phone; baseline 2: driving only, which excludes all types of secondary tasks) when traversing an intersection. The outcomes showed that drivers drove slower when using a cell phone for both conversation and visual/manual (VM) tasks compared to baseline conditions. With regard to cell phone conversations, drivers were more likely to drive faster during the daytime than at night, and to drive slower under moderate traffic than under sparse traffic. With regard to VM tasks, there was a significant interaction between traffic and cell phone use conditions. The maximum speed with VM tasks was significantly lower than that with baseline conditions under sparse traffic, whereas it was slightly higher than that with baseline driving under dense traffic. This suggests that drivers might self-regulate their behavior based on the driving situation and the demand of secondary tasks, which could inform driver distraction guidelines. With the rapid development of in-vehicle technology, the findings in this research could also guide improvements in human-machine interface (HMI) design.
Copyright © 2015 Elsevier Ltd and National Safety Council. All rights reserved.
Argenziano, Monica; Banche, Giuliana; Luganini, Anna; Finesso, Nicole; Allizond, Valeria; Gulino, Giulia Rossana; Khadjavi, Amina; Spagnolo, Rita; Tullio, Vivian; Giribaldi, Giuliana; Guiot, Caterina; Cuffini, Anna Maria; Prato, Mauro; Cavalli, Roberta
2017-05-15
Vancomycin (Vm) currently represents the gold standard against methicillin-resistant Staphylococcus aureus (MRSA) infections. However, it is associated with low oral bioavailability, formulation stability issues, and severe side effects upon systemic administration. These drawbacks could be overcome by topical administration of Vm properly encapsulated in a nanocarrier. Intriguingly, nanobubbles (NBs) are responsive to external physical stimuli such as ultrasound (US), promoting drug delivery. In this work, perfluoropentane (PFP)-cored NBs were loaded with Vm by coupling to the outer dextran sulfate shell. Vm-loaded NBs (VmLNBs) displayed sizes of ∼300 nm, anionic surfaces, and good drug encapsulation efficiency. In vitro, VmLNBs showed prolonged drug release kinetics, not accompanied by cytotoxicity on human keratinocytes. Interestingly, VmLNBs were generally more effective than Vm alone in MRSA killing, with VmLNB antibacterial activity being more sustained over time as a result of the prolonged drug release profile. Besides, VmLNBs were not internalized by staphylococci, unlike free Vm solution. Further US association promoted drug delivery from VmLNBs through an in vitro model of porcine skin. Taken together, these results support the hypothesis that proper Vm encapsulation in US-responsive NBs might be a promising strategy for the topical treatment of MRSA wound infections. Copyright © 2017 Elsevier B.V. All rights reserved.
Characterization of microparticles in patients with venous malformations of the head and neck.
Zhu, J-Y; Ren, J-G; Zhang, W; Wang, F-Q; Cai, Y; Zhao, J-H; Chen, G; Zhao, Y-F
2017-01-01
To determine the basic biochemical features of microparticles (MP) in patients with venous malformation (VM) of the head and neck. Microparticles were isolated from peripheral venous blood of VM patients or healthy subjects and from lesional fluid of VM patients. Flow cytometry and real-time polymerase chain reaction were employed to determine the concentration, cellular origin, and RNA expression of obtained MP. A functional coagulation test was applied to measure the coagulant activity of MP. Circulating levels of total MP, platelet-derived MP, and endothelial MP were significantly elevated in VM patients and were consistently increased in VM patients with more extensive lesions. Lesional MP (MP from lesional fluid of VM) in VM patients were more abundant than circulating MP from VM patients or healthy subjects. Moreover, MP from VM patients displayed markedly distinct mRNA and microRNA expression compared with healthy subjects. Furthermore, MP from VM patients exhibited enhanced procoagulant activity, as evidenced by significantly shorter coagulation time. This study demonstrates for the first time that patients with VM have an altered MP profile and MP may be associated with VM-associated thrombogenesis. Further studies are required to explore the precise pathophysiological roles of MP in VM. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
A Virtual Sensor for Online Fault Detection of Multitooth-Tools
Bustillo, Andres; Correa, Maritza; Reñones, Anibal
2011-01-01
The installation of suitable sensors close to the tool tip on milling centres is not possible in industrial environments. It is therefore necessary to design virtual sensors for these machines to perform online fault detection in many industrial tasks. This paper presents a virtual sensor for online fault detection of multitooth tools based on a Bayesian classifier. The device that performs this task applies mathematical models that function in conjunction with physical sensors. Only two experimental variables are collected from the milling centre that performs the machining operations: the electrical power consumption of the feed drive and the time required for machining each workpiece. The task of achieving reliable signals from a milling process is especially complex when multitooth tools are used, because each kind of cutting insert in the milling centre only works on each workpiece during a certain time window. Great effort has gone into designing a robust virtual sensor that can avoid re-calibration due to, e.g., maintenance operations. The virtual sensor developed as a result of this research is successfully validated under real conditions on a milling centre used for the mass production of automobile engine crankshafts. Recognition accuracy, calculated with a k-fold cross validation, had on average 0.957 of true positives and 0.986 of true negatives. Moreover, measured accuracy was 98%, which suggests that the virtual sensor correctly identifies new cases. PMID:22163766
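The classifier and evaluation protocol named in this abstract can be illustrated with a minimal sketch: a hand-rolled Gaussian naive Bayes over the two variables the paper collects (feed-drive power consumption and machining time), scored with k-fold cross validation. The data below are synthetic and the feature distributions are assumptions; this is not the published sensor.

```python
import math
import random

def fit_gaussian_nb(samples):
    """Fit per-class Gaussian parameters; samples: list of (features, label)."""
    by_class = {}
    for x, y in samples:
        by_class.setdefault(y, []).append(x)
    model = {}
    for y, rows in by_class.items():
        prior = len(rows) / len(samples)
        params = []
        for j in range(len(rows[0])):
            col = [r[j] for r in rows]
            mu = sum(col) / len(col)
            var = sum((v - mu) ** 2 for v in col) / len(col) + 1e-9
            params.append((mu, var))
        model[y] = (prior, params)
    return model

def predict(model, x):
    """Pick the class with the highest Gaussian log-likelihood + log-prior."""
    best, best_lp = None, -math.inf
    for y, (prior, params) in model.items():
        lp = math.log(prior)
        for v, (mu, var) in zip(x, params):
            lp += -0.5 * math.log(2 * math.pi * var) - (v - mu) ** 2 / (2 * var)
        if lp > best_lp:
            best, best_lp = y, lp
    return best

# Synthetic data: worn inserts draw more feed-drive power and machine slower.
random.seed(0)
data = ([((random.gauss(5.0, 0.4), random.gauss(30.0, 1.0)), "ok") for _ in range(200)] +
        [((random.gauss(6.5, 0.4), random.gauss(34.0, 1.0)), "worn") for _ in range(200)])
random.shuffle(data)

# Simple k-fold cross validation (k=5), mirroring the evaluation protocol.
k = 5
fold = len(data) // k
accs = []
for i in range(k):
    test = data[i * fold:(i + 1) * fold]
    train = data[:i * fold] + data[(i + 1) * fold:]
    m = fit_gaussian_nb(train)
    accs.append(sum(predict(m, x) == y for x, y in test) / len(test))
accuracy = sum(accs) / k
```

With well-separated synthetic classes the cross-validated accuracy comes out high, as one would expect; the published figures of course come from real crankshaft-machining data.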
NASA Astrophysics Data System (ADS)
Michaelis, A.; Nemani, R. R.; Wang, W.; Votava, P.; Hashimoto, H.
2010-12-01
Given the increasing complexity of climate modeling and analysis tools, it is often difficult and expensive to build or recreate an exact replica of the software compute environment used in past experiments. With the recent development of new technologies for hardware virtualization, an opportunity exists to create full modeling, analysis and compute environments that are “archiveable”, transferable and may be easily shared amongst a scientific community or presented to a bureaucratic body if the need arises. By encapsulating an entire modeling and analysis environment in a virtual machine image, others may quickly gain access to the fully built system used in past experiments, potentially easing the task and reducing the costs of reproducing and verifying past results produced by other researchers. Moreover, these virtual machine images may be used as a pedagogical tool for others who are interested in performing an academic exercise but don't yet possess the broad expertise required. We built two virtual machine images, one with the Community Earth System Model (CESM) and one with the Weather Research and Forecasting (WRF) model, then ran several small experiments to assess the feasibility, performance overheads, costs, reusability, and transferability. We present a list of the pros and cons as well as lessons learned from utilizing virtualization technology in the climate and earth systems modeling domain.
Creating objects and object categories for studying perception and perceptual learning.
Hauffen, Karin; Bart, Eugene; Brady, Mark; Kersten, Daniel; Hegdé, Jay
2012-11-02
In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties. Many innovative and useful methods currently exist for creating novel objects and object categories (also see refs. 7,8). However, generally speaking, the existing methods have three broad types of shortcomings. First, shape variations are generally imposed by the experimenter, and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection. 
Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper. We also describe some implementations of these computational algorithms to help illustrate the potential utility of the algorithms. It is important to distinguish the algorithms from their implementations. The implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms. It is important to note that, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have. Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis.
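A toy sketch of the "virtual phylogenesis" idea described above, under loose assumptions: an object is reduced to a flat parameter vector, a category is founded by a large mutation of a common ancestor, and category members accumulate small mutations, so within-category variation stays smaller than between-category variation. The authors' actual algorithms operate on 3-D embryo-like shapes; this only illustrates the branching-descent scheme.

```python
import random

def mutate(shape, sigma, rng):
    """One 'generation': perturb every shape parameter slightly."""
    return [v + rng.gauss(0, sigma) for v in shape]

def virtual_phylogenesis(ancestor, branches, generations, sigma, rng):
    """Grow object categories by descent with modification: each branch
    starts from a strongly mutated founder (the between-category split)
    and then drifts with small mutations (the within-category variation)."""
    categories = []
    for _ in range(branches):
        founder = mutate(ancestor, sigma * 5, rng)   # large split between categories
        members = [founder]
        for _ in range(generations):
            members.append(mutate(members[-1], sigma, rng))  # small drift within
        categories.append(members)
    return categories

rng = random.Random(42)
ancestor = [0.0] * 8   # 8 abstract shape parameters (purely illustrative)
cats = virtual_phylogenesis(ancestor, branches=3, generations=10, sigma=0.1, rng=rng)
```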
Feng, Hao; Xu, Ming; Liu, Yangyang; Dong, Ruqing; Gao, Xiaoning; Huang, Lili
2017-01-01
Valsa mali (V. mali) is the causative agent of apple tree Valsa canker, which heavily damages the production of apples in China. However, the biological roles of the RNA interfering (RNAi) pathway in the pathogenicity of V. mali remain unknown. Dicer-like proteins (DCLs) are important components that control the initiation of the RNAi pathway. In this study, VmDCL1 and VmDCL2 were isolated and functionally characterized in V. mali. VmDCL1 and VmDCL2 are orthologous in evolution to the DCLs in Cryphonectria parasitica. The deletion of VmDCL1 and VmDCL2 did not affect vegetative growth when the mutants (ΔVmDCL1, ΔVmDCL2 and ΔVmDCL1DCL2) and wild type strain 03–8 were grown on a PDA medium at 25°C in the dark. However, the colony of ΔVmDCL1 increased by 37.1% compared to the 03–8 colony in a medium containing 0.05% H2O2 3 days after inoculation, and the growth of ΔVmDCL1 was significantly inhibited in a medium containing 0.5 M KCl at a ratio of 25.7%. Meanwhile, in the presence of 0.05% H2O2, the growth of ΔVmDCL2 decreased by 34.5% compared with the growth of 03–8, but ΔVmDCL2 grew normally in the presence of 0.5 M KCl. More importantly, the expression of VmDCL2 was up-regulated 125-fold during the pathogen infection. In the infection assays using apple twigs, the pathogenicity of ΔVmDCL2 and ΔVmDCL1DCL2 was significantly reduced compared with that of 03–8 at a ratio of 24.7 and 41.3%, respectively. All defective phenotypes could be nearly rescued by re-introducing the wild type VmDCL1 and VmDCL2 alleles. Furthermore, the number and length distribution of unique small RNAs (unisRNAs) in the mutants and 03–8 were analyzed using deep sequencing. The number of unisRNAs was obviously lower in ΔVmDCL1, ΔVmDCL2 and ΔVmDCL1DCL2 than that in 03–8, and the length distribution of the sRNAs also markedly changed after the VmDCLs were deleted. These results indicated that VmDCLs function in the H2O2 and KCl stress response, pathogenicity and generation of sRNAs. PMID:28690605
NASA Astrophysics Data System (ADS)
Evans, J. D.; Hao, W.; Chettri, S.
2013-12-01
The cloud is proving to be a uniquely promising platform for scientific computing. Our experience with processing satellite data using Amazon Web Services highlights several opportunities for enhanced performance, flexibility, and cost effectiveness in the cloud relative to traditional computing -- for example: - Direct readout from a polar-orbiting satellite such as the Suomi National Polar-Orbiting Partnership (S-NPP) requires bursts of processing a few times a day, separated by quiet periods when the satellite is out of receiving range. In the cloud, by starting and stopping virtual machines in minutes, we can marshal significant computing resources quickly when needed, but not pay for them when not needed. To take advantage of this capability, we are automating a data-driven approach to the management of cloud computing resources, in which new data availability triggers the creation of new virtual machines (of variable size and processing power) which last only until the processing workflow is complete. - 'Spot instances' are virtual machines that run as long as one's asking price is higher than the provider's variable spot price. Spot instances can greatly reduce the cost of computing -- for software systems that are engineered to withstand unpredictable interruptions in service (as occurs when a spot price exceeds the asking price). We are implementing an approach to workflow management that allows data processing workflows to resume with minimal delays after temporary spot price spikes. This will allow systems to take full advantage of variably-priced 'utility computing.' 
- Thanks to virtual machine images, we can easily launch multiple, identical machines differentiated only by 'user data' containing individualized instructions (e.g., to fetch particular datasets or to perform certain workflows or algorithms) This is particularly useful when (as is the case with S-NPP data) we need to launch many very similar machines to process an unpredictable number of data files concurrently. Our experience shows the viability and flexibility of this approach to workflow management for scientific data processing. - Finally, cloud computing is a promising platform for distributed volunteer ('interstitial') computing, via mechanisms such as the Berkeley Open Infrastructure for Network Computing (BOINC) popularized with the SETI@Home project and others such as ClimatePrediction.net and NASA's Climate@Home. Interstitial computing faces significant challenges as commodity computing shifts from (always on) desktop computers towards smartphones and tablets (untethered and running on scarce battery power); but cloud computing offers significant slack capacity. This capacity includes virtual machines with unused RAM or underused CPUs; virtual storage volumes allocated (& paid for) but not full; and virtual machines that are paid up for the current hour but whose work is complete. We are devising ways to facilitate the reuse of these resources (i.e., cloud-based interstitial computing) for satellite data processing and related analyses. We will present our findings and research directions on these and related topics.
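The resumable-workflow idea above can be modeled locally without any cloud API: checkpoint each completed work unit so a replacement worker repeats nothing after a spot-price interruption. This is an illustrative sketch, not the authors' system; the file names and checkpoint format are invented.

```python
import json
import os
import tempfile

class SpotInterruption(Exception):
    """Raised when the (simulated) spot instance is reclaimed."""

def run_workflow(files, checkpoint_path, process, interrupt_after=None):
    """Process files in order, persisting progress so a new worker can resume."""
    done = set()
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path) as f:
            done = set(json.load(f))
    for i, name in enumerate(f for f in files if f not in done):
        if interrupt_after is not None and i >= interrupt_after:
            raise SpotInterruption(name)   # spot price exceeded our asking price
        process(name)
        done.add(name)
        with open(checkpoint_path, "w") as f:   # checkpoint after each unit of work
            json.dump(sorted(done), f)
    return done

processed = []
ckpt = os.path.join(tempfile.mkdtemp(), "ckpt.json")
granules = [f"granule_{i:03d}" for i in range(6)]  # hypothetical data files

try:  # first worker is reclaimed mid-run (spot price spike)
    run_workflow(granules, ckpt, processed.append, interrupt_after=4)
except SpotInterruption:
    pass

# a replacement worker resumes from the checkpoint, repeating no work
done = run_workflow(granules, ckpt, processed.append)
```

Each granule is processed exactly once across both workers, which is the property that makes variably-priced spot capacity usable for this kind of pipeline.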
A discrete Fourier transform for virtual memory machines
NASA Technical Reports Server (NTRS)
Galant, David C.
1992-01-01
An algebraic theory of the Discrete Fourier Transform is developed in great detail. Examination of the details of the theory leads to a computationally efficient fast Fourier transform for use on computers with virtual memory. Such an algorithm is of great use on modern desktop machines. A FORTRAN-coded version of the algorithm is given for the case when the length of the sequence to be transformed is a power of two.
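The report's own algorithm is given in FORTRAN and arranged to be friendly to virtual memory; as a generic illustration of the power-of-two case it treats, here is the standard recursive radix-2 Cooley-Tukey FFT (not the report's algorithm), checked against a direct DFT.

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])            # DFT of even-indexed samples
    odd = fft(x[1::2])             # DFT of odd-indexed samples
    twiddled = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + twiddled[k] for k in range(n // 2)] +
            [even[k] - twiddled[k] for k in range(n // 2)])

def dft(x):
    """Direct O(n^2) DFT, used here only to check the FFT."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

signal = [complex(i % 3, 0) for i in range(16)]
fast, slow = fft(signal), dft(signal)
max_err = max(abs(a - b) for a, b in zip(fast, slow))
```

The recursion halves the problem at each level, giving the familiar O(n log n) operation count; the virtual-memory-oriented variants in the report reorder the same arithmetic to improve locality of reference.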
Productive High Performance Parallel Programming with Auto-tuned Domain-Specific Embedded Languages
2013-01-02
Compilation JVM Java Virtual Machine KB Kilobyte KDT Knowledge Discovery Toolbox LAPACK Linear Algebra Package LLVM Low-Level Virtual Machine LOC Lines...different starting points. Leo Meyerovich also helped solidify some of the ideas here in discussions during Par Lab retreats. I would also like to thank...multi-timestep computations by blocking in both time and space. 88 Implementation Output Approx DSL Type Language Language Parallelism LoC Graphite
Robust Airborne Networking Extensions (RANGE)
2008-02-01
IMUNES [13] project, which provides an entire network stack virtualization and topology control inside a single FreeBSD machine. The emulated topology...Multicast versus broadcast in a manet.” in ADHOC-NOW, 2004, pp. 14–27. [9] J. Mukherjee, R. Atwood, “Rendezvous point relocation in protocol independent...computer with an Ethernet connection, or a Linux virtual machine on some other (e.g., Windows) operating system, should work. 2.1 Patching the source code
Lifelong personal health data and application software via virtual machines in the cloud.
Van Gorp, Pieter; Comuzzi, Marco
2014-01-01
Personal Health Records (PHRs) should remain the lifelong property of patients, who should be able to show them conveniently and securely to selected caregivers and institutions. In this paper, we present MyPHRMachines, a cloud-based PHR system taking a radically new architectural solution to health record portability. In MyPHRMachines, health-related data and the application software to view and/or analyze it are separately deployed in the PHR system. After uploading their medical data to MyPHRMachines, patients can access them again from remote virtual machines that contain the right software to visualize and analyze them without any need for conversion. Patients can share their remote virtual machine session with selected caregivers, who will need only a Web browser to access the pre-loaded fragments of their lifelong PHR. We discuss a prototype of MyPHRMachines applied to two use cases, i.e., radiology image sharing and personalized medicine.
Inquiry based learning with a virtual microscope
NASA Astrophysics Data System (ADS)
Kelley, S. P.; Sharples, M.; Tindle, A.; Villasclaras-Fernández, E.
2012-12-01
As part of a newly funded initiative, the Wolfson OpenScience Laboratory, we are linking a tool for inquiry-based learning, nQuire (http://www.nquire.org.uk) with the virtual microscope for Earth science (http://www.virtualmicroscope.co.uk) to allow students to undertake projects and benefit from inquiry-based study of thin sections of rocks without the need for a laboratory with expensive petrological microscopes. The Virtual Microscope (VM) was developed for undergraduate teaching of petrology and geoscience, allowing students to explore rock hand specimens and thin sections in a browser window. The system is an HTML5 application that allows students to scan and zoom the rocks in a browser window, view sections under plane-polarized (ppl) and cross-polarized (xpl) light, and rotate specific areas to view birefringence and pleochroism. Importantly, the VM allows students to gain access to rare specimens such as Moon rocks that might be too precious to risk loss or damage. Experimentation with such specimens can inspire the learners' interest in science and allows them to investigate relevant science questions. Yet it is challenging for learners to engage in scientific processes, as they may lack scientific investigation skills or have problems in planning their activities; for teachers, managing inquiry activities is a demanding task (Quintana et al., 2004). To facilitate the realization of inquiry activities, the VM is being integrated with the nQuire tool. nQuire is a web tool that guides and supports students through the inquiry process (Mulholland et al., 2011). Learners are encouraged to construct their own personally relevant hypothesis, pose scientific questions, and plan the method to answer them. Then, the system enables users to collect and analyze data, and share their conclusions. Teachers can monitor their students' progress through inquiries, and give them access to new parts of inquiries as they advance. 
By means of the integration of nQuire and the VM, inquiries that involve collecting data through a microscope can be created and supported. To illustrate the possibilities of these tools, we have designed two inquiries that engage learners in the study of Moon rock samples under the microscope, starting from general questions such as comparison of Moon rocks or determining the origin of meteorites. One is aimed at undergraduate Geology students; the second has been conceived for the general public. Science teachers can reuse these inquiries, adapt them as they need, or create completely new inquiries using nQuire's authoring tool. We will report progress and demonstrate the combination of these two on-line tools to create an open educational resource allowing educators to design and run science inquiries for Earth and planetary science in a range of settings from schools to universities. Quintana, C., Reiser, B. J., Davis, E. A., Krajcik, J., Fretz, E., Duncan, R. G., Kyza, E., et al. (2004). A scaffolding design framework for software to support science inquiry. Journal of the Learning Sciences, 13(3), 337-386. Mulholland, P., Anastopoulou, S., Collins, T., FeiBt, M., Gaved, M., Kerawalla, L., Paxton, M., et al. (2011). nQuire: Technological support for personal inquiry learning. IEEE Transactions on Learning Technologies. First published online, December 5, 2011, http://doi.ieeecomputersociety.org/10.1109/TLT.2011.32.
Virtualization for Cost-Effective Teaching of Assembly Language Programming
ERIC Educational Resources Information Center
Cadenas, José O.; Sherratt, R. Simon; Howlett, Des; Guy, Chris G.; Lundqvist, Karsten O.
2015-01-01
This paper describes a virtual system that emulates an ARM-based processor machine, created to replace a traditional hardware-based system for teaching assembly language. The virtual system proposed here integrates, in a single environment, all the development tools necessary to deliver introductory or advanced courses on modern assembly language…
Proposal of Modification Strategy of NC Program in the Virtual Manufacturing Environment
NASA Astrophysics Data System (ADS)
Narita, Hirohisa; Chen, Lian-Yi; Fujimoto, Hideo; Shirase, Keiichi; Arai, Eiji
Virtual manufacturing will be a key technology in process planning, because there are currently no tools for evaluating cutting conditions. A virtual machining simulator (VMSim) that can predict end-milling processes has therefore been developed. A modification strategy for NC programs using VMSim is proposed in this paper.
Active Gaming: Is "Virtual" Reality Right for Your Physical Education Program?
ERIC Educational Resources Information Center
Hansen, Lisa; Sanders, Stephen W.
2012-01-01
Active gaming is growing in popularity and the idea of increasing children's physical activity by using technology is largely accepted by physical educators. Teachers nationwide have been providing active gaming equipment such as virtual bikes, rhythmic dance machines, virtual sporting games, martial arts simulators, balance boards, and other…
Simulation Platform: a cloud-based online simulation environment.
Yamazaki, Tadashi; Ikeno, Hidetoshi; Okumura, Yoshihiro; Satoh, Shunji; Kamiyama, Yoshimi; Hirata, Yutaka; Inagaki, Keiichiro; Ishihara, Akito; Kannon, Takayuki; Usui, Shiro
2011-09-01
For multi-scale and multi-modal neural modeling, it is necessary to handle multiple neural models described at different levels seamlessly. Database technology will become more important for these studies, specifically for downloading and handling the neural models seamlessly and effortlessly. To date, conventional neuroinformatics databases have solely been designed to archive model files, but the databases should provide a chance for users to validate the models before downloading them. In this paper, we report our on-going project to develop a cloud-based web service for online simulation called "Simulation Platform". Simulation Platform is a cloud of virtual machines running GNU/Linux. On a virtual machine, various software including developer tools such as compilers and libraries, popular neural simulators such as GENESIS, NEURON and NEST, and scientific software such as Gnuplot, R and Octave, are pre-installed. When a user posts a request, a virtual machine is assigned to the user, and the simulation starts on that machine. The user remotely accesses the machine through a web browser and carries out the simulation, without the need to install any software but a web browser on the user's own computer. Therefore, Simulation Platform is expected to eliminate impediments to handling multiple neural models that require multiple software packages. Copyright © 2011 Elsevier Ltd. All rights reserved.
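The request/assign/release cycle described above can be sketched as a minimal VM pool. The class and VM names are hypothetical illustrations, not the Simulation Platform's actual code.

```python
from collections import deque

class VMPool:
    """Minimal model of a cloud that assigns one pre-built VM per user request."""

    def __init__(self, vm_names):
        self.idle = deque(vm_names)
        self.assigned = {}          # user -> vm

    def request(self, user):
        """Assign an idle VM; returns None when the pool is exhausted."""
        if user in self.assigned or not self.idle:
            return self.assigned.get(user)
        vm = self.idle.popleft()
        self.assigned[user] = vm
        return vm

    def release(self, user):
        """Return the user's VM to the idle pool when the session ends."""
        vm = self.assigned.pop(user, None)
        if vm is not None:
            self.idle.append(vm)
        return vm

pool = VMPool(["vm-01", "vm-02"])
a = pool.request("alice")          # assigned an idle VM
b = pool.request("bob")            # assigned the other VM
c = pool.request("carol")          # pool exhausted -> None
pool.release("alice")              # alice's session ends, VM goes back
d = pool.request("carol")          # carol now gets the freed VM
```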
Comparative study of state-of-the-art myoelectric controllers for multigrasp prosthetic hands.
Segil, Jacob L; Controzzi, Marco; Weir, Richard F ff; Cipriani, Christian
2014-01-01
A myoelectric controller should provide an intuitive and effective human-machine interface that deciphers user intent in real-time and is robust enough to operate in daily life. Many myoelectric control architectures have been developed, including pattern recognition systems, finite state machines, and more recently, postural control schemes. Here, we present a comparative study of two types of finite state machines and a postural control scheme using both virtual and physical assessment procedures with seven nondisabled subjects. The Southampton Hand Assessment Procedure (SHAP) was used in order to compare the effectiveness of the controllers during activities of daily living using a multigrasp artificial hand. Also, a virtual hand posture matching task was used to compare the controllers when reproducing six target postures. The performance when using the postural control scheme was significantly better (p < 0.05) than the finite state machines during the physical assessment when comparing within-subject averages using the SHAP percent difference metric. The virtual assessment results described significantly greater completion rates (97% and 99%) for the finite state machines, but the movement time tended to be faster (2.7 s) for the postural control scheme. Our results substantiate that postural control schemes rival other state-of-the-art myoelectric controllers.
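A minimal sketch of the finite-state-machine style of controller compared in this study: a co-contraction event cycles the grasp state, and open/close commands drive the currently selected grasp. The grasp names, aperture step size, and trigger vocabulary are assumptions for illustration, not the controllers actually tested.

```python
class GraspFSM:
    """Toy finite state machine for a multigrasp hand: a co-contraction
    pulse cycles through grasp types; open/close commands change the
    aperture of the currently selected grasp."""

    GRASPS = ["power", "pinch", "tripod", "lateral"]

    def __init__(self):
        self.index = 0          # current grasp state
        self.aperture = 1.0     # 1.0 = fully open, 0.0 = fully closed

    def step(self, command):
        if command == "co-contract":       # state transition: next grasp
            self.index = (self.index + 1) % len(self.GRASPS)
        elif command == "close":
            self.aperture = max(0.0, self.aperture - 0.25)
        elif command == "open":
            self.aperture = min(1.0, self.aperture + 0.25)
        return self.GRASPS[self.index], self.aperture

fsm = GraspFSM()
# Select the next grasp, then close the hand fully.
for cmd in ["co-contract", "close", "close", "close", "close"]:
    state = fsm.step(cmd)
```

A postural control scheme, by contrast, maps a continuous 2-D myoelectric signal directly onto a space of hand postures rather than stepping through discrete states.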
Prospects for Evidence -Based Software Assurance: Models and Analysis
2015-09-01
virtual machine is much lighter than the workstation. The virtual machine doesn’t need to run anti-virus, firewalls, intrusion prevention systems...[34] Maiorca, D., Corona, I., and Giacinto, G. Looking at the bag is not enough to find the bomb: An evasion of structural methods for malicious PDF...CCS ’13, ACM, pp. 119–130. [35] Maiorca, D., Giacinto, G., and Corona, I. A pattern recognition system for malicious PDF files detection. In
Pang, You-Wang; Ge, Shun-Nan; Nakamura, Kouichi C; Li, Jin-Lian; Xiong, Kang-Hui; Kaneko, Takeshi; Mizuno, Noboru
2009-02-10
Little is known about the significance of the two types of glutamatergic neurons (those expressing vesicular glutamate transporter VGLUT1 or VGLUT2) in the control of jaw movements. We thus examined the origin and distribution of axon terminals with VGLUT1 or VGLUT2 immunoreactivity within the trigeminal motor nucleus (Vm) in the rat. The Vm was divided into the dorsolateral division (Vm.dl; jaw-closing motoneuron pool) and the ventromedial division (Vm.vm; jaw-opening motoneuron pool). VGLUT1-immunopositive terminals were seen within the Vm.dl only, whereas VGLUT2-immunopositive ones were distributed to both the Vm.dl and the Vm.vm. Transection of the motor root eliminated almost all VGLUT1-immunopositive axons in the Vm.dl, with no changes of VGLUT2 immunoreactivity in the two divisions, indicating that the VGLUT1- and VGLUT2-immunopositive axons came from primary afferents in the mesencephalic trigeminal nucleus and premotor neurons for the Vm, respectively. In situ hybridization histochemistry revealed that VGLUT2 neurons were much more numerous than VGLUT1 neurons in the regions corresponding to the reported premotoneuron pool for the Vm. The results of immunofluorescence labeling combined with anterograde tract tracing further indicated that premotor neurons with VGLUT2 in the trigeminal sensory nuclei, the supratrigeminal region, and the reticular region ventral to the Vm sent axon terminals contacting trigeminal motoneurons and that some of the VGLUT1-expressing premotor neurons in the reticular region ventral to the Vm sent axon terminals to jaw-closing motoneurons. The present results suggested that the roles played by glutamatergic neurons in controlling jaw movements might be different between VGLUT1- and VGLUT2-expressing neurons.
Modeled Urea Distribution Volume and Mortality in the HEMO Study
Greene, Tom; Depner, Thomas A.; Levin, Nathan W.; Chertow, Glenn M.
2011-01-01
Summary Background and objectives In the Hemodialysis (HEMO) Study, observed small decreases in achieved equilibrated Kt/Vurea were noncausally associated with markedly increased mortality. Here we examine the association of mortality with modeled volume (Vm), the denominator of equilibrated Kt/Vurea. Design, setting, participants, & measurements Parameters derived from modeled urea kinetics (including Vm) and blood pressure (BP) were obtained monthly in 1846 patients. Case mix–adjusted time-dependent Cox regressions were used to relate the relative mortality hazard at each time point to Vm and to the change in Vm over the preceding 6 months. Mixed effects models were used to relate Vm to changes in intradialytic systolic BP and to other factors at each follow-up visit. Results Mortality was associated with Vm and change in Vm over the preceding 6 months. The association between change in Vm and mortality was independent of vascular access complications. In contrast, mortality was inversely associated with V calculated from anthropometric measurements (Vant). In case mix–adjusted analysis using Vm as a time-dependent covariate, the association of mortality with Vm strengthened after statistical adjustment for Vant. After adjustment for Vant, higher Vm was associated with slightly smaller reductions in intradialytic systolic BP and with risk factors for mortality including recent hospitalization and reductions in serum albumin concentration and body weight. Conclusions An increase in Vm is a marker for illness and mortality risk in hemodialysis patients. PMID:21511841
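The HEMO Study derived Vm from formal two-pool urea kinetic modeling. As a hedged back-of-envelope illustration of how a Kt/V estimate and a volume relate, the widely used Daugirdas second-generation single-pool formula can be combined with an assumed dialyzer clearance K and session length t; all sample values below are hypothetical, and this is not the study's modeling procedure.

```python
import math

def sp_ktv(pre_bun, post_bun, t_hours, uf_liters, post_weight_kg):
    """Daugirdas second-generation single-pool Kt/V estimate (a standard
    bedside formula; R is the post/pre BUN ratio, UF/W the ultrafiltration
    volume over post-dialysis weight)."""
    r = post_bun / pre_bun
    return -math.log(r - 0.008 * t_hours) + (4 - 3.5 * r) * uf_liters / post_weight_kg

def modeled_volume(clearance_ml_min, t_hours, ktv):
    """Given dialyzer urea clearance K and session length t, back out the
    urea distribution volume V (liters) implied by a Kt/V value."""
    kt_liters = clearance_ml_min * 60 * t_hours / 1000
    return kt_liters / ktv

# Illustrative (hypothetical) session: BUN falls 70 -> 20 mg/dL over 3.5 h,
# with 2 L ultrafiltration, 70 kg post-dialysis weight, K = 240 mL/min.
ktv = sp_ktv(70, 20, 3.5, 2.0, 70)
v_liters = modeled_volume(240, 3.5, ktv)
```

The resulting V lands in the physiologically typical 30-40 L range, which is why, as the abstract notes, deviations of Vm from the anthropometric Vant can serve as a marker of illness rather than of true body size.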
Final Report: Enabling Exascale Hardware and Software Design through Scalable System Virtualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bridges, Patrick G.
2015-02-01
In this grant, we enhanced the Palacios virtual machine monitor to increase its scalability and suitability for addressing exascale system software design issues. This included a wide range of research on core Palacios features, large-scale system emulation, fault injection, performance monitoring, and VMM extensibility. This research resulted in a large number of high-impact publications in well-known venues, the support of a number of students, and the graduation of two Ph.D. students and one M.S. student. In addition, our enhanced version of the Palacios virtual machine monitor has been adopted as a core element of the Hobbes operating system under active DOE-funded research and development.
New directions in the CernVM file system
NASA Astrophysics Data System (ADS)
Blomer, Jakob; Buncic, Predrag; Ganis, Gerardo; Hardi, Nikola; Meusel, Rene; Popescu, Radu
2017-10-01
The CernVM File System today is commonly used to host and distribute application software stacks. In addition to this core task, recent developments expand the scope of the file system into two new areas. Firstly, CernVM-FS emerges as a good match for container engines to distribute the container image contents. Compared to native container image distribution (e.g. through the “Docker registry”), CernVM-FS massively reduces the network traffic for image distribution. This has been shown, for instance, by a prototype integration of CernVM-FS into Mesos developed by Mesosphere, Inc. We present a path for a smooth integration of CernVM-FS and Docker. Secondly, CernVM-FS recently raised new interest as an option for the distribution of experiment conditions data. Here, the focus is on improved versioning capabilities of CernVM-FS that allow linking the conditions data of a run period to the state of a CernVM-FS repository. Lastly, CernVM-FS has been extended to provide a name space for physics data for the LIGO and CMS collaborations. Searching through a data namespace is often done by a central, experiment-specific database service. A name space on CernVM-FS can particularly benefit from an existing, scalable infrastructure and from the POSIX file system interface.
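The traffic saving from content-addressed storage, the scheme CernVM-FS builds on, can be sketched with a toy object store: identical files hash to the same object and are transferred once, however many container images reference them. The class and file names below are illustrative, not CernVM-FS internals.

```python
import hashlib

class ContentAddressedStore:
    """Toy content-addressed object store: each unique blob is uploaded once,
    keyed by its content hash, regardless of how many images reference it."""

    def __init__(self):
        self.objects = {}
        self.uploads = 0            # count of actual transfers

    def publish(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.objects:   # dedup: skip blobs already present
            self.objects[digest] = data
            self.uploads += 1
        return digest

store = ContentAddressedStore()
# Two container images that share most of their files (e.g. a common base OS).
image_a = [b"libc.so", b"bash", b"python3", b"app-A"]
image_b = [b"libc.so", b"bash", b"python3", b"app-B"]
manifest_a = [store.publish(f) for f in image_a]
manifest_b = [store.publish(f) for f in image_b]
```

Publishing the second image costs only one new transfer, since the shared base files resolve to hashes already in the store; this is the effect behind the reduced image-distribution traffic mentioned above.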
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crussell, Jonathan; Erickson, Jeremy; Fritz, David
minimega is an emulytics platform for creating testbeds of networked devices. The platform consists of easily deployable tools that facilitate bringing up large networks of virtual machines, including Windows, Linux, and Android. minimega allows experiments to be brought up quickly with almost no configuration. minimega also includes tools for simple cluster management, as well as tools for creating Linux-based virtual machines. This release of minimega includes new emulated sensors for Android devices to improve the fidelity of testbeds that include mobile devices. Emulated sensors include GPS and
Phenomenology tools on cloud infrastructures using OpenStack
NASA Astrophysics Data System (ADS)
Campos, I.; Fernández-del-Castillo, E.; Heinemeyer, S.; Lopez-Garcia, A.; Pahlen, F.; Borges, G.
2013-04-01
We present a new environment for computations in particle physics phenomenology employing recent developments in cloud computing. In this environment, users can create and manage "virtual" machines on which the phenomenology codes/tools can be deployed easily in an automated way. We analyze the performance of this environment based on "virtual" machines versus the utilization of physical hardware. In this way we provide a qualitative result for the influence of the host operating system on the performance of a representative set of applications for phenomenology calculations.
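The VM-versus-bare-metal comparison described above boils down to timing the same CPU-bound workload in both environments. A minimal sketch of such a probe (the function name and workload are illustrative, not taken from the paper) might look like:

```python
import time

def cpu_benchmark(n=200_000):
    """Time a CPU-bound kernel. Running this once on the physical host
    and once inside a guest VM gives the two wall-clock figures to
    compare; absolute numbers depend entirely on the host."""
    t0 = time.perf_counter()
    acc = 0
    for i in range(n):
        acc += (i * i) % 7  # arbitrary integer work to keep the CPU busy
    return time.perf_counter() - t0

elapsed = cpu_benchmark()
```

The ratio of the two measurements gives a first qualitative estimate of the virtualization overhead for CPU-bound phenomenology codes.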
Enhanced networked server management with random remote backups
NASA Astrophysics Data System (ADS)
Kim, Song-Kyoo
2003-08-01
In this paper, the model is focused on available server management in network environments. The (remote) backup servers are hooked up by VPN (Virtual Private Network) and replace broken main servers immediately. A virtual private network (VPN) is a way to use a public network infrastructure to hook up long-distance servers within a single network infrastructure. The servers can be represented as "machines", and the system then deals with an unreliable main machine and random auxiliary spare (remote backup) machines. When the system performs mandatory routine maintenance, auxiliary machines are used for backups during idle periods. Unlike other existing models, the availability of auxiliary machines changes for each activation in this enhanced model. Analytically tractable results are obtained by using several mathematical techniques, and the results are demonstrated in the framework of optimized networked-server allocation problems.
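The qualitative effect of randomly available remote backups can be illustrated with a small Monte Carlo sketch. This is an illustrative toy model under assumed failure and repair parameters, not the paper's analytical treatment:

```python
import random

def simulate_availability(hours, p_main_fail, p_backup_avail,
                          repair_hours, seed=1):
    """Fraction of time the service is up when a failed main server can
    be covered by a remote backup that is only available with some
    probability (availability re-drawn on each activation)."""
    rng = random.Random(seed)
    up = 0
    repair_left = 0                      # hours until main server repaired
    for _ in range(hours):
        if repair_left == 0:
            up += 1                      # main server healthy: service up
            if rng.random() < p_main_fail:
                repair_left = repair_hours   # main server breaks
        else:
            # while the main server is down, a VPN backup may stand in
            if rng.random() < p_backup_avail:
                up += 1
            repair_left -= 1
    return up / hours

# more reliable backups should yield higher overall availability
low  = simulate_availability(100_000, 0.01, 0.5, 24)
high = simulate_availability(100_000, 0.01, 0.9, 24)
```

Sweeping `p_backup_avail` in such a simulation is one way to sanity-check the analytically derived allocation results.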
The potential inclusion of value management subject for postgraduate programmes in Malaysia
NASA Astrophysics Data System (ADS)
Che Mat, M.; Karim, S. B. Abd; Amran, N. A. E.
2018-02-01
The development of the construction industry is increasing tremendously. To complement this scenario, Value Management (VM) is needed to achieve optimum function by reducing or eliminating unnecessary cost that does not contribute to the product, system or service. As VM has been increasingly applied to enhance and improve value in construction projects, the purpose of this study is to introduce VM as a subject for master's students at selected public universities in Malaysia. The research was conducted to investigate the potential inclusion of VM as a subject in master's degree programmes in Malaysia. A questionnaire survey was designed and delivered to existing master's students to explore their current understanding of VM as well as the possibility of introducing VM as a subject. The results showed that the level of awareness of VM is high, yet the understanding of VM is low. This research presents the results of introducing VM as a subject at master's-level programmes at selected public universities in Malaysia.
NASA Astrophysics Data System (ADS)
Kimura, Yukio; Sadamichi, Yucho; Maruyama, Naoki; Kato, Seizo
These days the environmental impact of vending machines' (VM) diffusion has been greatly discussed. This paper describes the numerical evaluation of that environmental impact using the LCA (Life Cycle Assessment) scheme and then proposes an eco-improvement strategy toward environmentally conscious products (ECP). A new objective and universal consolidated method for LCA evaluation, the so-called LCA-NETS (Numerical Eco-load Standardization) developed by the authors, is applied to the present issue. As a result, the environmental loads at the 5-year operation and material procurement stages are found to dominate the others over the life cycle. Further eco-improvement is realized by following the order of the LCA-NETS magnitude, namely: energy saving, materials reduction, parts re-use, and replacement with low-environmental-load materials. Above all, parts re-use is especially recommended for significant reduction of the environmental loads toward ECP.
Virtual reality for intelligent and interactive operating, training, and visualization systems
NASA Astrophysics Data System (ADS)
Freund, Eckhard; Rossmann, Juergen; Schluse, Michael
2000-10-01
Virtual reality methods allow a new and intuitive way of communication between man and machine. The basic idea of Virtual Reality (VR) is the generation of artificial, computer-simulated worlds, which the user can not only look at but also actively interact with using a data glove and data helmet. The main emphasis for the use of such techniques at the IRF is the development of a new generation of operator interfaces for the control of robots and other automation components, and of intelligent training systems for complex tasks. The basic idea of the methods developed at the IRF for the realization of Projective Virtual Reality is to let the user work in the virtual world as he would act in reality. The user's actions are recognized by the virtual reality system and, by means of new and intelligent control software, projected onto the automation components, such as robots, which then perform the necessary actions in reality to execute the user's task. In this operation mode the user no longer has to be a robot expert to generate tasks for robots or to program them, because the intelligent control software recognizes the user's intention and automatically generates the commands for nearly every automation component. Virtual reality methods are thus ideally suited for universal man-machine interfaces for the control and supervision of a broad class of automation components, and for interactive training and visualization systems. The virtual reality system of the IRF, COSIMIR/VR, forms the basis for different projects, starting with the control of space automation systems in the projects CIROS, VITAL and GETEX, the realization of a comprehensive development tool for the International Space Station and, last but not least, the realistic simulation of fire extinguishing, forest machines and excavators, which will be presented in the final paper in addition to the key ideas of this virtual reality system.
Dynamic provisioning of local and remote compute resources with OpenStack
NASA Astrophysics Data System (ADS)
Giffels, M.; Hauth, T.; Polgart, F.; Quast, G.
2015-12-01
Modern high-energy physics experiments rely on the extensive usage of computing resources, both for the reconstruction of measured events and for Monte Carlo simulation. The Institut fur Experimentelle Kernphysik (EKP) at KIT participates in both the CMS and Belle experiments with computing and storage resources. In the upcoming years, these requirements are expected to increase due to the growing amount of recorded data and the rising complexity of the simulated events. It is therefore essential to increase the available computing capabilities by tapping into all resource pools. At the EKP institute, powerful desktop machines are available to users. Due to the multi-core nature of modern CPUs, vast amounts of CPU time are not utilized by common desktop usage patterns. Other important providers of compute capability are classical HPC data centers at universities and national research centers. Due to the shared nature of these installations, the standardized software stack required by HEP applications cannot be installed. A viable way to overcome this constraint and offer a standardized software environment in a transparent manner is the use of virtualization technologies. The OpenStack project has become a widely adopted solution for virtualizing hardware and offering additional services like storage and virtual machine management. This contribution reports on the incorporation of the institute's desktop machines into a private OpenStack cloud. The additional compute resources provisioned via the virtual machines have been used for Monte Carlo simulation and data analysis. Furthermore, a concept to integrate shared, remote HPC centers into regular HEP job workflows is presented. In this approach, local and remote resources are merged to form a uniform, virtual compute cluster with a single point of entry for the user. Evaluations of the performance and stability of this setup and operational experiences are discussed.
Satish Bollimpelli, V; Kondapi, Anand K
2015-12-25
Rotenone-induced neuronal toxicity in ventral mesencephalic (VM) dopaminergic (DA) neurons in culture is widely accepted as an important model for the investigation of Parkinson's disease (PD). However, little is known about the developmental-stage-dependent toxic effects of rotenone on VM neurons in vitro. The objective of the present study is to investigate the effect of rotenone on developing VM neurons at immature versus mature stages. Primary VM neurons were cultured in the absence of glial cells. Exposure of VM neurons to rotenone for 2 days induced cell death in both immature and mature neurons in a concentration-dependent manner, but to a greater extent in mature neurons. While rotenone-treated mature VM neurons showed α-synuclein aggregation and DA neuronal sensitivity, immature VM neurons exhibited only DA neuronal sensitivity but not α-synuclein aggregation. In addition, on rotenone treatment, the enhancement of caspase-3 activity and reactive oxygen species (ROS) production was higher in mature VM neurons than in immature neurons. These results suggest that even though both mature and immature VM neurons are sensitive to rotenone, their manifestations differ from each other, with only mature VM neurons exhibiting Parkinsonian conditions. Copyright © 2015 Elsevier B.V. All rights reserved.
Virtualization for the LHCb Online system
NASA Astrophysics Data System (ADS)
Bonaccorsi, Enrico; Brarda, Loic; Moine, Gary; Neufeld, Niko
2011-12-01
Virtualization has long been advertised by the IT industry as a way to cut down cost, optimise resource usage and manage the complexity of large data centers. The great number and huge heterogeneity of hardware, both industrial and custom-made, has up to now led to reluctance in adopting virtualization in the IT infrastructure of large experiment installations. Our experience in the LHCb experiment has shown that virtualization improves the availability and the manageability of the whole system. We evaluated the available hypervisors and virtualization solutions and found that the Microsoft HV technology provides a high level of maturity and flexibility for our purpose. We present the results of these comparison tests, describing in detail the architecture of our virtualization infrastructure, with special emphasis on security for services visible to the outside world. Security is achieved by a sophisticated combination of VLANs, firewalls and virtual routing; the cost and benefits of this solution are analysed. We have adapted our cluster management tools, notably Quattor, to the needs of virtual machines, and this allows us to smoothly migrate services from physical machines to the virtualized infrastructure. The procedures for migration are also described. In the final part of the document we describe our recent R&D activities aiming to replace the SAN backend for the virtualization with a cheaper iSCSI solution; this will allow us to move all servers and related services to the virtualized infrastructure, except those doing hardware control via non-commodity PCI plug-in cards.
Evaluating open-source cloud computing solutions for geosciences
NASA Astrophysics Data System (ADS)
Huang, Qunying; Yang, Chaowei; Liu, Kai; Xia, Jizhe; Xu, Chen; Li, Jing; Gui, Zhipeng; Sun, Min; Li, Zhenglong
2013-09-01
Many organizations are starting to adopt cloud computing to better utilize computing resources by taking advantage of its scalability, cost reduction, and easy-to-access characteristics. Many private or community cloud computing platforms are being built using open-source cloud solutions. However, little has been done to systematically compare and evaluate the features and performance of open-source solutions in supporting the geosciences. This paper provides a comprehensive study of three open-source cloud solutions: OpenNebula, Eucalyptus, and CloudStack. We compared a variety of features, capabilities, technologies and performances, including: (1) general features and supported services for cloud resource creation and management, (2) advanced capabilities for networking and security, and (3) the performance of the cloud solutions in provisioning and operating cloud resources, as well as the performance of virtual machines initiated and managed by the cloud solutions in supporting selected geoscience applications. Our study found that: (1) there are no significant performance differences in the central processing unit (CPU), memory and I/O of virtual machines created and managed by the different solutions, (2) OpenNebula has the fastest internal network, while both Eucalyptus and CloudStack have better virtual machine isolation and security strategies, (3) CloudStack has the fastest operations in handling virtual machines, images, snapshots, volumes and networking, followed by OpenNebula, and (4) the selected cloud computing solutions are capable of supporting concurrent intensive web applications, computing-intensive applications, and small-scale model simulations without intensive data communication.
Using PVM to host CLIPS in distributed environments
NASA Technical Reports Server (NTRS)
Myers, Leonard; Pohl, Kym
1994-01-01
It is relatively easy to enhance CLIPS (C Language Integrated Production System) to support multiple expert systems running in a distributed environment with heterogeneous machines. The task is minimized by using the PVM (Parallel Virtual Machine) code from Oak Ridge Labs to provide the distributed utility. PVM is a library of C and FORTRAN subprograms that supports distributed computing on many different UNIX platforms. A PVM daemon is easily installed on each CPU that enters the virtual machine environment. Any user with rsh or rexec access to a machine can use the one PVM daemon to obtain a generous set of distributed facilities. The ready availability of both CLIPS and PVM makes the combination of software particularly attractive for budget-conscious experimentation with heterogeneous distributed computing using multiple CLIPS executables. This paper presents a design that is sufficient to provide essential message-passing functions in CLIPS and enable the full range of PVM facilities.
Virtual Mission Operations of Remote Sensors With Rapid Access To and From Space
NASA Technical Reports Server (NTRS)
Ivancic, William D.; Stewart, Dave; Walke, Jon; Dikeman, Larry; Sage, Steven; Miller, Eric; Northam, James; Jackson, Chris; Taylor, John; Lynch, Scott;
2010-01-01
This paper describes network-centric operations, where a virtual mission operations center autonomously receives sensor triggers, and schedules space and ground assets using Internet-based technologies and service-oriented architectures. For proof-of-concept purposes, sensor triggers are received from the United States Geological Survey (USGS) to determine targets for space-based sensors. The Surrey Satellite Technology Limited (SSTL) Disaster Monitoring Constellation satellite, the United Kingdom Disaster Monitoring Constellation (UK-DMC), is used as the space-based sensor. The UK-DMC's availability is determined via machine-to-machine communications using SSTL's mission planning system. Access to/from the UK-DMC for tasking and sensor data is via SSTL's and Universal Space Network's (USN) ground assets. The availability and scheduling of USN's assets can also be performed autonomously via machine-to-machine communications. All communication, both on the ground and between ground and space, uses open Internet standards.
The coat morphogenetic protein SpoVID is necessary for spore encasement in Bacillus subtilis.
Wang, Katherine H; Isidro, Anabela L; Domingues, Lia; Eskandarian, Haig A; McKenney, Peter T; Drew, Kevin; Grabowski, Paul; Chua, Ming-Hsiu; Barry, Samantha N; Guan, Michelle; Bonneau, Richard; Henriques, Adriano O; Eichenberger, Patrick
2009-11-01
Endospores formed by Bacillus subtilis are encased in a tough protein shell known as the coat, which consists of at least 70 different proteins. We investigated the process of spore coat morphogenesis using a library of 40 coat proteins fused to green fluorescent protein and demonstrate that two successive steps can be distinguished in coat assembly. The first step, initial localization of proteins to the spore surface, is dependent on the coat morphogenetic proteins SpoIVA and SpoVM. The second step, spore encasement, requires a third protein, SpoVID. We show that in spoVID mutant cells, most coat proteins assembled into a cap at one side of the developing spore but failed to migrate around and encase it. We also found that SpoIVA directly interacts with SpoVID. A domain analysis revealed that the N-terminus of SpoVID is required for encasement and is a structural homologue of a virion protein, whereas the C-terminus is necessary for the interaction with SpoIVA. Thus, SpoVM, SpoIVA and SpoVID are recruited to the spore surface in a concerted manner and form a tripartite machine that drives coat formation and spore encasement.
Creating Objects and Object Categories for Studying Perception and Perceptual Learning
Hauffen, Karin; Bart, Eugene; Brady, Mark; Kersten, Daniel; Hegdé, Jay
2012-01-01
In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties [1]. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties [2]. Many innovative and useful methods currently exist for creating novel objects and object categories [3-6] (also see refs. [7,8]). However, generally speaking, the existing methods have three broad types of shortcomings. First, shape variations are generally imposed by the experimenter [5,9,10], and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects [11-13]. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis [14]. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection [9,12,13].
Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics [15,16]. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects [9,13]. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper. We also describe some implementations of these computational algorithms to help illustrate the potential utility of the algorithms. It is important to distinguish the algorithms from their implementations: the implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms, and, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have. Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis. PMID:23149420
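The core idea of virtual phylogenesis, category variability emerging from a simulated evolutionary process rather than from experimenter-imposed rules, can be sketched in a few lines. This is a toy illustration only; the actual digital-embryo software simulates full 3-D embryogenesis, and the vector representation here is an assumption for demonstration:

```python
import random

def virtual_phylogenesis(generations=5, offspring=4, dims=8, seed=0):
    """Toy VP sketch: object 'shapes' are parameter vectors; each
    generation copies a randomly chosen parent with small Gaussian
    mutations, so within-category variation arises from the simulated
    process itself rather than from externally imposed constraints."""
    rng = random.Random(seed)
    ancestor = [rng.uniform(0.0, 1.0) for _ in range(dims)]
    category = [ancestor]
    for _ in range(generations):
        parent = rng.choice(category)
        for _ in range(offspring):
            # mutate every shape parameter slightly to create a descendant
            child = [p + rng.gauss(0.0, 0.05) for p in parent]
            category.append(child)
    return category

members = virtual_phylogenesis()   # one ancestor plus 5 x 4 descendants
```

Running the same procedure from a different ancestor yields a second, distinct category, mirroring how VP generates novel object classes.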
SONG, YAN-YAN; SUN, LI-DAN; LIU, MIN-LI; LIU, ZHONG-LIANG; CHEN, FEI; ZHANG, YING-ZHE; ZHENG, YAN; ZHANG, JIAN-PING
2014-01-01
Vasculogenic mimicry (VM) formation is important for the invasion and metastasis of tumor cells in gastric adenocarcinoma (GAC). The present study aimed to investigate the association between signal transducer and activator of transcription-3 (STAT3), phospho-STAT3 (p-STAT3), hypoxia-inducible factor-1α (HIF-1α) and VM formation in GAC, and to discuss their clinical significance and correlation with the prognosis of patients with GAC. The expression levels of STAT3, p-STAT3, HIF-1α and VM were assessed in 60 patients with GAC and 20 patients with gastritis on tissue microarrays by immunohistochemical methods. The expression levels of STAT3, p-STAT3, HIF-1α and VM were higher in patients with GAC (particularly in poorly differentiated GAC) than in those with gastritis (P<0.05). The expression levels of STAT3, p-STAT3 and HIF-1α were higher in VM tissues compared with non-VM tissues (P<0.05). Positive correlations existed between STAT3, p-STAT3, HIF-1α and VM expression (P<0.05). The expression levels of STAT3, p-STAT3 and HIF-1α, VM, lymph node metastasis status and tumor differentiation degree were associated with the overall survival (OS) time of patients with GAC (P<0.05). However, only p-STAT3 and VM expression were identified as independent risk factors for GAC OS in multivariate analysis. p-STAT3 and VM play a significant role in indicating the prognosis of patients with GAC. STAT3 activation may play a positive role in VM formation in GAC via the STAT3-p-STAT3-HIF-1α-VM effect axis. PMID:24959290
Sone, Teruki; Yoshikawa, Kunihiko; Mimura, Hiroaki; Hayashida, Akihiro; Wada, Nozomi; Obase, Kikuko; Imai, Koichiro; Saito, Ken; Maehama, Tomoko; Fukunaga, Masao; Yoshida, Kiyoshi
2010-01-01
Purpose: In cardiac 2-[F-18]fluoro-2-deoxy-D-glucose (FDG) positron emission tomography (PET) examination, interpretation of myocardial viability in the low uptake region (LUR) has been difficult without additional perfusion imaging. We evaluated the distribution patterns of FDG at the border zone of the LUR in cardiac FDG-PET and established a novel parameter for diagnosing myocardial viability and for discriminating the LUR of normal variants. Materials and Methods: Cardiac FDG-PET was performed in patients with a myocardial ischemic event (n = 22) and in healthy volunteers (n = 22). Whether the myocardium was not a viable myocardium (not-VM) or an ischemic but viable myocardium (isch-VM) was defined by an echocardiogram under a low dose of dobutamine infusion as the gold standard. FDG images were displayed as gray-scaled bull's eye mappings. FDG plot profiles for the LUR (= the true ischemic region in the patients or the normal variant region in healthy subjects) were calculated. Maximal values of FDG change at the LUR border zone (a steepness index; Smax, scale/pixel) were compared among not-VM, isch-VM, and normal myocardium. Results: Smax was significantly higher for not-VM compared to isch-VM or normal myocardium (ANOVA). A cut-off value of 0.30 in Smax demonstrated 100% sensitivity and 83% specificity for distinguishing not-VM from isch-VM. Smax less than 0.23 discriminated the LUR in normal myocardium from the LUR in patients with either not-VM or isch-VM with 94% sensitivity and 93% specificity. Conclusion: Smax of the LUR in cardiac FDG-PET is a simple and useful parameter to diagnose not-VM and isch-VM, as well as to discriminate the LUR of normal variants. PMID:20191007
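As described, Smax is the maximal per-pixel change of the FDG plot profile across the LUR border. A minimal sketch of that computation (the profile values and normalization here are hypothetical, not the paper's processing pipeline):

```python
def smax(profile):
    """Steepness index sketch: maximal absolute change of a 1-D
    (normalized) FDG plot profile between adjacent pixels, in
    scale/pixel units. `profile` is a sequence of gray-scale values."""
    return max(abs(b - a) for a, b in zip(profile, profile[1:]))

# hypothetical profiles: a sharp viability border vs. a gradual one
sharp   = [1.0, 1.0, 0.9, 0.4, 0.1, 0.1]
gradual = [1.0, 0.9, 0.8, 0.6, 0.4, 0.3]
```

A sharp transition into the LUR yields a larger Smax, which is the property the cut-off values of 0.30 and 0.23 exploit.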
A Concept for Optimizing Behavioural Effectiveness & Efficiency
NASA Astrophysics Data System (ADS)
Barca, Jan Carlo; Rumantir, Grace; Li, Raymond
Both humans and machines exhibit strengths and weaknesses that can be enhanced by merging the two entities. This research aims to provide a broader understanding of how closer interactions between these two entities can facilitate more optimal goal-directed performance through the use of artificial extensions of the human body. Such extensions may assist us in adapting to and manipulating our environments in a more effective way than any system known today. To demonstrate this concept, we have developed a simulation in which a semi-interactive virtual spider can be navigated through an environment consisting of several obstacles and a virtual predator capable of killing the spider. The virtual spider can be navigated through three different control systems that can be used to assist in optimising overall goal-directed performance. The first two control systems use an onscreen button interface and a touch sensor, respectively, to facilitate human navigation of the spider. The third control system is an autonomous navigation system using machine intelligence embedded in the spider. This system enables the spider to navigate and react to changes in its local environment. The results of this study indicate that machines should be allowed to override human control in order to maximise the benefits of collaboration between man and machine. This research further indicates that the development of strong machine intelligence, sensor systems that engage all human senses, extra-sensory input systems, physical remote manipulators, multiple intelligent extensions of the human body, as well as a tighter symbiosis between man and machine, can support an upgrade of the human form.
The influence of negative training set size on machine learning-based virtual screening.
Kurczab, Rafał; Smusz, Sabina; Bojarski, Andrzej J
2014-01-01
The paper presents a thorough analysis of the influence of the number of negative training examples on the performance of machine learning methods. The impact of this rather neglected aspect of applying machine learning methods was examined for sets containing a fixed number of positive and a varying number of negative examples randomly selected from the ZINC database. An increase in the ratio of positive to negative training instances was found to greatly influence most of the investigated evaluation parameters of ML methods in simulated virtual screening experiments. In a majority of cases, substantial increases in precision and MCC were observed in conjunction with some decreases in hit recall. The analysis of the dynamics of those variations lets us recommend an optimal composition of training data. The study was performed on several protein targets, 5 machine learning algorithms (SMO, Naïve Bayes, Ibk, J48 and Random Forest) and 2 types of molecular fingerprints (MACCS and CDK FP). The most effective classification was provided by the combination of CDK FP with the SMO or Random Forest algorithms. The Naïve Bayes models appeared to be hardly sensitive to changes in the number of negative instances in the training set. In conclusion, the ratio of positive to negative training instances should be taken into account during the preparation of machine learning experiments, as it might significantly influence the performance of a particular classifier. What is more, the optimization of negative training set size can be applied as a boosting-like approach in machine learning-based virtual screening.
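The experimental design, a fixed positive set, a varying negative set, and precision/MCC evaluated on a held-out set, can be reproduced in miniature. The sketch below uses synthetic 2-D points and a nearest-centroid classifier purely for illustration; the paper itself used molecular fingerprints from ZINC with SMO, Naïve Bayes, Ibk, J48 and Random Forest:

```python
import math
import random

def make(n, center, spread, rng):
    """Draw n Gaussian points around a class center."""
    return [[rng.gauss(c, spread) for c in center] for _ in range(n)]

def centroid(pts):
    return [sum(col) / len(pts) for col in zip(*pts)]

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) or 1.0
    return (tp * tn - fp * fn) / denom

def run(n_neg_train, seed=0):
    """Fixed positives, n_neg_train negatives; evaluate precision and
    MCC of a nearest-centroid classifier on a fixed test set."""
    rng = random.Random(seed)
    train_pos = make(50, [0.0, 0.0], 1.0, rng)
    train_neg = make(n_neg_train, [2.0, 2.0], 1.0, rng)
    cp, cn = centroid(train_pos), centroid(train_neg)
    test = [(x, 1) for x in make(200, [0.0, 0.0], 1.0, rng)] + \
           [(x, 0) for x in make(200, [2.0, 2.0], 1.0, rng)]
    tp = tn = fp = fn = 0
    for x, y in test:
        pred = 1 if math.dist(x, cp) < math.dist(x, cn) else 0
        if pred and y: tp += 1
        elif pred and not y: fp += 1
        elif not pred and y: fn += 1
        else: tn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    return precision, mcc(tp, tn, fp, fn)

p_small, m_small = run(10)    # few negatives in training
p_large, m_large = run(200)   # many negatives in training
```

Sweeping `n_neg_train` and plotting precision, recall and MCC reproduces the kind of composition analysis the paper performs at toy scale.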
Enterprise Cloud Architecture for Chinese Ministry of Railway
NASA Astrophysics Data System (ADS)
Shan, Xumei; Liu, Hefeng
Enterprises like the PRC Ministry of Railways (MOR) face challenges ranging from a highly distributed computing environment to low utilization of legacy systems, and cloud computing is increasingly regarded as a workable solution to address this. This article describes a full-scale cloud solution with Intel Tashi as the virtual machine infrastructure layer, Hadoop HDFS as the computing platform, and a self-developed SaaS interface, gluing the virtual machines and HDFS together with the Xen hypervisor. As a result, on-demand application and deployment of computing tasks have been achieved for MOR's real working scenarios, as presented at the end of the article.
Sánchez-Rodríguez, Jorge E; De Santiago-Castillo, José A; Contreras-Vite, Juan Antonio; Nieto-Delgado, Pablo G; Castro-Chong, Alejandra; Arreola, Jorge
2012-01-01
The interaction of either H+ or Cl− ions with the fast gate is the major source of voltage (Vm) dependence in ClC Cl− channels. However, the mechanism by which these ions confer Vm dependence to the ClC-2 Cl− channel remains unclear. By determining the Vm dependence of normalized conductance (Gnorm(Vm)), an index of open probability, ClC-2 gating was studied at different [H+]i, [H+]o and [Cl−]i. Changing [H+]i by five orders of magnitude whilst [Cl−]i/[Cl−]o = 140/140 or 10/140 mM slightly shifted Gnorm(Vm) to negative Vm without altering the onset kinetics; however, channel closing was slower at acidic pHi. A similar change in [H+]o with [Cl−]i/[Cl−]o = 140/140 mM enhanced Gnorm in a bell-shaped manner and shifted Gnorm(Vm) curves to positive Vm. Importantly, Gnorm was >0 with [H+]o = 10−10 M but channel closing was slower when [H+]o or [Cl−]i increased, implying that ClC-2 was opened without protonation and that external H+ and/or internal Cl− ions stabilized the open conformation. The analysis of kinetics and steady-state properties at different [H+]o and [Cl−]i was carried out using a gating scheme coupled to Cl− permeation. Unlike previous results showing Vm-dependent protonation, our analysis revealed that fast gate protonation was Vm and Cl− independent and the equilibrium constant for the closed–open transition of unprotonated channels was facilitated by elevated [Cl−]i in a Vm-dependent manner. Hence a Vm dependence of pore occupancy by Cl− induces a conformational change in unprotonated closed channels, before the pore opens, and the open conformation is stabilized by Cl− occupancy and Vm-independent protonation. PMID:22753549
Sun, Baocun; Zhang, Danfang; Zhao, Nan; Zhao, Xiulan
2017-05-02
Vasculogenic mimicry (VM) is a functional microcirculation pattern in malignant tumors accompanied by endothelium-dependent vessels and mosaic vessels. VM has been identified in more than 15 solid tumor types and is associated with poor differentiation, late clinical stage and poor prognosis. Classic anti-angiogenic agents do not target endothelium-dependent vessels and are not efficacious against tumors exhibiting VM. Further insight into the molecular signaling that triggers and promotes VM formation could improve anti-angiogenic therapeutics. Recent studies have shown that cancer stem cells (CSCs) and epithelium-to-endothelium transition (EET), a subtype of epithelial-to-mesenchymal transition (EMT), accelerate VM formation by stimulating tumor cell plasticity, remodeling the extracellular matrix (ECM) and connecting VM channels with host blood vessels. VM channel-lining cells originate from CSCs due to expression of EMT inducers such as Twist1, which promote EET and ECM remodeling. Hypoxia and high interstitial fluid pressure in the tumor microenvironment induce a specific type of cell death, linearly patterned programmed cell necrosis (LPPCN), which spatially guides VM and endothelium-dependent vessel networks. This review focuses on the roles of CSCs and EET in VM, and on possible novel anti-angiogenic strategies against alternative tumor vascularization.
Virtual Reality Enhanced Instructional Learning
ERIC Educational Resources Information Center
Nachimuthu, K.; Vijayakumari, G.
2009-01-01
Virtual Reality (VR) is the creation of a virtual 3D world in which one can feel and sense the world as if it were real. It allows engineers to design machines and educationists to design AV [audiovisual] equipment in real time, in a 3-dimensional hologram, as if the actual material were being made and worked upon. VR allows a least-cost (energy…
Virtual manufacturing work cell for engineering
NASA Astrophysics Data System (ADS)
Watanabe, Hideo; Ohashi, Kazushi; Takahashi, Nobuyuki; Kato, Kiyotaka; Fujita, Satoru
1997-12-01
The life cycles of products have been getting shorter. To keep up with this rapid turnover, manufacturing systems must be changed frequently as well. Engineering a manufacturing system involves several tasks, such as process planning, layout design, programming, and final testing on the actual machines. This development takes a long time and is expensive. To aid the engineering process, we have developed the virtual manufacturing workcell (VMW). This paper describes the concept of the VMW and a design method, computer-aided manufacturing engineering using the VMW (CAME-VMW), covering the engineering tasks above. The VMW holds all design data and reproduces the behavior of equipment and devices using a simulator with both logical and physical functionality: the former simulates sequence control, and the latter simulates motion control and shape movement in 3D space. The simulator can execute the same control software written for the actual machines, so behavior can be verified precisely before the workcell is constructed. The VMW creates an engineering workspace for several engineers and offers debugging tools such as virtual equipment and virtual controllers. We applied the VMW to the development of a transfer workcell for a vaporization machine in an actual manufacturing system producing plasma display panels (PDPs) and confirmed its effectiveness.
Identification of Program Signatures from Cloud Computing System Telemetry Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nichols, Nicole M.; Greaves, Mark T.; Smith, William P.
Malicious cloud computing activity can take many forms, including running unauthorized programs in a virtual environment. Detecting these malicious activities while preserving the privacy of the user is an important research challenge. Prior work has shown the potential viability of using cloud service billing metrics as a mechanism for proxy identification of malicious programs. Previously, this novel detection method had been evaluated only in a synthetic and isolated computational environment. In this paper we demonstrate the ability of billing metrics to identify programs in an active cloud computing environment, including multiple virtual machines running on the same hypervisor. The open source cloud computing platform OpenStack is used for private cloud management at Pacific Northwest National Laboratory. OpenStack provides a billing tool (Ceilometer) to collect system telemetry measurements. We identify four different programs running on four virtual machines under the same cloud user account. Programs were identified with up to 95% accuracy. This accuracy depends on the distinctiveness of the telemetry measurements for the specific programs we tested. Future work will examine the scalability of this approach to a larger selection of programs, to better understand the uniqueness needed to identify a program. Additionally, future work should address the separation of signatures when multiple programs are running on the same virtual machine.
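The proxy-identification idea can be sketched as a nearest-centroid classifier over telemetry feature vectors. The program names, features and values below are hypothetical illustrations, not the signatures or Ceilometer metrics from the study.

```python
import math

# Hypothetical telemetry "signatures": mean (cpu_util, disk_io, net_io)
# per program, normalized to [0, 1]. Names and values are illustrative.
signatures = {
    "compress":  (0.90, 0.70, 0.10),
    "webserver": (0.20, 0.10, 0.80),
    "database":  (0.50, 0.90, 0.30),
}

def identify(sample):
    """Return the program whose signature is nearest (Euclidean) to the
    observed telemetry vector."""
    return min(signatures, key=lambda name: math.dist(signatures[name], sample))

print(identify((0.85, 0.75, 0.15)))
```

An observed vector close to the "compress" centroid is matched to it; in practice, accuracy hinges on how well separated the program centroids are, which mirrors the distinctiveness caveat in the abstract.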
Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system
NASA Astrophysics Data System (ADS)
Meier, Konrad; Fleig, Georg; Hauth, Thomas; Janczyk, Michael; Quast, Günter; von Suchodoletz, Dirk; Wiebelt, Bernd
2016-10-01
Experiments in high-energy physics (HEP) rely on elaborate hardware, software and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and the SuperKEKB accelerators, respectively. These detectors share the requirement that enormous amounts of measurement data must be processed and analyzed, and that a comparable number of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves to be challenging for HEP groups, due to their specialized software setup which includes a custom OS (often Scientific Linux), libraries and applications. To overcome this hurdle, the EKP and the data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, in this hybrid setup no static partitioning of the cluster into a physical and a virtualized segment is required. As a unique feature, the placement of the virtual machine on the cluster nodes is scheduled by Moab and the job lifetime is coupled to the lifetime of the virtual machine.
This allows for a seamless integration with the jobs sent by other user groups and honors the fairshare policies of the cluster. The developed thin integration layer between OpenStack and Moab can be adapted to other batch servers and virtualization systems, making the concept also applicable for other cluster operators. This contribution will report on the concept and implementation of an OpenStack-virtualized cluster used for HEP workflows. While the full cluster will be installed in spring 2016, a test-bed setup with 800 cores has been used to study the overall system performance and dedicated HEP jobs were run in a virtualized environment over many weeks. Furthermore, the dynamic integration of the virtualized worker nodes, depending on the workload at the institute's computing system, will be described.
Borba, Márcia; Duan, Yuanyuan; Griggs, Jason A; Cesar, Paulo F; Della Bona, Álvaro
2015-04-01
The effect of the ceramic infrastructure (IS) on the failure behavior and stress distribution of fixed partial dentures (FPDs) was evaluated. Twenty FPDs with a connector cross-section of 16 mm² were produced for each IS and veneered with porcelain: (YZ) Vita In-Ceram YZ/Vita VM9 porcelain; (IZ) Vita In-Ceram Zirconia/Vita VM7 porcelain; (AL) Vita In-Ceram AL/Vita VM7 porcelain. Two experimental conditions were evaluated (n = 10). For control specimens, load was applied in the center of the pontic at 0.5 mm/min until failure, using a universal testing machine, in 37°C deionized water. For mechanical cycling (MC) specimens, FPDs were subjected to MC (2 Hz, 140 N, 10⁶ cycles) and subsequently tested as described for the control group. For YZ, an extra group of 10 FPDs was built with a connector cross-section of 9 mm² and tested until failure. Fractography and FEA were performed. Data were analyzed by ANOVA and Tukey's test (α = 0.05). YZ16 showed the greatest mean fracture load, followed by YZ16-MC. Specimens from groups YZ9, IZ16, IZ16-MC, AL16 and AL16-MC showed no significant difference in fracture load. The failure behavior and stress distribution of FPDs were influenced by the type of IS. AL and IZ FPDs showed similar fracture load values but different failure modes and stress distributions. YZ showed the best mechanical behavior and may be considered the material of choice for posterior FPDs, as it was possible to obtain good mechanical performance even with a smaller connector dimension (9 mm²). Copyright © 2015 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
Mehrabi, Sara; Adami, Alessia; Ventriglia, Anna; Zantedeschi, Lisa; Franchi, Massimo; Manfredi, Riccardo
2013-10-01
We evaluated the evolution of ventriculomegaly (VM) by comparing foetal magnetic resonance imaging (MRI) with postnatal transcranial ultrasonography (US) and/or encephalic MRI. Between January 2006 and April 2011, 70 foetuses with a mean gestational age of 28 weeks and 4 days (range, 18-36) weeks with VM on foetal MRI were assessed in this prospective study. Half-Fourier rapid acquisition with relaxation enhancement (RARE) T2-weighted, T1-weighted and diffusion-weighted (DWI) images along the three orthogonal planes according to the longitudinal axis of the mother, and subsequently of the foetal brain, were acquired. Quantitative image analysis included the transverse diameter of lateral ventricles in axial and coronal planes. Qualitative image analysis included searching for associated structural anomalies. Thirty-four of 70 patients with a diagnosis of VM on foetal MRI underwent postnatal imaging. Twenty-five of those 34 (73%) had mild, four (12%) had moderate and five (15%) had severe VM on MRI. Normalisation of the diameter of lateral ventricles was observed in 16 of the 34 (47%) newborns. Among these 16, 13 (81%) had mild and three (19%) had moderate VM (two isolated and one associated VM). VM stabilisation was observed in 16 of the 34 (47%) babies. Among them, 11 (69%) had mild (eight isolated and three associated), one (6%) had moderate associated and four (25%) had severe associated VM. Progression from mild to severe (associated) VM was observed in two of the 34 (6%) babies. The absence of associated anomalies and a mild VM are favourable prognostic factors in the evolution of VM.
The Virtual Xenbase: transitioning an online bioinformatics resource to a private cloud
Karimi, Kamran; Vize, Peter D.
2014-01-01
As a model organism database, Xenbase has been providing informatics and genomic data on Xenopus (Silurana) tropicalis and Xenopus laevis frogs for more than a decade. The Xenbase database contains curated, as well as community-contributed and automatically harvested literature, gene and genomic data. A GBrowse genome browser, a BLAST+ server and stock center support are available on the site. When this resource was first built, all software services and components in Xenbase ran on a single physical server, with inherent reliability, scalability and inter-dependence issues. Recent advances in networking and virtualization techniques allowed us to move Xenbase to a virtual environment, and more specifically to a private cloud. To do so we decoupled the different software services and components, such that each would run on a different virtual machine. In the process, we also upgraded many of the components. The resulting system is faster and more reliable. System maintenance is easier, as individual virtual machines can now be updated, backed up and changed independently. We are also experiencing more effective resource allocation and utilization. Database URL: www.xenbase.org PMID:25380782
Naval Applications of Virtual Reality,
1993-01-01
Gembicki, Mark; Rousseau, David. Expert Virtual Reality Special Report, pp. 67-72. Subject terms: man-machine interface; virtual reality; decision support; collective and individual performance. [Remainder of the scanned report text is illegible.]
BioImg.org: A Catalog of Virtual Machine Images for the Life Sciences
Dahlö, Martin; Haziza, Frédéric; Kallio, Aleksi; Korpelainen, Eija; Bongcam-Rudloff, Erik; Spjuth, Ola
2015-01-01
Virtualization is becoming increasingly important in bioscience, enabling assembly and provisioning of complete computer setups, including operating system, data, software, and services packaged as virtual machine images (VMIs). We present an open catalog of VMIs for the life sciences, where scientists can share information about images and optionally upload them to a server equipped with a large file system and fast Internet connection. Other scientists can then search for and download images that can be run on the local computer or in a cloud computing environment, providing easy access to bioinformatics environments. We also describe applications where VMIs aid life science research, including distributing tools and data, supporting reproducible analysis, and facilitating education. BioImg.org is freely available at: https://bioimg.org. PMID:26401099
Access, Equity, and Opportunity. Women in Machining: A Model Program.
ERIC Educational Resources Information Center
Warner, Heather
The Women in Machining (WIM) program is a Machine Action Project (MAP) initiative that was developed in response to a local skilled metalworking labor shortage, despite a virtual absence of women and people of color from area shops. The project identified post-war stereotypes and other barriers that must be addressed if women are to have an equal…
Toxicity and medical countermeasure studies on the organophosphorus nerve agents VM and VX.
Rice, Helen; Dalton, Christopher H; Price, Matthew E; Graham, Stuart J; Green, A Christopher; Jenner, John; Groombridge, Helen J; Timperley, Christopher M
2015-04-08
To support the effort to eliminate the Syrian Arab Republic chemical weapons stockpile safely, there was a requirement to provide scientific advice based on experimentally derived information on both toxicity and medical countermeasures (MedCM) in the event of exposure to VM, VX or VM-VX mixtures. Complementary in vitro and in vivo studies were undertaken to inform that advice. The penetration rate of neat VM was not significantly different from that of neat VX, through either guinea pig or pig skin in vitro. The presence of VX did not affect the penetration rate of VM in mixtures of various proportions. A lethal dose of VM was approximately twice that of VX in guinea pigs poisoned via the percutaneous route. There was no interaction in mixed agent solutions which altered the in vivo toxicity of the agents. Percutaneous poisoning by VM responded to treatment with standard MedCM, although complete protection was not achieved.
A genetic algorithm for a bi-objective mathematical model for dynamic virtual cell formation problem
NASA Astrophysics Data System (ADS)
Moradgholi, Mostafa; Paydar, Mohammad Mahdi; Mahdavi, Iraj; Jouzdani, Javid
2016-09-01
Nowadays, with the increasing pressure of the competitive business environment and the demand for diverse products, manufacturers are forced to seek solutions that reduce production costs and raise product quality. The cellular manufacturing system (CMS), as a means to this end, has been a point of attraction for both researchers and practitioners. Limitations of the cell formation problem (CFP), one of the important topics in CMS, have led to the introduction of the virtual CMS (VCMS). This research addresses a bi-objective dynamic virtual cell formation problem (DVCFP) with the objective of finding the optimal formation of cells, considering material handling costs, fixed machine installation costs, and the variable production costs of machines and workforce. Furthermore, we consider different skills on different machines in workforce assignment over a multi-period planning horizon. The bi-objective model is transformed into a single-objective fuzzy goal programming model and, to show its performance, numerical examples are solved using the LINGO software. In addition, a genetic algorithm (GA) is customized to tackle large-scale instances of the problem and demonstrate the performance of the solution method.
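A minimal sketch of the GA idea for cell formation follows, assuming a chromosome that assigns each machine to a cell and a fitness that penalizes intercellular material flow plus cell-capacity violations. The flow matrix and GA parameters are illustrative; the paper's actual model additionally covers workforce skills, installation costs and multiple planning periods.

```python
import random

random.seed(1)

N_MACHINES, N_CELLS, CELL_CAP = 8, 2, 4

# Toy symmetric inter-machine flow matrix (values are illustrative).
flow = [[0] * N_MACHINES for _ in range(N_MACHINES)]
for i, j, f in [(0, 1, 9), (2, 3, 8), (4, 5, 7), (6, 7, 9), (1, 4, 2), (3, 6, 1)]:
    flow[i][j] = flow[j][i] = f

def cost(chrom):
    # Material-handling cost: flow crossing cell boundaries,
    # plus a heavy penalty for exceeding the cell capacity.
    inter = sum(flow[i][j]
                for i in range(N_MACHINES) for j in range(i + 1, N_MACHINES)
                if chrom[i] != chrom[j])
    overload = sum(max(0, chrom.count(c) - CELL_CAP) for c in range(N_CELLS))
    return inter + 100 * overload

def ga(pop_size=30, generations=60):
    pop = [[random.randrange(N_CELLS) for _ in range(N_MACHINES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        elite = pop[:pop_size // 2]                 # elitist selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, N_MACHINES)   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:               # mutation
                child[random.randrange(N_MACHINES)] = random.randrange(N_CELLS)
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

best = ga()
print(best, cost(best))
```

For this toy instance the partition {0, 1, 4, 5} / {2, 3, 6, 7} has zero intercellular flow, and the GA converges to a feasible low-cost assignment within a few dozen generations.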
DOT National Transportation Integrated Search
1996-06-01
The purpose of this report is to document the preparation of the 1994 Table VM-1, including data sources, assumptions, and estimating procedures. Table VM-1 describes vehicle distance traveled in miles, by highway category and vehicle type. VM-1 depi...
Khateb, Mohamed; Schiller, Jackie; Schiller, Yitzhak
2017-01-06
The primary vibrissae motor cortex (vM1) is responsible for generating whisking movements. In parallel, vM1 also sends information directly to the sensory barrel cortex (vS1). In this study, we investigated the effects of vM1 activation on processing of vibrissae sensory information in vS1 of the rat. To dissociate the vibrissae sensory-motor loop, we optogenetically activated vM1 and independently passively stimulated principal vibrissae. Optogenetic activation of vM1 supra-linearly amplified the response of vS1 neurons to passive vibrissa stimulation in all cortical layers measured. Maximal amplification occurred when onset of vM1 optogenetic activation preceded vibrissa stimulation by 20 ms. In addition to amplification, vM1 activation also sharpened angular tuning of vS1 neurons in all cortical layers measured. Our findings indicated that in addition to output motor signals, vM1 also sends preparatory signals to vS1 that serve to amplify and sharpen the response of neurons in the barrel cortex to incoming sensory input signals.
Investigation of vasculogenic mimicry in intracranial hemangiopericytoma.
Zhang, Zhen; Han, Yun; Zhang, Keke; Teng, Liangzhu
2011-01-01
Vasculogenic mimicry (VM) has increasingly been recognized as a form of angiogenesis. Previous studies have shown that the existence of VM is associated with poor clinical prognosis in certain malignant tumors. However, whether VM is present and clinically significant in intracranial hemangiopericytoma (HPC) is unknown. The present study was therefore designed to examine the expression of VM in intracranial HPC and its correlation with matrix metalloprotease-2 (MMP-2) and vascular endothelial growth factor (VEGF). A total of 17 intracranial HPC samples, along with complete clinical and pathological data, were collected for our study. Immunohistochemistry was performed to stain tissue sections for CD34, periodic acid-Schiff, VEGF and MMP-2. The levels of VEGF and MMP-2 were compared between tumor samples with and without VM. The results showed that VM existed in 12 of 17 (70.6%) intracranial HPC samples. The presence of VM in tumors was associated with tumor recurrence (P<0.05) and expression of MMP-2 (P<0.05). However, there was no difference in the expression of VEGF between groups with and without VM.
Sawni, Anju; Cederna-Meko, Crystal; LaChance, Jenny L; Buttigieg, Angie; Le, Quoc; Nunuk, Irene; Ang, Joyce; Burrell, Katherine M
2017-02-01
We examined the feasibility and perception of cell-based (texting, voicemail [VM], and email/social media) health-related communication with adolescents in Genesee County, MI, where 22% of residents live below the poverty level. Results of an anonymous survey found that 86% of respondents owned a cell phone, 87% had data, 96% texted, 90.5% emailed/used social media, and 68% had VM. Most adolescents were interested in cell-based communication via texting (52%), VM (37%), and email/social media (31%). Interest in types of health communication included appointment reminders (99% texting; 94% VM; 95% email/social media), shot reminders (84.5% texting; 74.5% VM; 81% email/social media), calls for test results (71.5% texting; 75% VM; 65% email/social media), medication reminders (63% texting; 54% VM; 58% email/social media), and health tips (36% texting; 18.5% VM; 73% email/social media). Cell-based health-related communication with adolescents is feasible even within low socioeconomic status populations, primarily via texting. Health providers should embrace cell-based patient communication.
Ventromedial prefrontal cortex encodes emotional value.
Winecoff, Amy; Clithero, John A; Carter, R McKell; Bergman, Sara R; Wang, Lihong; Huettel, Scott A
2013-07-03
The ventromedial prefrontal cortex (vmPFC) plays a critical role in processing appetitive stimuli. Recent investigations have shown that reward value signals in the vmPFC can be altered by emotion regulation processes; however, to what extent the processing of positive emotion relies on neural regions implicated in reward processing is unclear. Here, we investigated the effects of emotion regulation on the valuation of emotionally evocative images. Two independent experimental samples of human participants performed a cognitive reappraisal task while undergoing fMRI. The experience of positive emotions activated the vmPFC, whereas the regulation of positive emotions led to relative decreases in vmPFC activation. During the experience of positive emotions, vmPFC activation tracked participants' own subjective ratings of the valence of stimuli. Furthermore, vmPFC activation also tracked normative valence ratings of the stimuli when participants were asked to experience their emotions, but not when asked to regulate them. A separate analysis of the predictive power of vmPFC on behavior indicated that even after accounting for normative stimulus ratings and condition, increased signal in the vmPFC was associated with more positive valence ratings. These results suggest that the vmPFC encodes a domain-general value signal that tracks the value of not only external rewards, but also emotional stimuli.
McCormick, Cornelia; Ciaramelli, Elisa; De Luca, Flavia; Maguire, Eleanor A
2018-03-15
The hippocampus and ventromedial prefrontal cortex (vmPFC) are closely connected brain regions whose functions are still debated. In order to offer a fresh perspective on understanding the contributions of these two brain regions to cognition, in this review we considered cognitive tasks that usually elicit deficits in hippocampal-damaged patients (e.g., autobiographical memory retrieval), and examined the performance of vmPFC-lesioned patients on these tasks. We then took cognitive tasks where performance is typically compromised following vmPFC damage (e.g., decision making), and looked at how these are affected by hippocampal lesions. Three salient motifs emerged. First, there are surprising gaps in our knowledge about how hippocampal and vmPFC patients perform on tasks typically associated with the other group. Second, while hippocampal or vmPFC damage seems to adversely affect performance on so-called hippocampal tasks, the performance of hippocampal and vmPFC patients clearly diverges on classic vmPFC tasks. Third, although performance appears analogous on hippocampal tasks, on closer inspection, there are significant disparities between hippocampal and vmPFC patients. Based on these findings, we suggest a tentative hierarchical model to explain the functions of the hippocampus and vmPFC. We propose that the vmPFC initiates the construction of mental scenes by coordinating the curation of relevant elements from neocortical areas, which are then funneled into the hippocampus to build a scene. The vmPFC then engages in iterative re-initiation via feedback loops with neocortex and hippocampus to facilitate the flow and integration of the multiple scenes that comprise the coherent unfolding of an extended mental event. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
Wang, Zhiheng; Qi, Weibo; Zhu, Yong; Lin, Ruobai
2009-10-20
Preoperative lung cancer with mediastinal lymph node metastasis can be diagnosed by video mediastinoscopy (VM) and CT. This study explored the value of VM and CT in the N staging of preoperative lung cancer and the difference between the two methods. Forty-eight cases diagnosed with lung cancer by CT or PET-CT were examined by VM. The sensitivity, specificity, validity, positive predictive value and negative predictive value of VM and CT were calculated against the postoperative pathological reports, and the difference between VM and CT in diagnosing mediastinal lymph node metastasis was assessed. (1) Under VM examination, 31 patients with negative findings proceeded directly to operation; 14 patients with N2 disease received 2 courses of neoadjuvant chemotherapy before operation; 3 patients with N3 disease received chemotherapy and/or radiotherapy. (2) Forty-one cases with a final diagnosis of lung cancer were used as samples to calculate the sensitivity, specificity, validity, positive predictive value and negative predictive value of VM: 93.3%, 100%, 97.6%, 100% and 96.3%, respectively, versus 66.7%, 53.8%, 58.5%, 45.5% and 73.7% for CT (Chi-square=4.083, P=0.039); the difference between VM and CT was statistically significant. (3) In this group, the complication rate of VM was 2.08% (1/48); the single complication was a pneumothorax. VM is superior to CT in the N staging of preoperative lung cancer; owing to its safety and effectiveness, VM will be widely used in the field of thoracic surgery.
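The reported VM percentages are mutually consistent with a single 2×2 table: 14 true positives, 1 false negative, 26 true negatives and 0 false positives among the 41 confirmed cases. The sketch below recomputes the metrics from those counts; the counts are reconstructed from the percentages, not taken from the paper's tables.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard screening-test metrics from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),  # "validity" above
        "ppv":         tp / (tp + fp),
        "npv":         tn / (tn + fn),
    }

# Counts consistent with the reported VM figures
# (14/15 N2+ cases detected, no false positives among 26 node-negative cases).
m = diagnostic_metrics(tp=14, fp=0, tn=26, fn=1)
print({k: round(v, 3) for k, v in m.items()})
```

Rounded to one decimal as percentages, these reproduce 93.3%, 100%, 97.6%, 100% and 96.3%.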
Bhatt, Chhavi Raj; Thielens, Arno; Redmayne, Mary; Abramson, Michael J; Billah, Baki; Sim, Malcolm R; Vermeulen, Roel; Martens, Luc; Joseph, Wout; Benke, Geza
2016-01-01
The aims of this study were to: i) measure personal exposure in the Global System for Mobile communications (GSM) 900MHz downlink (DL) frequency band with two systems of exposimeters, a personal distributed exposimeter (PDE) and a pair of ExpoM-RFs, ii) compare the GSM 900MHz DL exposures across various microenvironments in Australia and Belgium, and iii) evaluate the correlation between the PDE and ExpoM-RFs measurements. Personal exposure data were collected using the PDE and two ExpoM-RFs simultaneously across 34 microenvironments (17 each in Australia and Belgium) located in urban, suburban and rural areas. Summary statistics of the electric field strengths (V/m) were computed and compared across similar microenvironments in Australia and Belgium. The personal exposures across urban microenvironments were higher than those in the rural or suburban microenvironments. Likewise, the exposure levels across the outdoor were higher than those for indoor microenvironments. The five highest median exposure levels were: city centre (0.248V/m), bus (0.124V/m), railway station (0.105V/m), mountain/forest (rural) (0.057V/m), and train (0.055V/m) [Australia]; and bicycle (urban) (0.238V/m), tram station (0.238V/m), city centre (0.156V/m), residential outdoor (urban) (0.139V/m) and park (0.124V/m) [Belgium]. Exposures in the GSM900 MHz frequency band across most of the microenvironments in Australia were significantly lower than the exposures across the microenvironments in Belgium. Overall correlations between the PDE and the ExpoM-RFs measurements were high. The measured exposure levels were far below the general public reference levels recommended in the guidelines of the ICNIRP and the ARPANSA. Copyright © 2016 Elsevier Ltd. All rights reserved.
Yim, Wen-Wai; Chien, Shu; Kusumoto, Yasuyuki; Date, Susumu; Haga, Jason
2010-01-01
Large-scale in-silico screening is a necessary part of drug discovery and Grid computing is one answer to this demand. A disadvantage of using Grid computing is the heterogeneous computational environments characteristic of a Grid. In our study, we have found that for the molecular docking simulation program DOCK, different clusters within a Grid organization can yield inconsistent results. Because DOCK in-silico virtual screening (VS) is currently used to help select chemical compounds to test with in-vitro experiments, such differences have little effect on the validity of using virtual screening before subsequent steps in the drug discovery process. However, it is difficult to predict whether the accumulation of these discrepancies over sequentially repeated VS experiments will significantly alter the results if VS is used as the primary means for identifying potential drugs. Moreover, such discrepancies may be unacceptable for other applications requiring more stringent thresholds. This highlights the need for establishing a more complete solution to provide the best scientific accuracy when executing an application across Grids. One possible solution to platform heterogeneity in DOCK performance explored in our study involved the use of virtual machines as a layer of abstraction. This study investigated the feasibility and practicality of using virtual machine and recent cloud computing technologies in a biological research application. We examined the differences and variations of DOCK VS variables, across a Grid environment composed of different clusters, with and without virtualization. The uniform computer environment provided by virtual machines eliminated inconsistent DOCK VS results caused by heterogeneous clusters; however, the execution time for the DOCK VS increased.
In our particular experiments, overhead costs were found to be an average of 41% and 2% in execution time for two different clusters, while the actual magnitudes of the execution time costs were minimal. Despite the increase in overhead, virtual clusters are an ideal solution for Grid heterogeneity. With greater development of virtual cluster technology in Grid environments, the problem of platform heterogeneity may be eliminated through virtualization, allowing greater usage of VS, and will benefit all Grid applications in general.
An Argument for Partial Admissibility of Polygraph Results in Trials by Courts-Martial
1990-04-01
FUNDAMENTALS OF THE POLYGRAPH TECHNIQUE: A. The Polygraph Machine; 1. The Cardiosphygmograph; 2. The Pneumograph; 3. The Galvanometer; 4. The ... Anyone observing a polygraph machine for the first time could easily conclude it is a survivor of the Spanish Inquisition. The lengths of ... wire and coils get the immediate attention of the subject. However, the various polygraph machines in use today cause virtually no discomfort. Several
Interactive Analysis of General Beam Configurations using Finite Element Methods and JavaScript
NASA Astrophysics Data System (ADS)
Hernandez, Christopher
Advancements in computer technology have contributed to the widespread practice of modelling and solving engineering problems through the use of specialized software. The wide use of engineering software comes with the disadvantage to the user of the costs of required software licenses. The creation of accurate, trusted, and freely available applications capable of conducting meaningful analysis of engineering problems is a way to mitigate the costs associated with everyday engineering computations. Writing applications in the JavaScript programming language allows them to run within any browser, without the need to install specialized software, since all internet browsers are equipped with virtual machines (VMs) that execute JavaScript code. The objective of this work is the development of an application that performs the analysis of a completely general beam through the finite element method. The app is written in JavaScript and embedded in a web page so it can be downloaded and executed by any user with an internet connection. This application allows the user to analyze any uniform or non-uniform beam, with any combination of applied forces, moments, distributed loads, and boundary conditions. Outputs for this application include lists of the beam deflections and slopes, as well as deflection and slope graphs, bending stress distributions, and shear and moment diagrams. To validate the methodology of the GBeam finite element app, its results are verified against results obtained from two other established finite element solvers for fifteen separate test cases.
Teaching Cybersecurity Using the Cloud
ERIC Educational Resources Information Center
Salah, Khaled; Hammoud, Mohammad; Zeadally, Sherali
2015-01-01
Cloud computing platforms can be highly attractive to conduct course assignments and empower students with valuable and indispensable hands-on experience. In particular, the cloud can offer teaching staff and students (whether local or remote) on-demand, elastic, dedicated, isolated, (virtually) unlimited, and easily configurable virtual machines.…
Ant-Based Cyber Defense (also known as
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glenn Fink, PNNL
2015-09-29
ABCD is a four-level hierarchy with human supervisors at the top, a top-level agent called a Sergeant controlling each enclave, Sentinel agents located at each monitored host, and mobile Sensor agents that swarm through the enclaves to detect cyber malice and misconfigurations. The code comprises four parts: (1) the core agent framework, (2) the user interface and visualization, (3) test-range software to create a network of virtual machines including a simulated Internet and user and host activity emulation scripts, and (4) a test harness to allow the safe running of adversarial code within the framework of monitored virtual machines.
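The four-level hierarchy can be pictured with a toy sketch. Everything below (class behavior, the "findings" field, the indicator strings) is invented for illustration; only the role names come from the record:

```python
class Sensor:
    """Mobile agent that swarms across hosts looking for one indicator."""
    def __init__(self, indicator):
        self.indicator = indicator

    def visit(self, host):
        # Report the indicator only if the host's state matches it.
        return self.indicator if self.indicator in host["findings"] else None

class Sentinel:
    """Resident agent on one monitored host; collects sensor reports."""
    def __init__(self, host):
        self.host = host
        self.reports = []

    def receive(self, sensor):
        hit = sensor.visit(self.host)
        if hit:
            self.reports.append(hit)

class Sergeant:
    """Enclave-level agent; rolls sentinel reports up to the supervisor."""
    def __init__(self, sentinels):
        self.sentinels = sentinels

    def summarize(self):
        return {s.host["name"]: list(s.reports) for s in self.sentinels}

hosts = [{"name": "web01", "findings": {"open-relay"}},
         {"name": "db01", "findings": set()}]
sentinels = [Sentinel(h) for h in hosts]
for sensor in (Sensor("open-relay"), Sensor("weak-password")):
    for s in sentinels:          # sensors "swarm" across every host
        s.receive(sensor)
summary = Sergeant(sentinels).summarize()
```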
Prototyping Faithful Execution in a Java virtual machine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tarman, Thomas David; Campbell, Philip LaRoche; Pierson, Lyndon George
2003-09-01
This report presents the implementation of a stateless scheme for Faithful Execution, the design for which is presented in a companion report, ''Principles of Faithful Execution in the Implementation of Trusted Objects'' (SAND 2003-2328). We added a simple cryptographic capability to an already simplified class loader and its associated Java Virtual Machine (JVM) to provide a byte-level implementation of Faithful Execution. The extended class loader and JVM we refer to collectively as the Sandia Faithfully Executing Java architecture (or JavaFE for short). This prototype is intended to enable exploration of more sophisticated techniques which we intend to implement in hardware.
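The core idea of the decrypting class loader can be illustrated with a conceptual Python analogue: code is stored encrypted at rest and only decrypted immediately before execution. The XOR "cipher" and loader interface below are illustrative stand-ins, not the cryptography or architecture of JavaFE:

```python
def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Toy symmetric transform standing in for a real cipher."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def load_faithfully(encrypted: bytes, key: bytes) -> dict:
    """Decrypt a module's source and execute it, returning its namespace.
    In JavaFE the analogous step happens byte-by-byte in the class
    loader/JVM; here we decrypt the whole unit at once for clarity."""
    source = xor_bytes(encrypted, key).decode("utf-8")
    namespace = {}
    exec(compile(source, "<protected>", "exec"), namespace)
    return namespace

key = b"secret"
plaintext = b"def answer():\n    return 42\n"
stored = xor_bytes(plaintext, key)      # what sits on disk: ciphertext
module = load_faithfully(stored, key)   # decrypt-then-execute
result = module["answer"]()
```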
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guan, Qiang
At exascale, the challenge becomes to develop applications that run at scale and use exascale platforms reliably, efficiently, and flexibly. Workflows become much more complex because they must seamlessly integrate simulation and data analytics. They must include down-sampling, post-processing, feature extraction, and visualization. Power and data transfer limitations require these analysis tasks to be run in-situ or in-transit. We expect successful workflows will comprise multiple linked simulations along with tens of analysis routines. Users will have limited development time at scale and, therefore, must have rich tools to develop, debug, test, and deploy applications. At this scale, successful workflows will compose linked computations from an assortment of reliable, well-defined computation elements, ones that can come and go as required, based on the needs of the workflow over time. We propose a novel framework that utilizes both virtual machines (VMs) and software containers to create a workflow system that establishes a uniform build and execution environment (BEE) beyond the capabilities of current systems. In this environment, applications will run reliably and repeatably across heterogeneous hardware and software. Containers, both commercial (Docker and Rocket) and open-source (LXC and LXD), define a runtime that isolates all software dependencies from the machine operating system. Workflows may contain multiple containers that run different operating systems, different software, and even different versions of the same software. We will run containers in open-source virtual machines (KVM) and emulators (QEMU) so that workflows run on any machine entirely in user-space. On this platform of containers and virtual machines, we will deliver workflow software that provides services, including repeatable execution, provenance, checkpointing, and future proofing.
We will capture provenance about how containers were launched and how they interact to annotate workflows for repeatable and partial re-execution. We will coordinate the physical snapshots of virtual machines with parallel programming constructs, such as barriers, to automate checkpoint and restart. We will also integrate with HPC-specific container runtimes to gain access to accelerators and other specialized hardware to preserve native performance. Containers will link development to continuous integration. When application developers check code in, it will automatically be tested on a suite of different software and hardware architectures.
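A container-per-task workflow of the kind proposed above can be sketched minimally. The task names, image tags, and dictionary schema below are hypothetical, and launching real containers is stubbed out; the point is only that each task pins its software environment to an image and the runner executes tasks in dependency order:

```python
# Hypothetical BEE-style workflow description: each task names the
# container image that fixes its environment and lists its dependencies.
workflow = {
    "simulate":  {"image": "centos:7",     "after": []},
    "extract":   {"image": "ubuntu:20.04", "after": ["simulate"]},
    "visualize": {"image": "ubuntu:20.04", "after": ["extract"]},
}

def run(workflow, launch):
    """Run each task once all of its dependencies have finished."""
    done, order = set(), []
    while len(done) < len(workflow):
        for name, task in workflow.items():
            if name not in done and all(d in done for d in task["after"]):
                launch(name, task["image"])  # would start Docker/QEMU here
                done.add(name)
                order.append(name)
    return order

log = []
order = run(workflow, lambda name, image: log.append((name, image)))
# order respects the dependency chain: simulate, then extract, then visualize
```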
Load Balancing in Multi Cloud Computing Environment with Genetic Algorithm
NASA Astrophysics Data System (ADS)
Vhansure, Fularani; Deshmukh, Apurva; Sumathy, S.
2017-11-01
Cloud is a pool of resources available on a pay-per-use model. It provides services to users, and demand for these services is increasing rapidly. Load balancing is an issue because a single resource cannot handle so many requests at a time; it is also known to be an NP-complete problem. In traditional systems, the objective functions consist of various parameter values to be maximised in order to achieve the best optimal individual solutions. One challenge arises when there are many parameters of solutions in the system space; another is to optimise objective functions that are much more complex. In this paper, various techniques to handle load balancing both virtually (VMs) and physically (nodes) using a genetic algorithm are discussed.
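A genetic algorithm for VM load balancing is commonly formulated as follows: a chromosome assigns each VM (by index) to a host, and fitness rewards an even spread of load across hosts. The sketch below uses that common formulation with illustrative operators and parameters; it is not the specific algorithm of the paper:

```python
import random

random.seed(0)
VM_LOADS = [8, 3, 7, 2, 5, 4]   # hypothetical CPU demand per VM
N_HOSTS = 3

def imbalance(chromosome):
    """Spread between the busiest and idlest host (lower is better)."""
    per_host = [0] * N_HOSTS
    for vm, host in enumerate(chromosome):
        per_host[host] += VM_LOADS[vm]
    return max(per_host) - min(per_host)

def evolve(generations=200, pop_size=30, mutation=0.2):
    pop = [[random.randrange(N_HOSTS) for _ in VM_LOADS]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=imbalance)
        survivors = pop[:pop_size // 2]            # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(VM_LOADS))
            child = a[:cut] + b[cut:]              # one-point crossover
            if random.random() < mutation:         # reassign one random VM
                child[random.randrange(len(child))] = random.randrange(N_HOSTS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=imbalance)

best = evolve()
# total load is 29 across 3 hosts, so a perfect spread is impossible;
# the GA should still settle on a near-balanced assignment
```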
Investigating the role of the ventromedial prefrontal cortex in the assessment of brands.
Santos, José Paulo; Seixas, Daniela; Brandão, Sofia; Moutinho, Luiz
2011-01-01
The ventromedial prefrontal cortex (vmPFC) is believed to be important in everyday preference judgments, processing emotions during decision-making. However, there is still controversy in the literature regarding the participation of the vmPFC. To further elucidate the contribution of the vmPFC in brand preference, we designed a functional magnetic resonance imaging (fMRI) study where 18 subjects assessed positive, indifferent, and fictitious brands. Also, both the period during and after the decision process were analyzed, hoping to unravel temporally the role of the vmPFC, using modeled and model-free fMRI analysis. Considering together the period before and after decision-making, there was activation of the vmPFC when comparing positive with indifferent or fictitious brands. However, when the decision-making period was separated from the moment after the response, and especially for positive brands, the vmPFC was more active after the choice than during the decision process itself, challenging some of the existing literature. The results of the present study support the notion that the vmPFC may be unimportant in the decision stage of brand preference, questioning theories that postulate that the vmPFC is in the origin of such a choice. Further studies are needed to investigate in detail why the vmPFC seems to be involved in brand preference only after the decision process.
Virtual Environment Training: Auxiliary Machinery Room (AMR) Watchstation Trainer.
ERIC Educational Resources Information Center
Hriber, Dennis C.; And Others
1993-01-01
Describes a project implemented at Newport News Shipbuilding that used Virtual Environment Training to improve the performance of submarine crewmen. Highlights include development of the Auxiliary Machine Room (AMR) Watchstation Trainer; Digital Video Interactive (DVI); screen layout; test design and evaluation; user reactions; authoring language;…
Detection of Answer Copying Based on the Structure of a High-Stakes Test
ERIC Educational Resources Information Center
Belov, Dmitry I.
2011-01-01
This article presents the Variable Match Index (VM-Index), a new statistic for detecting answer copying. The power of the VM-Index relies on two-dimensional conditioning as well as the structure of the test. The asymptotic distribution of the VM-Index is analyzed by reduction to Poisson trials. A computational study comparing the VM-Index with the…
Design and fabrication of complete dentures using CAD/CAM technology
Han, Weili; Li, Yanfeng; Zhang, Yue; Lv, Yuan; Zhang, Ying; Hu, Ping; Liu, Huanyue; Ma, Zheng; Shen, Yi
2017-01-01
Abstract The aim of the study was to test the feasibility of using commercially available computer-aided design and computer-aided manufacturing (CAD/CAM) technology, including the 3Shape Dental System 2013 trial version, WIELAND V2.0.049 and the WIELAND ZENOTEC T1 milling machine, to design and fabricate complete dentures. The full-denture modeling process available in the trial version of 3Shape Dental System 2013 was used to design virtual complete dentures on the basis of 3-dimensional (3D) digital edentulous models generated from the physical models. The virtual complete dentures designed were exported to the CAM software WIELAND V2.0.049. A WIELAND ZENOTEC T1 milling machine controlled by the CAM software was used to fabricate physical dentitions and baseplates by milling acrylic resin composite plates. The physical dentitions were bonded to the corresponding baseplates to form the maxillary and mandibular complete dentures. Virtual complete dentures were successfully designed using the software through several steps, including generation of 3D digital edentulous models, model analysis, arrangement of artificial teeth, trimming of the relief area, and occlusal adjustment. Physical dentitions and baseplates were successfully fabricated according to the designed virtual complete dentures using the milling machine controlled by the CAM software. Bonding the physical dentitions to the corresponding baseplates produced the final physical complete dentures. Our study demonstrated that complete dentures can be successfully designed and fabricated using CAD/CAM. PMID:28072686
NASA Astrophysics Data System (ADS)
Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.
2012-12-01
Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.
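One of the management tasks mentioned above, adaptive scheduling of new virtual machines on hypervisor hosts, can be sketched as a placement policy. The least-loaded strategy and the host fields below are illustrative assumptions, not the site's actual scripts:

```python
# Hypothetical least-loaded VM placement: pick the hypervisor with the
# most free cores that can still fit the requested VM.
def place_vm(hypervisors, vm_cores):
    candidates = [h for h in hypervisors
                  if h["total_cores"] - h["used_cores"] >= vm_cores]
    if not candidates:
        return None                      # no capacity: defer or migrate
    best = max(candidates,
               key=lambda h: h["total_cores"] - h["used_cores"])
    best["used_cores"] += vm_cores       # record the reservation
    return best["name"]

hosts = [{"name": "hv1", "total_cores": 32, "used_cores": 30},
         {"name": "hv2", "total_cores": 32, "used_cores": 12}]
first = place_vm(hosts, 8)    # hv2 has the most free cores
second = place_vm(hosts, 16)  # nothing fits any more: placement deferred
```

In a real deployment the same decision would drive live migration and automated restart paths, with the shared GlusterFS storage making VM disks reachable from every hypervisor.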
The Cloud Area Padovana: from pilot to production
NASA Astrophysics Data System (ADS)
Andreetto, P.; Costa, F.; Crescente, A.; Dorigo, A.; Fantinel, S.; Fanzago, F.; Sgaravatto, M.; Traldi, S.; Verlato, M.; Zangrando, L.
2017-10-01
The Cloud Area Padovana has been running for almost two years. This is an OpenStack-based scientific cloud, spread across two different sites: the INFN Padova Unit and the INFN Legnaro National Labs. The hardware resources have been scaled horizontally and vertically, by upgrading some hypervisors and by adding new ones: currently it provides about 1100 cores. Some in-house developments were also integrated in the OpenStack dashboard, such as a tool for user and project registrations with direct support for the INFN-AAI Identity Provider as a new option for user authentication. In collaboration with the EU-funded Indigo DataCloud project, the integration with Docker-based containers has been experimented with and will be available in production soon. This computing facility now satisfies the computational and storage demands of more than 70 users affiliated with about 20 research projects. We present here the architecture of this Cloud infrastructure and the tools and procedures used to operate it. We also focus on the lessons learnt in these two years, describing the problems that were found and the corrective actions that had to be applied. We also discuss the chosen strategy for upgrades, which combines the need to promptly integrate new OpenStack developments, the demand to reduce downtimes of the infrastructure, and the need to limit the effort required for such updates. We also discuss how this Cloud infrastructure is being used. In particular we focus on two big physics experiments which are intensively exploiting this computing facility: CMS and SPES. CMS deployed on the cloud a complex computational infrastructure, composed of several user interfaces for job submission in the Grid environment/local batch queues or for interactive processes; this is fully integrated with the local Tier-2 facility.
To avoid a static allocation of the resources, an elastic cluster, based on cernVM, has been configured: it allows virtual machines to be created and deleted automatically according to user needs. SPES, using a client-server system called TraceWin, exploits INFN's virtual resources, performing a very large number of simulations on about a thousand elastically managed nodes.
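The elastic-cluster behaviour described above reduces to a simple control rule: boot VMs while jobs queue up, reclaim VMs that sit idle. A hedged sketch of such a rule, with thresholds, field names, and the one-VM-per-step policy invented for illustration (the real system builds on cernVM and OpenStack):

```python
def elastic_step(queued_jobs, idle_vms, running_vms, max_vms):
    """Return +1 to boot a VM, -1 to delete one, 0 to hold steady."""
    if queued_jobs > 0 and running_vms < max_vms:
        return +1                    # demand exceeds current capacity
    if queued_jobs == 0 and idle_vms > 0:
        return -1                    # reclaim an idle node
    return 0

grow = elastic_step(queued_jobs=5, idle_vms=0, running_vms=10, max_vms=50)
shrink = elastic_step(queued_jobs=0, idle_vms=3, running_vms=10, max_vms=50)
hold = elastic_step(queued_jobs=0, idle_vms=0, running_vms=10, max_vms=50)
```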
Wang, Lin; Lin, Li; Chen, Xi; Sun, Li; Liao, Yulin; Huang, Na; Liao, Wangjun
2015-01-01
Vasculogenic mimicry (VM) is a blood supply modality that is strongly associated with the epithelial-mesenchymal transition (EMT), TWIST1 activation and tumor progression. We previously reported that metastasis-associated in colon cancer-1 (MACC1) induced the EMT and was associated with a poor prognosis of patients with gastric cancer (GC), but it remains unknown whether MACC1 promotes VM and regulates the TWIST signaling pathway in GC. In this study, we investigated MACC1 expression and VM by immunohistochemistry in 88 patients with stage IV GC, and also investigated the role of TWIST1 and TWIST2 in MACC1-induced VM by using nude mice with GC xenografts and GC cell lines. We found that the VM density was significantly increased in the tumors of patients who died of GC and was positively correlated with MACC1 immunoreactivity (p < 0.05). The 3-year survival rate was only 8.6% in patients whose tumors showed double positive staining for MACC1 and VM, whereas it was 41.7% in patients whose tumors were negative for both MACC1 and VM. Moreover, nuclear expression of MACC1, TWIST1, and TWIST2 was upregulated in GC tissues compared with matched adjacent non-tumorous tissues (p < 0.05). Overexpression of MACC1 increased TWIST1/2 expression and induced typical VM in the GC xenografts of nude mice and in GC cell lines. MACC1 enhanced TWIST1/2 promoter activity and facilitated VM, while silencing of TWIST1 or TWIST2 inhibited VM. Hepatocyte growth factor (HGF) increased the nuclear translocation of MACC1, TWIST1, and TWIST2, while a c-Met inhibitor reduced these effects. These findings indicate that MACC1 promotes VM in GC by regulating the HGF/c-Met-TWIST1/2 signaling pathway, which means that MACC1 and this pathway are potential new therapeutic targets for GC. PMID:25895023
Hebscher, Melissa; Gilboa, Asaf
2016-09-01
The ventromedial prefrontal cortex (vmPFC) has been implicated in a wide array of functions across multiple domains. In this review, we focus on the vmPFC's involvement in mediating strategic aspects of memory retrieval, memory-related schema functions, and decision-making. We suggest that vmPFC generates a confidence signal that informs decisions and memory-guided behaviour. Confidence is central to these seemingly diverse functions: (1) Strategic retrieval: lesions to the vmPFC impair an early, automatic, and intuitive monitoring process ("feeling of rightness"; FOR) often associated with confabulation (spontaneous reporting of erroneous memories). Critically, confabulators typically demonstrate high levels of confidence in their false memories, suggesting that faulty monitoring following vmPFC damage may lead to indiscriminate confidence signals. (2) Memory schemas: the vmPFC is critically involved in instantiating and maintaining contextually relevant schemas, broadly defined as higher level knowledge structures that encapsulate lower level representational elements. The correspondence between memory retrieval cues and these activated schemas leads to FOR monitoring. Stronger, more elaborate schemas produce stronger FOR and influence confidence in the veracity of memory candidates. (3) Finally, we review evidence on the vmPFC's role in decision-making, extending this role to decision-making during memory retrieval. During non-mnemonic and mnemonic decision-making the vmPFC automatically encodes confidence. Confidence signal in the vmPFC is revealed as a non-linear relationship between a first-order monitoring assessment and second-order action or choice. Attempting to integrate the multiple functions of the vmPFC, we propose a posterior-anterior organizational principle for this region. 
More posterior vmPFC regions are involved in earlier, automatic, subjective, and contextually sensitive functions, while more anterior regions are involved in controlled actions based on these earlier functions. Confidence signals reflect the non-linear relationship between first-order, posterior-mediated and second-order, anterior-mediated processes and are represented along the entire axis. Copyright © 2016 Elsevier Ltd. All rights reserved.
The Virtual Xenbase: transitioning an online bioinformatics resource to a private cloud.
Karimi, Kamran; Vize, Peter D
2014-01-01
As a model organism database, Xenbase has been providing informatics and genomic data on Xenopus (Silurana) tropicalis and Xenopus laevis frogs for more than a decade. The Xenbase database contains curated, as well as community-contributed and automatically harvested literature, gene and genomic data. A GBrowse genome browser, a BLAST+ server and stock center support are available on the site. When this resource was first built, all software services and components in Xenbase ran on a single physical server, with inherent reliability, scalability and inter-dependence issues. Recent advances in networking and virtualization techniques allowed us to move Xenbase to a virtual environment, and more specifically to a private cloud. To do so we decoupled the different software services and components, such that each would run on a different virtual machine. In the process, we also upgraded many of the components. The resulting system is faster and more reliable. System maintenance is easier, as individual virtual machines can now be updated, backed up and changed independently. We are also experiencing more effective resource allocation and utilization. Database URL: www.xenbase.org. © The Author(s) 2014. Published by Oxford University Press.
Vascular mimicry in glioblastoma following anti-angiogenic and anti-20-HETE therapies.
Angara, Kartik; Rashid, Mohammad H; Shankar, Adarsh; Ara, Roxan; Iskander, Asm; Borin, Thaiz F; Jain, Meenu; Achyut, Bhagelu R; Arbab, Ali S
2017-09-01
Glioblastoma (GBM) is one of the most hypervascular and hypoxic tumors known among solid tumors. Antiangiogenic therapeutics (AATs) have been tested as an adjuvant to normalize blood vessels and control abnormal vasculature. Relapse, exemplified by progressive tumor growth following AAT, reflects the development of resistance to AATs. Here, we identified that GBM following AAT (Vatalanib) acquired an alternate mechanism to support tumor growth, called vascular mimicry (VM). We observed that Vatalanib-induced VM vessels are positive for periodic acid-Schiff (PAS) matrix but devoid of any endothelium on the inner side and lined by tumor cells on the outer side. The PAS+ matrix is positive for basal laminae (laminin) indicating vascular structures. Vatalanib-treated GBM displayed various stages of VM such as initiation (mosaic), sustenance, and full-blown VM. Mature VM structures contain red blood cells (RBC) and bear semblance to functional blood vessel-like structures, which provide all growth factors to favor tumor growth. Vatalanib treatment significantly increased VM especially in the core of the tumor, where HIF-1α was highly expressed in tumor cells. VM vessels correlate with hypoxia and are characterized by co-localized MHC-1+ tumor and HIF-1α expression. Interestingly, the 20-HETE synthesis inhibitor HET0016 significantly decreased GBM tumors through decreasing VM structures both at the core and at the periphery of the tumors. In summary, AAT-induced resistance characterized by VM is an alternative mechanism adopted by tumors to make functional vessels by transdifferentiation of tumor cells into endothelial-like cells to supply nutrients in the event of hypoxia. AAT-induced VM is a potential therapeutic target of the novel formulation of HET0016. Our present study suggests that HET0016 has the potential to target therapeutic resistance and can be combined with other antitumor agents in preclinical and clinical trials.
Vastus medialis fat infiltration - a modifiable determinant of knee cartilage loss.
Teichtahl, A J; Wluka, A E; Wang, Y; Wijethilake, P N; Strauss, B J; Proietto, J; Dixon, J B; Jones, G; Forbes, A; Cicuttini, F M
2015-12-01
There is growing interest in the role of intramuscular fat and how it may influence clinical outcomes. Vastus medialis (VM) is a functionally important quadriceps muscle that helps to stabilise the knee joint. This longitudinal study examined the determinants of VM fat infiltration and whether VM fat infiltration influenced knee cartilage volume. 250 participants without any diagnosed arthropathy were assessed at baseline between 2005 and 2008, and 197 participants at follow-up between 2008 and 2010. Ambulatory and sporting activity were assessed and magnetic resonance imaging (MRI) was used to determine knee cartilage volume and VM fat infiltration. Age, female gender, BMI and weight were positively associated with baseline VM fat infiltration (P ≤ 0.03), while ambulatory and sporting activity were negatively associated with VM fat infiltration (P ≤ 0.05). After adjusting for confounders, a reduction in VM fat infiltration was associated with a reduced annual loss of medial tibial (β = -10 mm(3); 95% CI -19 to 0 mm(3); P = 0.04) and patella (β = -18 mm(3); 95% CI -36 to 0 mm(3); P = 0.04) cartilage volume. This community-based study of healthy adults has shown that VM fat infiltration can be modified by lifestyle factors including weight loss and exercise, and that reducing fat infiltration in VM has a beneficial effect on knee cartilage preservation. The findings suggest that modifying VM fat infiltration via lifestyle interventions may have the potential to reduce the risk of knee OA. Copyright © 2015 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
Vasculogenic mimicry in bladder cancer and its association with the aberrant expression of ZEB1
Li, Baimou; Mao, Xiaopeng; Wang, Hua; Su, Guanyu; Mo, Chengqiang; Cao, Kaiyuan; Qiu, Shaopeng
2018-01-01
The aim of the present study was to investigate the associations between vasculogenic mimicry (VM) and zinc finger E-box binding homeobox 1 (ZEB1) in bladder cancer. VM structure and ZEB1 expression were analyzed by cluster of differentiation 34/periodic acid Schiff (PAS) double staining and immunohistochemical staining in 135 specimens from patients with bladder cancer, and a further 12 specimens from normal bladder tissues. Three-dimensional (3-D) culture was used to detect VM formation in the bladder transitional cancer cell lines UM-UC-3 and J82, and the immortalized human bladder epithelium cell line SV-HUC-1 in vitro. ZEB1 expression in these cell lines was compared by reverse transcription-quantitative polymerase chain reaction and western blot assays. In addition, small interfering RNA was used to inhibit ZEB1 in UM-UC-3 and J82 cells, followed by 3-D culturing of treated cell lines. As a result, VM was observed in 31.1% of specimens from bladder cancer tissues, and cases with high ZEB1 expression accounted for 60.0% of patients with bladder cancer. In addition, ZEB1 expression was closely associated with VM (r=0.189; P<0.05), and also increased as the grade and stage of the tumor developed. In an in vitro assay, UM-UC-3 and J82 cells exhibited VM formation, however, SV-HUC-1 did not. Furthermore, VM-forming cancer cell lines UM-UC-3 and J82 exhibited higher ZEB1 expression. Notably, VM formation was inhibited following knockdown of ZEB1. In conclusion, ZEB1 may be associated with VM in bladder cancer and serve an important role in the process of VM formation. However, its detailed mechanism requires further study. PMID:29552157
Zhang, Feng; Zhang, Chun-Mei; Li, Shu; Wang, Kun-Kun; Guo, Bin-Bin; Fu, Yao; Liu, Lu-Yang; Zhang, Yu; Jiang, Hai-Yu; Wu, Chang-Jun
2018-01-01
Hepatoblastoma (HB) is the most common type of pediatric liver malignancy, which predominantly occurs in young children (aged <5 years), and continues to be a therapeutic challenge in terms of metastasis and drug resistance. As a new pattern of tumor blood supply, vasculogenic mimicry (VM) is a channel structure lined by tumor cells rather than endothelial cells, which contribute to angiogenesis. VM occurs in a variety of solid tumor types, including liver cancer, such as hepatocellular carcinoma. The aim of the present study was to elucidate the effect of arsenic trioxide (As2O3) on VM. In vitro experiments identified that HB cell line HepG2 cells form typical VM structures on Matrigel, and the structures were markedly damaged by As2O3 at a low concentration before the cell viability significantly decreased. The western blot results indicated that As2O3 downregulated the expression level of VM-associated proteins prior to the appearance of apoptotic proteins. In vivo, VM was observed in xenografts of HB mouse models and identified by periodic acid-Schiff+/CD105− channels lined by HepG2 cells without necrotic cells. As2O3 (2 mg/kg) markedly depressed tumor growth without causing serious adverse reactions by decreasing the number of VM channels via inhibiting the expression level of VM-associated proteins. Thus, the present data strongly indicate that low dosage As2O3 reduces the formation of VM in HB cell line HepG2 cells, independent of cell apoptosis in vivo and in vitro, and may represent a candidate drug for HB targeting VM. PMID:29138840
Research on axisymmetric aspheric surface numerical design and manufacturing technology
NASA Astrophysics Data System (ADS)
Wang, Zhen-zhong; Guo, Yin-biao; Lin, Zheng
2006-02-01
Although aspheric manufacturing has evolved from traditional manual polishing to modern numerical control (NC) machining, the key technical challenges remain generating an exact machining path and machining aspheric lenses with high accuracy and efficiency. This paper presents a mathematical model relating a virtual cone to the aspheric surface equation, and discusses techniques for uniform grinding wheel wear and error compensation in aspheric machining. Finally, based on the above, a software system for high-precision aspheric surface manufacturing is designed and implemented. The system computes the grinding wheel path from input parameters and generates NC machining programs for aspheric surfaces.
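The paper's virtual-cone model is not reproduced in the abstract; as an illustrative sketch of the kind of computation such a system performs, the standard even-asphere sag equation can be sampled into a discretised tool path (function names and parameter choices are assumptions, not taken from the paper):

```python
import math

def asphere_sag(r, R, k=0.0, coeffs=()):
    """Sag z(r) of a rotationally symmetric even asphere:
    z(r) = c*r^2 / (1 + sqrt(1 - (1+k)*c^2*r^2)) + A4*r^4 + A6*r^6 + ...
    where c = 1/R is the vertex curvature and k the conic constant."""
    c = 1.0 / R
    z = c * r * r / (1.0 + math.sqrt(1.0 - (1.0 + k) * c * c * r * r))
    for i, a in enumerate(coeffs):          # A4, A6, ... even-order terms
        z += a * r ** (4 + 2 * i)
    return z

def wheel_path(R, k, coeffs, r_max, steps=200):
    """Discretised (r, z) contact path along the surface, the raw input
    a path generator would post-process into NC blocks."""
    return [(i * r_max / steps, asphere_sag(i * r_max / steps, R, k, coeffs))
            for i in range(steps + 1)]

# A parabola (k = -1) with vertex radius R reduces to z = r^2 / (2R):
print(asphere_sag(10.0, 50.0, k=-1.0))   # 1.0
```

A real system would additionally offset this path by the wheel radius and apply the error-compensation terms the paper describes.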
Pai, Yun Suen; Yap, Hwa Jen; Md Dawal, Siti Zawiah; Ramesh, S.; Phoon, Sin Ye
2016-01-01
This study presents a modular-based implementation of augmented reality to provide an immersive experience in learning or teaching the planning phase, control system, and machining parameters of a fully automated work cell. The architecture of the system consists of three code modules that can operate independently or combined to create a complete system that is able to guide engineers from the layout planning phase to the prototyping of the final product. The layout planning module determines the best possible arrangement in a layout for the placement of various machines, in this case a conveyor belt for transportation, a robot arm for pick-and-place operations, and a computer numerical control milling machine to generate the final prototype. The robotic arm module simulates the pick-and-place operation offline from the conveyor belt to a computer numerical control (CNC) machine utilising collision detection and inverse kinematics. Finally, the CNC module performs virtual machining based on the Uniform Space Decomposition method and axis aligned bounding box collision detection. The conducted case study revealed that given the situation, a semi-circle shaped arrangement is desirable, whereas the pick-and-place system and the final generated G-code produced the highest deviation of 3.83 mm and 5.8 mm respectively. PMID:27271840
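The CNC module above relies on axis aligned bounding box (AABB) collision detection, a standard broad-phase test; a minimal sketch of that test follows (the class and the example boxes are illustrative, not from the paper's implementation):

```python
from dataclasses import dataclass

@dataclass
class AABB:
    # Axis-aligned bounding box given by its min/max corners (x, y, z).
    min_pt: tuple
    max_pt: tuple

    def intersects(self, other: "AABB") -> bool:
        # Boxes overlap iff their extents overlap on every axis.
        return all(self.min_pt[i] <= other.max_pt[i] and
                   other.min_pt[i] <= self.max_pt[i]
                   for i in range(3))

tool  = AABB((0, 0, 0), (2, 2, 2))
stock = AABB((1, 1, 1), (4, 4, 4))
clear = AABB((3, 3, 3), (5, 5, 5))
print(tool.intersects(stock))  # True  -> potential collision, refine further
print(tool.intersects(clear))  # False -> no collision possible
```

Because the test is cheap, it is typically used to reject most pairs before a finer check (such as the Uniform Space Decomposition the paper mentions) runs on the survivors.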
Hu, Kai; Liu, Dan; Niemann, Markus; Hatle, Liv; Herrmann, Sebastian; Voelker, Wolfram; Ertl, Georg; Bijnens, Bart; Weidemann, Frank
2011-11-01
For the clinical assessment of patients with dyspnea, the inversion of the early (E) and late (A) transmitral flow during the Valsalva maneuver (VM) frequently helps to distinguish a pseudonormal from a normal filling pattern. However, in a considerable number of patients, VM fails to reveal the change from dominant early mitral flow velocity toward a larger late velocity. From December 2009 to October 2010, we selected consecutive patients with abnormal filling with (n=25) and without E/A inversion (n=25) during VM. Transmitral, tricuspid, and pulmonary Doppler traces were recorded and the degree of insufficiency was estimated. After evaluating all standard echocardiographic morphological, functional, and flow-related parameters, it became evident that the failure to unmask the pseudonormal filling pattern by VM was related to the degree of tricuspid insufficiency (TI). TI was graded as mild in 24 of 25 patients in the group with E/A inversion during VM, whereas TI was graded as moderate to severe in 24 of the 25 patients with pseudonormal diastolic function without E/A inversion during VM. Our data suggest that TI is a major factor preventing E/A inversion during a VM in patients with pseudonormal diastolic function. This is probably due to a decrease in TI resulting in an increase in forward flow rather than the expected decrease during the VM. Thus, whenever a pseudonormal diastolic filling pattern is suspected, the use of a VM is not an informative discriminator in the presence of moderate or severe TI.
Virtual reality in surgical training.
Lange, T; Indelicato, D J; Rosen, J M
2000-01-01
Virtual reality in surgery and, more specifically, in surgical training, faces a number of challenges in the future. These challenges are building realistic models of the human body, creating interface tools to view, hear, touch, feel, and manipulate these human body models, and integrating virtual reality systems into medical education and treatment. A final system would encompass simulators specifically for surgery, performance machines, telemedicine, and telesurgery. Each of these areas will need significant improvement for virtual reality to impact medicine successfully in the next century. This article gives an overview of, and the challenges faced by, current systems in the fast-changing field of virtual reality technology, and provides a set of specific milestones for a truly realistic virtual human body.
AstroCloud, a Cyber-Infrastructure for Astronomy Research: Data Access and Interoperability
NASA Astrophysics Data System (ADS)
Fan, D.; He, B.; Xiao, J.; Li, S.; Li, C.; Cui, C.; Yu, C.; Hong, Z.; Yin, S.; Wang, C.; Cao, Z.; Fan, Y.; Mi, L.; Wan, W.; Wang, J.
2015-09-01
The data access and interoperability module connects observation proposals, data, virtual machines and software. Using the unique identifier of the PI (principal investigator), an email address or an internal ID, data can be collected by the PI's proposals or through search interfaces such as cone search. Files associated with the search results can easily be transferred to cloud storage, including the storage attached to virtual machines, or to commercial platforms such as Dropbox. Thanks to the standards of the IVOA (International Virtual Observatory Alliance), VOTable-formatted search results can be sent to various VO software tools. Future work will integrate more data and connect archives and other astronomical resources.
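The cone search mentioned above reduces to an angular-separation filter on catalog rows; a minimal sketch, assuming a simple in-memory catalog of (id, ra, dec) tuples rather than AstroCloud's actual service interface:

```python
import math

def angular_sep_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees between two sky positions (deg)."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    dra = ra2 - ra1
    # Vincenty formula: numerically stable at both small and large angles.
    num = math.hypot(math.cos(dec2) * math.sin(dra),
                     math.cos(dec1) * math.sin(dec2)
                     - math.sin(dec1) * math.cos(dec2) * math.cos(dra))
    den = (math.sin(dec1) * math.sin(dec2)
           + math.cos(dec1) * math.cos(dec2) * math.cos(dra))
    return math.degrees(math.atan2(num, den))

def cone_search(catalog, ra0, dec0, radius_deg):
    """Rows of (id, ra, dec) falling within the search cone."""
    return [row for row in catalog
            if angular_sep_deg(row[1], row[2], ra0, dec0) <= radius_deg]

catalog = [("a", 10.0, 20.0), ("b", 10.2, 20.1), ("c", 50.0, -5.0)]
print([row[0] for row in cone_search(catalog, 10.0, 20.0, 0.5)])  # ['a', 'b']
```

A production service would serialize the matching rows as a VOTable so that VO tools can consume them, as the abstract describes.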
LIANG, YIMING; HUANG, MIN; LI, JIANWEN; SUN, XINLIN; JIANG, XIAODAN; LI, LIANGPING; KE, YIQUAN
2014-01-01
Glioblastomas (GBMs) are the most common and aggressive malignant primary brain tumors found in humans. In high-grade gliomas, vasculogenic mimicry (VM) is often detected. VM is the formation of de novo vascular networks by highly invasive tumor cells, instead of endothelial cells. An understanding of the mechanisms of VM formation will contribute to the targeted therapy of GBMs. In the present study, the efficacy of curcumin (CCM) on VM formation and its mechanisms were investigated. It was found that CCM inhibits the VM formation, proliferation, migration and invasion of human glioma U251 cells in a dose-dependent manner. Furthermore, CCM downregulated the protein and mRNA expression of erythropoietin-producing hepatocellular carcinoma-A2, phosphoinositide 3-kinase and matrix metalloproteinase-2, indicating that CCM may function through these factors for the inhibition of VM formation. These data provide novel insights into the use of CCM to antagonize VM, and may contribute to the angiogenesis-targeted therapy of malignant glioma. PMID:25202424
Open label study to assess the safety of VM202 in subjects with amyotrophic lateral sclerosis.
Sufit, Robert L; Ajroud-Driss, Senda; Casey, Patricia; Kessler, John A
2017-05-01
To assess safety and define efficacy measures of hepatocyte growth factor (HGF) DNA plasmid, VM202, administered by intramuscular injections in patients with amyotrophic lateral sclerosis (ALS). Eighteen participants were treated with VM202 administered in divided doses by injections alternating between the upper and lower limbs on d 0, 7, 14, and 21. Subjects were followed for nine months to evaluate possible adverse events. Functional outcome was assessed using the ALS Functional Rating Scale-Revised (ALSFRS-R) as well as by serially measuring muscle strength, muscle circumference, and forced vital capacity. Seventeen of 18 participants completed the study. All participants tolerated 64 mg of VM202 well with no serious adverse events (SAE) related to the drug. Twelve participants reported 26 mild or moderate injection site reactions. Three participants experienced five SAEs unrelated to VM202. One subject died from respiratory insufficiency secondary to ALS progression. Multiple intramuscular injections of VM202 into the limbs appear safe in ALS subjects. Future trials with retreatment after three months will determine whether VM202 treatment alters the long-term course of ALS.
Slug promoted vasculogenic mimicry in hepatocellular carcinoma
Sun, Dan; Sun, Baocun; Liu, Tieju; Zhao, Xiulan; Che, Na; Gu, Qiang; Dong, Xueyi; Yao, Zhi; Li, Rui; Li, Jing; Chi, Jiadong; Sun, Ran
2013-01-01
Vasculogenic mimicry (VM) refers to the unique capability of aggressive tumour cells to mimic the pattern of embryonic vasculogenic networks. The epithelial–mesenchymal transition (EMT) regulator slug has been implicated in the tumour invasion and metastasis of human hepatocellular carcinoma (HCC). However, the relationship between slug and VM formation is not clear. In this study, we demonstrated that slug expression was associated with EMT and the cancer stem cell (CSC) phenotype in HCC patients. Importantly, slug showed a statistically significant correlation with VM formation. Consistently, we demonstrated that overexpression of slug in HCC cells significantly increased the CSC subpopulation, as evidenced by increased clone-forming efficiency in soft agar and by flow cytometry analysis. Meanwhile, VM formation and VM mediator overexpression were also induced by slug induction. Finally, slug overexpression was demonstrated in vivo to lead to the maintenance of the CSC phenotype and VM formation. Therefore, the results of this study indicate that slug induced the increase and maintenance of the CSC subpopulation and ultimately contributed to VM formation. The related molecular pathways may be used as novel therapeutic targets for the inhibition of HCC angiogenesis and metastasis. PMID:23815612
Structural and functional identification of vasculogenic mimicry in vitro.
Racordon, Dusan; Valdivia, Andrés; Mingo, Gabriel; Erices, Rafaela; Aravena, Raúl; Santoro, Felice; Bravo, Maria Loreto; Ramirez, Carolina; Gonzalez, Pamela; Sandoval, Alejandra; González, Alfonso; Retamal, Claudio; Kogan, Marcelo J; Kato, Sumie; Cuello, Mauricio A; Osorio, German; Nualart, Francisco; Alvares, Pedro; Gago-Arias, Araceli; Fabri, Daniella; Espinoza, Ignacio; Sanchez, Beatriz; Corvalán, Alejandro H; Pinto, Mauricio P; Owen, Gareth I
2017-08-01
Vasculogenic mimicry (VM) describes a process by which cancer cells establish an alternative perfusion pathway in an endothelial cell-free manner. Despite its strong correlation with reduced patient survival, controversy still surrounds the existence of an in vitro model of VM. Furthermore, many studies that claim to demonstrate VM fail to provide solid evidence of true hollow channels, raising concerns as to whether VM is actually being examined. Herein, we provide a standardized in vitro assay that recreates the formation of functional hollow channels using ovarian cancer cell lines, cancer spheres and primary cultures derived from ovarian cancer ascites. X-ray microtomography 3D-reconstruction, fluorescence confocal microscopy and dye microinjection conclusively confirm the existence of functional glycoprotein-rich lined tubular structures in vitro and demonstrate that many of the structures reported in the literature may not represent VM. This assay may be useful to design and test future VM-blocking anticancer therapies.
Self-replicating machines in continuous space with virtual physics.
Smith, Arnold; Turney, Peter; Ewaschuk, Robert
2003-01-01
JohnnyVon is an implementation of self-replicating machines in continuous two-dimensional space. Two types of particles drift about in a virtual liquid. The particles are automata with discrete internal states but continuous external relationships. Their internal states are governed by finite state machines, but their external relationships are governed by a simulated physics that includes Brownian motion, viscosity, and springlike attractive and repulsive forces. The particles can be assembled into patterns that can encode arbitrary strings of bits. We demonstrate that, if an arbitrary seed pattern is put in a soup of separate individual particles, the pattern will replicate by assembling the individual particles into copies of itself. We also show that, given sufficient time, a soup of separate individual particles will eventually spontaneously form self-replicating patterns. We discuss the implications of JohnnyVon for research in nanotechnology, theoretical biology, and artificial life.
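The simulated physics described above (springlike attraction, viscosity, Brownian motion) can be sketched in a few lines; this is an illustrative toy integrator under assumed constants, not the JohnnyVon implementation:

```python
import math
import random

def step(particles, springs, dt=0.01, viscosity=0.9, temperature=0.05):
    """One Euler step of a toy 2-D physics: spring forces, viscous
    damping, and Brownian noise acting on point particles.

    particles: list of dicts with 'pos' and 'vel' as [x, y]
    springs:   list of (i, j, rest_length, stiffness)
    """
    forces = [[0.0, 0.0] for _ in particles]
    for i, j, rest, k in springs:
        dx = particles[j]["pos"][0] - particles[i]["pos"][0]
        dy = particles[j]["pos"][1] - particles[i]["pos"][1]
        d = math.hypot(dx, dy) or 1e-12
        f = k * (d - rest)                   # Hooke's law: attract/repel
        fx, fy = f * dx / d, f * dy / d
        forces[i][0] += fx; forces[i][1] += fy
        forces[j][0] -= fx; forces[j][1] -= fy
    for p, f in zip(particles, forces):
        for a in (0, 1):
            f[a] += random.gauss(0.0, temperature)   # Brownian kick
            p["vel"][a] = viscosity * (p["vel"][a] + f[a] * dt)
            p["pos"][a] += p["vel"][a] * dt

# With noise switched off, a spring relaxes two particles to its rest length:
ps = [{"pos": [0.0, 0.0], "vel": [0.0, 0.0]},
      {"pos": [2.0, 0.0], "vel": [0.0, 0.0]}]
for _ in range(2000):
    step(ps, [(0, 1, 1.0, 5.0)], temperature=0.0)
dist = ps[1]["pos"][0] - ps[0]["pos"][0]
print(round(dist, 3))   # 1.0
```

With nonzero temperature, the same loop produces the drifting, jostling behavior from which the self-assembly in the paper emerges.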
Noise and Vibration Risk Prevention Virtual Web for Ubiquitous Training
ERIC Educational Resources Information Center
Redel-Macías, María Dolores; Cubero-Atienza, Antonio J.; Martínez-Valle, José Miguel; Pedrós-Pérez, Gerardo; del Pilar Martínez-Jiménez, María
2015-01-01
This paper describes a new Web portal offering experimental labs for ubiquitous training of university engineering students in work-related risk prevention. The Web-accessible computer program simulates the noise and machine vibrations met in the work environment, in a series of virtual laboratories that mimic an actual laboratory and provide the…
The assessment of fetal brain function in fetuses with ventriculomegaly: the role of the KANET test.
Talic, Amira; Kurjak, Asim; Stanojevic, Milan; Honemeyer, Ulrich; Badreldeen, Ahmed; DiRenzo, Gian Carlo
2012-08-01
To assess differences in fetal behavior in both normal fetuses and fetuses with cerebral ventriculomegaly (VM). Over a period of eighteen months, in a longitudinal prospective cohort study, the Kurjak Antenatal Neurological Test (KANET) was applied to assess fetal behavior in both normal pregnancies and pregnancies with cerebral VM using four-dimensional ultrasound (4D US). According to the degree of enlargement of the ventricles, VM was divided into three groups: mild, moderate and severe. Moreover, fetuses with isolated VM were separated from those with additional abnormalities. According to the KANET, fetuses with scores ≥ 14 were considered normal, those with scores 6-13 borderline, and those with scores ≤ 5 abnormal. Differences between the two groups were examined by Fisher's exact test. Differences within the subgroups were examined by the Kruskal-Wallis test and contingency table test. KANET scores in normal pregnancies and pregnancies with VM showed statistically significant differences. Most of the abnormal KANET scores, as well as most of the borderline scores, were found among fetuses with severe VM associated with additional abnormalities. There were no statistically significant differences between the control group and the groups with isolated, mild and/or moderate VM. Evaluation of fetal behavior in fetuses with cerebral VM using the KANET test has the potential to detect fetuses with abnormal behavior, and to add the dimension of CNS function to the morphological criteria of VM. Long-term postnatal neurodevelopmental follow-up should confirm the data from prenatal investigation of fetal behavior.
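The KANET cut-offs quoted above are a simple scoring rule; expressed directly (the function name is invented for illustration):

```python
def kanet_category(score: int) -> str:
    """Classify a KANET score using the cut-offs from the study:
    >= 14 normal, 6-13 borderline, <= 5 abnormal."""
    if score >= 14:
        return "normal"
    if score >= 6:
        return "borderline"
    return "abnormal"

print([kanet_category(s) for s in (16, 10, 3)])
# ['normal', 'borderline', 'abnormal']
```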
Shalmon, Bruria; Kubi, Adva; Treves, Avraham J.; Shapira-Frommer, Ronnie; Avivi, Camilla; Ortenberg, Rona; Ben-Ami, Eytan; Schachter, Jacob; Besser, Michal J.; Markel, Gal
2013-01-01
Vasculogenic mimicry (VM) describes functional vascular channels composed only of tumor cells, and its presence predicts poor prognosis in melanoma patients. Inhibition of this alternative vascularization pathway might be of clinical importance, especially as several anti-angiogenic therapies targeting endothelial cells are largely ineffective in melanoma. We show the presence of VM structures histologically in a series of human melanoma lesions and demonstrate that cell cultures derived from these lesions form tubes in 3D cultures ex vivo. We tested the ability of nicotinamide, the amide form of vitamin B3 (niacin), which acts as an epigenetic gene regulator through unique cellular pathways, to modify VM. Nicotinamide effectively inhibited the formation of VM structures and destroyed already formed ones, in a dose-dependent manner. Remarkably, VM formation capacity remained suppressed even one month after the complete withdrawal of nicotinamide. The inhibitory effect of nicotinamide on VM formation could be at least partially explained by a nicotinamide-driven downregulation of vascular endothelial cadherin (VE-Cadherin), which is known to have a central role in VM. Further major changes in the expression profile of hundreds of genes, most of them clustered in biologically-relevant clusters, were observed. In addition, nicotinamide significantly inhibited melanoma cell proliferation, but had an opposite effect on their invasion capacity. Cell cycle analysis indicated moderate changes in apoptotic indices. Therefore, nicotinamide could be further used to unravel new biological mechanisms that drive VM and tumor progression. Targeting VM, especially in combination with anti-angiogenic strategies, is expected to be synergistic and might yield substantial antineoplastic effects in a variety of malignancies. PMID:23451174
Using shadow page cache to improve isolated drivers performance.
Zheng, Hao; Dong, Xiaoshe; Wang, Endong; Chen, Baoke; Zhu, Zhengdong; Liu, Chengzhe
2015-01-01
With the advantage of the reusability property of virtualization technology, users can reuse various types and versions of existing operating systems and drivers in a virtual machine, so as to customize their application environment. In order to prevent users' virtualization environments from being impacted by driver faults in the virtual machine, Chariot examines the correctness of a driver's write operations by combining capture of the driver's write operations with the driver's private access control table. However, this method needs to keep the write permission of the shadow page table as read-only, so as to capture the isolated driver's write operations through page faults, which adversely affects the performance of the driver. By delaying the resetting of frequently used shadow pages' write permissions to read-only, this paper proposes an algorithm that uses a shadow page cache to improve the performance of isolated drivers, and carefully studies the relationship between driver performance and the size of the shadow page cache. Experimental results show that, through the shadow page cache, the performance of isolated drivers can be greatly improved without overly impacting Chariot's reliability.
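The caching policy described above can be sketched schematically: keep the most recently written shadow pages writable so repeated driver writes avoid page faults, and revert evicted pages to read-only so the next write is captured and checked. This is a toy Python model under assumed names and an assumed LRU eviction choice, not the paper's kernel code:

```python
from collections import OrderedDict

class ShadowPageCache:
    """Toy model: up to `capacity` shadow pages stay writable; a write to
    any other page takes a simulated page fault, is verified against the
    driver's private access-control table, and then joins the cache."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.writable = OrderedDict()   # page -> True, in LRU order
        self.faults = 0

    def driver_write(self, page, allowed_pages):
        if page in self.writable:       # fast path: no page fault taken
            self.writable.move_to_end(page)
            return True
        self.faults += 1                # page fault: verify the write
        if page not in allowed_pages:
            return False                # illegal write blocked
        self.writable[page] = True      # grant write access, cache page
        if len(self.writable) > self.capacity:
            self.writable.popitem(last=False)  # evict LRU -> read-only
        return True

cache = ShadowPageCache(capacity=2)
allowed = {0x1000, 0x2000, 0x3000}
for p in (0x1000, 0x1000, 0x2000, 0x1000):
    cache.driver_write(p, allowed)
print(cache.faults)   # 2: only the first access to each page faults
```

The cache size trades fault overhead against how quickly an illegal write is caught, which is exactly the performance/reliability relationship the paper studies.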
Stawarczyk, Bogna; Ozcan, Mutlu; Hämmerle, Christoph H F; Roos, Malgorzata
2012-05-01
The aim of this study was to compare the fracture load of veneered anterior zirconia crowns using normal and Weibull distribution of complete and censored data. Standardized zirconia frameworks for maxillary canines were milled using a CAD/CAM system and randomly divided into 3 groups (N=90, n=30 per group). They were veneered with three veneering ceramics, namely GC Initial ZR, Vita VM9 and IPS e.max Ceram, using the layering technique. The crowns were cemented with glass ionomer cement on metal abutments. The specimens were then loaded to fracture (1 mm/min) in a universal testing machine. The data were analyzed using the classical method (normal data distribution (μ, σ); Levene test and one-way ANOVA) and according to Weibull statistics (s, m). In addition, fracture load results were analyzed depending on complete and censored failure types (only chipping vs. total fracture together with chipping). When computed with complete data, significantly higher mean fracture loads (N) were observed for GC Initial ZR (μ=978, σ=157; s=1043, m=7.2) and VITA VM9 (μ=1074, σ=179; s=1139, m=7.8) than for IPS e.max Ceram (μ=798, σ=174; s=859, m=5.8) (p<0.05) by classical and Weibull statistics, respectively. When the data were censored for only total fracture, IPS e.max Ceram presented the lowest fracture load for chipping with both the classical distribution (μ=790, σ=160) and Weibull statistics (s=836, m=6.5). When total fracture with chipping (classical distribution) was considered as failure, IPS e.max Ceram did not show a significantly different fracture load for total fracture (μ=1054, σ=110) compared to the other groups (GC Initial ZR: μ=1039, σ=152; VITA VM9: μ=1170, σ=166). According to the Weibull distributed data, VITA VM9 showed a significantly higher fracture load (s=1228, m=9.4) than those of the other groups. Both the classical distribution and Weibull statistics for complete data yielded similar outcomes. Censored data analysis of all-ceramic systems based on failure types is essential and brings additional information regarding the susceptibility to chipping or total fracture. Copyright © 2011 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
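The reported Weibull parameters (scale s, modulus m) translate directly into failure probabilities via the two-parameter Weibull distribution. A minimal sketch using the paper's complete-data values for VITA VM9; the 800 N query load is an illustrative choice, not a figure from the study:

```python
import math

def weibull_cdf(x, scale, shape):
    """Two-parameter Weibull failure probability F(x) = 1 - exp(-(x/s)^m),
    with s the characteristic strength and m the Weibull modulus."""
    return 1.0 - math.exp(-((x / scale) ** shape))

# Reported complete-data parameters for VITA VM9: s = 1139 N, m = 7.8.
# Estimated probability that a crown fails at or below an 800 N load:
print(round(weibull_cdf(800, 1139, 7.8), 3))
# By definition, F(s) = 1 - 1/e ~= 0.632 at the characteristic strength,
# regardless of the modulus m.
```

A higher modulus m (e.g. VM9's 9.4 for censored total fracture) means a narrower scatter of fracture loads, which is why the paper reports m alongside s.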
Diffusion bonding of IN 718 to VM 350 grade maraging steel
NASA Technical Reports Server (NTRS)
Crosby, S. R.; Biederman, R. R.; Reynolds, C. C.
1972-01-01
Diffusion bonding studies have been conducted on IN 718, VM 350 and the dissimilar alloy couple, IN 718 to maraging steel. The experimental processing parameters critical to obtaining consistently good diffusion bonds between IN 718 and VM 350 were determined. Interrelationships between temperature, pressure and surface preparation were explored for short bonding intervals under vacuum conditions. Successful joining was achieved for a range of bonding cycle temperatures, pressures and surface preparations. The strength of the weaker parent material was used as the criterion for a successful tensile test of the heat-treated bond. Studies of VM-350/VM-350 couples in the as-bonded condition showed greater yielding and failure outside the bond region.
2013-05-01
and diazepam with and without pretreatment with pyridostigmine bromide. The 24 hr median lethal dose (MLD) of VM was determined using a sequential stage approach. The efficacy of medical...with and without pyridostigmine bromide (PB) pretreatment against lethal intoxication with VM, VR or VX. Methods Animals: Adult male Hartley
Cameron, C. Daryl; Reber, Justin; Spring, Victoria L.; Tranel, Daniel
2018-01-01
Implicit moral evaluations—spontaneous, unintentional judgments about the moral status of actions or persons—are thought to play a pivotal role in moral experience, suggesting a need for research to model these moral evaluations in clinical populations. Prior research reveals that the ventromedial prefrontal cortex (vmPFC) is a critical area underpinning affect and morality, and patients with vmPFC lesions show abnormalities in moral judgment and moral behavior. We use indirect measurement and multinomial modeling to understand differences in implicit moral evaluations among patients with vmPFC lesions. Our model quantifies multiple processes of moral judgment: implicit moral evaluations in response to distracting moral transgressions (Unintentional Judgment), accurate moral judgments about target actions (Intentional Judgment), and a directional tendency to judge actions as morally wrong (Response Bias). Compared to individuals with non-vmPFC brain damage and neurologically healthy comparisons, patients with vmPFC lesions showed a dual deficit in processes of moral judgment. First, patients with vmPFC lesions showed reduced Unintentional Judgment about moral transgressions, but not about non-moral negative affective distracters. Second, patients with vmPFC lesions showed reduced Intentional Judgment about target actions. These findings highlight the utility of a formal modeling approach in moral psychology, revealing a dual deficit in multiple component processes of moral judgment among patients with vmPFC lesions. PMID:29382558
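The multinomial processing tree above can be illustrated with a schematic forward model: intentional judgment of the target succeeds with probability I; failing that, the unintentional evaluation of the distracter drives the response with probability U; failing both, a biased guess ("wrong" with probability B) is made. This simplified sketch and its parameter values are illustrative only, not the authors' fitted model or estimates:

```python
def p_judge_wrong(target_wrong, distracter_wrong, I, U, B):
    """Predicted probability of a "morally wrong" response under a simple
    three-process multinomial tree (Intentional Judgment I, Unintentional
    Judgment U, Response Bias B)."""
    p = I * (1.0 if target_wrong else 0.0)
    p += (1 - I) * U * (1.0 if distracter_wrong else 0.0)
    p += (1 - I) * (1 - U) * B
    return p

# Illustrative parameter values only:
I, U, B = 0.6, 0.3, 0.5
print(round(p_judge_wrong(True,  True, I, U, B), 2))   # 0.86
print(round(p_judge_wrong(False, True, I, U, B), 2))   # 0.26
```

Fitting such a tree means choosing I, U, B so the predicted probabilities match the observed response proportions across trial types; group differences in the fitted parameters are what reveal the "dual deficit" the abstract describes.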
VirtualSpace: A vision of a machine-learned virtual space environment
NASA Astrophysics Data System (ADS)
Bortnik, J.; Sarno-Smith, L. K.; Chu, X.; Li, W.; Ma, Q.; Angelopoulos, V.; Thorne, R. M.
2017-12-01
Space borne instrumentation tends to come and go. A typical instrument will go through a phase of design and construction, be deployed on a spacecraft for several years while it collects data, and then be decommissioned and fade into obscurity. The data collected from that instrument will typically receive much attention while it is being collected, perhaps in the form of event studies, conjunctions with other instruments, or a few statistical surveys, but once the instrument or spacecraft is decommissioned, the data will be archived and receive progressively less attention with every passing year. This is the fate of all historical data, and will be the fate of data being collected by instruments even at the present time. But what if those instruments could come alive, and all be simultaneously present at any and every point in time and space? Imagine the scientific insights, and societal gains that could be achieved with a grand (virtual) heliophysical observatory that consists of every current and historical mission ever deployed? We propose that this is not just fantasy but is imminently doable with the data currently available, with the present computational resources, and with currently available algorithms. This project revitalizes existing data resources and lays the groundwork for incorporating data from every future mission to expand the scope and refine the resolution of the virtual observatory. We call this project VirtualSpace: a machine-learned virtual space environment.
NASA Astrophysics Data System (ADS)
Stillman, David E.; Michaels, Timothy I.; Grimm, Robert E.
2017-03-01
Recurring slope lineae (RSL) are narrow (0.5-5 m) dark features on Mars that incrementally lengthen down steep slopes, fade in colder seasons, and recur annually. These traits suggest that liquid water is flowing in the shallow subsurface of Mars today. Here we describe High Resolution Imaging Science Experiment (HiRISE) observations of RSL within Valles Marineris (VM). We have identified 239 candidate and confirmed RSL sites within all the major canyons of VM, with the exception of Echus Chasma. About half of all the globally known RSL locations occur within VM, and the areal density of RSL on Coprates Montes appears to be the greatest on the planet. VM RSL are heterogeneously distributed: they are primarily clustered in certain areas while being conspicuously absent in other locations that appear otherwise favorable. RSL have been found on many of the interior layered deposits (ILDs) within VM. Such ILD RSL appear to traverse bedrock rather than regolith, unlike all other RSL. Forty-six of the VM RSL sites show incremental lengthening and exhibit similar behavior in most of the canyons of VM, but the RSL duration at one site in Juventae Chasma is significantly reduced. Furthermore, the lengthening seasonality depends solely on slope orientation, with typical VM RSL on a given slope lengthening for ∼42-74% of a Mars year. There are always RSL lengthening within VM, regardless of the season. If RSL are caused by water, such a long active season at hundreds of VM RSL sites suggests that an appreciable source of water must be recharging these RSL. Thermophysical modeling indicates that a melting temperature range of ∼246-264 K is needed to reproduce the seasonal phenomenology of the VM RSL, suggesting the involvement of a brine consisting of tens of wt% salt. The mechanism(s) by which RSL are recharged annually remain uncertain.
Overall, gaining a better understanding of how RSL form and recur can benefit the search for extant life on Mars and could provide details about an in situ water resource.
Chung, Wen-Hsin; Lai, Kung-Ming; Hsu, Kuo-chiang
2010-02-10
The histological structures of the vitelline membranes (VM) of hen and duck eggs were observed by cryo-scanning electron microscopy (cryo-SEM), and their chemical characteristics were also compared. The outer layer surface (OLS) of duck egg VM showed networks constructed by fibrils and sheets (0.1-5.2 μm in width), whereas that of hen egg presented networks formed only by sheets (2-6 μm in width). Thicker fibrils (0.5-1.5 μm in width) with a different arrangement were observed on the inner layer surface (ILS) of duck egg VM compared to those (0.3-0.7 μm in width) of hen egg VM. Upon separation, the outer surface of the outer layer (OSOL) and the inner surface of the inner layer (ISIL) of hen and duck egg VMs were quite similar to fresh VM, except that the OSOL of duck egg VM showed networks constructed only by sheets. Thin fibrils interlaced above a bumpy or flat structure were observed at the exposed surface of the outer layer (ESOL) of hen and duck egg VMs. The exposed surfaces of the inner layers (ESIL) of hen and duck egg VMs showed similar structures of fibrils, which joined, branched, and ran in straight lines for long distances up to 30 μm; however, the widths of the fibrils shown in the ESOL and ESIL of duck egg VM were 0.1 and 0.7-1.4 μm, respectively, greater than those (<0.1 and 0.5-0.8 μm) of hen egg VM. The continuous membranes of both hen and duck egg VMs remained attached to the outer layers when separated. The content of protein, the major component of VM, was higher in duck egg VM (88.6%) than in hen egg VM (81.6%). Four and six major SDS-soluble protein patterns with distinct localization were observed in hen and duck egg VMs, respectively. Overall, the different histological structures of hen and duck egg VMs are suggested to be largely attributable to their diverse protein components.
Treg Cells Protect Dopaminergic Neurons against MPP+ Neurotoxicity via CD47-SIRPA Interaction.
Huang, Yan; Liu, Zhan; Cao, Bei-Bei; Qiu, Yi-Hua; Peng, Yu-Ping
2017-01-01
Regulatory T (Treg) cells have been associated with neuroprotection by inhibiting microglial activation in animal models of Parkinson's disease (PD), a progressive neurodegenerative disease characterized by dopaminergic neuronal loss in the nigrostriatal system. Herein, we show that Treg cells directly protect dopaminergic neurons against 1-methyl-4-phenylpyridinium (MPP+) neurotoxicity via an interaction between the two transmembrane proteins CD47 and signal regulatory protein α (SIRPA). Primary ventral mesencephalic (VM) cells or VM neurons were pretreated with Treg cells before MPP+ treatment. Transwell co-culture of Treg cells and VM neurons was used to assess the effects of the Treg cytokines transforming growth factor (TGF)-β1 and interleukin (IL)-10 on dopaminergic neurons. A live-cell imaging system detected dynamic contact between Treg cells and VM neurons, stained for CD47 and SIRPA, respectively. Dopaminergic neuronal loss, assessed by the number of tyrosine hydroxylase (TH)-immunoreactive cells, was examined after silencing CD47 in Treg cells or silencing SIRPA in VM neurons. Treg cells prevented MPP+-induced dopaminergic neuronal loss and glial inflammatory responses. TGF-β1 and IL-10 secreted from Treg cells did not significantly prevent MPP+-induced dopaminergic neuronal loss in transwell co-culture of Treg cells and VM neurons. CD47 and SIRPA were expressed by Treg cells and VM neurons, respectively. CD47-labeled Treg cells made dynamic contact with SIRPA-labeled VM neurons. Silencing the CD47 gene in Treg cells impaired their ability to protect dopaminergic neurons against MPP+ toxicity. Similarly, SIRPA knockdown in VM neurons reduced the neuroprotective effect of Treg cells. The Rac1/Akt signaling pathway in VM neurons was activated by the CD47-SIRPA interaction between Treg cells and the neurons. Inhibiting Rac1/Akt signaling in VM neurons compromised Treg cell neuroprotection.
Treg cells protect dopaminergic neurons against MPP+ neurotoxicity by a cell-to-cell contact mechanism underlying CD47-SIRPA interaction and Rac1/Akt activation. © 2017 The Author(s). Published by S. Karger AG, Basel.
NASA Astrophysics Data System (ADS)
Grandi, C.; Italiano, A.; Salomoni, D.; Calabrese Melcarne, A. K.
2011-12-01
WNoDeS, an acronym for Worker Nodes on Demand Service, is software developed at CNAF-Tier1, the National Computing Centre of the Italian Institute for Nuclear Physics (INFN) located in Bologna. WNoDeS provides on-demand, integrated access to both Grid and Cloud resources through virtualization technologies. Besides the traditional use of computing resources in batch mode, users need interactive and local access to a number of systems. WNoDeS can dynamically provide such computers by instantiating virtual machines according to user requirements (computing, storage and network resources), through either the Open Cloud Computing Interface (OCCI) API or a web console. Interactive use is usually limited to activities in user space, i.e. where the machine configuration is not modified. In other instances the activity concerns the development and testing of services, and thus implies modification of the system configuration (and, therefore, root access to the resource). The former use case is a simple extension of the WNoDeS approach, where the resource is provided in interactive mode. The latter implies saving the virtual image at the end of each user session so that it can be presented to the user at subsequent requests. This work describes how the LHC experiments at INFN-Bologna are testing and making use of these dynamically created ad-hoc machines via WNoDeS to support flexible, interactive analysis and software development at the INFN Tier-1 Computing Centre.
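As a rough illustration of the OCCI route mentioned above, the sketch below builds the headers for an OCCI 1.1 HTTP "create compute" request. The Category scheme and the occi.compute.* attribute names come from the OCCI Infrastructure specification; the endpoint URL, hostname, and resource sizes are invented placeholders, and WNoDeS's actual deployment details may differ.

```python
def occi_compute_request(endpoint, cores, memory_gb, hostname):
    """Build (method, url, headers) for an OCCI 1.1 create-compute call.

    Uses the text/occi header rendering: the resource kind goes in the
    Category header, and resource attributes in X-OCCI-Attribute.
    """
    headers = {
        "Category": ('compute; '
                     'scheme="http://schemas.ogf.org/occi/infrastructure#"; '
                     'class="kind"'),
        "X-OCCI-Attribute": (
            f"occi.compute.cores={cores}, "
            f"occi.compute.memory={memory_gb}, "
            f'occi.compute.hostname="{hostname}"'
        ),
        "Content-Type": "text/occi",
    }
    return ("POST", endpoint.rstrip("/") + "/compute/", headers)

# Hypothetical request for a 4-core, 8 GB interactive machine.
method, url, headers = occi_compute_request(
    "https://wnodes.example.org:8443", cores=4, memory_gb=8,
    hostname="analysis-vm-01")
```

The returned triple could be handed to any HTTP client; a real client would also carry the site's authentication (e.g. an X.509 proxy), which is omitted here.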
Older driver highway design handbook : recommendations and guidelines
DOT National Transportation Integrated Search
1996-06-01
The purpose of this report is to document the preparation of the 1994 Table VM-1, including data sources, assumptions, and estimating procedures. Table VM-1 describes vehicle distance traveled in miles, by highway category and vehicle type. VM-1 depi...
Electrical Machines Laminations Magnetic Properties: A Virtual Instrument Laboratory
ERIC Educational Resources Information Center
Martinez-Roman, Javier; Perez-Cruz, Juan; Pineda-Sanchez, Manuel; Puche-Panadero, Ruben; Roger-Folch, Jose; Riera-Guasp, Martin; Sapena-Baño, Angel
2015-01-01
Undergraduate courses in electrical machines often include an introduction to their magnetic circuits and to the various magnetic materials used in their construction and their properties. Students must be able to recognize and compare the permeability, saturation, and losses of these magnetic materials, and relate each material to its…
Korkmaz, Selcuk; Zararsiz, Gokmen; Goksuluk, Dincer
2015-01-01
Virtual screening is an important step in the early phase of the drug discovery process. Since there are thousands of compounds, this step should be both fast and effective in order to distinguish drug-like from nondrug-like molecules. Statistical machine learning methods are widely used in drug discovery studies for classification purposes. Here, we aim to develop a new tool that can classify molecules as drug-like or nondrug-like based on various machine learning methods, including discriminant, tree-based, kernel-based, ensemble and other algorithms. To construct this tool, the performances of twenty-three different machine learning algorithms were first compared by ten different measures; then, the ten best-performing algorithms were selected based on principal component and hierarchical cluster analysis results. Besides classification, this application can also create a heat map and dendrogram for visual inspection of the molecules through hierarchical cluster analysis. Moreover, users can connect to the PubChem database to download molecular information and to create two-dimensional structures of compounds. This application is freely available through www.biosoft.hacettepe.edu.tr/MLViS/. PMID:25928885
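The compare-by-multiple-measures step described above can be sketched in a few lines. The snippet below scores three hypothetical classifiers on four standard measures derived from a binary confusion matrix and ranks them by their mean score; the algorithm names and confusion counts are made up for illustration, and the actual MLViS tool uses ten measures and twenty-three algorithms.

```python
import math

def measures(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity and Matthews correlation
    coefficient (MCC) from binary confusion-matrix counts."""
    acc = (tp + tn) / (tp + fp + tn + fn)
    sens = tp / (tp + fn)                      # true-positive rate
    spec = tn / (tn + fp)                      # true-negative rate
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return {"accuracy": acc, "sensitivity": sens,
            "specificity": spec, "mcc": mcc}

# Hypothetical counts (tp, fp, tn, fn) for three classifiers evaluated
# on the same drug-like vs nondrug-like test set of 200 molecules.
results = {
    "svm":           measures(90, 10, 85, 15),
    "random_forest": measures(92,  8, 87, 13),
    "naive_bayes":   measures(80, 20, 75, 25),
}

# Rank algorithms by their mean score across the four measures.
ranking = sorted(results, key=lambda name: -sum(results[name].values()) / 4)
```

In practice the per-measure scores would feed a principal component and hierarchical cluster analysis rather than a simple mean, but the ranking idea is the same.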
Urbinello, Damiano; Joseph, Wout; Huss, Anke; Verloock, Leen; Beekhuizen, Johan; Vermeulen, Roel; Martens, Luc; Röösli, Martin
2014-07-01
Concerns of the general public about potential adverse health effects caused by radio-frequency electromagnetic fields (RF-EMFs) led authorities to introduce precautionary exposure limits, which vary considerably between regions. It may be speculated that precautionary limits affect the base station network in a manner that unintentionally increases mean population exposure. The objectives of this multicentre study were to compare mean exposure levels in outdoor areas across four different European cities and to compare them with regulatory RF-EMF exposure levels in the corresponding areas. We performed measurements in the cities of Amsterdam (the Netherlands, regulatory limits for mobile phone base station frequency bands: 41-61 V/m), Basel (Switzerland, 4-6 V/m), Ghent (Belgium, 3-4.5 V/m) and Brussels (Belgium, 2.9-4.3 V/m) using a portable measurement device. Measurements were conducted in three different types of outdoor areas (central and non-central residential areas and downtown), between 2011 and 2012 on 12 different days. On each day, measurements were taken every 4 s for approximately 15 to 30 min per area. Measurements per urban environment were repeated 12 times during one year. Arithmetic mean values for mobile phone base station exposure ranged between 0.22 V/m (Basel) and 0.41 V/m (Amsterdam) in all outdoor areas combined. The 95th percentile for total RF-EMF exposure varied between 0.46 V/m (Basel) and 0.82 V/m (Amsterdam), and the 99th percentile between 0.81 V/m (Basel) and 1.20 V/m (Brussels). All exposure levels were far below international reference levels proposed by ICNIRP (International Commission on Non-Ionizing Radiation Protection). Our study did not find indications that lowering the regulatory limit results in higher mobile phone base station exposure levels. Copyright © 2014 Elsevier Ltd. All rights reserved.
Differential effects of insular and ventromedial prefrontal cortex lesions on risky decision-making
Clark, L.; Bechara, A.; Damasio, H.; Aitken, M. R. F.; Sahakian, B. J.; Robbins, T. W.
2008-01-01
The ventromedial prefrontal cortex (vmPFC) and insular cortex are implicated in distributed neural circuitry that supports emotional decision-making. Previous studies of patients with vmPFC lesions have focused primarily on decision-making under uncertainty, when outcome probabilities are ambiguous (e.g. the Iowa Gambling Task). It remains unclear whether vmPFC is also necessary for decision-making under risk, when outcome probabilities are explicit. It is not known whether the effect of insular damage is analogous to the effect of vmPFC damage, or whether these regions contribute differentially to choice behaviour. Four groups of participants were compared on the Cambridge Gamble Task, a well-characterized measure of risky decision-making where outcome probabilities are presented explicitly, thus minimizing additional learning and working memory demands. Patients with focal, stable lesions to the vmPFC (n = 20) and the insular cortex (n = 13) were compared against healthy subjects (n = 41) and a group of lesion controls (n = 12) with damage predominantly affecting the dorsal and lateral frontal cortex. The vmPFC and insular cortex patients showed selective and distinctive disruptions of betting behaviour. VmPFC damage was associated with increased betting regardless of the odds of winning, consistent with a role of vmPFC in biasing healthy individuals towards conservative options under risk. In contrast, patients with insular cortex lesions failed to adjust their bets by the odds of winning, consistent with a role of the insular cortex in signalling the probability of aversive outcomes. The insular group attained a lower point score on the task and experienced more ‘bankruptcies’. There were no group differences in probability judgement. These data confirm the necessary role of the vmPFC and insular regions in decision-making under risk. 
Poor decision-making in clinical populations can arise via multiple routes, with functionally dissociable effects of vmPFC and insular cortex damage. PMID:18390562
Vestibular Migraine in Children and Adolescents: Clinical Findings and Laboratory Tests
Langhagen, Thyra; Lehrer, Nicole; Borggraefe, Ingo; Heinen, Florian; Jahn, Klaus
2015-01-01
Introduction: Vestibular migraine (VM) is the most common cause of episodic vertigo in children. We summarize the clinical findings and laboratory test results in a cohort of children and adolescents with VM. We discuss the limitations of current classification criteria for dizzy children. Methods: A retrospective chart analysis was performed on 118 children with migraine related vertigo at a tertiary care center. Patients were grouped in the following categories: (1) definite vestibular migraine (dVM); (2) probable vestibular migraine (pVM); (3) suspected vestibular migraine (sVM); (4) benign paroxysmal vertigo (BPV); and (5) migraine with/without aura (oM) plus vertigo/dizziness according to the International Classification of Headache Disorders, 3rd edition (beta version). Results: The mean age of all patients was 12 ± 3 years (range 3–18 years, 70 females). 36 patients (30%) fulfilled criteria for dVM, 33 (28%) for pVM, 34 (29%) for sVM, 7 (6%) for BPV, and 8 (7%) for oM. Somatoform vertigo (SV) co-occurred in 27% of patients. Episodic syndromes were reported in 8%; the family history of migraine was positive in 65%. Mild central ocular motor signs were found in 24% (most frequently horizontal saccadic pursuit). Laboratory tests showed that about 20% had pathological function of the horizontal vestibulo-ocular reflex, and almost 50% had abnormal postural sway patterns. Conclusion: Patients with definite, probable, and suspected VM do not differ in the frequency of ocular motor, vestibular, or postural abnormalities. VM is the best explanation for their symptoms. It is essential to establish diagnostic criteria in clinical studies. In clinical practice, however, the most reasonable diagnosis should be made in order to begin treatment. Such a procedure also minimizes the fear of the parents and children, reduces the need to interrupt leisure time and school activities, and prevents the development of SV. PMID:25674076
Ventromedial Prefrontal Cortex Is Critical for Helping Others Who Are Suffering.
Beadle, Janelle N; Paradiso, Sergio; Tranel, Daniel
2018-01-01
Neurological patients with damage to the ventromedial prefrontal cortex (vmPFC) are reported to display reduced empathy toward others in their daily lives in clinical case studies. However, the empathic behavior of patients with damage to the vmPFC has not been measured experimentally in response to an empathy-eliciting event. This is important because characterizing the degree to which patients with damage to the vmPFC have lower empathic behavior will allow for the development of targeted interventions to improve patients' social skills and in turn will help family members to better understand their impairments so they can provide appropriate supports. For the first time, we induced empathy using an ecologically-valid empathy induction in neurological patients with damage to the vmPFC and measured their empathic emotional responses and behavior in real time. Eight neurological patients with focal damage to the vmPFC were compared to demographically-matched brain-damaged and healthy comparison participants. Patients with damage to the vmPFC gave less money in the empathy condition to a person who was suffering (a confederate) than comparison participants. This provides the first direct experimental evidence that the vmPFC is critical for empathic behavior toward individuals who are suffering.
Reward and social valuation deficits following ventromedial prefrontal damage.
Moretti, Laura; Dragone, Davide; di Pellegrino, Giuseppe
2009-01-01
Lesion and imaging studies have implicated the ventromedial prefrontal cortex (vmPFC) in economic decisions and social interactions, yet its exact functions remain unclear. Here, we investigated the hypothesis that the vmPFC represents the subjective value or desirability of future outcomes during social decision-making. Both vmPFC-damaged patients and control participants acted as the responder in a single-round ultimatum game. To test outcome valuation, we contrasted concrete, immediately available gains with abstract, future ones. To test social valuation, we contrasted interactions with a human partner and those involving a computer. We found that, compared to controls, vmPFC patients substantially reduced their acceptance rate of unfair offers from a human partner, but only when financial gains were presented as abstract amounts to be received later. When the gains were visible and readily available, the vmPFC patients' acceptance of unfair offers was normal. Furthermore, unlike controls, vmPFC patients did not distinguish between unfair offers from a human agent and those from a computerized opponent. We conclude that the vmPFC encodes the expected value of abstract, future goals in a common neural currency that takes into account both reward and social signals in order to optimize economic decision-making.
Cristofori, Irene; Viola, Vanda; Chau, Aileen; Zhong, Wanting; Krueger, Frank; Zamboni, Giovanna; Grafman, Jordan
2015-01-01
Given the determinant role of ventromedial prefrontal cortex (vmPFC) in valuation, we examined whether vmPFC lesions also modulate how people scale political beliefs. Patients with penetrating traumatic brain injury (pTBI; N = 102) and healthy controls (HCs; N = 31) were tested on the political belief task, where they rated 75 statements expressing political opinions concerned with welfare, economy, political involvement, civil rights, war and security. Each statement was rated for level of agreement and scaled along three dimensions: radicalism, individualism and conservatism. Voxel-based lesion-symptom mapping (VLSM) analysis showed that diminished scores for the radicalism dimension (i.e. statements were rated as less radical than the norms) were associated with lesions in bilateral vmPFC. After dividing the pTBI patients into three groups, according to lesion location (i.e. vmPFC, dorsolateral prefrontal cortex [dlPFC] and parietal cortex), we found that the vmPFC, but not the dlPFC, group had reduced radicalism scores compared with parietal and HC groups. These findings highlight the crucial role of the vmPFC in appropriately valuing political behaviors and may explain certain inappropriate social judgments observed in patients with vmPFC lesions. PMID:25656509
Ventromedial prefrontal damage reduces mind-wandering and biases its temporal focus
Bertossi, Elena
2016-01-01
Mind-wandering, a ubiquitous expression of humans' mental life, reflects a drift of attention away from the current task towards self-generated thoughts, and has been associated with activity in the brain's default network. To date, however, little is understood about the contribution of individual nodes of this network to mind-wandering. Here, we investigated whether the ventromedial prefrontal cortex (vmPFC) is critically involved in mind-wandering, by studying the propensity to mind-wander in patients with lesions to the vmPFC (vmPFC patients), control patients with lesions not involving the vmPFC, and healthy individuals. Participants performed three tasks varying in cognitive demands while their thoughts were periodically sampled, and completed a self-report scale of daydreaming in daily life. vmPFC patients exhibited reduced mind-wandering rates across tasks, and reported less frequent daydreaming, than both healthy and brain-damaged controls. vmPFC damage reduced off-task thoughts related to the future, while it promoted those about the present. These results indicate that vmPFC critically supports mind-wandering, possibly by helping to construct future-related scenarios and thoughts that have the potential to draw attention inward, away from the ongoing tasks. PMID:27445210
Augur, Isabel F; Wyckoff, Andrew R; Aston-Jones, Gary; Kalivas, Peter W; Peters, Jamie
2016-09-28
The ventromedial prefrontal cortex (vmPFC) has been shown to negatively regulate cocaine-seeking behavior, but the precise conditions by which vmPFC activity can be exploited to reduce cocaine relapse are currently unknown. We used viral-mediated gene transfer of designer receptors (DREADDs) to activate vmPFC neurons and examine the consequences on cocaine seeking in a rat self-administration model of relapse. Activation of vmPFC neurons with the Gq-DREADD reduced reinstatement of cocaine seeking elicited by cocaine-associated cues, but not by cocaine itself. We used a retro-DREADD approach to confine the Gq-DREADD to vmPFC neurons that project to the medial nucleus accumbens shell, confirming that these neurons are responsible for the decreased cue-induced reinstatement of cocaine seeking. The effects of vmPFC activation on cue-induced reinstatement depended on prior extinction training, consistent with the reported role of this structure in extinction memory. These data help define the conditions under which chemogenetic activation of extinction neural circuits can be exploited to reduce relapse triggered by reminder cues. The vmPFC projection to the nucleus accumbens shell is important for extinction of cocaine seeking, but its anatomical proximity to the relapse-promoting projection from the dorsomedial prefrontal cortex to the nucleus accumbens core makes it difficult to selectively enhance neuronal activity in one pathway or the other using traditional pharmacotherapy (e.g., systemically administered drugs). Viral-mediated gene delivery of an activating Gq-DREADD to vmPFC and/or vmPFC projections to the nucleus accumbens shell allows the chemogenetic exploitation of this extinction neural circuit to reduce cocaine seeking, and was particularly effective against relapse triggered by cocaine reminder cues. Copyright © 2016 the authors.
Wu, Hai-Bo; Yang, Shuai; Weng, Hai-Yan; Chen, Qian; Zhao, Xi-Long; Fu, Wen-Juan; Niu, Qin; Ping, Yi-Fang; Wang, Ji Ming; Zhang, Xia; Yao, Xiao-Hong; Bian, Xiu-Wu
2017-09-02
Antiangiogenesis with bevacizumab, an antibody against vascular endothelial growth factor (VEGF), has been used for devascularization to limit the growth of malignant glioma. However, the benefits are transient due to elusive mechanisms underlying resistance to the antiangiogenic therapy. Glioma stem cells (GSCs) are capable of forming vasculogenic mimicry (VM), an alternative microvascular circulation independent of VEGF-driven angiogenesis. Herein, we report that the formation of VM was promoted by bevacizumab-induced macroautophagy/autophagy in GSCs, which was associated with tumor resistance to antiangiogenic therapy. We established a 3-dimensional collagen scaffold to examine the formation of VM and autophagy by GSCs, and found that rapamycin increased the number of VM and enhanced KDR/VEGFR-2 phosphorylation. Treatment with chloroquine, or knockdown of the autophagy gene ATG5, inhibited the formation of VM and KDR phosphorylation in GSCs. Notably, neutralization of GSCs-produced VEGF with bevacizumab failed to recapitulate the effect of chloroquine treatment and ATG5 knockdown, suggesting that autophagy-promoted formation of VM was independent of tumor cell-derived VEGF. ROS was elevated when autophagy was induced in GSCs and activated KDR phosphorylation through the phosphoinositide 3-kinase (PI3K)-AKT pathway. A ROS inhibitor, N-acetylcysteine, abolished KDR phosphorylation and the formation of VM by GSCs. By examination of the specimens from 95 patients with glioblastoma, we found that ATG5 and p-KDR expression was strongly associated with the density of VM in tumors and poor clinical outcome. Our results thus demonstrate a crucial role of autophagy in the formation of VM by GSCs, which may serve as a therapeutic target in drug-resistant glioma.
de Souza, Leonardo Mendes Leal; Cabral, Hélio Veiga; de Oliveira, Liliam Fernandes; Vieira, Taian Martins
2018-04-01
Architectural differences along vastus medialis (VM) and between VM and vastus lateralis (VL) are considered functionally important for patellar tracking, knee joint stability and knee joint extension. Whether these functional differences are associated with differential activity of motor units between VM and VL is, however, unknown. In the present study, we therefore investigated neuroanatomical differences in the activity of motor units detected proximo-distally from VM and from the VL muscle. Nine healthy volunteers performed low-level isometric knee extension contractions (20% of their maximum voluntary contraction) following a trapezoidal trajectory. Surface electromyograms (EMGs) were recorded from VM proximal and distal regions and from VL using three linear adhesive arrays of eight electrodes. The firing rate and recruitment threshold of motor units decomposed from the EMGs were then compared among muscle regions. Results show that VL motor units reached lower mean firing rates than VM motor units, regardless of their position within VM (P < .040). No significant differences in firing rate were found between proximal and distal VM motor units (P = .997). Furthermore, no significant differences in recruitment threshold were observed for all motor units analysed (P = .108). Our findings suggest that the greater force-generating potential of VL, owing to its fibre arrangement, may account for the lower discharge rates observed for VL compared with motor units detected either proximally or distally in VM. Additionally, the present study opens new perspectives on the importance of considering muscle architecture in investigations of the neural aspects of motor behaviour. Copyright © 2017 Elsevier B.V. All rights reserved.
1001 Ways to run AutoDock Vina for virtual screening
NASA Astrophysics Data System (ADS)
Jaghoori, Mohammad Mahdi; Bleijlevens, Boris; Olabarriaga, Silvia D.
2016-03-01
Large-scale computing technologies have enabled high-throughput virtual screening involving thousands to millions of drug candidates. It is not trivial, however, for biochemical scientists to evaluate the technical alternatives and their implications for running such large experiments. Besides experience with the molecular docking tool itself, the scientist needs to learn how to run it on high-performance computing (HPC) infrastructures, and understand the impact of the choices made. Here, we review such considerations for a specific tool, AutoDock Vina, and use experimental data to illustrate the following points: (1) an additional level of parallelization increases virtual screening throughput on a multi-core machine; (2) capturing of the random seed is not enough (though necessary) for reproducibility on heterogeneous distributed computing systems; (3) the overall time spent on the screening of a ligand library can be improved by analysis of factors affecting execution time per ligand, including number of active torsions, heavy atoms and exhaustiveness. We also illustrate differences among four common HPC infrastructures: grid, Hadoop, small cluster and multi-core (virtual machine on the cloud). Our analysis shows that these platforms are suitable for screening experiments of different sizes. These considerations can guide scientists when choosing the best computing platform and set-up for their future large virtual screening experiments.
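Two of the points above, recording the random seed per docking and adding a process-level layer of parallelization on a multi-core machine, can be sketched as follows. The flags used (--receptor, --ligand, --out, --seed, --exhaustiveness, --cpu) are standard AutoDock Vina command-line options; the file names and seed value are placeholders, and the commands are only built here, not executed.

```python
def vina_commands(receptor, ligands, seed, exhaustiveness=8, cpu=1):
    """Build one AutoDock Vina command line per ligand.

    An explicit --seed per ligand is recorded for reproducibility
    (necessary, though, as the authors note, not sufficient on
    heterogeneous distributed systems).
    """
    cmds = []
    for i, ligand in enumerate(ligands):
        out = ligand.replace(".pdbqt", "_out.pdbqt")
        cmds.append(
            f"vina --receptor {receptor} --ligand {ligand} "
            f"--out {out} --seed {seed + i} "
            f"--exhaustiveness {exhaustiveness} --cpu {cpu}"
        )
    return cmds

def partition(cmds, n_workers):
    """Round-robin the commands over n_workers cores, one single-CPU
    Vina process per core: the extra level of parallelization that
    raises throughput on a multi-core machine."""
    return [cmds[w::n_workers] for w in range(n_workers)]

cmds = vina_commands("target.pdbqt",
                     [f"lig{i}.pdbqt" for i in range(6)], seed=20160301)
batches = partition(cmds, n_workers=3)
```

Each batch could then be run by a separate worker process (e.g. via subprocess or a batch system), keeping all cores busy with one Vina instance each.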
Virtual reality hardware and graphic display options for brain-machine interfaces
Marathe, Amar R.; Carey, Holle L.; Taylor, Dawn M.
2009-01-01
Virtual reality hardware and graphic displays are reviewed here as a development environment for brain-machine interfaces (BMIs). Two desktop stereoscopic monitors and one 2D monitor were compared in a visual depth discrimination task and in a 3D target-matching task where able-bodied individuals used actual hand movements to match a virtual hand to different target hands. Three graphic representations of the hand were compared: a plain sphere, a sphere attached to the fingertip of a realistic hand and arm, and a stylized pacman-like hand. Several subjects had great difficulty using either stereo monitor for depth perception when perspective size cues were removed. A mismatch in stereo and size cues generated inappropriate depth illusions. This phenomenon has implications for choosing target and virtual hand sizes in BMI experiments. Target matching accuracy was about as good with the 2D monitor as with either 3D monitor. However, users achieved this accuracy by exploring the boundaries of the hand in the target with carefully controlled movements. This method of determining relative depth may not be possible in BMI experiments if movement control is more limited. Intuitive depth cues, such as including a virtual arm, can significantly improve depth perception accuracy with or without stereo viewing. PMID:18006069
Toxicity and medical countermeasure studies on the organophosphorus nerve agents VM and VX
Rice, Helen; Dalton, Christopher H.; Price, Matthew E.; Graham, Stuart J.; Green, A. Christopher; Jenner, John; Groombridge, Helen J.; Timperley, Christopher M.
2015-01-01
To support the effort to eliminate the Syrian Arab Republic chemical weapons stockpile safely, there was a requirement to provide scientific advice based on experimentally derived information on both toxicity and medical countermeasures (MedCM) in the event of exposure to VM, VX or VM–VX mixtures. Complementary in vitro and in vivo studies were undertaken to inform that advice. The penetration rate of neat VM was not significantly different from that of neat VX, through either guinea pig or pig skin in vitro. The presence of VX did not affect the penetration rate of VM in mixtures of various proportions. A lethal dose of VM was approximately twice that of VX in guinea pigs poisoned via the percutaneous route. There was no interaction in mixed agent solutions which altered the in vivo toxicity of the agents. Percutaneous poisoning by VM responded to treatment with standard MedCM, although complete protection was not achieved. PMID:27547080
Reduced prefrontal connectivity in psychopathy.
Motzkin, Julian C; Newman, Joseph P; Kiehl, Kent A; Koenigs, Michael
2011-11-30
Linking psychopathy to a specific brain abnormality could have significant clinical, legal, and scientific implications. Theories on the neurobiological basis of the disorder typically propose dysfunction in a circuit involving ventromedial prefrontal cortex (vmPFC). However, to date there is limited brain imaging data to directly test whether psychopathy may indeed be associated with any structural or functional abnormality within this brain area. In this study, we employ two complementary imaging techniques to assess the structural and functional connectivity of vmPFC in psychopathic and non-psychopathic criminals. Using diffusion tensor imaging, we show that psychopathy is associated with reduced structural integrity in the right uncinate fasciculus, the primary white matter connection between vmPFC and anterior temporal lobe. Using functional magnetic resonance imaging, we show that psychopathy is associated with reduced functional connectivity between vmPFC and amygdala as well as between vmPFC and medial parietal cortex. Together, these data converge to implicate diminished vmPFC connectivity as a characteristic neurobiological feature of psychopathy.
General-Purpose Front End for Real-Time Data Processing
NASA Technical Reports Server (NTRS)
James, Mark
2007-01-01
FRONTIER is a computer program that functions as a front end for any of a variety of other software of both the artificial intelligence (AI) and conventional data-processing types. As used here, "front end" signifies interface software needed for acquiring and preprocessing data and making the data available for analysis by the other software. FRONTIER is reusable in that it can be rapidly tailored to any such other software with minimum effort. Each component of FRONTIER is programmable and is executed in an embedded virtual machine. Each component can be reconfigured during execution. The virtual-machine implementation makes FRONTIER independent of the type of computing hardware on which it is executed.
ERIC Educational Resources Information Center
Liao, Yuan
2011-01-01
The virtualization of computing resources, as represented by the sustained growth of cloud computing, continues to thrive. Information Technology departments are building their private clouds due to the perception of significant cost savings by managing all physical computing resources from a single point and assigning them to applications or…
New Virtual Field Trips. Revised Edition.
ERIC Educational Resources Information Center
Cooper, Gail; Cooper, Garry
This book is an annotated guidebook, arranged by subject matter, of World Wide Web sites for K-12 students. The following chapters are included: (1) Virtual Time Machine (i.e., sites that cover topics in world history); (2) Tour the World (i.e., sites that include information about countries); (3) Outer Space; (4) The Great Outdoors; (5) Aquatic…
Davidson, Robert L; Weber, Ralf J M; Liu, Haoyu; Sharma-Oates, Archana; Viant, Mark R
2016-01-01
Metabolomics is increasingly recognized as an invaluable tool in the biological, medical and environmental sciences yet lags behind the methodological maturity of other omics fields. To achieve its full potential, including the integration of multiple omics modalities, the accessibility, standardization and reproducibility of computational metabolomics tools must be improved significantly. Here we present our end-to-end mass spectrometry metabolomics workflow in the widely used platform, Galaxy. Named Galaxy-M, our workflow has been developed for both direct infusion mass spectrometry (DIMS) and liquid chromatography mass spectrometry (LC-MS) metabolomics. The range of tools presented spans from processing of raw data, e.g. peak picking and alignment, through data cleansing, e.g. missing value imputation, to preparation for statistical analysis, e.g. normalization and scaling, and principal components analysis (PCA) with associated statistical evaluation. We demonstrate the ease of using these Galaxy workflows via the analysis of DIMS and LC-MS datasets, and provide PCA scores and associated statistics to help other users to ensure that they can accurately repeat the processing and analysis of these two datasets. Galaxy and data are all provided pre-installed in a virtual machine (VM) that can be downloaded from the GigaDB repository. Additionally, source code, executables and installation instructions are available from GitHub. The Galaxy platform has enabled us to produce an easily accessible and reproducible computational metabolomics workflow. More tools could be added by the community to expand its functionality. We recommend that Galaxy-M workflow files are included within the supplementary information of publications, enabling metabolomics studies to achieve greater reproducibility.
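Two of the data-cleansing and preparation steps named above, missing-value imputation and scaling, can be sketched in a few lines. This is an illustrative reimplementation of the generic techniques, not Galaxy-M code; the column-mean imputation rule and autoscaling (centre to mean 0, unit variance) are common defaults assumed here.

```python
import statistics

def impute_column_means(matrix):
    """Replace None (missing) entries with the mean of the observed
    values in the same column."""
    cols = list(zip(*matrix))
    means = [statistics.fmean(v for v in col if v is not None) for col in cols]
    return [[means[j] if v is None else v for j, v in enumerate(row)]
            for row in matrix]

def autoscale(matrix):
    """Centre each column and divide by its sample standard deviation,
    so every feature contributes comparably to a subsequent PCA."""
    cols = list(zip(*matrix))
    scaled = []
    for col in cols:
        m, s = statistics.fmean(col), statistics.stdev(col)
        scaled.append([(v - m) / s for v in col])
    return [list(row) for row in zip(*scaled)]
```

Running such steps inside a fixed VM image, as the authors do, pins the library versions and makes the numerical results repeatable.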
Virtual Factory Framework for Supporting Production Planning and Control.
Kibira, Deogratias; Shao, Guodong
2017-01-01
Developing optimal production plans for smart manufacturing systems is challenging because shop floor events change dynamically. A virtual factory incorporating engineering tools, simulation, and optimization generates and communicates performance data to guide wise decision making for different control levels. This paper describes such a platform specifically for production planning. We also discuss verification and validation of the constituent models. A case study of a machine shop is used to demonstrate data generation for production planning in a virtual factory.
Williams, DeWayne P; Thayer, Julian F; Koenig, Julian
2016-12-01
Intraindividual reaction time variability (IIV), defined as the variability in trial-to-trial response times, is thought to serve as an index of central nervous system function. As such, greater IIV reflects both poorer executive brain function and cognitive control, in addition to lapses in attention. Resting-state vagally mediated heart rate variability (vmHRV), a psychophysiological index of self-regulatory abilities, has been linked with executive brain function and cognitive control such that those with greater resting-state vmHRV often perform better on cognitive tasks. However, research has yet to investigate the direct relationship between resting vmHRV and task IIV. The present study sought to examine this relationship in a sample of 104 young and healthy participants who first completed a 5-min resting-baseline period during which resting-state vmHRV was assessed. Participants then completed an attentional (target detection) task, where reaction time, accuracy, and trial-to-trial IIV were obtained. Results showed resting vmHRV to be significantly related to IIV, such that lower resting vmHRV predicted higher IIV on the task, even when controlling for several covariates (including mean reaction time and accuracy). Overall, our results provide further evidence for the link between resting vmHRV and cognitive control, and extend these notions to the domain of lapses in attention, as indexed by IIV. Implications and recommendations for future research on resting vmHRV and cognition are discussed. © 2016 Society for Psychophysiological Research.
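The two indices related in the study above can be made concrete with a short sketch. IIV is defined in the abstract as trial-to-trial response-time variability (here taken as the sample standard deviation); vmHRV is approximated here by RMSSD over successive RR intervals, one common time-domain index. The toy values and the choice of SD/RMSSD as estimators are assumptions for illustration.

```python
import math
import statistics

def iiv(reaction_times_ms):
    """Intraindividual variability: SD of trial-to-trial response times."""
    return statistics.stdev(reaction_times_ms)

def rmssd(rr_intervals_ms):
    """Root mean square of successive RR-interval differences,
    a vagally mediated HRV index."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(statistics.fmean(d * d for d in diffs))
```

The study's finding corresponds to a negative association between these two quantities: lower resting RMSSD predicting a higher reaction-time SD on the task.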
Blankenburg, Robert; Hackert, Katarzyna; Wurster, Sebastian; Deenen, René; Seidman, J G; Seidman, Christine E; Lohse, Martin J; Schmitt, Joachim P
2014-07-07
Approximately 40% of hypertrophic cardiomyopathy (HCM) is caused by heterozygous missense mutations in β-cardiac myosin heavy chain (β-MHC). Associating disease phenotype with mutation is confounded by extensive background genetic and lifestyle/environmental differences between subjects, even from the same family. Our aim was to characterize, in mice, disease caused by the β-cardiac myosin heavy chain Val606Met substitution (VM), which has been identified in several HCM families with wide variation in clinical outcomes. Unlike 2 mouse lines bearing the malignant myosin mutations Arg453Cys (RC/+) or Arg719Trp (RW/+), VM/+ mice with an identical inbred genetic background lacked hallmarks of HCM such as left ventricular hypertrophy, disarray of myofibers, and interstitial fibrosis. Even homozygous VM/VM mice were indistinguishable from wild-type animals, whereas RC/RC- and RW/RW-mutant mice died within 9 days after birth. However, hypertrophic effects of the VM mutation were observed both in mice treated with cyclosporine, a known stimulator of the HCM response, and in compound VM/RC heterozygous mice, which developed a severe HCM phenotype. In contrast to all heterozygous mutants, both systolic and diastolic function of VM/RC hearts was severely impaired already before the onset of cardiac remodeling. The VM mutation per se causes mild HCM-related phenotypes; however, in combination with other HCM activators it exacerbates the HCM phenotype. Double-mutant mice are suitable for assessing the severity of benign mutations. © 2014 American Heart Association, Inc.
Grob, Karl; Manestar, Mirjana; Filgueira, Luis; Kuster, Markus S; Gilbey, Helen; Ackland, Timothy
2018-03-01
Although the vastus medialis (VM) is closely associated with the vastus intermedius (VI), there is a lack of data regarding their functional relationship. The purpose of this study was to investigate the anatomical interaction between the VM and VI with regard to their origins, insertions, innervation and function within the extensor apparatus of the knee joint. Eighteen human cadaveric lower limbs were investigated using macro-dissection techniques. Six limbs were cut transversely in the middle third of the thigh. The mode of origin, insertion and nerve supply of the extensor apparatus of the knee joint were studied. The architecture of the VM and VI was examined in detail, as was their anatomical interaction and connective tissue linkage to the adjacent anatomical structures. The VM originated medially from a broad hammock-like structure. The attachment site of the VM always spanned over a long distance between: (1) patella, (2) rectus femoris tendon and (3) aponeurosis of the VI, with the insertion into the VI being the largest. VM units were inserted twice: once on the anterior and once on the posterior side of the VI. The VI consists of a complex multi-layered structure. The layers of the medial VI aponeurosis fused with the aponeuroses of the tensor vastus intermedius and vastus lateralis. Together, they form the two-layered intermediate layer of the quadriceps tendon. The VM and medial parts of the VI were innervated by the same medial division of the femoral nerve. The VM consists of multiple muscle units inserting into the entire VI. Together, they build a potential functional muscular complex. Therefore, the VM acts as an indirect extensor of the knee joint regulating and adjusting the length of the extensor apparatus throughout the entire range of motion.
It is of clinical importance that, besides the VM, substantial parts of the VI directly contribute to the medial pull on the patella and help to maintain medial tracking of the patella during knee extension. The interaction between the VM and VI, with responsibility for the extension of the knee joint and influence on the patellofemoral function, leads readily to an understanding of common clinical problems found at the knee joint as it attempts to meet contradictory demands for both mobility and stability. Surgery or trauma in the anteromedial aspect of the quadriceps muscle group might alter a delicate interplay between the VM and VI. This would affect the extensor apparatus as a whole.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pais Pitta de Lacerda Ruivo, Tiago; Bernabeu Altayo, Gerard; Garzoglio, Gabriele
2014-11-11
It has been widely accepted that software virtualization has a large negative impact on high-performance computing (HPC) application performance. This work explores the potential use of Infiniband hardware virtualization in an OpenNebula cloud towards the efficient support of MPI-based workloads. We have implemented, deployed, and tested an Infiniband network on the FermiCloud private Infrastructure-as-a-Service (IaaS) cloud. To avoid software virtualization and minimize the virtualization overhead, we employed a technique called Single Root Input/Output Virtualization (SR-IOV). Our solution spanned modifications to the Linux hypervisor as well as to the OpenNebula manager. We evaluated the performance of the hardware virtualization on up to 56 virtual machines connected by up to 8 DDR Infiniband network links, with micro-benchmarks (latency and bandwidth) as well as with an MPI-intensive application (the HPL Linpack benchmark).
Ventromedial prefrontal damage reduces mind-wandering and biases its temporal focus.
Bertossi, Elena; Ciaramelli, Elisa
2016-11-01
Mind-wandering, a ubiquitous expression of humans' mental life, reflects a drift of attention away from the current task towards self-generated thoughts, and has been associated with activity in the brain's default network. To date, however, little is understood about the contribution of individual nodes of this network to mind-wandering. Here, we investigated whether the ventromedial prefrontal cortex (vmPFC) is critically involved in mind-wandering, by studying the propensity to mind-wander in patients with lesions to the vmPFC (vmPFC patients), control patients with lesions not involving the vmPFC, and healthy individuals. Participants performed three tasks varying in cognitive demands while their thoughts were periodically sampled, and completed a self-report scale of daydreaming in daily life. vmPFC patients exhibited reduced mind-wandering rates across tasks, and reported less frequent daydreaming, than both healthy and brain-damaged controls. vmPFC damage reduced off-task thoughts related to the future, while it promoted those about the present. These results indicate that vmPFC critically supports mind-wandering, possibly by helping to construct future-related scenarios and thoughts that have the potential to draw attention inward, away from ongoing tasks. © The Author (2016). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Biological and cognitive underpinnings of religious fundamentalism.
Zhong, Wanting; Cristofori, Irene; Bulbulia, Joseph; Krueger, Frank; Grafman, Jordan
2017-06-01
Beliefs profoundly affect people's lives, but their cognitive and neural pathways are poorly understood. Although previous research has identified the ventromedial prefrontal cortex (vmPFC) as critical to representing religious beliefs, the means by which vmPFC enables religious belief is uncertain. We hypothesized that the vmPFC represents diverse religious beliefs and that a vmPFC lesion would be associated with religious fundamentalism, or the narrowing of religious beliefs. To test this prediction, we assessed religious adherence with a widely-used religious fundamentalism scale in a large sample of 119 patients with penetrating traumatic brain injury (pTBI). If the vmPFC is crucial to modulating diverse personal religious beliefs, we predicted that pTBI patients with lesions to the vmPFC would exhibit greater fundamentalism, and that this would be modulated by cognitive flexibility and trait openness. Instead, we found that participants with dorsolateral prefrontal cortex (dlPFC) lesions have fundamentalist beliefs similar to patients with vmPFC lesions and that the effect of a dlPFC lesion on fundamentalism was significantly mediated by decreased cognitive flexibility and openness. These findings indicate that cognitive flexibility and openness are necessary for flexible and adaptive religious commitment, and that such diversity of religious thought is dependent on dlPFC functionality. Copyright © 2017 Elsevier Ltd. All rights reserved.
Lockwood Estrin, G; Kyriakopoulou, V; Makropoulos, A; Ball, G; Kuhendran, L; Chew, A; Hagberg, B; Martinez-Biarge, M; Allsop, J; Fox, M; Counsell, S J; Rutherford, M A
2016-01-01
Ventriculomegaly (VM) is the most common central nervous system abnormality diagnosed antenatally, and is associated with developmental delay in childhood. We tested the hypothesis that antenatally diagnosed isolated VM represents a biological marker for altered white matter (WM) and cortical grey matter (GM) development in neonates. 25 controls and 21 neonates with antenatally diagnosed isolated VM had magnetic resonance imaging at 41.97(± 2.94) and 45.34(± 2.14) weeks respectively. T2-weighted scans were segmented for volumetric analyses of the lateral ventricles, WM and cortical GM. Diffusion tensor imaging (DTI) measures were assessed using voxel-wise methods in WM and cortical GM; comparisons were made between cohorts. Ventricular and cortical GM volumes were increased, and WM relative volume was reduced in the VM group. Regional decreases in fractional anisotropy (FA) and increases in mean diffusivity (MD) were demonstrated in WM of the VM group compared to controls. No differences in cortical DTI metrics were observed. At 2 years, neurodevelopmental delays, especially in language, were observed in 6/12 cases in the VM cohort. WM alterations in isolated VM cases may be consistent with abnormal development of WM tracts involved in language and cognition. Alterations in WM FA and MD may represent neural correlates for later neurodevelopmental deficits.
Ciaramelli, Elisa; Braghittoni, Davide; di Pellegrino, Giuseppe
2012-11-01
Moral judgment involves considering not only the outcome of an action but also the intention with which it was pursued. Previous functional magnetic resonance imaging (fMRI) research has shown that integrating outcome and belief information for moral judgment relies on a brain network including temporo-parietal, precuneus, and medial prefrontal regions. Here, we investigated whether the ventromedial prefrontal cortex (vmPFC) plays a crucial role in this process. Patients with lesions in vmPFC (vmPFC patients), and brain-damaged and healthy controls considered scenarios in which the protagonist caused intentional harm (negative-outcome, negative-belief), accidental harm (negative-outcome, neutral-belief), attempted harm (neutral-outcome, negative-belief), or no harm (neutral-outcome, neutral-belief), and rated the moral permissibility of the protagonists' behavior. All groups responded similarly to scenarios involving intentional harm and no harm. vmPFC patients, however, judged attempted harm as more permissible, and accidental harm as less permissible, than the control groups. For vmPFC patients, outcome information, rather than belief information, shaped moral judgment. The results indicate that vmPFC is necessary for integrating outcome and belief information during moral reasoning. During moral judgment vmPFC may mediate intentions' understanding, and overriding of prepotent responses to salient outcomes.
Cristofori, Irene; Viola, Vanda; Chau, Aileen; Zhong, Wanting; Krueger, Frank; Zamboni, Giovanna; Grafman, Jordan
2015-08-01
Given the determinant role of ventromedial prefrontal cortex (vmPFC) in valuation, we examined whether vmPFC lesions also modulate how people scale political beliefs. Patients with penetrating traumatic brain injury (pTBI; N = 102) and healthy controls (HCs; N = 31) were tested on the political belief task, where they rated 75 statements expressing political opinions concerned with welfare, economy, political involvement, civil rights, war and security. Each statement was rated for level of agreement and scaled along three dimensions: radicalism, individualism and conservatism. Voxel-based lesion-symptom mapping (VLSM) analysis showed that diminished scores for the radicalism dimension (i.e. statements were rated as less radical than the norms) were associated with lesions in bilateral vmPFC. After dividing the pTBI patients into three groups, according to lesion location (i.e. vmPFC, dorsolateral prefrontal cortex [dlPFC] and parietal cortex), we found that the vmPFC, but not the dlPFC, group had reduced radicalism scores compared with parietal and HC groups. These findings highlight the crucial role of the vmPFC in appropriately valuing political behaviors and may explain certain inappropriate social judgments observed in patients with vmPFC lesions. © The Author (2015). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Virtual Network Configuration Management System for Data Center Operations and Management
NASA Astrophysics Data System (ADS)
Okita, Hideki; Yoshizawa, Masahiro; Uehara, Keitaro; Mizuno, Kazuhiko; Tarui, Toshiaki; Naono, Ken
Virtualization technologies are widely deployed in data centers to improve system utilization. However, they increase the workload for operators, who have to manage the structure of virtual networks in data centers. A virtual-network management system which automates the integration of the configurations of the virtual networks is provided. The proposed system collects the configurations from server virtualization platforms and VLAN-supported switches, and integrates these configurations according to a newly developed XML-based management information model for virtual-network configurations. Preliminary evaluations show that the proposed system helps operators by reducing, by about 40 percent, the time to acquire the configurations from devices and to correct inconsistencies in the operators' configuration management database. They also show that the proposed system has excellent scalability: the system takes less than 20 minutes to acquire the virtual-network configurations from a large-scale network that includes 300 virtual machines. These results imply that the proposed system is effective for improving the configuration management process for virtual networks in data centers.
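The integration step described above, collecting VLAN assignments from two kinds of sources and reconciling them, can be sketched as a toy merge. The dictionary layout, names, and conflict rule below are illustrative assumptions, not the paper's XML-based information model.

```python
def integrate(hypervisor_view, switch_view):
    """Merge two {vm_name: vlan_id} maps, one reported by the server
    virtualization platform and one by VLAN-supported switches.
    Returns the merged view plus the VMs on which the sources disagree,
    i.e. inconsistencies to correct in the configuration database."""
    merged, conflicts = {}, []
    for vm in sorted(set(hypervisor_view) | set(switch_view)):
        h, s = hypervisor_view.get(vm), switch_view.get(vm)
        if h is not None and s is not None and h != s:
            conflicts.append(vm)  # sources disagree: flag for the operator
        merged[vm] = h if h is not None else s
    return merged, conflicts
```

In the real system this reconciliation runs automatically against live devices, which is where the reported 40-percent reduction in operator effort comes from.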
2014-06-01
• Motion capture data used to determine position and orientation of a Soldier's head, turret and the M2 machine gun
• Controlling and acquiring user/weapon...data from the M2 simulation machine gun
• Controlling paintball guns used to fire at the GPK during an experimental run
• Sending and receiving TCP...
• Mounted, Armor/Cavalry, Combat Engineers, Field Artillery Cannon Crewmember, or MP duty assignment – Currently M2 .50 Caliber Machine Gun qualified
NASA Technical Reports Server (NTRS)
Begault, Durand R.; Null, Cynthia H. (Technical Monitor)
1997-01-01
This talk will overview the basic technologies related to the creation of virtual acoustic images, and the potential of including spatial auditory displays in human-machine interfaces. Research into the perceptual error inherent in both natural and virtual spatial hearing is reviewed, since the formation of improved technologies is tied to psychoacoustic research. This includes a discussion of Head Related Transfer Function (HRTF) measurement techniques (the HRTF provides important perceptual cues within a virtual acoustic display). Many commercial applications of virtual acoustics have so far focused on games and entertainment; in this review, other types of applications are examined, including aeronautic safety, voice communications, virtual reality, and room acoustic simulation. In particular, it is argued that realistic simulation is optimized within a virtual acoustic display when head motion and reverberation cues are included within a perceptual model.
Wilkins, Leanne K; Girard, Todd A; Konishi, Kyoko; King, Matthew; Herdman, Katherine A; King, Jelena; Christensen, Bruce; Bohbot, Veronique D
2013-11-01
Spatial memory is impaired among persons with schizophrenia (SCZ). However, different strategies may be used to solve most spatial memory and navigation tasks. This study investigated the hypothesis that participants with schizophrenia-spectrum disorders (SSD) would demonstrate differential impairment during acquisition and retrieval of target locations when using a hippocampal-dependent spatial strategy, but not a response strategy, which is more associated with caudate function. Healthy control (CON) and SSD participants were tested using the 4-on-8 virtual maze (4/8VM), a virtual navigation task designed to differentiate between participants' use of spatial and response strategies. Consistent with our predictions, SSD participants demonstrated a differential deficit such that those who navigated using a spatial strategy made more errors and took longer to locate targets. In contrast, SSD participants who spontaneously used a response strategy performed as well as CON participants. The differential pattern of spatial-memory impairment in SSD provides only indirect support for underlying hippocampal dysfunction. These findings emphasize the importance of considering individual strategies when investigating SSD-related memory and navigation performance. Future cognitive intervention protocols may harness SSD participants' intact ability to navigate using a response strategy and/or train the deficient ability to navigate using a spatial strategy to improve navigation and memory abilities in participants with SSD. Copyright © 2013 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Yu, Nan; Cao, Yu
2017-05-01
Traffic demand elasticity is proposed as a new indicator in this study to measure the feasibility of high-speed railway construction in a more intuitive way. Matrix Completion (MC) and the Semi-Supervised Support Vector Machine (S3VM) are used to measure and predict this index on the basis of a satisfaction survey of 326 inter-city railways in China. It is demonstrated that, instead of calculating the economic benefits brought by the construction of a high-speed railway, this indicator can identify the railways most urgently in need of improvement by directly evaluating existing railway facilities from the perspective of transportation service improvement requirements.
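The matrix-completion step can be illustrated with a generic hard-impute iteration. This is not the authors' implementation, only a sketch under the assumption that the survey responses form an approximately low-rank matrix with some unobserved entries.

```python
import numpy as np

def complete_matrix(M, observed, rank=1, iters=300):
    """Hard-impute matrix completion: alternately project onto the set of
    rank-`rank` matrices (via truncated SVD) and re-impose observed entries."""
    X = np.where(observed, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s[rank:] = 0.0                       # keep only the top singular values
        low_rank = (U * s) @ Vt
        X = np.where(observed, M, low_rank)  # fill unobserved entries only
    return X

# Toy example: a rank-1 "satisfaction" matrix with two hidden entries.
true = np.outer([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0])
mask = np.ones_like(true, dtype=bool)
mask[0, 0] = mask[2, 1] = False              # pretend these were never surveyed
filled = complete_matrix(true, mask, rank=1)
```

The completed matrix can then supply feature vectors for a downstream semi-supervised classifier such as an S3VM.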
Enhanced emotional responses during social coordination with a virtual partner
Dumas, Guillaume; Kelso, J.A. Scott; Tognoli, Emmanuelle
2016-01-01
Emotion and motion, though seldom studied in tandem, are complementary aspects of social experience. This study investigates variations in emotional responses during movement coordination between a human and a Virtual Partner (VP), an agent whose virtual finger movements are driven by the Haken-Kelso-Bunz (HKB) equations of Coordination Dynamics. Twenty-one subjects were instructed to coordinate finger movements with the VP in either inphase or antiphase patterns. By adjusting model parameters, we manipulated the ‘intention’ of VP as cooperative or competitive with the human's instructed goal. Skin potential responses (SPR) were recorded to quantify the intensity of emotional response. At the end of each trial, subjects rated the VP's intention and whether they thought their partner was another human being or a machine. We found greater emotional responses when subjects reported that their partner was human and when coordination was stable. That emotional responses are strongly influenced by dynamic features of the VP's behavior, has implications for mental health, brain disorders and the design of socially cooperative machines. PMID:27094374
A psychophysiological investigation of moral judgment after ventromedial prefrontal damage.
Moretto, Giovanna; Làdavas, Elisabetta; Mattioli, Flavia; di Pellegrino, Giuseppe
2010-08-01
Converging evidence suggests that emotion processing mediated by ventromedial prefrontal cortex (vmPFC) is necessary to prevent personal moral violations. In moral dilemmas, for example, patients with lesions in vmPFC are more willing than normal controls to approve harmful actions that maximize good consequences (e.g., utilitarian moral judgments). Yet, none of the existing studies has measured subjects' emotional responses while they considered moral dilemmas. Therefore, a direct link between emotion processing and moral judgment is still lacking. Here, vmPFC patients and control participants considered moral dilemmas while skin conductance response (SCR) was measured as a somatic index of affective state. Replicating previous evidence, vmPFC patients approved more personal moral violations than did controls. Critically, we found that, unlike control participants, vmPFC patients failed to generate SCRs before endorsing personal moral violations. In addition, such anticipatory SCRs correlated negatively with the frequency of utilitarian judgments in normal participants. These findings provide direct support to the hypothesis that the vmPFC promotes moral behavior by mediating the anticipation of the emotional consequences of personal moral violations.
Sex differences in the functional lateralization of emotion and decision-making in the human brain
Reber, Justin; Tranel, Daniel
2016-01-01
Dating back to the case of Phineas Gage, decades of neuropsychological research have shown that the ventromedial prefrontal cortex (vmPFC) is crucial to both real-world social functioning and abstract decision-making in the laboratory (e.g., Bechara et al., 1994; Damasio et al., 1994; Stuss et al., 1983). Previous research has found that the relationship between the laterality of individuals’ vmPFC lesions and neuropsychological performance is moderated by their sex, whereby there are more severe social, emotional, and decision-making impairments in men with right-sided vmPFC lesions and in women with left-sided vmPFC lesions (Tranel et al., 2005; Sutterer et al., 2015). We conducted a selective review of studies examining the effect of vmPFC lesions on emotion and decision-making, and found further evidence of sex-related differences in the lateralization of function not only in the vmPFC, but also in other neurological structures associated with decision-making and emotion. Our review suggests that both sex and laterality effects warrant more careful consideration in the scientific literature. PMID:27870462
Mire, Chad E.; Satterfield, Benjamin A.; Geisbert, Joan B.; Agans, Krystle N.; Borisevich, Viktoriya; Yan, Lianying; Chan, Yee-Peng; Cross, Robert W.; Fenton, Karla A.; Broder, Christopher C.; Geisbert, Thomas W.
2016-01-01
Nipah virus (NiV) is a paramyxovirus that causes severe disease in humans and animals. There are two distinct strains of NiV, Malaysia (NiVM) and Bangladesh (NiVB). Differences in transmission patterns and mortality rates suggest that NiVB may be more pathogenic than NiVM. To investigate pathogenic differences between strains, 4 African green monkeys (AGM) were exposed to NiVM and 4 AGMs were exposed to NiVB. While NiVB was uniformly lethal, only 50% of NiVM-infected animals succumbed to infection. Histopathology of lungs and spleens from NiVB-infected AGMs was significantly more severe than NiVM-infected animals. Importantly, a second study utilizing 11 AGMs showed that the therapeutic window for human monoclonal antibody m102.4, previously shown to rescue AGMs from NiVM infection, was much shorter in NiVB-infected AGMs. Together, these data show that NiVB is more pathogenic in AGMs under identical experimental conditions and suggests that postexposure treatments may need to be NiV strain specific for optimal efficacy. PMID:27484128
Introduction of Virtualization Technology to Multi-Process Model Checking
NASA Technical Reports Server (NTRS)
Leungwattanakit, Watcharin; Artho, Cyrille; Hagiya, Masami; Tanabe, Yoshinori; Yamamoto, Mitsuharu
2009-01-01
Model checkers find failures in software by exploring every possible execution schedule. Java PathFinder (JPF), a Java model checker, has been extended recently to cover networked applications by caching data transferred in a communication channel. A target process is executed by JPF, whereas its peer process runs on a regular virtual machine outside. However, non-deterministic target programs may produce different output data in each schedule, causing the cache to restart the peer process to handle the different set of data. Virtualization tools could help us restore previous states of peers, eliminating peer restart. This paper proposes the application of virtualization technology to networked model checking, concentrating on JPF.
Achieving High Resolution Timer Events in Virtualized Environment.
Adamczyk, Blazej; Chydzinski, Andrzej
2015-01-01
Virtual Machine Monitors (VMMs) have become popular in different application areas. Some applications may require generating timer events with high resolution and precision; this can be challenging due to the complexity of VMMs. In this paper we focus on the timer functionality provided by five different VMMs: Xen, KVM, QEMU, VirtualBox and VMware. First, we evaluate the resolution and precision of their timer events. The resolutions and precisions provided turn out to be far too low for some applications (e.g. networking applications with quality-of-service requirements). Then, using Xen virtualization, we demonstrate an improved timer design that greatly enhances both the resolution and precision of the achieved timer events.
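The kind of measurement the paper performs can be approximated from userspace. The following is a hedged sketch, not the authors' benchmark harness: it estimates the observable resolution of the monotonic clock and the precision (mean overshoot) of a requested sleep, both of which typically degrade inside a VM guest.

```python
import time

def clock_resolution(samples=100000):
    """Smallest positive increment ever observed on the monotonic clock."""
    smallest = float("inf")
    prev = time.perf_counter()
    for _ in range(samples):
        now = time.perf_counter()
        if now > prev:
            smallest = min(smallest, now - prev)
        prev = now
    return smallest

def sleep_overshoot(target=0.001, trials=20):
    """Mean extra delay beyond a requested `target`-second sleep; on a VM
    guest this overshoot is typically much larger than on bare metal."""
    total = 0.0
    for _ in range(trials):
        t0 = time.perf_counter()
        time.sleep(target)
        total += (time.perf_counter() - t0) - target
    return total / trials
```

Comparing these numbers on the host and inside guests of different VMMs gives a first-order view of the resolution and precision gap the paper quantifies.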
Lee, Eun-Ju; Kim, Dong-Ho
2016-11-01
This study aimed to evaluate the feasibility and safety of vaginal morcellation (VM) through the posterior cul-de-sac (PCDS) using an electromechanical morcellator and to compare the perioperative outcomes of VM with those of abdominal morcellation (AM) for removal of a single myoma. The characteristics of 245 consecutive patients who had undergone VM after laparoscopic myomectomy or subtotal hysterectomy were summarized. A retrospective, matched, case-control study was performed; 64 patients had a myoma weighing 100 g or more. Cases were matched with controls (ratio 1:2), who had undergone AM, by age, body mass index, specimen weight, surgical type, and surgeon. Body image questionnaires were used to assess the cosmetic outcome. Medians were analyzed using the Mann-Whitney U test. Differences between means were assessed using Student's t test. Dichotomous groupings were analyzed using either the Chi-squared test or Fisher's exact test, as appropriate. All 245 patients underwent VM without complications. The mean weight of the specimens was 197.2 g (range 78.5-1477 g), and the mean duration of morcellation was 13.0 min (range 2.0-45.0 min). Two hours after surgery, the visual analog scale (VAS) score was significantly lower in the VM group than in the AM group (P = 0.03). Moreover, the morcellator used was significantly larger in the VM group (P < 0.001), and morcellation duration was significantly shorter in the VM group (P < 0.001). Finally, cosmetic outcome was significantly better in the VM group (P < 0.02). The two groups did not differ significantly in terms of hospitalization duration, surgery duration, or VAS score 24 and 47 h postoperatively. VM through the PCDS using an electromechanical morcellator is a safe and feasible technique for surgical excision. The benefits of the procedure over AM are reduced immediate postoperative pain, shorter morcellation time, and better cosmesis.
[Electromagnetic fields in the vicinity of DECT cordless telephones and mobile phones].
Mamrot, Paweł; Mariańska, Magda; Aniołczyk, Halina; Politański, Piotr
2015-01-01
Mobile telephones are among the most frequently used personal devices. In their surroundings they produce an electromagnetic field (EMF) whose exposure range covers not only users but also nearby bystanders. The aim of the investigations and EMF measurements in the vicinity of phones was to identify electric field levels under various working modes. Twelve sets of DECT (digital enhanced cordless telecommunications) cordless phones (12 base units and 15 handsets), 21 mobile telephones from different manufacturers, and 16 smartphones running various applications (including multimedia) were measured under conditions of daily use in living rooms. Measurements were taken using the point method at predetermined distances of 0.05-1 m from the devices without the presence of users. In the vicinity of DECT cordless phone handsets, electric field strength ranged from 0.26-2.30 V/m at 0.05 m to 0.18-0.26 V/m at 1 m. In the surroundings of DECT base units, values ranged from 1.78-5.44 V/m (0.05 m) to 0.19-0.41 V/m (1 m). In the vicinity of mobile phones working in GSM mode with voice transmission, the electric field strength ranged from 2.34-9.14 V/m (0.05 m) to 0.18-0.47 V/m (1 m), while for phones working in WCDMA (Wideband Code Division Multiple Access) mode it ranged from 0.22-1.83 V/m (0.05 m) to 0.18-0.20 V/m (1 m). The mean values of the electric field strength for each group of devices, mobile phones and DECT cordless phone sets, did not exceed the reference value of 7 V/m adopted as the limit for general public exposure. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.
A video, text, and speech-driven realistic 3-d virtual head for human-machine interface.
Yu, Jun; Wang, Zeng-Fu
2015-05-01
A multiple-input-driven realistic facial animation system based on a 3-D virtual head for human-machine interfaces is proposed. The system can be driven independently by video, text, and speech, and thus can interact with humans through diverse interfaces. A combination of a parameterized model and a muscular model is used to obtain a tradeoff between computational efficiency and high realism of 3-D facial animation. An online appearance model is used to track 3-D facial motion from video in the framework of particle filtering, and multiple measurements, i.e., pixel color values of the input image and Gabor wavelet coefficients of the illumination ratio image, are fused to reduce the influence of lighting and person dependence in the construction of the online appearance model. The tri-phone model is used to reduce the computational cost of visual co-articulation in speech-synchronized viseme synthesis without sacrificing performance. Objective and subjective experiments show that the system is suitable for human-machine interaction.
Strength computation of forged parts taking into account strain hardening and damage
NASA Astrophysics Data System (ADS)
Cristescu, Michel L.
2004-06-01
Modern non-linear simulation software, such as FORGE 3 (registered trademark of TRANSVALOR), is able to compute the residual stresses, strain hardening and damage during the forging process. A thermally dependent elasto-visco-plastic law is used to simulate the behavior of the material of the hot forged piece. A modified Lemaitre law coupled with elasticity, plasticity and thermal effects is used to simulate the damage. After simulation of the different steps of the forging process, the part is cooled and then virtually machined in order to obtain the finished part. An elastic computation is then performed to equilibrate the residual stresses, so that the true geometry of the finished part after machining is obtained. The response of the part to the loadings it will sustain during its life is then computed, taking into account the residual stresses, strain hardening and damage that occur during forging. This process is illustrated by the forging, virtual machining and stress analysis of an aluminium wheel hub.
GRIDVIEW: Recent Improvements in Research and Education Software for Exploring Mars Topography
NASA Technical Reports Server (NTRS)
Roark, J. H.; Masuoka, C. M.; Frey, H. V.
2004-01-01
GRIDVIEW is being developed by the GEODYNAMICS Branch at NASA's Goddard Space Flight Center and can be downloaded on the web at http://geodynamics.gsfc.nasa.gov/gridview/. The program is very mature and has been successfully used for more than four years, but is still under development as we add new features for data analysis and visualization. The software can run on any computer supported by the IDL virtual machine application supplied by RSI. The virtual machine application is currently available for recent versions of MS Windows, MacOS X, Red Hat Linux and UNIX. The minimum system memory requirement is 32 MB; however, loading large data sets may require more RAM to function adequately.
Dynamically programmable cache
NASA Astrophysics Data System (ADS)
Nakkar, Mouna; Harding, John A.; Schwartz, David A.; Franzon, Paul D.; Conte, Thomas
1998-10-01
Reconfigurable machines have recently been used as co-processors to accelerate the execution of certain algorithms or program subroutines. The problems with this approach include high reconfiguration time and limited partial reconfiguration. By far the most critical problems are: (1) the small on-chip memory, which results in slower execution time, and (2) small FPGA areas that cannot implement large subroutines. Dynamically Programmable Cache (DPC) is a novel architecture for embedded processors that offers solutions to these problems. To solve the memory access problems, DPC processors merge reconfigurable arrays with the data cache at various cache levels to create a multi-level reconfigurable machine. As a result, DPC machines have both higher data accessibility and higher FPGA memory bandwidth. To solve the limited FPGA resource problem, DPC processors implement a multi-context switching (virtualization) concept. Virtualization allows implementation of large subroutines with fewer FPGA cells. Additionally, DPC processors can parallelize the execution of several operations, resulting in faster execution time. In this paper, the speedup of DPC machines is shown to be 5X over an Altera FLEX10K FPGA chip and 2X over a Sun Ultra 1 SPARCstation for two different algorithms (convolution and motion estimation).
Fullana, Miquel A; Zhu, Xi; Alonso, Pino; Cardoner, Narcís; Real, Eva; López-Solà, Clara; Segalàs, Cinto; Subirà, Marta; Galfalvy, Hanga; Menchón, José M; Simpson, H Blair; Marsh, Rachel; Soriano-Mas, Carles
2017-11-01
Cognitive behavioural therapy (CBT), including exposure and ritual prevention, is a first-line treatment for obsessive-compulsive disorder (OCD), but few reliable predictors of CBT outcome have been identified. Based on research in animal models, we hypothesized that individual differences in basolateral amygdala-ventromedial prefrontal cortex (BLA-vmPFC) communication would predict CBT outcome in patients with OCD. We investigated whether BLA-vmPFC resting-state functional connectivity (rs-fc) predicts CBT outcome in patients with OCD. We assessed BLA-vmPFC rs-fc in patients with OCD on a stable dose of a selective serotonin reuptake inhibitor who then received CBT and in healthy control participants. We included 73 patients with OCD and 84 healthy controls in our study. Decreased BLA-vmPFC rs-fc predicted a better CBT outcome in patients with OCD and was also detected in those with OCD compared with healthy participants. Additional analyses revealed that decreased BLA-vmPFC rs-fc uniquely characterized the patients with OCD who responded to CBT. We used a sample of convenience, and all patients were receiving pharmacological treatment for OCD. In this large sample of patients with OCD, BLA-vmPFC functional connectivity predicted CBT outcome. These results suggest that future research should investigate the potential of BLA-vmPFC pathways to inform treatment selection for CBT across patients with OCD and anxiety disorders.
Agreements in Virtual Organizations
NASA Astrophysics Data System (ADS)
Pankowska, Malgorzata
This chapter explains the important impact of contract theory on the concept of the virtual organization. The author believes that not enough research has been conducted on transferring the theoretical foundations of networking to the phenomena of virtual organizations and the open autonomic computing environment, so as to ensure their controllability and management. The main research problem of this chapter is to explain the significance of agreements for virtual organization governance. The first part of the chapter explains the differences between virtual machines and virtual organizations, and the significance of the former for the development of the latter. Next, virtual organization development tendencies are presented and problems of IT governance in a highly distributed organizational environment are discussed. The last part covers the analysis of contract and agreement management for governance in open computing environments.
ERIC Educational Resources Information Center
Pennington, Zachary T.; Anderson, Austin S.; Fanselow, Michael S.
2017-01-01
The ventromedial prefrontal cortex (vmPFC) has consistently appeared altered in post-traumatic stress disorder (PTSD). Although the vmPFC is thought to support the extinction of learned fear responses, several findings support a broader role for this structure in the regulation of fear. To further characterize the relationship between vmPFC…
Video Modeling and Prompting in Practice: Teaching Cooking Skills
ERIC Educational Resources Information Center
Kellems, Ryan O.; Mourra, Kjerstin; Morgan, Robert L.; Riesen, Tim; Glasgow, Malinda; Huddleston, Robin
2016-01-01
This article discusses the creation of video modeling (VM) and video prompting (VP) interventions for teaching novel multi-step tasks to individuals with disabilities. This article reviews factors to consider when selecting skills to teach, and students for whom VM/VP may be successful, as well as the difference between VM and VP and circumstances…
Aikins, Anastasia R; Kim, MiJung; Raymundo, Bernardo
2017-01-01
Vasculogenic mimicry (VM) is a non-classical mechanism recently described in many tumors, whereby cancer cells, rather than endothelial cells, form blood vessels. Transgelin is an actin-binding protein that has been implicated in multiple stages of cancer development. In this study, we investigated the role of transgelin in VM and assessed its effect on the expression of endothelial and angiogenesis-related genes during VM in MDA-MB-231 breast cancer cells. We confirmed the ability of MDA-MB-231 cells to undergo VM through a tube formation assay. Flow cytometry analysis revealed an increase in the expression of the endothelial-related markers VE-cadherin and CD34 in cells that underwent VM, compared with those growing in a monolayer, which was confirmed by immunocytochemistry. We employed siRNA to silence transgelin, and knockdown efficiency was determined by western blot analyses. Downregulation of transgelin suppressed cell proliferation and tube formation, but increased IL-8 levels in Matrigel cultures. RT-PCR analyses revealed that the expression of IL-8, VE-cadherin, and CD34 was unaffected by transgelin knockdown, indicating that increased IL-8 expression was not due to enhanced transcriptional activity. More importantly, the inhibition of IL-8/CXCR2 signaling also resulted in suppression of VM with increased IL-8 levels, confirming that the increased IL-8 levels after transgelin knockdown were due to inhibition of IL-8 uptake. Our findings indicate that transgelin regulates VM by enhancing IL-8 uptake. These observations are relevant to the future development of efficient antivascular agents. Impact statement: Vasculogenic mimicry (VM) is an angiogenic-independent mechanism of blood vessel formation whereby aggressive tumor cells undergo formation of capillary-like structures. Thus, interventions aimed at angiogenesis might not target the entire tumor vasculature. A more holistic approach is therefore needed in the development of improved antivascular agents.
Transgelin, an actin-binding protein, has been associated with multiple stages of cancer development such as proliferation, migration and invasion, but little is known about its role in vasculogenic mimicry. We present here, an additional mechanism by which transgelin promotes malignancy by way of its association with the occurrence of VM. Although transgelin knockdown did not affect the transcript levels of most of the angiogenesis-related genes in this study, it was associated with the inhibition of the uptake of IL-8, accompanied by suppressed VM, indicating that transgelin is required for VM. These observations are relevant to the future development of efficient antivascular agents. PMID:28058861
Warren, David E; Jones, Samuel H; Duff, Melissa C; Tranel, Daniel
2014-05-28
Schematic memory, or contextual knowledge derived from experience (Bartlett, 1932), benefits memory function by enhancing retention and speeding learning of related information (Bransford and Johnson, 1972; Tse et al., 2007). However, schematic memory can also promote memory errors, producing false memories. One demonstration is the "false memory effect" of the Deese-Roediger-McDermott (DRM) paradigm (Roediger and McDermott, 1995): studying words that fit a common schema (e.g., cold, blizzard, winter) often produces memory for a nonstudied word (e.g., snow). We propose that frontal lobe regions that contribute to complex decision-making processes by weighting various alternatives, such as ventromedial prefrontal cortex (vmPFC), may also contribute to memory processes by weighting the influence of schematic knowledge. We investigated the role of human vmPFC in false memory by combining a neuropsychological approach with the DRM task. Patients with vmPFC lesions (n = 7) and healthy comparison participants (n = 14) studied word lists that excluded a common associate (the critical item). Recall and recognition tests revealed expected high levels of false recall and recognition of critical items by healthy participants. In contrast, vmPFC patients showed consistently reduced false recall, with significantly fewer intrusions of critical items. False recognition was also marginally reduced among vmPFC patients. Our findings suggest that vmPFC increases the influence of schematically congruent memories, a contribution that may be related to the role of the vmPFC in decision making. These novel neuropsychological results highlight a role for the vmPFC as part of a memory network including the medial temporal lobes and hippocampus (Andrews-Hanna et al., 2010). Copyright © 2014 the authors 0270-6474/14/347677-06$15.00/0.
M, Ramadevi; Panwar, Deepesh; Joseph, Juby Elsa; Kaira, Gaurav Singh; Kapoor, Mukesh
2018-06-20
A hitherto unknown low molecular weight form of α-galactosidase (VM-αGal) from germinating black gram (Vigna mungo) seeds was purified to homogeneity (432 U/mg specific activity, 1542-fold purification) using ion-exchange (DEAE-cellulose, CM-sepharose) and affinity (Con-A Sepharose 4B) chromatography. VM-αGal appeared as a single band of ~45 kDa on SDS-PAGE and showed optimal activity at pH 5 and 55 °C. Hg²⁺ and SDS completely inhibited VM-αGal activity. The Km, Vmax and catalytic efficiency (kcat/Km) of VM-αGal for pNPG and raffinose were found to be 0.99, 17.23 mM, 0.413 s⁻¹ mM⁻¹ and 1.66, 0.146 mM ml⁻¹ min⁻¹, 0.0026 s⁻¹ mM⁻¹, respectively. VM-αGal was competitively inhibited by galactose (Ki 7.70 mM). Thermodynamic parameters [activation enthalpy (ΔH), activation entropy (ΔS) and free energy (ΔG)] of VM-αGal at 45-51 °C showed that the enzyme was in a less energetic state and had susceptibility towards denaturation. Temperature-induced structural unfolding studies of VM-αGal probed by fluorescence and far-UV CD spectroscopy revealed significant loss in tertiary structure and a steep decline in β-sheet content at 45-65 °C and above 55 °C, respectively. VM-αGal improved the nutritional quality of soymilk by hydrolyzing the flatulence-causing raffinose family oligosaccharides (26.5% and 18.45% decrease in stachyose and raffinose, respectively). Copyright © 2018. Published by Elsevier B.V.
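The kinetic constants above follow the standard Michaelis-Menten relationship. As a short worked sketch (using the abstract's reported pNPG values purely illustratively, with units as reported):

```python
def mm_rate(substrate, vmax, km):
    """Michaelis-Menten rate: v = Vmax * [S] / (Km + [S])."""
    return vmax * substrate / (km + substrate)

# Illustrative values for pNPG taken from the abstract: Km = 0.99, Vmax = 17.23.
km, vmax = 0.99, 17.23

# At [S] = Km the rate is exactly half of Vmax, by the definition of Km;
# at saturating [S] >> Km the rate approaches Vmax.
half_max = mm_rate(km, vmax, km)
near_sat = mm_rate(100 * km, vmax, km)
```

The catalytic efficiency kcat/Km reported in the abstract is simply the ratio of the turnover number to the Michaelis constant, which compares how well the enzyme handles different substrates at low substrate concentration.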
Variant myopia: A new presentation?
Hussaindeen, Jameel Rizwana; Anand, Mithra; Sivaraman, Viswanathan; Ramani, Krishna Kumar; Allen, Peter M
2018-01-01
Purpose: Variant myopia (VM) presents as a discrepancy of >1 diopter (D) between subjective and objective refraction, without the presence of any accommodative dysfunction. The purpose of this study is to create a clinical profile of VM. Methods: Fourteen eyes of 12 VM patients who had a discrepancy of >1D between retinoscopy and subjective acceptance under both cycloplegic and noncycloplegic conditions were included in the study. Fourteen eyes of 14 age- and refractive error-matched participants served as controls. Potential participants underwent a comprehensive orthoptic examination followed by retinoscopy (Ret), closed-field autorefractor (CA), subjective acceptance (SA), choroidal and retinal thickness, ocular biometry, and higher order spherical aberrations measurements. Results: In the VM eyes, a statistically and clinically significant difference was noted between the Ret and CA and Ret and SA under both cycloplegic and noncycloplegic conditions (multivariate repeated measures analysis of variance, P < 0.0001). A statistically significant difference was observed between the VM eyes, non-VM eyes, and controls for choroidal thickness in all the quadrants (Univariate ANOVA P < 0.05). The VM eyes had thinner choroids (197.21 ± 13.04 μ) compared to the non-VM eyes (249.25 ± 53.70 μ) and refractive error-matched controls (264.62 ± 12.53 μ). No statistically significant differences between groups in root mean square of total higher order aberrations and spherical aberration were observed. Conclusion: Accommodative etiology does not play a role in the refractive discrepancy seen in individuals with the variant myopic presentation. These individuals have thinner choroids in the eye with variant myopic presentation compared to the fellow eyes and controls. Hypotheses and clinical implications of variant myopia are discussed. PMID:29785987
Cheng, Henry H; Wypij, David; Laussen, Peter C; Bellinger, David C; Stopp, Christian D; Soul, Janet S; Newburger, Jane W; Kussman, Barry D
2014-07-01
Cerebral blood flow velocity (CBFV) measured by transcranial Doppler sonography has provided information on cerebral perfusion in patients undergoing infant heart surgery, but no studies have reported a relationship to early postoperative and long-term neurodevelopmental outcomes. CBFV was measured in infants undergoing biventricular repair without aortic arch reconstruction as part of a trial of hemodilution during cardiopulmonary bypass (CPB); CBFV (Vm, mean; Vs, systolic; Vd, end-diastolic) in the middle cerebral artery and change in Vm (rVm) were measured intraoperatively and up to 18 hours post-CPB. Neurodevelopmental outcomes, measured at 1 year of age, included the psychomotor development index (PDI) and mental development index (MDI) of the Bayley Scales of Infant Development-II. CBFV was measured in 100 infants; 43 with D-transposition of the great arteries, 36 with tetralogy of Fallot, and 21 with ventricular septal defects. Lower Vm, Vs, Vd, and rVm at 18 hours post-CPB were independently related to longer intensive care unit duration of stay (p<0.05). In the 85 patients who returned for neurodevelopmental testing, lower Vm, Vs, Vd, and rVm at 18 hours post-CPB were independently associated with lower PDI (p<0.05) and MDI (p<0.05, except Vs: p=0.06) scores. Higher Vs and rVm at 18 hours post-CPB were independently associated with increased incidence of brain injury on magnetic resonance imaging in 39 patients. Postoperative CBFV after biventricular repair is related to early postoperative and neurodevelopmental outcomes at 1 year of age, possibly indicating that low CBFV is a marker of suboptimal postoperative hemodynamics and cerebral perfusion. Copyright © 2014 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
Menstrual cycle mediates vastus medialis and vastus medialis oblique muscle activity.
Tenan, Matthew S; Peng, Yi-Ling; Hackney, Anthony C; Griffin, Lisa
2013-11-01
Sports medicine professionals commonly describe two functionally different units of the vastus medialis (VM), the VM and the vastus medialis oblique (VMO), but the anatomical support is equivocal. The functional distinction of the VMO is central to rehabilitation programs designed to alleviate anterior knee pain, a pathology known to have a greater occurrence in women. The purpose of this study was to determine whether the motor units of the VM and VMO are differentially recruited and whether this recruitment pattern is affected by sex or menstrual cycle phase. Single motor unit recordings from the VM and VMO were obtained for men and women during an isometric ramp knee extension. Eleven men were tested once. Seven women were tested during five different phases of the menstrual cycle, determined by basal body temperature mapping. The recruitment threshold and the initial firing rate at recruitment were determined from 510 motor unit recordings. The initial firing rate was lower in the VMO than in the VM in women (P < 0.001) but not in men. There was no difference in recruitment thresholds for the VM and VMO in either sex or across the menstrual cycle. There was a main effect of menstrual phase on initial firing rate, which increased from the early follicular to the late luteal phase (P = 0.003). The initial firing rate in the VMO was lower than that in the VM during the ovulatory (P = 0.009) and midluteal (P = 0.009) phases. The relative control of the VM and VMO changes across the menstrual cycle. This could influence patellar pathologies that have a higher incidence in women.
COE inhibits vasculogenic mimicry in hepatocellular carcinoma via suppressing Notch1 signaling.
Jue, Chen; Min, Zhao; Zhisheng, Zhang; Lin, Cui; Yayun, Qian; Xuanyi, Wang; Feng, Jin; Haibo, Wang; Youyang, Shi; Tadashi, Hisamitsu; Shintaro, Ishikawa; Shiyu, Guo; Yanqing, Liu
2017-08-17
Vasculogenic mimicry (VM) has been suggested to be present in various malignant tumors and associated with tumor nutrient supply and metastasis, leading to poor prognosis. Notch1 has been demonstrated to contribute to VM formation in hepatocellular carcinoma (HCC). Celastrus orbiculatus extract (COE), a mixture of 11 terpenoids isolated from the Chinese herb Celastrus orbiculatus Vine, has been suggested to be effective in cancer treatment. In the current study, experiments were carried out to examine the effect of COE on VM formation and HCC tumor growth both in vitro and in vivo. A CCK-8 assay and a Nikon live-work station were used to observe the viability of malignant cells treated with COE. Cell invasion was examined using Transwell assays. Matrigel was used to establish a 3-D culture condition for VM formation. Changes in mRNA and protein expression were examined by RT-PCR and Western blot, respectively. Tumor growth in vivo was monitored using an in vivo fluorescence imaging device. PAS-CD34 dual staining and electron microscopy were used to observe VM formation. Immunohistochemical staining (IHC) was used to examine Notch1 and Hes1 expression in tumor tissues. Results showed that COE inhibited HCC cell proliferation and invasion in a concentration-dependent manner. VM formation induced by TGF-β1 was blocked by COE. In a mouse xenograft model, COE inhibited tumor growth and VM formation. Both in vitro and in vivo studies showed that COE can downregulate expression of Notch1 and Hes1. The current results indicate that COE can inhibit VM formation and HCC tumor growth by downregulating Notch1 signaling. This study demonstrates that COE is superior to other anti-angiogenesis agents and can be considered a promising candidate in HCC treatment. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
Moreira, Pedro Silva; Santos, Nadine Correia; Sousa, Nuno
2015-01-01
Executive functioning (EF), which is considered to govern complex cognition, and verbal memory (VM) are constructs assumed to be related. However, the magnitude of the association between EF and VM is not known, nor is it clear how sociodemographic and psychological factors may affect this relationship, including in normal aging. In this study, we assessed different EF and VM parameters via a battery of neurocognitive/psychological tests and performed a Canonical Correlation Analysis (CCA) to explore the connection between these constructs in a sample of middle-aged and older healthy individuals without cognitive impairment (N = 563, 50+ years of age). The analysis revealed a positive and moderate association between EF and VM independently of gender, age, education, global cognitive performance level, and mood. These results confirm that EF presents a significant association with VM performance. PMID:28138465
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCaskey, Alexander J.
There is a lack of state-of-the-art HPC simulation tools for simulating general quantum computing. Furthermore, there are no real software tools that integrate current quantum computers into existing classical HPC workflows. This product, the Quantum Virtual Machine (QVM), solves this problem by providing an extensible framework for pluggable virtual, or physical, quantum processing units (QPUs). It enables the execution of low level quantum assembly codes and returns the results of such executions.
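The core of any such simulator is applying gate matrices to a statevector. As an illustrative sketch only (the function and gate names below are hypothetical, not the QVM's actual API), a minimal single-qubit-gate statevector simulator can be written as:

```python
import math

# Minimal statevector "virtual QPU" sketch: executes a list of
# (gate, qubit) instructions on an n-qubit state. Illustrative only.

def apply_single_qubit_gate(state, gate, target, n):
    """Apply a 2x2 gate matrix to the `target` qubit of an n-qubit statevector."""
    new = [0j] * len(state)
    for i in range(len(state)):
        bit = (i >> target) & 1
        j = i ^ (1 << target)  # index with the target bit flipped
        if bit == 0:
            new[i] = gate[0][0] * state[i] + gate[0][1] * state[j]
        else:
            new[i] = gate[1][0] * state[j] + gate[1][1] * state[i]
    return new

# Hadamard gate: maps |0> to an equal superposition of |0> and |1>.
H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def run(program, n):
    """Execute a list of (gate, qubit) instructions starting from |0...0>."""
    state = [0j] * (2 ** n)
    state[0] = 1.0
    for gate, q in program:
        state = apply_single_qubit_gate(state, gate, q, n)
    return state

state = run([(H, 0)], n=1)           # Hadamard on one qubit
probs = [abs(a) ** 2 for a in state]  # measurement probabilities: 0.5, 0.5
```

A pluggable QPU backend would expose the same `run(program, n)` interface while dispatching to either this virtual implementation or physical hardware.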
2000-04-01
be an extension of Utah’s nascent Quarks system, oriented to closely coupled cluster environments. However, the grant did not actually begin until... Intel x86, implemented ten virtual machine monitors and servers, including a virtual memory manager, a checkpointer, a process manager, a file server...Fluke, we developed a novel hierarchical processor scheduling frame- work called CPU inheritance scheduling [5]. This is a framework for scheduling
Generating Contextual Descriptions of Virtual Reality (VR) Spaces
NASA Astrophysics Data System (ADS)
Olson, D. M.; Zaman, C. H.; Sutherland, A.
2017-12-01
Virtual reality holds great potential for science communication, education, and research. However, interfaces for manipulating data and environments in virtual worlds are limited and idiosyncratic. Furthermore, speech and vision are the primary modalities by which humans collect information about the world, but the linking of visual and natural language domains is a relatively new pursuit in computer vision. Machine learning techniques have been shown to be effective at image and speech classification, as well as at describing images with language (Karpathy 2016), but have not yet been used to describe potential actions. We propose a technique for creating a library of possible context-specific actions associated with 3D objects in immersive virtual worlds based on a novel dataset generated natively in virtual reality containing speech, image, gaze, and acceleration data. We will discuss the design and execution of a user study in virtual reality that enabled the collection and the development of this dataset. We will also discuss the development of a hybrid machine learning algorithm linking vision data with environmental affordances in natural language. Our findings demonstrate that it is possible to develop a model which can generate interpretable verbal descriptions of possible actions associated with recognized 3D objects within immersive VR environments. This suggests promising applications for more intuitive user interfaces through voice interaction within 3D environments. It also demonstrates the potential to apply vast bodies of embodied and semantic knowledge to enrich user interaction within VR environments. This technology would allow for applications such as expert knowledge annotation of 3D environments, complex verbal data querying and object manipulation in virtual spaces, and computer-generated, dynamic 3D object affordances and functionality during simulations.
The perception of spatial layout in real and virtual worlds.
Arthur, E J; Hancock, P A; Chrysler, S T
1997-01-01
As human-machine interfaces grow more immersive and graphically oriented, virtual environment systems become more prominent as the medium for human-machine communication. Often, virtual environments (VEs) are built to provide exact metrical representations of existing or proposed physical spaces. However, it is not known how individuals develop representational models of these spaces in which they are immersed, or how those models may be distorted with respect to both the virtual and real-world equivalents. To evaluate the process of model development, the present experiment examined participants' ability to reproduce a complex spatial layout of objects after experiencing them under different viewing conditions. The layout consisted of nine common objects arranged on a flat plane. These objects could be viewed in a free binocular virtual condition, a free binocular real-world condition, and in a static monocular view of the real world. The first two allowed active exploration of the environment, while the latter condition allowed the participant only a passive opportunity to observe from a single viewpoint. Viewing conditions were a between-subject variable with 10 participants randomly assigned to each condition. Performance was assessed using mapping accuracy and triadic comparisons of relative inter-object distances. Mapping results showed a significant effect of viewing condition where, interestingly, the static monocular condition was superior to both the active virtual and real binocular conditions. Results for the triadic comparisons showed a significant interaction of gender by viewing condition in which males were more accurate than females. These results suggest that the situation model resulting from interaction with a virtual environment was indistinguishable from interaction with real objects, at least within the constraints of the present procedure.
Video Modeling and Children with Autism Spectrum Disorder: A Survey of Caregiver Perspectives
ERIC Educational Resources Information Center
Cardon, Teresa A.; Guimond, Amy; Smith-Treadwell, Amanda M.
2015-01-01
Video modeling (VM) has shown promise as an effective intervention for individuals with autism spectrum disorder (ASD); however, little is known about what may promote or prevent caregivers' use of this intervention. While VM is an effective tool to support skill development among a wide range of children in research and clinical settings, VM is…
Sex differences in the functional lateralization of emotion and decision making in the human brain.
Reber, Justin; Tranel, Daniel
2017-01-02
Dating back to the case of Phineas Gage, decades of neuropsychological research have shown that the ventromedial prefrontal cortex (vmPFC) is crucial to both real-world social functioning and abstract decision making in the laboratory (see, e.g., Stuss et al.; Bechara et al., 1994; Damasio et al.). Previous research has shown that the relationship between the laterality of individuals' vmPFC lesions and neuropsychological performance is moderated by their sex, whereby there are more severe social, emotional, and decision-making impairments in men with right-side vmPFC lesions and in women with left-side vmPFC lesions (Tranel et al., 2005; Sutterer et al., 2015). We conducted a selective review of studies examining the effect of vmPFC lesions on emotion and decision making and found further evidence of sex-related differences in the lateralization of function not only in the vmPFC but also in other neurological structures associated with decision making and emotion. This Mini-Review suggests that both sex and laterality effects warrant more careful consideration in the scientific literature. © 2016 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Chojnacki, Matthew; Burr, Devon M.; Moersch, Jeffrey E.
2014-02-01
Planetary dune field properties and their bulk bedform morphologies relate to regional wind patterns, sediment supply, climate, and topography. On Mars, major occurrences of spatially contiguous low-albedo sand dunes are primarily found in three major topographic settings: impact craters, high-latitude basins, and linear troughs or valleys, the largest being the Valles Marineris (VM) rift system. As one of the primary present day martian sediment sinks, VM holds nearly a third of the non-polar dune area on Mars. Moreover, VM differs from other regions due to its unusual geologic, topographic, and atmospheric setting. Herein, we test the overarching hypothesis that VM dune fields are compositionally, morphologically, and thermophysically distinct from other low- and mid-latitude (50°N-50°S latitude) dune fields. Topographic measurements of dune fields and their underlying terrains indicate slopes, roughnesses, and reliefs to be notably greater for those in VM. Variable VM dune morphologies are shown with topographically-related duneforms (climbing, falling, and echo dunes) located among spur-and-gully wall, landslide, and chaotic terrains, contrasting most martian dunes found in more topographically benign locations (e.g., craters, basins). VM dune fields superposed on Late Amazonian landslides are constrained to have formed and/or migrated over >10s of kilometers in the last 50 My to 1 Gy. Diversity of detected dune sand compositions, including unaltered ultramafic minerals and glasses (e.g., high and low-calcium pyroxene, olivine, Fe-bearing glass), and alteration products (hydrated sulfates, weathered Fe-bearing glass), is more pronounced in VM. Observations show heterogeneous sand compositions exist at the regional-, basinal-, dune field-, and dune-scales. 
Although not substantially greater than elsewhere, unambiguous evidence for recent dune activity in VM is indicated by pairs of high-resolution images, including dune deflation, dune migration, slip face modification (e.g., alcoves), and ripple modification or migration, at varying scales (10s–100s m²). We conclude that VM dune fields are qualitatively and quantitatively distinct from other low- and mid-latitude dune fields, most readily attributable to the rift's unusual setting. Moreover, the results imply that dune field properties and aeolian processes on Mars can be largely influenced by the regional environment, which may have its own distinctive set of boundary conditions, rather than reflecting a globally homogeneous collection of aeolian sediment and bedforms.
Vertical Mandibular Range of Motion in Anesthetized Dogs and Cats
Gracis, Margherita; Zini, Eric
2016-01-01
The main movement of the temporomandibular joint of dogs and cats is in the vertical dimension (opening and closing the mouth). An objective evaluation of the vertical mandibular range of motion (vmROM) may favor early diagnosis of a number of conditions affecting joint mobility. vmROM, corresponding to the maximum interincisal opening, was measured in 260 dogs and 127 cats anesthetized between June 2011 and April 2015 because of oral or maxillofacial problems and procedures. Animals with a known history of, or currently having, diseases considered to hamper mandibular extension were excluded from the study. Dogs were divided into four subgroups based on body weight: subgroup 1 (≤5.0 kg, 51 dogs), subgroup 2 (5.1–10.0 kg, 56 dogs), subgroup 3 (10.1–25 kg, 66 dogs), and subgroup 4 (>25.1 kg, 87 dogs). The mean vmROM of all dogs was 107 ± 30 mm (median 109, range 40–180); it was 67 ± 15 mm in subgroup 1 (median 67, range 40–100), 93 ± 15 mm in subgroup 2 (median 93, range 53–128), 115 ± 19 mm in subgroup 3 (median 116, range 59–154), and 134 ± 19 mm in subgroup 4 (median 135, range 93–180). The mean vmROM of the cats was 62 ± 8 mm (median 63, range 41–84). Correlations between vmROM, age, sex, and body weight were evaluated. In dogs, vmROM did not correlate with age; in cats, a weak positive correlation was found. vmROM and body weight were positively correlated in both populations, except in dog subgroup 2. Overall, mean vmROM and body weight were significantly higher in males than in females, in both dogs and cats. However, vmROM did not differ between sexes in any of the canine subgroups, and only in subgroup 4 were male dogs significantly heavier than females. Evaluation of vmROM should be incorporated into every diagnostic examination, as it may be valuable in showing changes over time for each patient. PMID:27446939
Gao, Yana; Yu, Hai; Liu, Yunhui; Liu, Xiaobai; Zheng, Jian; Ma, Jun; Gong, Wei; Chen, Jiajia; Zhao, Lini; Tian, Yu; Xue, Yixue
2018-01-01
Vasculogenic mimicry (VM) has been reported to be a novel glioma neovascularization process. Anti-VM therapy provides new insight into glioma clinical management. In this study, we revealed the role of the long non-coding RNA HOXA cluster antisense RNA 2 (HOXA-AS2) in malignant glioma behaviors and VM formation. Quantitative real-time PCR was performed to determine the expression levels of HOXA-AS2 in glioma samples and glioblastoma cell lines. CD34-periodic acid-Schiff dual-staining was performed to assess VM in glioma samples. CCK-8, transwell, and Matrigel tube formation assays were performed to measure the effects of HOXA-AS2 knockdown on cell viability, migration, invasion, and VM tube formation, respectively. RNA immunoprecipitation, dual-luciferase reporter, and Western blot assays were performed to explore the molecular mechanisms underlying the functions of HOXA-AS2 in glioblastoma cells. A nude mouse xenograft model was used to investigate the role of HOXA-AS2 in xenograft glioma growth and VM density. Student's t-tests, one-way ANOVAs followed by Bonferroni post hoc tests, and chi-square tests were used for the statistical analyses. HOXA-AS2 was upregulated in glioma samples and cell lines and was positively correlated with VM. HOXA-AS2 knockdown attenuated cell viability, migration, invasion, and VM formation in glioma cells and inhibited the expression of vascular endothelial cadherin (VE-cadherin), as well as the expression and activity of matrix metalloproteinase (MMP)-2 and MMP-9. miR-373 was downregulated in glioma samples and cell lines and suppressed malignancy in glioblastoma cells. HOXA-AS2 bound to miR-373 and negatively regulated its expression. Epidermal growth factor receptor (EGFR), a target of miR-373, increased the expression levels of VE-cadherin, as well as the expression and activity levels of MMP-2 and MMP-9, via activation of phosphatidylinositol 3-kinase/serine/threonine kinase pathways.
HOXA-AS2 knockdown combined with miR-373 overexpression yielded optimal tumor suppressive effects and the lowest VM density in vivo. HOXA-AS2 knockdown inhibited malignant glioma behaviors and VM formation via the miR-373/EGFR axis. © 2018 The Author(s). Published by S. Karger AG, Basel.
Performance of machine-learning scoring functions in structure-based virtual screening.
Wójcikowski, Maciej; Ballester, Pedro J; Siedlecki, Pawel
2017-04-25
Classical scoring functions have reached a plateau in their performance in virtual screening and binding affinity prediction. Recently, machine-learning scoring functions trained on protein-ligand complexes have shown great promise in small tailored studies. They have also raised controversy, specifically concerning model overfitting and applicability to novel targets. Here we provide a new ready-to-use scoring function (RF-Score-VS) trained on 15 426 active and 893 897 inactive molecules docked to a set of 102 targets. We use the full DUD-E data sets along with three docking tools, five classical and three machine-learning scoring functions for model building and performance assessment. Our results show that RF-Score-VS can substantially improve virtual screening performance: at the top 1%, RF-Score-VS provides a 55.6% hit rate, whereas Vina achieves only 16.2% (at smaller fractions the difference is even larger: at the top 0.1%, RF-Score-VS achieves an 88.6% hit rate versus 27.5% for Vina). In addition, RF-Score-VS provides much better prediction of measured binding affinity than Vina (Pearson correlation of 0.56 versus -0.18). Lastly, we tested RF-Score-VS on an independent test set from the DEKOIS benchmark and observed comparable results. We provide the full data sets to facilitate further research in this area (http://github.com/oddt/rfscorevs) as well as the ready-to-use RF-Score-VS (http://github.com/oddt/rfscorevs_binary).
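The hit-rate metric quoted above (the fraction of true actives among the top-scored x% of the screened library) can be sketched in a few lines; the scores and labels below are toy values, not RF-Score-VS output:

```python
def hit_rate_at_top(scores, labels, fraction):
    """Fraction of actives among the top-scoring `fraction` of molecules.

    scores: higher means predicted more active; labels: 1 = active, 0 = inactive.
    """
    ranked = sorted(zip(scores, labels), key=lambda pair: -pair[0])
    k = max(1, int(len(ranked) * fraction))  # size of the top slice
    top = ranked[:k]
    return sum(label for _, label in top) / k

# Toy example (hypothetical docking scores):
scores = [0.95, 0.90, 0.80, 0.70, 0.60, 0.50, 0.40, 0.30, 0.20, 0.10]
labels = [1,    1,    0,    1,    0,    0,    0,    1,    0,    0]
rate = hit_rate_at_top(scores, labels, 0.2)  # top 20% = 2 molecules, both active
```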
ERIC Educational Resources Information Center
Annetta, Leonard; Mangrum, Jennifer; Holmes, Shawn; Collazo, Kimberly; Cheng, Meng-Tzu
2009-01-01
The purpose of this study was to examine students' learning of simple machines, a fifth-grade (ages 10-11) forces and motion unit, and student engagement using a teacher-created Multiplayer Educational Gaming Application. This mixed-method study collected pre-test/post-test results to determine student knowledge about simple machines. A survey…
Fullana, Miquel A.; Zhu, Xi; Alonso, Pino; Cardoner, Narcís; Real, Eva; López-Solà, Clara; Segalàs, Cinto; Subirà, Marta; Galfalvy, Hanga; Menchón, José M.; Simpson, H. Blair; Marsh, Rachel; Soriano-Mas, Carles
2017-01-01
Background Cognitive behavioural therapy (CBT), including exposure and ritual prevention, is a first-line treatment for obsessive–compulsive disorder (OCD), but few reliable predictors of CBT outcome have been identified. Based on research in animal models, we hypothesized that individual differences in basolateral amygdala–ventromedial prefrontal cortex (BLA–vmPFC) communication would predict CBT outcome in patients with OCD. Methods We investigated whether BLA–vmPFC resting-state functional connectivity (rs-fc) predicts CBT outcome in patients with OCD. We assessed BLA–vmPFC rs-fc in patients with OCD on a stable dose of a selective serotonin reuptake inhibitor who then received CBT and in healthy control participants. Results We included 73 patients with OCD and 84 healthy controls in our study. Decreased BLA–vmPFC rs-fc predicted a better CBT outcome in patients with OCD and was also detected in those with OCD compared with healthy participants. Additional analyses revealed that decreased BLA–vmPFC rs-fc uniquely characterized the patients with OCD who responded to CBT. Limitations We used a sample of convenience, and all patients were receiving pharmacological treatment for OCD. Conclusion In this large sample of patients with OCD, BLA–vmPFC functional connectivity predicted CBT outcome. These results suggest that future research should investigate the potential of BLA–vmPFC pathways to inform treatment selection for CBT across patients with OCD and anxiety disorders. PMID:28632120
Architecture and Key Techniques of Augmented Reality Maintenance Guiding System for Civil Aircrafts
NASA Astrophysics Data System (ADS)
hong, Zhou; Wenhua, Lu
2017-01-01
Augmented reality technology is introduced into the maintenance field to enrich real-world scenes with overlaid virtual maintenance-assistance information. This can lower the difficulty of maintenance, reduce maintenance errors, and improve the maintenance efficiency and quality of civil aviation crews. An architecture for an augmented reality virtual maintenance guiding system is proposed on the basis of the definition of augmented reality and an analysis of the characteristics of augmented reality virtual maintenance. Key techniques involved, such as standardization and organization of maintenance data, 3D registration, modeling of maintenance guidance information, and virtual maintenance man-machine interaction, are elaborated, and solutions are given.
Achieving High Resolution Timer Events in Virtualized Environment
Adamczyk, Blazej; Chydzinski, Andrzej
2015-01-01
Virtual Machine Monitors (VMMs) have become popular in different application areas. Some applications may require generating timer events with high resolution and precision. This, however, may be challenging due to the complexity of VMMs. In this paper we focus on the timer functionality provided by five different VMMs: Xen, KVM, Qemu, VirtualBox, and VMware. First, we evaluate the resolutions and precisions of their timer events. The provided resolutions and precisions turn out to be far too low for some applications (e.g., networking applications with quality-of-service requirements). Then, using Xen virtualization, we demonstrate an improved timer design that greatly enhances both the resolution and precision of the achieved timer events. PMID:26177366
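A simple way to probe achieved timer precision from user space, whether on bare metal or inside a guest VM, is to request short sleeps and measure the overshoot. A minimal sketch (not the paper's benchmark code; the function name is an illustrative assumption):

```python
import time

def measure_sleep_precision(requested_s, trials=50):
    """Measure the mean overshoot when requesting short OS timer sleeps.

    Inside a virtual machine, the overshoot typically grows because the
    VMM multiplexes timer interrupts across guests.
    """
    overshoots = []
    for _ in range(trials):
        start = time.perf_counter()
        time.sleep(requested_s)
        elapsed = time.perf_counter() - start
        overshoots.append(elapsed - requested_s)  # how late the wakeup was
    return sum(overshoots) / len(overshoots)

mean_overshoot = measure_sleep_precision(0.001)  # request 1 ms sleeps
```

Comparing the overshoot measured on the host against the same measurement inside a guest gives a rough view of the timer degradation a VMM introduces.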
A computer-based training system combining virtual reality and multimedia
NASA Technical Reports Server (NTRS)
Stansfield, Sharon A.
1993-01-01
Training new users of complex machines is often an expensive and time-consuming process. This is particularly true for special purpose systems, such as those frequently encountered in DOE applications. This paper presents a computer-based training system intended as a partial solution to this problem. The system extends the basic virtual reality (VR) training paradigm by adding a multimedia component which may be accessed during interaction with the virtual environment. The 3D model used to create the virtual reality is also used as the primary navigation tool through the associated multimedia. This method exploits the natural mapping between a virtual world and the real world that it represents to provide a more intuitive way for the student to interact with all forms of information about the system.
NASA Astrophysics Data System (ADS)
Stagni, F.; McNab, A.; Luzzi, C.; Krzemien, W.; Consortium, DIRAC
2017-10-01
In the last few years, new types of computing models, such as IaaS (Infrastructure as a Service) and IaaC (Infrastructure as a Client), have gained popularity. New resources may come as part of pledged resources, while others are opportunistic. Most, but not all, of these new infrastructures are based on virtualization techniques. In addition, some of them present opportunities for multi-processor computing slots to the users. Virtual Organizations are therefore facing heterogeneity of the available resources, and the use of an interware software layer like DIRAC to provide a transparent, uniform interface has become essential. The transparent access to the underlying resources is realized by implementing the pilot model. DIRAC's newest generation of generic pilots (the so-called Pilots 2.0) are the "pilots for all the skies", and were successfully released into production more than a year ago. They use a plugin mechanism that makes them easily adaptable. Pilots 2.0 have been used for fetching and running jobs on every type of resource, be it a Worker Node (WN) behind a CREAM/ARC/HTCondor/DIRAC Computing Element, a Virtual Machine running on IaaC infrastructures like Vac or BOINC, an IaaS cloud resource managed by Vcycle, an LHCb High Level Trigger farm node, or any other type of opportunistic computing resource. Make a machine a "Pilot Machine", and all differences between resources disappear. This contribution describes how pilots are made suitable for different resources, along with the recent steps taken towards a fully unified framework, including monitoring. The cases of multi-processor computing slots, on either real or virtual machines, using the whole node or a partition of it, are also discussed.
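The plugin idea can be sketched as an ordered list of command classes selected per resource type; the class and key names below are hypothetical and illustrative only, not the actual DIRAC Pilots 2.0 API:

```python
# Hypothetical sketch of a pilot "plugin" mechanism: a generic pilot runs an
# ordered list of commands, and each resource type can swap in its own set.

class PilotCommand:
    """Base class for one step of the pilot bootstrap sequence."""
    def execute(self, context):
        raise NotImplementedError

class ConfigureCE(PilotCommand):
    def execute(self, context):
        # Record which kind of computing element we are running on.
        context["ce_type"] = context.get("resource", "WN")

class FetchJob(PilotCommand):
    def execute(self, context):
        # Stand-in for matching a payload from the central task queue.
        context["job"] = f"payload-for-{context['ce_type']}"

# Per-resource command sets; real pilots would differ per resource type.
COMMAND_SETS = {
    "grid":  [ConfigureCE, FetchJob],
    "cloud": [ConfigureCE, FetchJob],
}

def run_pilot(resource):
    """Run the command sequence for a resource type and return the context."""
    context = {"resource": resource}
    commands = COMMAND_SETS.get(resource, COMMAND_SETS["grid"])
    for command_class in commands:
        command_class().execute(context)
    return context

ctx = run_pilot("cloud")  # same pilot code path, cloud-specific behavior
```

The point of the pattern is that the pilot's outer loop never changes; adapting to a new resource type only means registering a different command list.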
A virtual simulator designed for collision prevention in proton therapy.
Jung, Hyunuk; Kum, Oyeon; Han, Youngyih; Park, Hee Chul; Kim, Jin Sung; Choi, Doo Ho
2015-10-01
In proton therapy, collisions between the patient and the nozzle can potentially occur because of the large nozzle structure and efforts to minimize the air gap. Thus, software was developed to predict such collisions between the nozzle and patient using virtual treatment simulation. Three-dimensional (3D) modeling of the gantry inner floor, nozzle, and robotic couch was performed using SolidWorks based on the manufacturer's machine data. To obtain patient body information, a 3D scanner was utilized immediately before CT scanning. Using the acquired images, a 3D image of the patient's body contour was reconstructed. The accuracy of the image was confirmed against the CT image of a humanoid phantom. The machine components and the virtual patient were combined in the treatment-room coordinate system, resulting in a virtual simulator. The simulator reproduced the motion of its components, such as rotation and translation of the gantry, nozzle, and couch, in real scale. Collisions, if any, were examined in both static and dynamic modes. The static mode assessed collisions only at fixed positions of the machine's components, while the dynamic mode operated any time a component was in motion. A collision was identified if any voxels of two components, e.g., the nozzle and the patient or couch, overlapped when calculating volume locations. The event and collision point were visualized, and collision volumes were reported. All components were successfully assembled, and the motions were accurately controlled. The 3D shape of the phantom agreed with CT images within a deviation of 2 mm. Collision situations were simulated within minutes, and the results were displayed and reported. The developed software will be useful in improving patient safety and the clinical efficiency of proton therapy.
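The voxel-overlap test described above can be sketched as intersecting the sets of occupied voxel indices of two components; the function names and coordinates below are illustrative assumptions, not the authors' implementation:

```python
def voxelize(points, voxel_size):
    """Map 3D points to integer voxel indices at the given resolution."""
    return {tuple(int(c // voxel_size) for c in p) for p in points}

def collides(points_a, points_b, voxel_size=1.0):
    """Report whether any voxel is occupied by both components, and which ones."""
    overlap = voxelize(points_a, voxel_size) & voxelize(points_b, voxel_size)
    return (len(overlap) > 0, overlap)

# Toy check: a "nozzle" point entering the same voxel as a "patient" point.
nozzle  = [(10.2, 5.1, 3.0), (11.0, 5.0, 3.0)]
patient = [(10.8, 5.4, 3.2), (20.0, 20.0, 20.0)]
hit, where = collides(nozzle, patient, voxel_size=1.0)
```

In a dynamic mode, the same test would simply be re-run on the transformed point sets each time a component moves.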
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duan, X; Arbique, G; Guild, J
Purpose: To evaluate the quantitative image quality of spectral reconstructions of phantom data from a spectral CT scanner. Methods: The spectral CT scanner (IQon Spectral CT, Philips Healthcare) is equipped with a dual-layer detector and generates conventional 80-140 kVp images and variety of spectral reconstructions, e.g., virtual monochromatic (VM) images, virtual non-contrast (VNC) images, iodine maps, and effective atomic number (Z) images. A cylindrical solid water phantom (Gammex 472, 33 cm diameter and 5 cm thick) with iodine (2.0-20.0 mg I/ml) and calcium (50-600 mg/ml) rod inserts was scanned at 120 kVp and 27 mGy CTDIvol. Spectral reconstructions were evaluatedmore » by comparing image measurements with theoretical values calculated from nominal rod compositions provided by the phantom manufacturer. The theoretical VNC was calculated using water and iodine basis material decomposition, and the theoretical Z was calculated using two common methods, the chemical formula method (Z1) and the dual-energy ratio method (Z2). Results: Beam-hardening-like artifacts between high-attenuation calcium rods (≥300 mg/ml, >800 HU) influenced quantitative measurements, so the quantitative analysis was only performed on iodine rods using the images from the scan with all the calcium rods removed. The CT numbers of the iodine rods in the VM images (50∼150 keV) were close to theoretical values with average difference of 2.4±6.9 HU. Compared with theoretical values, the average difference for iodine concentration, VNC CT number and effective Z of iodine rods were −0.10±0.38 mg/ml, −0.1±8.2 HU, 0.25±0.06 (Z1) and −0.23±0.07 (Z2). Conclusion: The results indicate that the spectral CT scanner generates quantitatively accurate spectral reconstructions at clinically relevant iodine concentrations. Beam-hardening-like artifacts still exist when high-attenuation objects are present and their impact on patient images needs further investigation. 
YY is an employee of Philips Healthcare.
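As a hedged illustration of the "chemical formula method" (Z1) mentioned in the abstract, the sketch below implements the commonly used Mayneord-style effective atomic number with exponent 2.94; the water example is our own, not the phantom data, and the exact formula used by the scanner vendor may differ.

```python
# Illustrative sketch of the chemical-formula effective atomic number:
# Z_eff = (sum_i f_i * Z_i^m)^(1/m), where f_i is the electron fraction
# of element i and m = 2.94 is the commonly used exponent (assumption).
def z_eff(composition, m=2.94):
    """composition: list of (Z, atoms per molecule) for one compound."""
    electrons = [(z, n * z) for z, n in composition]  # electrons per element
    total = sum(e for _, e in electrons)
    return sum((e / total) * z**m for z, e in electrons) ** (1 / m)

# Water (H2O): two hydrogens (Z=1) and one oxygen (Z=8).
print(f"water Z_eff ~ {z_eff([(1, 2), (8, 1)]):.2f}")
```

The result for water is close to the textbook value of about 7.4, a quick sanity check for the implementation.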
Ichikawa, Daisuke; Saito, Toki; Ujita, Waka; Oyama, Hiroshi
2016-12-01
Our purpose was to develop a new machine-learning approach (a virtual health check-up) toward identification of those at high risk of hyperuricemia. Applying the system to general health check-ups is expected to reduce medical costs compared with administering an additional test. Data were collected during annual health check-ups performed in Japan between 2011 and 2013 (inclusive). We prepared training and test datasets from the health check-up data to build prediction models; these were composed of 43,524 and 17,789 persons, respectively. Gradient-boosting decision tree (GBDT), random forest (RF), and logistic regression (LR) approaches were trained using the training dataset and were then used to predict hyperuricemia in the test dataset. Undersampling was applied to build the prediction models to deal with the imbalanced class dataset. The results showed that the RF and GBDT approaches afforded the best performances in terms of sensitivity and specificity, respectively. The area under the curve (AUC) values of the models, which reflected the total discriminative ability of the classification, were 0.796 [95% confidence interval (CI): 0.766-0.825] for the GBDT, 0.784 [95% CI: 0.752-0.815] for the RF, and 0.785 [95% CI: 0.752-0.819] for the LR approaches. No significant differences were observed between pairs of each approach. Small changes occurred in the AUCs after applying undersampling to build the models. We developed a virtual health check-up that predicted the development of hyperuricemia using machine-learning methods. The GBDT, RF, and LR methods had similar predictive capability. Undersampling did not remarkably improve predictive power. Copyright © 2016 Elsevier Inc. All rights reserved.
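The class-imbalance handling described above can be sketched in a few lines. This is not the study's code: the classifiers (GBDT, RF, LR) would come from a library such as scikit-learn, and here synthetic data and a trivial score stand in for a fitted model, to show random undersampling and a hand-computed AUC.

```python
# Illustrative sketch: random undersampling of the majority class for an
# imbalanced dataset, plus AUC as the probability that a random positive
# outscores a random negative. Data and "model score" are synthetic.
import numpy as np

def undersample(X, y, rng):
    """Keep all positives and an equal-sized random subset of negatives."""
    pos = np.flatnonzero(y == 1)
    neg = rng.choice(np.flatnonzero(y == 0), size=pos.size, replace=False)
    idx = np.concatenate([pos, neg])
    return X[idx], y[idx]

def auc(scores, y):
    """Empirical AUC: P(score of random positive > score of random negative)."""
    pos, neg = scores[y == 1], scores[y == 0]
    return (pos[:, None] > neg[None, :]).mean()

rng = np.random.default_rng(0)
y = (rng.random(2000) < 0.1).astype(int)      # ~10% positives (imbalanced)
X = rng.normal(size=(2000, 3)) + y[:, None]   # positives shifted upward
X_bal, y_bal = undersample(X, y, rng)
print(y_bal.mean(), auc(X.sum(axis=1), y))    # class balance, then AUC
```

After undersampling the training classes are exactly balanced, matching the procedure used to build the prediction models in the abstract.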
The HARNESS Workbench: Unified and Adaptive Access to Diverse HPC Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sunderam, Vaidy S.
2012-03-20
The primary goal of the Harness WorkBench (HWB) project is to investigate innovative software environments that will help enhance the overall productivity of applications science on diverse HPC platforms. Two complementary frameworks were designed: one, a virtualized command toolkit for application building, deployment, and execution that provides a common view across diverse HPC systems, in particular the DOE leadership computing platforms (Cray, IBM, SGI, and clusters); and two, a unified runtime environment that consolidates access to runtime services via an adaptive framework for execution-time and post-processing activities. A prototype of the first was developed based on the concept of a 'system-call virtual machine' (SCVM), to enhance portability of the HPC application deployment process across heterogeneous high-end machines. The SCVM approach to portable builds is based on the insertion of toolkit-interpretable directives into original application build scripts. Modifications resulting from these directives preserve the semantics of the original build instruction flow. The execution of the build script is controlled by our toolkit, which intercepts build script commands in a manner transparent to the end-user. We have applied this approach to a scientific production code (Gamess-US) on the Cray-XT5 machine. The second facet, termed Unibus, aims to facilitate provisioning and aggregation of multifaceted resources from resource providers' and end-users' perspectives. To achieve this, Unibus proposes a Capability Model and mediators (resource drivers) to virtualize access to diverse resources, and soft and successive conditioning to enable automatic and user-transparent resource provisioning. A proof-of-concept implementation has demonstrated the viability of this approach on high-end machines, grid systems and computing clouds.
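The directive mechanism described above can be illustrated with a toy interpreter. The directive syntax (`#SCVM …`), platform names, and flags below are invented for illustration; the actual HWB toolkit's directive language is not specified in the abstract.

```python
# Invented mini-example of the SCVM directive idea: toolkit-interpretable
# lines in a build script are expanded per target platform, while ordinary
# lines pass through unchanged, preserving the original instruction flow.
PLATFORM_FLAGS = {               # hypothetical per-platform compiler flags
    "cray-xt5": "-O3 -target=compute_node",
    "generic": "-O2",
}

def interpret(script, platform):
    """Rewrite directive lines for one platform; copy everything else."""
    out = []
    for line in script.splitlines():
        if line.startswith("#SCVM compile "):
            src = line.split()[-1]
            out.append(f"cc {PLATFORM_FLAGS[platform]} -c {src}")
        else:
            out.append(line)
    return "\n".join(out)

script = "echo building\n#SCVM compile main.f90\necho done"
print(interpret(script, "cray-xt5"))
```

The same annotated script yields a different concrete build sequence per platform, which is the portability property the abstract describes.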
Gómez-Bombarelli, Rafael; Aguilera-Iparraguirre, Jorge; Hirzel, Timothy D; Duvenaud, David; Maclaurin, Dougal; Blood-Forsythe, Martin A; Chae, Hyun Sik; Einzinger, Markus; Ha, Dong-Gwang; Wu, Tony; Markopoulos, Georgios; Jeon, Soonok; Kang, Hosuk; Miyazaki, Hiroshi; Numata, Masaki; Kim, Sunghan; Huang, Wenliang; Hong, Seong Ik; Baldo, Marc; Adams, Ryan P; Aspuru-Guzik, Alán
2016-10-01
Virtual screening is becoming a ground-breaking tool for molecular discovery due to the exponential growth of available computer time and constant improvement of simulation and machine learning techniques. We report an integrated organic functional material design process that incorporates theoretical insight, quantum chemistry, cheminformatics, machine learning, industrial expertise, organic synthesis, molecular characterization, device fabrication and optoelectronic testing. After exploring a search space of 1.6 million molecules and screening over 400,000 of them using time-dependent density functional theory, we identified thousands of promising novel organic light-emitting diode molecules across the visible spectrum. Our team collaboratively selected the best candidates from this set. The experimentally determined external quantum efficiencies for these synthesized candidates were as large as 22%.
Mattsson, Sofia; Sjöström, Hans-Erik; Englund, Claire
2016-06-25
Objective. To develop and implement a virtual tablet machine simulation to aid distance students' understanding of the processes involved in tablet production. Design. A tablet simulation was created enabling students to study the effects different parameters have on the properties of the tablet. Once results were generated, students interpreted and explained them on the basis of current theory. Assessment. The simulation was evaluated using written questionnaires and focus group interviews. Students appreciated the exercise and considered it to be motivational. Students commented that the simulation, together with the online seminar and the writing of the report, was beneficial for their learning process. Conclusion. According to students' perceptions, the use of the tablet simulation contributed to their understanding of the compaction process.
Making extreme computations possible with virtual machines
NASA Astrophysics Data System (ADS)
Reuter, J.; Chokoufe Nejad, B.; Ohl, T.
2016-10-01
State-of-the-art algorithms generate scattering amplitudes for high-energy physics at leading order for high-multiplicity processes as compiled code (in Fortran, C or C++). For complicated processes the size of these libraries can become tremendous (many GiB). We show that amplitudes can be translated to byte-code instructions, which reduces the size by an order of magnitude. The byte code is interpreted by a virtual machine with runtimes comparable to compiled code and better scaling with additional legs. We study the properties of this algorithm as an extension of the Optimizing Matrix Element Generator (O'Mega). The byte-code matrix elements are available as an alternative input for the event generator WHIZARD. The byte-code interpreter can be implemented very compactly, which will help with a future implementation on massively parallel GPUs.
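The core idea, replacing compiled amplitude code with interpreted byte code, can be sketched with a minimal stack machine. The opcodes and program layout below are invented for illustration; O'Mega/WHIZARD's actual instruction set differs.

```python
# Minimal sketch of a byte-code virtual machine: an arithmetic expression
# (standing in for a matrix-element evaluation) is stored as a flat list of
# instructions and interpreted on a value stack instead of being compiled.
LOAD, MUL, ADD = range(3)        # invented opcodes for illustration

def run(program, inputs):
    """Interpret (opcode, operand) pairs on a value stack."""
    stack = []
    for op, arg in program:
        if op == LOAD:                       # push an input value
            stack.append(inputs[arg])
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack.pop()

# (x0 * x1) + x2 expressed as byte code:
prog = [(LOAD, 0), (LOAD, 1), (MUL, None), (LOAD, 2), (ADD, None)]
print(run(prog, [2.0, 3.0, 4.0]))  # -> 10.0
```

Because the program is data rather than compiled code, its on-disk size scales with the expression itself, which is the compression effect the abstract reports.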
ARC2D - EFFICIENT SOLUTION METHODS FOR THE NAVIER-STOKES EQUATIONS (DEC RISC ULTRIX VERSION)
NASA Technical Reports Server (NTRS)
Biyabani, S. R.
1994-01-01
ARC2D is a computational fluid dynamics program developed at the NASA Ames Research Center specifically for airfoil computations. The program uses implicit finite-difference techniques to solve two-dimensional Euler equations and thin layer Navier-Stokes equations. It is based on the Beam and Warming implicit approximate factorization algorithm in generalized coordinates. The methods are either time accurate or accelerated non-time accurate steady state schemes. The evolution of the solution through time is physically realistic; good solution accuracy is dependent on mesh spacing and boundary conditions. The mathematical development of ARC2D begins with the strong conservation law form of the two-dimensional Navier-Stokes equations in Cartesian coordinates, which admits shock capturing. The Navier-Stokes equations can be transformed from Cartesian coordinates to generalized curvilinear coordinates in a manner that permits one computational code to serve a wide variety of physical geometries and grid systems. ARC2D includes an algebraic mixing length model to approximate the effect of turbulence. In cases of high Reynolds number viscous flows, thin layer approximation can be applied. ARC2D allows for a variety of solutions to stability boundaries, such as those encountered in flows with shocks. The user has considerable flexibility in assigning geometry and developing grid patterns, as well as in assigning boundary conditions. However, the ARC2D model is most appropriate for attached and mildly separated boundary layers; no attempt is made to model wake regions and widely separated flows. The techniques have been successfully used for a variety of inviscid and viscous flowfield calculations. The Cray version of ARC2D is written in FORTRAN 77 for use on Cray series computers and requires approximately 5Mb memory. The program is fully vectorized. The tape includes variations for the COS and UNICOS operating systems. 
Also included is a sample routine for CONVEX computers to emulate Cray system time calls, which should be easy to modify for other machines as well. The standard distribution media for this version is a 9-track 1600 BPI ASCII Card Image format magnetic tape. The Cray version was developed in 1987. The IBM ES/3090 version is an IBM port of the Cray version. It is written in IBM VS FORTRAN and has the capability of executing in both vector and parallel modes on the MVS/XA operating system and in vector mode on the VM/XA operating system. Various options of the IBM VS FORTRAN compiler provide new features for the ES/3090 version, including 64-bit arithmetic and up to 2 GB of virtual addressability. The IBM ES/3090 version is available only as a 9-track, 1600 BPI IBM IEBCOPY format magnetic tape. The IBM ES/3090 version was developed in 1989. The DEC RISC ULTRIX version is a DEC port of the Cray version. It is written in FORTRAN 77 for RISC-based Digital Equipment platforms. The memory requirement is approximately 7Mb of main memory. It is available in UNIX tar format on TK50 tape cartridge. The port to DEC RISC ULTRIX was done in 1990. COS and UNICOS are trademarks and Cray is a registered trademark of Cray Research, Inc. IBM, ES/3090, VS FORTRAN, MVS/XA, and VM/XA are registered trademarks of International Business Machines. DEC and ULTRIX are registered trademarks of Digital Equipment Corporation.
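The Beam and Warming approximate factorization used by ARC2D splits the implicit operator into one-dimensional sweeps, each requiring a (block-)tridiagonal solve. As a hedged illustration of one such sweep, here is a scalar Thomas-algorithm solve in Python; ARC2D itself is Fortran and works with block matrices.

```python
# Thomas algorithm: O(n) direct solve of a tridiagonal linear system,
# the kernel of each implicit sweep in approximate-factorization schemes.
def thomas(a, b, c, d):
    """Solve with a = sub-diagonal, b = main, c = super-diagonal, d = rhs.
    a[0] and c[-1] are unused."""
    n = len(d)
    b, d = b[:], d[:]                 # copy to avoid mutating the caller's lists
    for i in range(1, n):             # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):    # back substitution
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

# Model problem: the 1-D Laplacian stencil [-1, 2, -1] with rhs chosen
# so the exact solution is x = (1, 1, 1).
print(thomas([0, -1, -1], [2, 2, 2], [-1, -1, 0], [1, 0, 1]))
```

Factorizing the 2-D implicit operator into two such 1-D solves per time step is what makes the scheme affordable compared with inverting the full 2-D matrix.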
ARC2D - EFFICIENT SOLUTION METHODS FOR THE NAVIER-STOKES EQUATIONS (CRAY VERSION)
NASA Technical Reports Server (NTRS)
Pulliam, T. H.
1994-01-01
ARC2D is a computational fluid dynamics program developed at the NASA Ames Research Center specifically for airfoil computations. The program uses implicit finite-difference techniques to solve two-dimensional Euler equations and thin layer Navier-Stokes equations. It is based on the Beam and Warming implicit approximate factorization algorithm in generalized coordinates. The methods are either time accurate or accelerated non-time accurate steady state schemes. The evolution of the solution through time is physically realistic; good solution accuracy is dependent on mesh spacing and boundary conditions. The mathematical development of ARC2D begins with the strong conservation law form of the two-dimensional Navier-Stokes equations in Cartesian coordinates, which admits shock capturing. The Navier-Stokes equations can be transformed from Cartesian coordinates to generalized curvilinear coordinates in a manner that permits one computational code to serve a wide variety of physical geometries and grid systems. ARC2D includes an algebraic mixing length model to approximate the effect of turbulence. In cases of high Reynolds number viscous flows, thin layer approximation can be applied. ARC2D allows for a variety of solutions to stability boundaries, such as those encountered in flows with shocks. The user has considerable flexibility in assigning geometry and developing grid patterns, as well as in assigning boundary conditions. However, the ARC2D model is most appropriate for attached and mildly separated boundary layers; no attempt is made to model wake regions and widely separated flows. The techniques have been successfully used for a variety of inviscid and viscous flowfield calculations. The Cray version of ARC2D is written in FORTRAN 77 for use on Cray series computers and requires approximately 5Mb memory. The program is fully vectorized. The tape includes variations for the COS and UNICOS operating systems. 
Also included is a sample routine for CONVEX computers to emulate Cray system time calls, which should be easy to modify for other machines as well. The standard distribution media for this version is a 9-track 1600 BPI ASCII Card Image format magnetic tape. The Cray version was developed in 1987. The IBM ES/3090 version is an IBM port of the Cray version. It is written in IBM VS FORTRAN and has the capability of executing in both vector and parallel modes on the MVS/XA operating system and in vector mode on the VM/XA operating system. Various options of the IBM VS FORTRAN compiler provide new features for the ES/3090 version, including 64-bit arithmetic and up to 2 GB of virtual addressability. The IBM ES/3090 version is available only as a 9-track, 1600 BPI IBM IEBCOPY format magnetic tape. The IBM ES/3090 version was developed in 1989. The DEC RISC ULTRIX version is a DEC port of the Cray version. It is written in FORTRAN 77 for RISC-based Digital Equipment platforms. The memory requirement is approximately 7Mb of main memory. It is available in UNIX tar format on TK50 tape cartridge. The port to DEC RISC ULTRIX was done in 1990. COS and UNICOS are trademarks and Cray is a registered trademark of Cray Research, Inc. IBM, ES/3090, VS FORTRAN, MVS/XA, and VM/XA are registered trademarks of International Business Machines. DEC and ULTRIX are registered trademarks of Digital Equipment Corporation.
ERIC Educational Resources Information Center
Morales, Teresa M.; Bang, EunJin; Andre, Thomas
2013-01-01
This paper presents a qualitative case analysis of a new and unique, high school, student-directed, project-based learning (PBL), virtual reality (VR) class. In order to create projects, students learned, on an independent basis, how to program an industrial-level VR machine. A constraint was that students were required to produce at least one…
ERIC Educational Resources Information Center
Kanderakis, Nikos E.
2009-01-01
According to the principle of virtual velocities, if on a simple machine in equilibrium we suppose a slight virtual movement, then the ratio of weights or forces equals the inverse ratio of velocities or displacements. The product of the weight raised or force applied multiplied by the height or displacement plays a central role there. British…
Methods For Self-Organizing Software
Bouchard, Ann M.; Osbourn, Gordon C.
2005-10-18
A method for dynamically self-assembling and executing software is provided, containing machines that self-assemble execution sequences and data structures. In addition to ordered function calls (found commonly in other software methods), mutual selective bonding between bonding sites of machines actuates one or more of the bonding machines. Two or more machines can be virtually isolated by a construct, called an encapsulant, containing a population of machines, and potentially other encapsulants, that can only bond with each other. A hierarchical software structure can be created using nested encapsulants. Multi-threading is implemented by populations of machines in different encapsulants that interact concurrently. Machines and encapsulants can move in and out of other encapsulants, thereby changing the functionality. Bonding between machines' sites can be deterministic or stochastic, with bonding triggering a sequence of actions implemented by each machine. A self-assembled execution sequence occurs as a sequence of stochastic bindings between machines followed by their deterministic actuation. It is the sequence of bonding of machines that determines the execution sequence, so the sequence of instructions need not be contiguous in memory.
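The bonding-and-actuation scheme can be sketched compactly. All class and site names below are invented; this is a minimal toy under the assumptions that sites are complementary integers and that a bond fires both machines' deterministic actions.

```python
# Toy sketch of the patent's core idea: "machines" expose bonding sites; a
# stochastic pass selects a complementary pair inside an encapsulant, and the
# bond triggers each machine's deterministic action, so the execution sequence
# emerges from the order of bonds rather than from contiguous code.
import random

class Machine:
    def __init__(self, name, site, action):
        self.name, self.site, self.action = name, site, action

    def bond(self, other, state):
        self.action(state)       # deterministic actuation on bonding
        other.action(state)

class Encapsulant:
    """Isolates a population: only members can bond with each other."""
    def __init__(self, machines):
        self.machines = machines

    def step(self, state, rng):
        # Stochastic selection among complementary site pairs (+k bonds to -k).
        pairs = [(a, b) for a in self.machines for b in self.machines
                 if a is not b and a.site == -b.site]
        if pairs:
            a, b = rng.choice(pairs)
            a.bond(b, state)

state = {"log": []}
enc = Encapsulant([
    Machine("producer", +1, lambda s: s["log"].append("produce")),
    Machine("consumer", -1, lambda s: s["log"].append("consume")),
])
enc.step(state, random.Random(0))
print(state["log"])
```

Nesting `Encapsulant` instances inside one another would give the hierarchical structure the abstract describes, with each population bonding only internally.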
Ribis, John W; Ravichandran, Priyanka; Putnam, Emily E; Pishdadian, Keyan; Shen, Aimee
2017-01-01
The spore-forming bacterial pathogen Clostridium difficile is a leading cause of health care-associated infections in the United States. In order for this obligate anaerobe to transmit infection, it must form metabolically dormant spores prior to exiting the host. A key step during this process is the assembly of a protective, multilayered proteinaceous coat around the spore. Coat assembly depends on coat morphogenetic proteins recruiting distinct subsets of coat proteins to the developing spore. While 10 coat morphogenetic proteins have been identified in Bacillus subtilis, only two of these morphogenetic proteins have homologs in the Clostridia: SpoIVA and SpoVM. C. difficile SpoIVA is critical for proper coat assembly and functional spore formation, but the requirement for SpoVM during this process was unknown. Here, we show that SpoVM is largely dispensable for C. difficile spore formation, in contrast with B. subtilis. Loss of C. difficile SpoVM resulted in modest decreases (~3-fold) in heat- and chloroform-resistant spore formation, while morphological defects such as coat detachment from the forespore and abnormal cortex thickness were observed in ~30% of spoVM mutant cells. Biochemical analyses revealed that C. difficile SpoIVA and SpoVM directly interact, similarly to their B. subtilis counterparts. However, in contrast with B. subtilis, C. difficile SpoVM was not essential for SpoIVA to encase the forespore. Since C. difficile coat morphogenesis requires SpoIVA-interacting protein L (SipL), which is conserved exclusively in the Clostridia, but not the more broadly conserved SpoVM, our results reveal another key difference between C. difficile and B. subtilis spore assembly pathways. IMPORTANCE The spore-forming obligate anaerobe Clostridium difficile is the leading cause of antibiotic-associated diarrheal disease in the United States. When C.
difficile spores are ingested by susceptible individuals, they germinate within the gut and transform into vegetative, toxin-secreting cells. During infection, C. difficile must also induce spore formation to survive exit from the host. Since spore formation is essential for transmission, understanding the basic mechanisms underlying sporulation in C. difficile could inform the development of therapeutic strategies targeting spores. In this study, we determine the requirement of the C. difficile homolog of SpoVM, a protein that is essential for spore formation in Bacillus subtilis due to its regulation of coat and cortex formation. We observed that SpoVM plays a minor role in C. difficile spore formation, in contrast with B. subtilis , indicating that this protein would not be a good target for inhibiting spore formation.
Ventromedial prefrontal cortex activity and pathological worry in generalised anxiety disorder.
Via, E; Fullana, M A; Goldberg, X; Tinoco-González, D; Martínez-Zalacaín, I; Soriano-Mas, C; Davey, C G; Menchón, J M; Straube, B; Kircher, T; Pujol, J; Cardoner, N; Harrison, B J
2018-05-09
Pathological worry is a hallmark feature of generalised anxiety disorder (GAD), associated with dysfunctional emotional processing. The ventromedial prefrontal cortex (vmPFC) is involved in the regulation of such processes, but the link between vmPFC emotional responses and pathological v. adaptive worry has not yet been examined. Aims: To study the association between worry and vmPFC activity evoked by the processing of learned safety and threat signals. In total, 27 unmedicated patients with GAD and 56 healthy controls (HC) underwent a differential fear conditioning paradigm during functional magnetic resonance imaging. Compared to HC, the GAD group demonstrated reduced vmPFC activation to safety signals and no safety-threat processing differentiation. This response was positively correlated with worry severity in GAD, whereas the same variables showed a negative and weak correlation in HC. Poor vmPFC safety-threat differentiation might characterise GAD, and its distinctive association with GAD worries suggests a neural-based qualitative difference between healthy and pathological worries. Declaration of interest: None.
Gao, Yan; Zhang, Fu-qiang; He, Fan
2011-10-01
To evaluate the interface compatibility between tooth-like yttria-stabilized tetragonal zirconia polycrystal (Y-TZP) made by adding rare-earth oxide and Vita VM9 veneering porcelain. Six kinds (S1-S6) of tooth-like yttria-stabilized tetragonal zirconia polycrystal were made using an internal coloration technology, and the thermal shock resistance and interface bonding strength with Vita VM9 Base Dentine were measured. Statistical analysis was performed using the SAS 6.12 software package. No gap between the layers was observed in the thermal shock test. The shear bonding strength between Y-TZP and Vita VM9 was high, ranging from (36.03±3.82) to (37.98±4.89) MPa. By adding rare-earth oxide to yttria-stabilized tetragonal zirconia polycrystal, good compatibility between the layers (Y-TZP and Vita VM9) can be achieved, with sound interface integrity, making the combination suitable for clinical applications.
Reward value comparison via mutual inhibition in ventromedial prefrontal cortex
Strait, Caleb E.; Blanchard, Tommy C.; Hayden, Benjamin Y.
2014-01-01
Recent theories suggest that reward-based choice reflects competition between value signals in the ventromedial prefrontal cortex (vmPFC). We tested this idea by recording vmPFC neurons while macaques performed a gambling task with asynchronous offer presentation. We found that neuronal activity shows four patterns consistent with selection via mutual inhibition. (1) Correlated tuning for probability and reward size, suggesting that vmPFC carries an integrated value signal, (2) anti-correlated tuning curves for the two options, suggesting mutual inhibition, (3) neurons rapidly come to signal the value of the chosen offer, suggesting the circuit serves to produce a choice, (4) after regressing out the effects of option values, firing rates still could predict choice – a choice probability signal. In addition, neurons signaled gamble outcomes, suggesting that vmPFC contributes to both monitoring and choice processes. These data suggest a possible mechanism for reward-based choice and endorse the centrality of vmPFC in that process. PMID:24881835
Analysis of Crosstalk in 3D Circularly Polarized LCDs Depending on the Vertical Viewing Location.
Zeng, Menglin; Nguyen, Truong Q
2016-03-01
Crosstalk in circularly polarized (CP) liquid crystal displays (LCD) with polarized glasses (passive 3D glasses) is mainly caused by two factors: 1) the polarizing system, including wave retarders, and 2) the vertical misalignment (VM) of light between the LC module and the patterned retarder. We show that the latter, which is highly dependent on the vertical viewing location, is a much more significant factor of crosstalk in CP LCD than the former. This paper makes three contributions. First, a display model for CP LCD that accurately characterizes VM is proposed. Second, we propose a novel display calibration method for the VM characterization that requires only pictures of the screen taken at four viewing locations. In addition, we prove that the VM-based crosstalk cannot be efficiently reduced by either preprocessing the input images or optimizing the polarizing system. Furthermore, we derive the analytic solution for the viewing zone in which the entire screen is free of VM-based crosstalk.
Human Machine Interfaces for Teleoperators and Virtual Environments Conference
NASA Technical Reports Server (NTRS)
1990-01-01
In a teleoperator system the human operator senses, moves within, and operates upon a remote or hazardous environment by means of a slave mechanism (a mechanism often referred to as a teleoperator). In a virtual environment system the interactive human machine interface is retained, but the slave mechanism and its environment are replaced by a computer simulation. Video is replaced by computer graphics. The auditory and force sensations imparted to the human operator are similarly computer generated. In contrast to a teleoperator system, where the purpose is to extend the operator's sensorimotor system in a manner that facilitates exploration and manipulation of the physical environment, in a virtual environment system the purpose is to train, inform, alter, or study the human operator, or to modify the state of the computer and the information environment. A major application in which the human operator is the target is that of flight simulation. Although flight simulators have been around for more than a decade, they have had little impact outside aviation, presumably because the application was so specialized and so expensive.
Mathematical defense method of networked servers with controlled remote backups
NASA Astrophysics Data System (ADS)
Kim, Song-Kyoo
2006-05-01
The networked server defense model is focused on reliability and availability in security respects. The (remote) backup servers are hooked up by VPN (Virtual Private Network) with a high-speed optical network and replace broken main servers immediately. The networked servers can be represented as "machines", and the system then deals with an unreliable main machine, spare machines, and auxiliary spare machines. During vacation periods, when the system performs mandatory routine maintenance, auxiliary machines are used for back-ups and the information on the system is naturally delayed. An analog of the N-policy restricts the usage of auxiliary machines to some reasonable quantity. The results are demonstrated in the network architecture by using stochastic optimization techniques.
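The restricting effect of an N-policy can be shown with a toy Monte Carlo run. All parameters below (failure probability, step count) are invented for illustration and are not taken from the paper's model.

```python
# Toy sketch of the N-policy idea: during a maintenance "vacation", backup
# requests accumulate in a queue, and the auxiliary spare machines are engaged
# only once N requests have piled up, limiting auxiliary usage.
import random

def simulate(n_policy, p_fail=0.1, steps=10_000, seed=1):
    rng = random.Random(seed)
    queue, aux_activations = 0, 0
    for _ in range(steps):
        if rng.random() < p_fail:     # a main server breaks this step
            queue += 1
        if queue >= n_policy:         # N-policy threshold reached:
            aux_activations += 1      # engage auxiliaries, clear the backlog
            queue = 0
    return aux_activations / steps    # fraction of steps auxiliaries run

print(simulate(n_policy=1), simulate(n_policy=5))
```

Raising N trades auxiliary-machine usage against a longer backlog, which is the quantity the stochastic optimization in the paper would tune.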
Does the vaginal microbiota play a role in the development of cervical cancer?
Kyrgiou, Maria; Mitra, Anita; Moscicki, Anna-Barbara
2017-01-01
Persistent infection with oncogenic human papillomavirus (HPV) is necessary but not sufficient for the development of cervical cancer. The factors promoting persistence as well as those triggering carcinogenetic pathways are incompletely understood. Rapidly evolving evidence indicates that the vaginal microbiome (VM) may play a functional role (both protective and harmful) in the acquisition and persistence of HPV, and subsequent development of cervical cancer. The first studies examining the VM and the presence of an HPV infection using next-generation sequencing techniques identified higher microbial diversity in HPV-positive as opposed to HPV-negative women. Furthermore, there appears to be a temporal relationship between the VM and HPV infection in that specific community state types may be correlated with a higher chance of progression or regression of the infection. Studies describing the VM in women with preinvasive disease (squamous intraepithelial neoplasia [SIL]) consistently demonstrate a dysbiosis in women with the more severe disease. Although it is plausible that the composition of the VM may influence the host's innate immune response, susceptibility to infection, and the development of cervical disease, the studies to date do not prove causality. Future studies should explore the causal link between the VM and the clinical outcome in longitudinal samples from existing biobanks. Copyright © 2016 Elsevier Inc. All rights reserved.
Tu, Dom-Gene; Yu, Yun; Lee, Che-Hsin; Kuo, Yu-Liang; Lu, Yin-Che; Tu, Chi-Wen; Chang, Wen-Wei
2016-01-01
Hinokitiol, alternatively known as β-thujaplicin, is a tropolone-associated natural compound with antimicrobial, anti-inflammatory and antitumor activity. Breast cancer stem/progenitor cells (BCSCs) are a subpopulation of breast cancer cells associated with tumor initiation, chemoresistance and metastatic behavior, and may be enriched by mammosphere cultivation. Previous studies have demonstrated that BCSCs exhibit vasculogenic mimicry (VM) activity via the epidermal growth factor receptor (EGFR) signaling pathway. The present study investigated the anti-VM activity of hinokitiol in BCSCs. At a concentration below the half maximal inhibitory concentration, hinokitiol inhibited VM formation by mammosphere cells derived from two human breast cancer cell lines. Hinokitiol was also shown to downregulate EGFR protein expression in mammosphere-forming BCSCs without affecting messenger RNA expression. The protein stability of EGFR in BCSCs was also decreased by hinokitiol. The EGFR protein expression and VM formation capability of hinokitiol-treated BCSCs were restored by co-treatment with MG132, a proteasome inhibitor. In conclusion, the present study indicated that hinokitiol may inhibit the VM activity of BCSCs by stimulating proteasome-mediated EGFR degradation. Hinokitiol may act as an anti-VM agent, and may be useful for the development of novel breast cancer therapeutic agents. PMID:27073579
Comparison of vaginal microbiota sampling techniques: cytobrush versus swab.
Mitra, Anita; MacIntyre, David A; Mahajan, Vishakha; Lee, Yun S; Smith, Ann; Marchesi, Julian R; Lyons, Deirdre; Bennett, Phillip R; Kyrgiou, Maria
2017-08-29
Evidence suggests the vaginal microbiota (VM) may influence the risk of persistent Human Papillomavirus (HPV) infection and cervical carcinogenesis. Established cytology biobanks, typically collected with a cytobrush, constitute a unique resource to study such associations longitudinally. It is plausible that, compared to rayon swabs (the most commonly used sampling devices), cytobrushes may disrupt biofilms, leading to variation in VM composition. Cervico-vaginal samples were collected with cytobrush and rayon swabs from 30 women with high-grade cervical precancer. Quantitative PCR was used to compare bacterial load, and Illumina MiSeq sequencing of the V1-V3 regions of the 16S rRNA gene was used to compare VM composition. Cytobrushes collected a higher total bacterial load. Relative abundance of bacterial species was highly comparable between sampling devices (R² = 0.993). However, in women with a Lactobacillus-depleted, high-diversity VM, significantly less correlation in relative species abundance was observed between devices when compared to those with a Lactobacillus species-dominant VM (p = 0.0049). Cytobrush and swab sampling provide a comparable VM composition. In a small proportion of cases, the cytobrush was able to detect an underlying high-diversity community structure not revealed by swab sampling. This study highlights the need to consider sampling devices as potential confounders when comparing multiple studies and datasets.
Nd:YAG laser therapy for rectal and vaginal venous malformations.
Gurien, Lori A; Jackson, Richard J; Kiser, Michelle M; Richter, Gresham T
2017-08-01
Limited therapeutic options exist for rectal and vaginal venous malformations (VM). We describe our center's experience using the Nd:YAG laser for targeted ablation of abnormal veins to treat mucosally involved pelvic VM. Records of patients undergoing non-contact Nd:YAG laser therapy of pelvic VM at a tertiary children's hospital were reviewed. Symptoms, operative findings and details, complications, and outcomes were evaluated. Nine patients (age 0-24) underwent Nd:YAG laser therapy of rectal and/or vaginal VM. Rectal bleeding was present in all patients and vaginal bleeding in all females (n = 5). 5/7 patients had extensive pelvic involvement on MRI. Typical settings were 30 W (rectum) and 20-25 W (vagina), with 0.5-1.0 s pulse duration. Patients were discharged the same day. Treatment intervals ranged from 14 to 180 (average = 56) weeks, with 6.1-year mean follow-up. Five patients experienced symptom relief with a single treatment. Serial treatments managed recurrent bleeding successfully in all patients, with complete resolution of vaginal lesions in 40% of cases. No complications occurred. Nd:YAG laser treatment of rectal and vaginal VM results in substantial improvement and symptom control, with low complication risk. Given the high morbidity of surgical resection, Nd:YAG laser treatment of pelvic VM should be considered as first-line therapy.
The association between resting functional connectivity and dispositional optimism.
Ran, Qian; Yang, Junyi; Yang, Wenjing; Wei, Dongtao; Qiu, Jiang; Zhang, Dong
2017-01-01
Dispositional optimism is an individual characteristic that plays an important role in human experience. Optimists are people who tend to hold positive expectations for their future. Previous studies have focused on the neural basis of optimism, such as task-related neural activity and brain structure volume. However, the functional connectivity between brain regions in dispositional optimists is poorly understood. A previous study suggested that the ventromedial prefrontal cortex (vmPFC) is associated with individual differences in dispositional optimism, but it is unclear whether other brain regions combine with the vmPFC to contribute to dispositional optimism. Thus, the present study used the resting-state functional connectivity (RSFC) approach, with the vmPFC as the seed region, to examine whether differences in functional connectivity between the vmPFC and other brain regions are associated with individual differences in dispositional optimism. The results showed that dispositional optimism was significantly positively correlated with the strength of the RSFC between the vmPFC and the middle temporal gyrus (mTG), and negatively correlated with the RSFC between the vmPFC and the inferior frontal gyrus (IFG). These findings suggest that the mTG and IFG, which are associated with emotion processing and emotion regulation, also play an important role in dispositional optimism.
Hutcherson, Cendri A; Plassmann, Hilke; Gross, James J; Rangel, Antonio
2012-09-26
Cognitive regulation is often used to influence behavioral outcomes. However, the computational and neurobiological mechanisms by which it affects behavior remain unknown. We studied this issue using an fMRI task in which human participants used cognitive regulation to upregulate and downregulate their cravings for foods at the time of choice. We found that activity in both ventromedial prefrontal cortex (vmPFC) and dorsolateral prefrontal cortex (dlPFC) correlated with value. We also found evidence that two distinct regulatory mechanisms were at work: value modulation, which operates by changing the values assigned to foods in vmPFC and dlPFC at the time of choice, and behavioral control modulation, which operates by changing the relative influence of the vmPFC and dlPFC value signals on the action selection process used to make choices. In particular, during downregulation, activation decreased in the value-sensitive region of dlPFC (indicating value modulation) but not in vmPFC, and the relative contribution of the two value signals to behavior shifted toward the dlPFC (indicating behavioral control modulation). The opposite pattern was observed during upregulation: activation increased in vmPFC but not dlPFC, and the relative contribution to behavior shifted toward the vmPFC. Finally, ventrolateral PFC and posterior parietal cortex were more active during both upregulation and downregulation, and were functionally connected with vmPFC and dlPFC during cognitive regulation, which suggests that they help to implement the changes to the decision-making circuitry generated by cognitive regulation.
Zuniga, M. Geraldine; Janky, Kristen L.; Schubert, Michael C.; Carey, John P.
2013-01-01
Objectives To characterize both cervical and ocular vestibular-evoked myogenic potential (cVEMP, oVEMP) responses to air-conducted sound (ACS) and midline taps in Ménière disease (MD), vestibular migraine (VM), and controls, as well as to determine if cVEMP or oVEMP responses can differentiate MD from VM. Study Design Prospective cohort study. Setting Tertiary referral center. Subjects and Methods Unilateral definite MD patients (n = 20), VM patients (n = 21) by modified Neuhauser criteria, and age-matched controls (n = 28). cVEMP testing used ACS (clicks), and oVEMP testing used ACS (clicks and 500-Hz tone bursts) and midline tap stimuli (reflex hammer and Mini-Shaker). Outcome parameters were cVEMP peak-to-peak amplitudes and oVEMP n10 amplitudes. Results Relative to controls, MD and VM groups both showed reduced click-evoked cVEMP (P < .001) and oVEMP (P < .001) amplitudes. Only the MD group showed reduced tone-evoked oVEMP amplitudes. Tone-evoked oVEMPs differentiated MD from controls (P = .001) and from VM (P = .007). The oVEMPs in response to the reflex hammer and Mini-Shaker midline taps showed no differences between groups (P > .210). Conclusions Using these techniques, VM and MD behaved similarly on most of the VEMP test battery. A link in their pathophysiology may be responsible for these responses. The data suggest a group-level difference in 500-Hz tone burst-evoked oVEMP responses between MD and VM. However, no VEMP test that was investigated segregated individuals with MD from those with VM. PMID:22267492
Organizational Learning Strategies and Verbal Memory Deficits in Bipolar Disorder.
Nitzburg, George C; Cuesta-Diaz, Armando; Ospina, Luz H; Russo, Manuela; Shanahan, Megan; Perez-Rodriguez, Mercedes; Larsen, Emmett; Mulaimovic, Sandra; Burdick, Katherine E
2017-04-01
Verbal memory (VM) impairment is prominent in bipolar disorder (BD) and is linked to functional outcomes. However, the intricacies of VM impairment have not yet been studied in a large sample of BD patients. Moreover, some have proposed that VM deficits may be mediated by organizational strategies, such as semantic or serial clustering. Thus, the exact nature of VM breakdown in BD patients is not well understood, limiting remediation efforts. We investigated the intricacies of VM deficits in BD patients versus healthy controls (HCs) and examined whether verbal learning differences were mediated by the use of clustering strategies. The California Verbal Learning Test (CVLT) was administered to 113 affectively stable BD patients and 106 HCs. We compared diagnostic groups on all CVLT indices and investigated whether group differences in verbal learning were mediated by clustering strategies. Although BD patients showed significantly poorer attention, learning, and memory, these indices were only mildly impaired. However, BD patients evidenced poorer use of effective learning strategies and lower recall consistency, with these indices falling in the moderately impaired range. Moreover, relative reliance on semantic clustering fully mediated the relationship between diagnostic category and verbal learning, while reliance on serial clustering partially mediated this relationship. VM deficits in affectively stable bipolar patients were widespread but generally mild. However, patients displayed inadequate use of organizational strategies, with clear separation from HCs on semantic and serial clustering. Remediation efforts may benefit from education about mnemonic devices or "chunking" techniques to attenuate VM deficits in BD. (JINS, 2017, 23, 358-366).
Kinetics of acyl transfer reactions in organic media catalysed by Candida antarctica lipase B.
Martinelle, M; Hult, K
1995-09-06
The acyl transfer reactions catalysed by Candida antarctica lipase B in organic media followed a bi-bi ping-pong mechanism, with competitive substrate inhibition by the alcohols used as acyl acceptors. The effect of organic solvents on Vm and Km was investigated. The Vm values in acetonitrile were 40-50% of those in heptane. High Km values in acetonitrile compared to those in heptane could partly be explained by increased solvation of the substrates in acetonitrile. Substrate solvation caused a 10-fold change in substrate specificity, defined as (Vm/Km)ethyl octanoate/(Vm/Km)octanoic acid, going from heptane to acetonitrile. Deacylation was the rate-determining step for the acyl transfer in heptane with vinyl and ethyl octanoate as acyl donors and (R)-2-octanol as acyl acceptor. With 1-octanol, a rate-determining deacylation step in heptane was indicated using the same acyl donors. Using 1-octanol as acceptor in heptane, S-ethyl thiooctanoate had a 25- to 30-fold lower Vm/Km value, and vinyl octanoate a 4-fold higher Vm/Km value, than that of ethyl octanoate. The difference was shown to be a Km effect for vinyl octanoate and mainly a Km effect for S-ethyl thiooctanoate. The Vm values for the esterification of octanoic acid with different alcohols were 10-30 times lower than those for the corresponding transesterification of ethyl octanoate. The low activity could be explained by a low pH around the enzyme caused by the acid, or by withdrawal of active enzyme through nonproductive binding of the acid.
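The substrate specificity used in this abstract is a ratio of catalytic efficiencies (Vm/Km) for two competing substrates. A minimal sketch of that definition, with hypothetical kinetic constants rather than the paper's measured values:

```python
# Specificity as the ratio of catalytic efficiencies (Vm/Km), e.g.
# (Vm/Km)_ethyl_octanoate / (Vm/Km)_octanoic_acid as in the abstract.
# All kinetic constants below are hypothetical placeholders.

def efficiency(vm, km):
    """Catalytic efficiency Vm/Km for one substrate."""
    return vm / km

def specificity(vm_a, km_a, vm_b, km_b):
    """Efficiency of substrate A relative to substrate B."""
    return efficiency(vm_a, km_a) / efficiency(vm_b, km_b)

# Hypothetical illustration: a solvent that raises Km of substrate A
# 10-fold shifts the specificity 10-fold even with Vm unchanged,
# mirroring the solvation effect the abstract describes.
in_heptane = specificity(vm_a=100.0, km_a=5.0, vm_b=50.0, km_b=10.0)
in_acetonitrile = specificity(vm_a=100.0, km_a=50.0, vm_b=50.0, km_b=10.0)
print(in_heptane, in_acetonitrile)  # 4.0 vs 0.4: a 10-fold shift
```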
Yue, Zhijian; Zhang, Yuhui; Wang, Laixing; Liu, Jianmin
2017-01-01
Vasculogenic mimicry (VM) is an important tumor blood supply that complements endothelial cell-dependent angiogenesis. Although the involvement of leptin and its receptor (ObR) in glioblastoma angiogenesis has been reported previously, the relationship between ObR expression and VM formation in human glioblastoma tissues, as well as their prognostic significance, remains unclear. In our study, we found that VM, recognized by CD31-/PAS+ immunohistochemical staining in glioblastoma tissues, showed a positive correlation with leptin expression (r = 0.58, P < 0.01), as well as with ObR expression (r = 0.61, P < 0.01). The association of glial-to-mesenchymal transition (GMT)-related molecules with ObR expression and VM formation in glioblastoma tissues indicated that ObR-positive glioblastoma cells with a GMT phenotype might be more likely to constitute VM, and co-expression of ObR and CD133 or Nestin in the channels implied that ObR-positive glioblastoma cells displayed glioblastoma stem cell (GSC) properties. Moreover, Kaplan–Meier analysis showed that patients with more VM or higher ObR expression had poorer overall survival than patients with less (VMhigh vs. VMlow: P = 0.033; ObRhigh vs. ObRlow: P = 0.009). ObR+ glioblastoma cells with GSC characteristics were mostly involved in VM formation, whereas a small fraction of cells was also related to microvascular density (MVD), suggesting that ObR is an important target for anticancer therapy; further studies are needed to improve glioblastoma treatment. PMID:28938545
Adsorbed natural gas storage with activated carbons made from Illinois coals and scrap tires
Sun, Jielun; Brady, T.A.; Rood, M.J.; Lehmann, C.M.; Rostam-Abadi, M.; Lizzio, A.A.
1997-01-01
Activated carbons for natural gas storage were produced from Illinois bituminous coals (IBC-102 and IBC-106) and scrap tires by physical activation with steam or CO2 and by chemical activation with KOH, H3PO4, or ZnCl2. The products were characterized for N2-BET area, micropore volume, bulk density, pore size distribution, and volumetric methane storage capacity (Vm/Vs). Vm/Vs values for Illinois coal-derived carbons ranged from 54 to 83 cm3/cm3, which are 35-55% of a target value of 150 cm3/cm3. Both granular and pelletized carbons made with preoxidized Illinois coal gave higher micropore volumes and larger Vm/Vs values than those made without preoxidation. This confirmed that preoxidation is a desirable step in the production of carbons from caking materials. Pelletization of preoxidized IBC-106 coal, followed by steam activation, resulted in the highest Vm/Vs value. With roughly the same micropore volume, pelletization alone increased Vm/Vs of coal carbon by 10%. Tire-derived carbons had Vm/Vs values ranging from 44 to 53 cm3/cm3, lower than those of coal carbons due to their lower bulk densities. Pelletization of the tire carbons increased bulk density up to 160%. However, this increase was offset by a decrease in micropore volume of the pelletized materials, presumably due to the pellet binder. As a result, Vm/Vs values were about the same for granular and pelletized tire carbons. Compared with coal carbons, tire carbons had a higher percentage of mesopores and macropores.
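The comparison against the 150 cm3/cm3 target quoted above is simple arithmetic; a short sketch using the ranges reported in the abstract (the helper name is ours):

```python
# Volumetric methane storage capacity (Vm/Vs) expressed as a percentage
# of the 150 cm^3/cm^3 target used in the abstract. Input ranges are the
# values reported there; the function itself is just the ratio.

TARGET = 150.0  # cm^3 methane (STP) per cm^3 of storage vessel

def percent_of_target(vm_vs):
    """Vm/Vs as a percentage of the 150 cm^3/cm^3 target."""
    return 100.0 * vm_vs / TARGET

coal_carbons = [54.0, 83.0]  # reported range, Illinois coal carbons
tire_carbons = [44.0, 53.0]  # reported range, tire-derived carbons

print([round(percent_of_target(v)) for v in coal_carbons])  # [36, 55]
print([round(percent_of_target(v)) for v in tire_carbons])  # [29, 35]
```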
Lewicka, Małgorzata; Henrykowska, Gabriela A; Pacholski, Krzysztof; Śmigielski, Janusz; Rutkowski, Maciej; Dziedziczak-Buczyńska, Maria; Buczyński, Andrzej
2015-12-10
Research studies carried out over decades have not resolved the question of the effect of electromagnetic radiation of various frequencies and strengths on the human organism. We therefore investigated the changes taking place in human blood platelets under the effect of electromagnetic radiation (EMR) emitted by LCD monitors. Changes in selected parameters of oxygen metabolism were measured, i.e. reactive oxygen species (ROS) concentration, enzymatic activity of the antioxidant defence proteins superoxide dismutase (SOD-1) and catalase (CAT), and malondialdehyde (MDA) concentration. A suspension of human blood platelets was exposed to electromagnetic radiation of 1 kHz frequency and 150 V/m or 220 V/m intensity for 30 and 60 min. The level of change in the selected parameters of oxidative stress was determined after exposure and compared to control (unexposed) samples. The measurements revealed an increase in the concentration of reactive oxygen species. The largest increase in ROS concentration vs. the control sample was observed after exposure to an EMF of 220 V/m intensity for 60 min (from x = 54.64 to x = 72.92). The measurement of MDA concentration demonstrated a statistically significant increase after 30-min exposure to an EMF of 220 V/m intensity in relation to the initial values (from x = 3.18 to x = 4.41). The enzymatic activity of SOD-1 decreased after exposure (the most prominent change was observed after 60 min at 220 V/m intensity, from x = 3556.41 to x = 1084.83). The most significant change in catalase activity was observed after 60 min at 220 V/m (from x = 6.28 to x = 4.15). The findings indicate that exposure to electromagnetic radiation of 1 kHz frequency and 150 V/m or 220 V/m intensity may adversely affect the oxygen metabolism of blood platelets and thus may lead to physiological dysfunction of the organism.
Cyders, Melissa A; Dzemidzic, Mario; Eiler, William J; Coskunpinar, Ayca; Karyadi, Kenny; Kareken, David A
2014-02-01
Recent research has highlighted the role of emotion-based impulsivity (negative and positive urgency personality traits) for alcohol use and abuse, but has yet to examine how these personality traits interact with the brain's motivational systems. Using functional magnetic resonance imaging (fMRI), we tested whether urgency traits and mood induction affected medial prefrontal responses to alcohol odors (AcO). Twenty-seven social drinkers (mean age = 25.2, 14 males) had 6 fMRI scans while viewing negative, neutral, or positive mood images (3 mood conditions) during intermittent exposure to AcO and appetitive control (AppCo) aromas. Voxel-wise analyses (p < 0.001) confirmed [AcO > AppCo] activation throughout medial prefrontal cortex (mPFC) and ventromedial PFC (vmPFC) regions. Extracted from a priori mPFC and vmPFC regions and analyzed in Odor (AcO, AppCo) × Mood factorial models, AcO activation was greater than AppCo in left vmPFC (p < 0.001), left mPFC (p = 0.002), and right vmPFC (p = 0.01) regions. Mood did not interact significantly with activation, but the covariate of trait negative urgency accounted for significant variance in left vmPFC (p = 0.01) and right vmPFC (p = 0.01) [AcO > AppCo] activation. Negative urgency also mediated the relationship between vmPFC activation and both (i) subjective craving and (ii) problematic drinking. The trait of negative urgency is associated with neural responses to alcohol cues in the vmPFC, a region involved in reward value and emotion-guided decision-making. This suggests that negative urgency might alter subjective craving and brain regions involved in coding reward value. Copyright © 2013 by the Research Society on Alcoholism.
Tivesten, Emma; Dozza, Marco
2015-06-01
Visual-manual (VM) phone tasks (i.e., texting, dialing, reading) are associated with an increased crash/near-crash risk. This study investigated how the driving context influences drivers' decisions to engage in VM phone tasks in naturalistic driving. Video-recordings of 1,432 car trips were viewed to identify VM phone tasks and passenger presence. Video, vehicle signals, and map data were used to classify driving context (i.e., curvature, other vehicles) before and during the VM phone tasks (N=374). Vehicle signals (i.e., speed, yaw rate, forward radar) were available for all driving. VM phone tasks were more likely to be initiated while standing still, and less likely while driving at high speeds, or when a passenger was present. Lead vehicle presence did not influence how likely it was that a VM phone task was initiated, but the drivers adjusted their task timing to situations when the lead vehicle was increasing speed, resulting in increasing time headway. The drivers adjusted task timing until after making sharp turns and lane change maneuvers. In contrast to previous driving simulator studies, there was no evidence of drivers reducing speed as a consequence of VM phone task engagement. The results show that experienced drivers use information about current and upcoming driving context to decide when to engage in VM phone tasks. However, drivers may fail to sufficiently increase safety margins to allow time to respond to possible unpredictable events (e.g., lead vehicle braking). Advanced driver assistance systems should facilitate and possibly boost drivers' self-regulating behavior. For instance, they might recognize when appropriate adaptive behavior is missing and advise or alert accordingly. The results from this study could also inspire training programs for novice drivers, or locally classify roads in terms of the risk associated with secondary task engagement while driving. Copyright © 2015. Published by Elsevier Ltd.
Stages of functional processing and the bihemispheric recognition of Japanese Kana script.
Yoshizaki, K
2000-04-01
Two experiments were carried out to examine the effects of functional processing steps on the benefits of interhemispheric integration. The purpose of Experiment 1 was to investigate the validity of the Banich (1995a) model, in which the benefits of interhemispheric processing increase as the task involves more functional steps. Sixteen right-handed subjects were given two types of Hiragana-Katakana script matching tasks: the Name Identity (NI) task and the vowel matching (VM) task, which involved more functional steps than the NI task. The VM task required subjects to decide whether or not a pair of Katakana-Hiragana scripts shared a common vowel. In both tasks, a pair of Kana scripts (Katakana-Hiragana scripts) was tachistoscopically presented in the unilateral visual fields or the bilateral visual fields, where each letter was presented in a different visual field. A bilateral visual fields advantage (BFA) was found in both tasks, and its size did not differ between the tasks, suggesting that these findings did not support the Banich model. The purpose of Experiment 2 was to examine the effects of an imbalanced processing load between the hemispheres on the benefits of interhemispheric integration. To manipulate the balance of processing load across the hemispheres, a revised vowel matching (r-VM) task was developed by amending the VM task. The r-VM task was the same as the VM task in Experiment 1, except that a script with only a vowel sound was presented as one member of the pair of Kana scripts. Twenty-four right-handed subjects were given the r-VM and NI tasks. The results showed that although a BFA appeared in the NI task, it did not in the r-VM task. These results suggested that the balance of processing load between the hemispheres influences bilateral hemispheric processing.
Daugirdas, John T; Greene, Tom; Depner, Thomas A; Chumlea, Cameron; Rocco, Michael J; Chertow, Glenn M
2003-09-01
The modeled volume of urea distribution (Vm) in intermittently hemodialyzed patients is often compared with total body water (TBW) volume predicted from population studies of patient anthropometrics (Vant). Using data from the HEMO Study, we compared Vm determined by both blood-side and dialysate-side urea kinetic models with Vant as calculated by the Watson, Hume-Weyers, and Chertow anthropometric equations. Median levels of dialysate-based Vm and blood-based Vm agreed (43% and 44% of body weight, respectively). These volumes were lower than anthropometric estimates of TBW, which had median values of 52% to 55% of body weight for the three formulas evaluated. The difference between the Watson equation for TBW and modeled urea volume was greater in Caucasians (19%) than in African Americans (13%). Correlations between Vm and Vant determined by each of the three anthropometric estimation equations were similar; but Vant derived from the Watson formula had a slightly higher correlation with Vm. The difference between Vm and the anthropometric formulas was greatest with the Chertow equation, less with the Hume-Weyers formula, and least with the Watson estimate. The age term in the Watson equation for men that adjusts Vant downward with increasing age reduced an age effect on the difference between Vant and Vm in men. The findings show that kinetically derived values for V from blood-side and dialysate-side modeling are similar, and that these modeled urea volumes are lower by a substantial amount than anthropometric estimates of TBW. The higher values for anthropometry-derived TBW in hemodialyzed patients could be due to measurement errors. However, the possibility exists that TBW space is contracted in patients with end-stage renal disease (ESRD) or that the TBW space and the urea distribution space are not identical.
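For illustration, the Watson anthropometric estimate of TBW (Vant) discussed above can be sketched as follows. The coefficients are the commonly cited ones from Watson et al. (1980), included here as an assumption rather than taken from this abstract:

```python
# Watson anthropometric estimate of total body water (Vant), litres.
# Coefficients are the commonly cited Watson et al. (1980) values,
# an assumption here, not data from this abstract.

def watson_tbw(sex, age_years, height_cm, weight_kg):
    """Estimated total body water in litres (Watson formula)."""
    if sex == "male":
        # The age term adjusts Vant downward with increasing age in men,
        # as the abstract notes.
        return 2.447 - 0.09516 * age_years + 0.1074 * height_cm + 0.3362 * weight_kg
    elif sex == "female":
        return -2.097 + 0.1069 * height_cm + 0.2466 * weight_kg
    raise ValueError("sex must be 'male' or 'female'")

# Example: Vant as a fraction of body weight, for comparison with the
# ~52-55% anthropometric vs ~43-44% kinetic (Vm) medians reported above.
v = watson_tbw("male", age_years=50, height_cm=175, weight_kg=80)
print(round(v, 1), round(100 * v / 80))  # ~43.4 L, ~54% of body weight
```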
Qu, Hong-Lei; Tian, Bei-Min; Li, Kun; Zhou, Li-Na; Li, Zhi-Bang; Chen, Fa-Ming
2017-10-01
Evidence that asymptomatic third molars (M3s) negatively affect their adjacent second molars (A-M2s) is limited. The present study evaluated the association between visible M3s (V-M3s) of various clinical statuses and the periodontal pathologic features of their A-M2s. Subjects with at least 1 quadrant having intact first and second molars, either with symptom-free V-M3s or without adjacent V-M3s, were enrolled in the present cross-sectional investigation. Periodontal parameters, including plaque index (PLI), bleeding on probing (BOP), probing pocket depth (PPD), and at least 1 site with a PPD of 5 mm or more (PPD5+), obtained from M2s were analyzed according to the presence or absence of V-M3s or the status of the M3s. The χ² test or t test was used to compare the mean PLI, PPD, BOP percentage, and PPD5+ percentage. The association of PPD5+ with V-M3 status was assessed using a multivariable logistic regression model (quadrant-based analysis), and variances were adjusted for clustered observations within subjects. In total, 572 subjects were enrolled in the study, and 423 had at least 1 V-M3. At the in-quadrant level, the presence of a V-M3 significantly increased M2 pathologic parameters, including PLI, PPD, BOP, and PPD5+. When analyzed using a multivariate logistic regression model, impacted M3s and normally erupted M3s significantly elevated the risk of PPD5+ on their A-M2s (odds ratio 3.20 and 1.67, respectively). Other factors associated with increased odds of PPD5+ were mandibular region and older age. Finally, the patient-matched comparison showed that the percentage of BOP and PPD5+ on M2s increased when V-M3s were present. Irrespective of their status, the presence of V-M3s is a risk factor for the development of periodontal pathologic features in their A-M2s.
Although the prophylactic removal of asymptomatic V-M3s remains controversial, medical decisions should be made as early as possible, because, ideally, extraction should be performed before symptom onset. Copyright © 2017 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Ventromedial Prefrontal Cortex Is Necessary for Normal Associative Inference and Memory Integration.
Spalding, Kelsey N; Schlichting, Margaret L; Zeithamova, Dagmar; Preston, Alison R; Tranel, Daniel; Duff, Melissa C; Warren, David E
2018-04-11
The ability to flexibly combine existing knowledge in response to novel circumstances is highly adaptive. However, the neural correlates of flexible associative inference are not well characterized. Laboratory tests of associative inference have measured memory for overlapping pairs of studied items (e.g., AB, BC) and for nonstudied pairs with common associates (i.e., AC). Findings from functional neuroimaging and neuropsychology suggest the ventromedial prefrontal cortex (vmPFC) may be necessary for associative inference. Here, we used a neuropsychological approach to test the necessity of vmPFC for successful memory-guided associative inference in humans using an overlapping pairs associative memory task. We predicted that individuals with focal vmPFC damage (n = 5; 3F, 2M) would show impaired inferential memory but intact non-inferential memory. Performance was compared with normal comparison participants (n = 10; 6F, 4M). Participants studied pairs of visually presented objects including overlapping pairs (AB, BC) and nonoverlapping pairs (XY). Participants later completed a three-alternative forced-choice recognition task for studied pairs (AB, BC, XY) and inference pairs (AC). As predicted, the vmPFC group had intact memory for studied pairs but significantly impaired memory for inferential pairs. These results are consistent with the perspective that the vmPFC is necessary for memory-guided associative inference, indicating that the vmPFC is critical for adaptive abilities that require application of existing knowledge to novel circumstances. Additionally, vmPFC damage was associated with unexpectedly reduced memory for AB pairs post-inference, which could potentially reflect retroactive interference. Together, these results reinforce an emerging understanding of a role for the vmPFC in brain networks supporting associative memory processes.
SIGNIFICANCE STATEMENT We live in a constantly changing environment, so the ability to adapt our knowledge to support understanding of new circumstances is essential. One important adaptive ability is associative inference which allows us to extract shared features from distinct experiences and relate them. For example, if we see a woman holding a baby, and later see a man holding the same baby, then we might infer that the two adults are a couple. Despite the importance of associative inference, the brain systems necessary for this ability are not known. Here, we report that damage to human ventromedial prefrontal cortex (vmPFC) disproportionately impairs associative inference. Our findings show the necessity of the vmPFC for normal associative inference and memory integration. Copyright © 2018 the authors 0270-6474/18/383767-09$15.00/0.
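The AB/BC/AC design described above has a simple combinatorial structure. The following sketch generates such a design with hypothetical stimulus labels (not the study's actual materials): overlapping studied pairs, never-studied inference probes, and nonoverlapping control pairs.

```python
import random

def build_task(n_triads=8, n_xy=8, seed=0):
    """Sketch of an AB/BC/AC associative-inference design."""
    rng = random.Random(seed)
    study, inference = [], []
    for i in range(n_triads):
        a, b, c = f"A{i}", f"B{i}", f"C{i}"
        study += [(a, b), (b, c)]       # overlapping studied pairs
        inference.append((a, c))        # never studied; must be inferred
    for i in range(n_xy):
        study.append((f"X{i}", f"Y{i}"))  # nonoverlapping control pairs
    rng.shuffle(study)                  # randomize study order
    return study, inference

study, inference = build_task()
print(len(study), "studied pairs;", len(inference), "inference probes")
```

Note that each AC probe shares a common associate (B) with two studied pairs but never itself appears during study, which is what makes correct AC recognition an inference rather than a retrieval.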
On Why It Is Impossible to Prove that the BDX930 Dispatcher Implements a Time-sharing System
NASA Technical Reports Server (NTRS)
Boyer, R. S.; Moore, J. S.
1983-01-01
The Software Implemented Fault Tolerance (SIFT) system is written in PASCAL except for about a page of machine code. The SIFT system implements a small time-sharing system in which PASCAL programs for separate application tasks are executed according to a schedule with real-time constraints. The PASCAL language has no provision for handling the notion of an interrupt such as the BDX930 clock interrupt. The PASCAL language also lacks the notion of running a PASCAL subroutine for a given amount of time, suspending it, saving away the suspension, and later activating the suspension. Machine code was used to overcome these inadequacies of PASCAL. Code which handles clock interrupts and suspends processes is called a dispatcher. The time-sharing/virtual machine idea is completely destroyed by the reconfiguration task. After termination of the reconfiguration task, the tasks run by the dispatcher have no relation to those run before reconfiguration. It is impossible to view the dispatcher as a time-sharing system implementing virtual BDX930s running concurrently when one process can wipe out the others.
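The dispatcher behavior described above, running a task for a slice of time, suspending it, saving the suspension, and later resuming it, can be illustrated with Python generators. This is purely a conceptual sketch, not the SIFT machine code or its PASCAL tasks:

```python
def task(name, steps):
    """Each yield models the task being suspended at a clock interrupt."""
    for i in range(steps):
        yield f"{name}:{i}"

def dispatch(tasks, ticks):
    """Round-robin dispatcher: resume each saved suspension for one tick,
    then suspend and re-queue it."""
    trace, queue = [], list(tasks)
    for _ in range(ticks):
        if not queue:
            break
        t = queue.pop(0)
        try:
            trace.append(next(t))   # resume the saved suspension
            queue.append(t)         # suspend it again
        except StopIteration:
            pass                    # task finished; drop it from the queue
    return trace

trace = dispatch([task("t1", 2), task("t2", 2)], ticks=6)
print(trace)
```

The abstract's central point maps directly onto this sketch: if one task could replace or clear `queue` (as reconfiguration does in SIFT), each remaining generator could no longer be viewed as an independent virtual machine running to completion.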
NASA Technical Reports Server (NTRS)
Quaranto, Kristy
2014-01-01
This internship provided an opportunity for an intern to work with NASA's Ground Support Equipment (GSE) for the Spaceport Command and Control System (SCCS) at Kennedy Space Center as a remote display developer, under NASA technical mentor Kurt Leucht. The main focus was on creating remote displays and applications for the hypergolic and high pressure helium subsystem team to help control the filling of the respective tanks. As a remote display and application developer for the GSE hypergolic and high pressure helium subsystem team, the intern was responsible for creating and testing graphical remote displays and applications to be used in the Launch Control Center (LCC) on the Firing Room's computers. To become more familiar with the subsystem, the individual attended multiple project meetings and acquired the team's specific requirements for what needed to be included in the software. After receiving the requirements for the displays, the next step was to create displays with both visual appeal and logical order using the Display Editor on the Virtual Machine (VM). In doing so, all Compact Unique Identifiers (CUIs), which are associated with specific components within the subsystem, needed to be included in each respective display for the system to run properly. Once a display was created, it was tested to ensure that it ran as intended using the Test Driver, also found on the VM. The Test Driver is a specific application that checks that all the CUIs in the display are running properly and returning the correct form of information. After creating and locally testing a display, it went through further testing and evaluation before being deemed suitable for actual use. For the remote applications, the intern was responsible for creating a project that focused on channelizing each component included in each display.
The core of the application code was created by setting up spreadsheets and having an auto test generator produce the complete code structure. This application code was then loaded and run in a testing environment to ensure the code ran as anticipated. By the end of the semester-long experience at NASA's Kennedy Space Center, the individual had gained substantial knowledge and experience in various areas of both display and application development and testing. They demonstrated this new knowledge by creating multiple successful remote displays that will one day be used by the hypergolic and high pressure helium subsystem team in the LCC's firing rooms to service the new Orion spacecraft. The completed display channelization application will be used to receive verification from NASA quality engineers.
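The spreadsheet-driven generation step described above can be sketched as turning each tabular row into a check for one component's value. Everything here, the CUI names, the column layout, the validation rules, is a hypothetical illustration, not the actual SCCS tooling:

```python
import csv, io

# Hypothetical spreadsheet rows: CUI identifier, expected type, valid range.
SHEET = """cui,type,lo,hi
GHEHE1234,float,0,5000
GHEHE5678,int,0,1
"""

def generate_checks(sheet_text):
    """Turn each spreadsheet row into a check function for that CUI's value."""
    checks = {}
    for row in csv.DictReader(io.StringIO(sheet_text)):
        cast = float if row["type"] == "float" else int
        lo, hi = float(row["lo"]), float(row["hi"])
        def check(value, cast=cast, lo=lo, hi=hi):
            v = cast(value)           # does the value parse as the right type?
            return lo <= v <= hi      # is it within the component's range?
        checks[row["cui"]] = check
    return checks

checks = generate_checks(SHEET)
print(checks["GHEHE1234"]("4500.0"), checks["GHEHE5678"]("2"))
```

The appeal of this pattern is that adding a component to the spreadsheet automatically yields a corresponding check, which is presumably why an auto-generator could produce "the complete code structure" from tabular input.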
Sward, Katherine A; Newth, Christopher JL; Khemani, Robinder G; Cryer, Martin E; Thelen, Julie L; Enriquez, Rene; Shaoyu, Su; Pollack, Murray M; Harrison, Rick E; Meert, Kathleen L; Berg, Robert A; Wessel, David L; Shanley, Thomas P; Dalton, Heidi; Carcillo, Joseph; Jenkins, Tammara L; Dean, J Michael
2015-01-01
Objectives: To examine the feasibility of deploying a virtual web service for sharing data within a research network, and to evaluate the impact on data consistency and quality. Materials and Methods: Virtual machines (VMs) encapsulated an open-source, semantically and syntactically interoperable secure web service infrastructure along with a shadow database. The VMs were deployed to 8 Collaborative Pediatric Critical Care Research Network Clinical Centers. Results: Virtual web services could be deployed in hours. The interoperability of the web services reduced format misalignment from 56% to 1% and demonstrated that 99% of the data transferred consistently using the data dictionary, with 1% needing human curation. Conclusions: Use of virtualized open-source secure web service technology could enable direct electronic abstraction of data from hospital databases for research purposes. PMID:25796596
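The format-misalignment reduction above hinges on a shared data dictionary. As a minimal sketch of that idea, with hypothetical field names and codes rather than the network's actual dictionary, site-local values can be mapped to canonical codes and unmapped values flagged for human curation:

```python
# Hypothetical shared data dictionary: canonical codes per field.
DICTIONARY = {
    "sex": {"m": "MALE", "male": "MALE", "f": "FEMALE", "female": "FEMALE"},
    "unit": {"picu": "PICU", "cicu": "CICU"},
}

def normalize(records):
    """Map each site-local value to its canonical code; flag the rest."""
    clean, needs_curation = [], []
    for field, value in records:
        canon = DICTIONARY.get(field, {}).get(value.strip().lower())
        (clean if canon else needs_curation).append((field, canon or value))
    return clean, needs_curation

records = [("sex", "M"), ("sex", "Female"), ("unit", "PICU"), ("unit", "ward-3")]
clean, pending = normalize(records)
print(len(clean), "normalized;", len(pending), "need human curation")
```

Packaging such a dictionary inside the deployed VM image is one way every site ends up applying identical mappings, which is consistent with the reported drop in misalignment.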
2015-06-01
…unit may setup and teardown the entire tactical infrastructure multiple times per day. This tactical network administrator training is a critical…
…language and runs on Linux and Unix based systems. All provisioning is based around the Nagios Core application, a powerful backend solution for network…
…start up a large number of virtual machines quickly. CORE supports the simulation of fixed and mobile networks. CORE is open-source, written in Python…
NASA Astrophysics Data System (ADS)
Johnson, Bradley; May, Gayle L.; Korn, Paula
The present conference discusses the currently envisioned goals of human-machine systems in spacecraft environments, prospects for human exploration of the solar system, and plausible methods for meeting human needs in space. Also discussed are the problems of human-machine interaction in long-duration space flights, remote medical systems for space exploration, the use of virtual reality for planetary exploration, the alliance between U.S. Antarctic and space programs, and the economic and educational impacts of the U.S. space program.
Fundamental Study about the Landscape Estimation and Analysis by CG
NASA Astrophysics Data System (ADS)
Nakashima, Yoshio; Miyagoshi, Takashi; Takamatsu, Mamoru; Sassa, Kazuhiro
In recent years, it has become important that the colors of advertising signboards and vending machines on the streets harmonize with the surrounding landscape. In this study, we investigated how the colors (red and white) of vending machines virtually installed by computer graphics (CG) would affect a traditional landscape. Twenty subjects evaluated landscape samples from Hida-Furukawa using the semantic differential (SD) technique. The results of our experiment show that vending machines have a great influence on the surrounding landscape. On the other hand, we confirmed that they can be harmonized with the surrounding landscape through an appropriate choice of color.
A Framework for Analyzing the Whole Body Surface Area from a Single View
Doretto, Gianfranco; Adjeroh, Donald
2017-01-01
We present a virtual reality (VR) framework for the analysis of whole human body surface area. Usual methods for determining the whole body surface area (WBSA) are based on well known formulae, characterized by large errors when the subject is obese or belongs to certain subgroups. For these situations, we believe that a computer vision approach can overcome these problems and provide a better estimate of this important body indicator. Unfortunately, using machine learning techniques to design a computer vision system able to provide a new body indicator that goes beyond the use of only body weight and height entails a long and expensive data acquisition process. A more viable solution is to use a dataset composed of virtual subjects. Generating a virtual dataset allowed us to build a population with different characteristics (obese, underweight, age, gender). However, synthetic data might differ from a real scenario, typical of the physician's clinic. For this reason we developed a new virtual environment to facilitate the analysis of human subjects in 3D. This framework can simulate the acquisition process of a real camera, making it easy to analyze and to create training data for machine learning algorithms. With this virtual environment, we can easily simulate the real setup of a clinic, where a subject is standing in front of a camera, or may assume a different pose with respect to the camera. We use this newly designed environment to analyze the whole body surface area (WBSA). In particular, we show that we can obtain accurate WBSA estimations with just one view, making it feasible to use inexpensive depth sensors (e.g., the Kinect) for large-scale quantification of the WBSA from a single-view 3D map. PMID:28045895
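The "well known formulae" the abstract refers to include the classical Du Bois and Du Bois (1916) estimate, which uses only height and weight, which is exactly the limitation a vision-based approach aims to overcome. The formula itself is standard; the example subject is illustrative:

```python
def du_bois_bsa(height_cm, weight_kg):
    """Du Bois & Du Bois (1916) body surface area estimate in m^2."""
    return 0.007184 * height_cm ** 0.725 * weight_kg ** 0.425

# Illustrative subject: 170 cm, 70 kg (BSA comes out near 1.8 m^2).
print(f"{du_bois_bsa(170, 70):.2f} m^2")
```

Because such formulae are fit to population averages, they can misestimate WBSA for obese or otherwise atypical subjects, which motivates measuring the body surface directly from a 3D map instead.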